Home Tech Microsoft AI can become an automated phishing machine

Microsoft AI can become an automated phishing machine

0 comments
Microsoft AI can become an automated phishing machine

Among the other attacks created by Bargury is a demonstration of how a hacker (who, again, must have already hijacked an email account) can gain access to sensitive information, such as people’s salaries, without Enabling Microsoft protections for sensitive filesWhen requesting data, Bargury demands that the system not provide references to the files from which the data is extracted. “A little intimidation helps,” Bargury says.

In other cases, it shows how an attacker, who does not have access to the email accounts but poisons the AI ​​database by sending it a malicious email, can manipulate responses regarding banking information to provide their own banking data. “Any time you give AI access to data, it’s a way for an attacker to get in,” Bargury says.

Another demonstration shows how an outside hacker could gain limited information about whether a future attack… The company’s earnings conference call will be good or badwhile the last instance, says Bargury, turns Copilot into a “malicious infiltrator”“by providing users with links to phishing websites.

Phillip Misner, head of AI incident detection and response at Microsoft, says the company is grateful that Bargury identified the vulnerability and has been working with him to evaluate the findings. “The risks of AI abuse after a breach are similar to those of other post-breach techniques,” Misner says. “Prevention and security control across environments and identities helps mitigate or stop such behavior.”

As generative AI systems such as OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Gemini have developed over the past two years, they have moved toward a trajectory where they could end up completing tasks for people, such as booking meetings or making online purchases. However, security researchers have consistently highlighted that allowing external data into AI systems, such as through emails or access to website content, creates security risks through indirect message injection and poisoning attacks.

“I think it’s not well understood how much more effective an attacker can become these days,” says Johann Rehberger, a security researcher and head of the red team, which has Widely demonstrated security weaknesses in AI systems“What we have to worry about now is what the LLM produces and sends to the user.”

Bargury says Microsoft has put a lot of effort into protecting its Copilot system from rapid injection attacks, but says it found ways to exploit it by unraveling how the system is built. This included Extract internal system messagehe says, and working on how it can be accessed. business resources and the techniques he uses to do it. “You talk to Copilot and it’s a limited conversation, because Microsoft has put a lot of controls in place,” he says. “But once you use some magic words, it opens up and you can do whatever you want.”

Rehberger cautions broadly that some data issues are related to the long-standing problem of companies allowing too many employees access to files and failing to properly set access permissions across the organization. “Now imagine we put Copilot on top of that problem,” Rehberger says. He says he has used AI systems to look for common passwords, such as Password123, and has gotten results inside companies.

Both Rehberger and Bargury say there needs to be more focus on monitoring what an AI produces and sends to a user. “The risk is in how the AI ​​interacts with its environment, how it interacts with its data, how it performs operations on its behalf,” Bargury says. “You need to figure out what the AI ​​agent is doing on behalf of a user and whether that makes sense with what the user actually asked for.”

You may also like