
Researchers at Google DeepMind have warned that the open internet can be used to manipulate autonomous AI agents and hijack their actions.
Summary
- DeepMind researchers have identified six attack methods that can be used to manipulate autonomous AI agents as they browse and act online.
- The study warned that hidden instructions, persuasive language, and poisoned data sources can influence agent decisions or override safeguards.
The study titled “AI Agent Traps” comes as companies deploy AI agents for real-world tasks and attackers begin using AI for cyber operations.
Instead of focusing on how models are built, the research looks at the environments agents operate in. It identifies six types of traps that take advantage of how AI systems read and act on information from the web.
The six attack categories outlined in the paper include content injection traps, semantic manipulation traps, cognitive state traps, behavioural control traps, systemic traps, and human in the loop traps.
Content injection stands out as one of the most direct risks. Hidden instructions can be placed inside HTML comments, metadata, or cloaked page elements, allowing agents to read commands that remain invisible to human users. Tests showed these techniques can take control of agent behaviour with high success rates.
Semantic manipulation works differently, relying on language and framing rather than hidden code. Pages loaded with authoritative phrasing or disguised as research scenarios can influence how agents interpret taskssometimes slipping harmful instructions past built-in safeguards.
Another layer targets memory systems. By planting fabricated information into sources that agents rely on for retrieval, attackers can influence outputs over time, with the agent treating false data as verified knowledge.
Behavioural control attacks take a more direct route by targeting what an agent actually does. In these cases, jailbreak instructions can be embedded into normal web content and read by the system during routine browsing. Separate tests showed that agents with broad access permissions could be pushed into locating and transmitting sensitive data, including passwords and local files, to external destinations.
System-level risks extend beyond individual agents, with the paper warning that coordinated manipulation across many automated systems could trigger cascading effects, similar to past market flash crashes driven by algorithmic trading loops.
Human reviewers are also part of the attack surface, as carefully crafted outputs can appear credible enough to gain approval, allowing harmful actions to pass through oversight without raising suspicion.
How to defend against these risks?
To counter these risks, researchers suggest a mix of adversarial training, input filtering, behavioural monitoring, and reputation systems for web content. They also point to the need for clearer legal frameworks around liability when AI agents execute harmful actions.
The paper stops short of offering a complete fix and argues that the industry still lacks a shared understanding of the problem, leaving current defenses scattered and often focused on the wrong areas.
