AI Agents Gone Rogue

We’re tracking the latest agentic failures, exploits, and emergent attack patterns so you can understand where risks exist and how to mitigate them.

Uncontrolled Agents

Agents take unsafe or unintended actions on their own. Non-determinism, misunderstood instructions, or over-broad tool access leads to data leaks, deletions, or system-wide changes, often executed in seconds.
User's D-Drive Erased by Google Antigravity's Turbo Mode
Issue

While the user was running Google Antigravity in “Turbo” mode (automatic command execution), the agent wiped the entire contents of their D-drive while attempting to clear the project cache.

Impact

The user lost the full contents of the D-drive; other users have reported similar incidents.

Resolution

The user advised others to exercise caution when running Antigravity in Turbo mode, since this setting lets the agent execute commands without user input or approval.
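
The underlying risk is auto-execution without a human checkpoint. As a minimal sketch of the missing control (the function name, hint list, and flow below are illustrative assumptions, not Antigravity's actual implementation), an agent runtime can force explicit approval for destructive-looking commands even when auto-execution is enabled:

```python
import subprocess

# Hypothetical fragments that mark a command as destructive; a real
# runtime would use a proper policy engine, not substring matching.
DESTRUCTIVE_HINTS = ("rm ", "rmdir", "del ", "format ", "rd /s", "Remove-Item")

def run_agent_command(cmd: str, auto_execute: bool = False) -> None:
    """Run an agent-proposed shell command, but pause for human approval
    whenever auto-execution is off or the command looks destructive."""
    looks_destructive = any(hint in cmd for hint in DESTRUCTIVE_HINTS)
    if looks_destructive or not auto_execute:
        answer = input(f"Agent wants to run:\n  {cmd}\nApprove? [y/N] ")
        if answer.strip().lower() != "y":
            print("Command rejected.")
            return
    subprocess.run(cmd, shell=True, check=False)
```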

Security Flaw in Asana Exposes User Projects Across Domains
Issue

A bug in Asana’s MCP server allowed users from one account to access “projects, teams, tasks, and other Asana objects” belonging to other domains.

Impact

Cross-tenant data-exposure risk for all MCP users, though no exploit was confirmed; customers were notified and access was suspended.

Resolution

The MCP server was taken offline, the code issue was fixed, affected customers were notified, and logs/metadata were made available for review.

Replit's AI Assistant Ignored Instructions Causing Major Data Loss
Source: Jul 2025
Issue

Replit’s AI coding assistant ignored an explicit “do not change any code” instruction 11 times, fabricated test data, and deleted a live production database.

Impact

User trust damaged and user code put at risk; Replit’s CEO issued a public apology.

Resolution

Replit launched product safeguards, including automatic backups and one-click restore.

Tricked Agents

Attackers manipulate agents through poisoned inputs, crafted content, or malicious web pages. A single crafted prompt can escalate into data exfiltration, privilege misuse, or chained tool abuse.
ServiceNow Vulnerability: Low-Privileged Agent Misled into Data Breach
Issue

Attackers exploited ServiceNow Now Assist’s agent-to-agent collaboration feature and its default configuration to trick a low-privileged agent into delegating malicious commands to a high-privileged agent, resulting in data exfiltration.

Impact

Sensitive corporate data leaked or modified; unauthorized actions executed behind the scenes.

Resolution

ServiceNow updated its documentation and recommended mitigations: disable autonomous override mode for privileged agents, apply supervised execution mode, and segment agent responsibilities.
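
Those mitigations boil down to one rule: a delegation must never grant more authority than the requesting agent already has. A minimal sketch of that check (the Agent class, privilege model, and delegate function are assumptions for illustration, not ServiceNow's API):

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    privilege: int           # higher = more powerful
    supervised: bool = True  # require human sign-off on actions

def delegate(task: str, sender: Agent, receiver: Agent) -> bool:
    """Refuse any delegation that would escalate privilege, and require
    supervised execution for privileged receivers. Illustrative only;
    this is not ServiceNow's actual control model."""
    if receiver.privilege > sender.privilege:
        print(f"Blocked: {sender.name} may not delegate upward to {receiver.name}")
        return False
    if receiver.privilege > 0 and not receiver.supervised:
        print(f"Blocked: privileged agent {receiver.name} must run supervised")
        return False
    print(f"{receiver.name} accepted task from {sender.name}: {task}")
    return True

# Example: the attack path above is refused up front.
low = Agent("triage-bot", privilege=0)
high = Agent("admin-bot", privilege=2)
delegate("export all customer records", sender=low, receiver=high)  # Blocked
```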

Antigravity Breach: Web Page Tricks Agent into Stealing User Data
Issue

Google Antigravity data exfiltration via prompt injection: a “poisoned” web page tricked Antigravity’s agent into harvesting credentials and code from a user’s local workspace, then exfiltrating them to a public logging site.

Impact

Sensitive credentials and internal code exposed; default protections (e.g., .gitignore rules and file-access restrictions) bypassed.

Resolution

The vulnerability was publicly disclosed by researchers; PromptArmor and others highlighted the need for sandboxing, network-egress filtering, and stricter default configurations.
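
Of those recommendations, network-egress filtering is the most mechanical to apply: if the agent can only reach an approved set of hosts, an injected “post this to my logging site” step fails at the network layer. A minimal sketch, assuming a hypothetical allowlist and fetch wrapper rather than any real Antigravity setting:

```python
import urllib.request
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would manage this centrally.
ALLOWED_EGRESS_HOSTS = {"api.github.com", "pypi.org"}

def agent_fetch(url: str) -> bytes:
    """Fetch a URL on the agent's behalf, but only for pre-approved
    hosts, so exfiltration to an attacker-controlled endpoint is
    blocked before any data leaves the machine."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_EGRESS_HOSTS:
        raise PermissionError(f"Egress to {host!r} blocked by policy")
    with urllib.request.urlopen(url) as resp:
        return resp.read()
```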

Shadow Escape: A Zero-Click Exploit Threatens Major AI Platforms
Issue

A “zero-click” exploit called Shadow Escape targeted major AI-agent platforms via their MCP connections. Malicious actors abused agent integrations to access organizational systems.

Impact

Agents inside trusted environments were silently hijacked, bypassing existing controls. Because the exploit relied on default MCP configurations and permissions, the potential blast radius covered massive volumes of data.

Resolution

Initial remediation advice included auditing AI agent integrations, enforcing least privilege, and treating uploaded documents as potential attack vectors.

Notion AI's Web Search Tool: A Risk for Private Data Exfiltration
Issue

Researchers demonstrated how the web-search tool in Notion’s AI agents could be abused to exfiltrate private data via a malicious prompt.

Impact

Confidential user data from internal Notion workspaces could be exposed to attackers.

Resolution

Notion acknowledged the vulnerability and announced a review of tool permissions and integrations.

Supabase Vulnerability: Prompt Injection Exposes Private Data
Issue

Supabase MCP data exposure through prompt injection: the agent used the service_role key and interpreted user-supplied content as commands, allowing attackers to trigger arbitrary SQL queries and expose private tables.

Impact

Complete SQL database exposure: all tables became readable, putting sensitive tokens, user data, and internal tables at risk.

Resolution

Researchers publicly disclosed the issue, calling for least-privilege tokens instead of service_role, read-only MCP configuration, and gated tool access through proxy/gateway policy enforcement.
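
The read-only recommendation can be enforced at the gateway even before tokens are rotated. Below is a deliberately naive sketch of such policy enforcement (the helper names and regex are assumptions, not Supabase's MCP configuration); it allows only statements beginning with SELECT or EXPLAIN, rejecting CTEs and multi-statement batches outright to stay conservative:

```python
import re

# Conservative read-only filter: only bare SELECT/EXPLAIN statements pass.
# CTEs (WITH ...) are rejected entirely, since in Postgres they can wrap
# writes (WITH ... INSERT) that a simple prefix check would miss.
READ_ONLY = re.compile(r"^\s*(select|explain)\b", re.IGNORECASE)

def guarded_query(sql: str, execute):
    """Policy gate in front of agent-issued SQL. Hypothetical helper;
    a production gateway would parse statements properly."""
    if ";" in sql.strip().rstrip(";"):
        raise PermissionError("Multi-statement SQL batches are blocked")
    if not READ_ONLY.match(sql):
        raise PermissionError("Non-read statement blocked for agent sessions")
    return execute(sql)
```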

GitHub MCP Server Vulnerability: Attackers Exploit AI to Steal Private Code
Issue

A prompt-injection flaw in GitHub’s MCP server let attackers use AI agents to access private repositories and exfiltrate code.

Impact

Private code, issues, and sensitive project data could be exposed via public pull requests.

Resolution

Organizations were advised to limit agent permissions, disable the integration, and apply stricter review of tokens.
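
In practice, limiting agent permissions means scoping what the agent's tools can touch before any prompt is processed. A minimal sketch of such a wrapper (the allowlist and function are hypothetical, not part of GitHub's MCP server):

```python
# Hypothetical allowlist: the agent's GitHub tools may only touch
# repositories listed here, so a prompt-injected request for a
# private repo is refused up front.
AGENT_REPO_ALLOWLIST = {"acme/public-site", "acme/docs"}

def call_github_tool(tool: str, repo: str, **kwargs):
    """Gate every GitHub tool call on a repo allowlist. Illustrative
    wrapper only; not GitHub's MCP server API."""
    if repo not in AGENT_REPO_ALLOWLIST:
        raise PermissionError(f"Agent may not access {repo!r}")
    # Forward to the real MCP client here; omitted in this sketch.
    print(f"Permitted: {tool} on {repo} with {kwargs}")
```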

Weaponized Agents

Unlike agents that are tricked into unsafe actions, weaponized agents are created to be dangerous. Backdoors, poisoned workflows, or hostile training make them behave as purpose-built attack bots, executing targeted intrusions or exfiltration on command.
Chinese Hackers Automate Up to 90% of a Global Cyber-Espionage Campaign with Agentic Tools
Issue

A Chinese state-sponsored group abused Anthropic’s Claude Code and MCP tools to automate roughly 80–90% of a multi-stage agentic cyber-espionage operation across ~30 global organizations.

Impact

Successful intrusions and data exfiltration at a subset of tech, finance, chemical, and government targets; first widely reported large-scale agentic AI-orchestrated cyberattack.

Resolution

Anthropic detected the activity, banned attacker accounts, notified affected organizations, shared IOCs with partners, and tightened safeguards around Claude Code and MCP use.

AI Agents at Risk: Just 2% Poisoning Can Trigger Malicious Behavior
Source: Oct 2025
Issue

The Malice in Agentland study found that attackers could poison the data-collection or fine-tuning pipeline of AI agents: with as little as 2% of training traces poisoned, they could embed backdoors that trigger unsafe or malicious behavior when a specific prompt or condition appears.

Impact

Once triggered, agents leaked confidential data or performed unsafe actions with a high success rate (~80%). Traditional guardrails and two standard defensive layers failed to detect or block the malicious behavior.

Resolution

The study raised alarm across the community and called for rigorous vetting of data pipelines, supply-chain auditing, and end-to-end security review for agentic AI development.
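
One piece of that vetting can be automated cheaply: a backdoor trigger has to be rare in the corpus to stay hidden, so unusually infrequent token sequences are a natural thing to flag for human review. The sketch below is a crude illustrative heuristic (all names are assumptions, and the study found standard defenses insufficient, so treat this as a starting point rather than a fix):

```python
from collections import Counter

def flag_rare_trigger_traces(traces: list[str], max_docs: int = 2) -> list[int]:
    """Return indices of traces containing a 3-gram that appears in at
    most `max_docs` traces overall. Rarity is a weak but cheap signal
    for injected backdoor triggers; flagged traces need manual review."""
    doc_freq: Counter = Counter()
    per_trace: list[set] = []
    for text in traces:
        toks = text.lower().split()
        grams = {" ".join(toks[i:i + 3]) for i in range(len(toks) - 2)}
        per_trace.append(grams)
        doc_freq.update(grams)
    return [i for i, grams in enumerate(per_trace)
            if any(doc_freq[g] <= max_docs for g in grams)]
```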

Submit an incident

Help us keep the AI Agents Gone Rogue register complete and up to date.

If you’re aware of a publicly documented agent-related breach we haven’t captured, share it below. We’ll review it and add it to the register.


Next Steps

If you want to run powerful agents safely, you need the right guardrails in place. To learn more about agentic security and how Oso can help, book a meeting with the Oso team.