A Meta engineer asked an internal AI agent for help with an engineering problem. The agent gave advice. The engineer followed it. For two hours, a large volume of sensitive user and company data was exposed to Meta employees who shouldn't have seen it. Meta has confirmed the incident and says no data was "mishandled" — but the exposure still triggered a major internal security alert.
This is the part worth sitting with: the engineer didn't do anything wrong in the traditional sense. They asked a tool for help and trusted the answer, which is exactly how AI agents are meant to be used. The failure mode here isn't human error — it's an AI confidently giving advice with serious downstream consequences the model neither anticipated nor flagged.
Meta isn't alone. Amazon has had at least two outages this year tied to internal AI deployments. The pattern is becoming clear: as companies race to embed AI agents into internal workflows, they're discovering that these tools can cause real operational damage without anyone intending it.