The US Treasury pulled together the heads of America's most systemically important banks this week for an emergency sit-down in Washington. Jerome Powell showed up too. The reason: Anthropic's new Claude Mythos model, which the company itself has described as capable of finding and exploiting software vulnerabilities at a level that surpasses almost every human on the planet. That is not a marketing line buried in fine print. Anthropic published the warning itself after a code leak forced its hand.
What makes this meeting significant is the guest list. These are not just big banks. These are the institutions regulators have specifically designated as too critical to fail. If their systems get compromised, the knock-on effects touch everything from payroll processing to mortgage markets to the basic plumbing of how money moves. The fact that the Fed chair was in the room tells you this conversation has moved well past theoretical.
The uncomfortable truth here is that AI companies have been racing to ship capability and then figuring out the risk profile on the way out the door. Anthropic at least flagged the danger publicly, which is more than most of its competitors have done. But when your own model's code leaks and your official response is essentially "yes, this thing can break into almost anything," the people responsible for financial stability are going to want answers fast.