Traitor to AI Safety or Its Last Guardian?
The Pentagon's blacklist, self-replicating AI, and broken security pledges, yet still #1 on the App Store. Anthropic is ahead of the curve.

5 Key Points
- Bioweapon alert issued 10 days before launch: the Claude 3.7 release was put on emergency hold after experimental results showed Sonnet could aid terrorists in creating biological weapons.
- A company valued at $380 billion, surpassing Goldman Sachs: the coding agent alone generates $2.5 billion in revenue, and software companies' market caps have shed $300 billion with each new product release.
- Deployed in the Maduro capture operation, then broke with the Pentagon: it became the US government's first classified AI, only to be branded a "supply chain security risk."
- Claude makes Claude: AI writes 70-90% of future model code, 427x faster than humans, with fully automated research projected within a year.
- Safety pledge retreats, but users surge: the company voluntarily removed a core safety promise, yet a million new users signed up in a single day after the Pentagon clash.

An emergency meeting began with a hotel bed flipped over. In February 2025, five members of Anthropic's Red Team received urgent news during a conference: controlled-experiment results suggesting that the soon-to-be-released Claude 3.7 Sonnet could aid in the creation of biological weapons. They rushed to their hotel rooms, flipped their beds over into makeshift desks, and spent hours analyzing the data. When no conclusion was reached, the company postponed the launch by ten days. Red Team leader Logan Graham (31) recalls it as "a fun and interesting day." Only someone who deals with risk daily could say that. "You think there's a room somewhere with adults who know the solution. There's no such room. The responsibility lies with you."

The philosopher came before the product. Anthropic was founded in 2021 by siblings Dario and Daniela Amodei, both OpenAI alumni.
The Amodeis' decision to strike out on their own stemmed from their sense that OpenAI was rushing to market without thorough safety review. Their new company created a "social impact" team before it even launched a product. Amanda Askell, the company's in-house philosopher, designs Claude's moral sensibility: "It's like teaching a six-year-old what goodness is. By the time they're 15, they'll be smarter than you in every way." Employees call themselves "ants," and job interviews ask, "Are you willing to lose stock value if we abandon a model because we can't guarantee its safety?" All seven co-founders have pledged to give away 80% of their assets.

But this idealism has drawn suspicion from both Silicon Valley and the Trump administration. Trump's AI czar, David Sacks, has accused Anthropic of a "fear-based regulatory capture strategy," and Elon Musk has called the company "misanthropic." Meanwhile, Nvidia's Jensen Huang, despite disagreeing with nearly all of Amodei's views, has acknowledged Claude as an "amazing" model and invested $10 billion in the company.

Programmers who no longer write code

In September 2024, Ukrainian engineer Boris Cherny created Claude Code, a game-changer. Where chatbots could only talk, this tool could access files, run programs, and write code like a human. When Cherny asked, "What music are you listening to?", Claude opened the music player, took a screenshot, and named the song. Cherny no longer writes code himself. Coding-agent revenue soared from $1 billion in late 2025 to $2.5 billion by February 2026, and by the time the non-coder plugin was released, $300 billion had vanished from software companies' market caps. CEO Amodei himself warns that AI could replace half of entry-level white-collar jobs within one to five years. Deep Ganguli, leader of the social impact team, accurately captures this tension: "It feels like we're talking incoherently."
The AI that captured Maduro and the broken alliance

At dawn on January 3, 2026, a US Army helicopter entered Venezuelan airspace and arrested President Maduro. Claude was reportedly involved in planning and executing the operation. The US government's first classified frontier AI: it was Anthropic's greatest, and most dangerous, success.

The rift began when the Department of Defense demanded a contract renegotiation to allow "all lawful uses." Amodei's red lines were twofold: first, a ban on fully autonomous lethal weapons in which AI makes the final targeting decision; second, a ban on mass surveillance of American citizens. Emil Michael of the Department of Defense countered: "We cannot operate an organization of 3 million people under these unthinkable exemptions." On February 24th, Secretary of Defense Hegseth summoned Amodei to the Pentagon and issued an ultimatum: accept the offer by 5 p.m. on February 27th or be designated a supply chain security risk. Just before the deadline, Anthropic believed it was close to a compromise, but the Pentagon demanded Amodei's presence on the call, and when he could not attend, negotiations were terminated within minutes.
