ChainThink

Stay ahead, master crypto insights

Palantir CEO's Veiled Criticism of Anthropic: "It's Insane for Them to Let the Military Down"; Amodei Memo Fires Back: Palantir's Security Layer Is All Show, No Substance

2026-03-05 12:42

As monitored by 1M AI News, the U.S. Department of Defense's designation of Anthropic as a "supply chain risk" is now affecting critical government procurement channels. Over the past year, Anthropic has delivered AI services to U.S. defense and intelligence agencies through Palantir Technologies: the Pentagon deploys AWS-hosted Claude models integrated with Palantir's software to identify patterns in vast volumes of classified data and support decision-making. If the supply chain risk classification takes effect, Palantir will have to stop using Claude in its defense operations. U.S. government contracts account for approximately 42% of Palantir's revenue, nearly half of its $4.5 billion annual run-rate. Some Palantir software was specifically optimized for Claude, and switching to an alternative model provider is estimated to take about two weeks. Sources indicate that even after switching, Palantir expects its contract revenue to remain roughly unchanged. For Anthropic, this business represents only a small share of total revenue, which is projected to reach as much as $18 billion this year.

On Tuesday, Palantir CEO Alex Karp delivered a thinly veiled critique of Anthropic, without naming the company, at the Washington Defense Tech Summit hosted by Andreessen Horowitz. He warned Silicon Valley against antagonizing the U.S. military: "If you believe you can seize all white-collar jobs and then betray the military, and don't think this will eventually lead to the nationalization of our technology, then you're absolutely retarded. That's exactly where this path ends."

Amodei's internal memo, meanwhile, criticized Palantir from another angle. He revealed that during Pentagon negotiations, Palantir pitched Anthropic a "classifier" system that it claimed could use machine learning to detect when a model crosses ethical red lines. Amodei dismissed this as "about 20% real, 80% safety theater," borrowing a term from security discourse for performative measures that create the illusion of safety without actual efficacy. His reasoning: a model cannot assess its own broader operational context, such as whether human operators are in the loop (an autonomous-weapons concern), whether the data it analyzes originates from foreign sources or from U.S. citizens, or whether that data was obtained with consent or through gray-market channels (a mass-surveillance concern); and given how frequent and easy jailbreak attacks are, such safeguards are fundamentally inadequate.

Amodei went further, calling Palantir's proposed safety layer "almost entirely cosmetic," and characterized Palantir's stance toward Anthropic as: "You have some disgruntled employees; we'll give you something to placate them, or make what's happening invisible to them. That's our service." He emphasized that the Pentagon, Palantir, and Anthropic's own political advisors all assumed the core issue Anthropic faced was merely managing employee sentiment. Notably, OpenAI has not participated in Pentagon-related initiatives through Palantir.

Disclaimer: Contains third-party opinions, does not constitute financial advice

Recommended Reading
Claude is the core AI tool used by the U.S. military in operations against Iran, having targeted over a thousand objectives on the first day through the Maven system.
Altman faces backlash as rivalry between two AI giants escalates
Trump Forced to Escalate Military Deployment in the Middle East, Conflict Window May Extend to 100 Days
The U.S. Military States AI Tools Played a Crucial Role in Operations Against Iraq
OpenAI Annual Revenue Exceeds $25 Billion, Anthropic Reports $19 Billion, Closing the Gap
Anthropic CEO: Failure to Praise or Donate to Trump Is Root Cause of Tensions with Current U.S. Government
Anthropic CEO Has Resumed Talks with the U.S. Department of Defense on an AI Accord