2026-03-05 12:42
As monitored by 1M AI News, the U.S. Department of Defense's designation of Anthropic as a "supply chain risk" is now rippling through critical government procurement channels. Over the past year, Anthropic has delivered AI services to U.S. defense and intelligence agencies via Palantir Technologies: the Pentagon deploys AWS-hosted Claude models integrated with Palantir's software to identify patterns in vast volumes of classified data and support decision-making. If the supply chain risk classification takes effect, Palantir will be forced to stop using Claude in its defense operations. Approximately 42% of Palantir's revenue, nearly half of its $4.5 billion annual run-rate, comes from U.S. government contracts. Certain Palantir software was optimized specifically for Claude, and transitioning to alternative model providers is estimated to take around two weeks. Sources indicate that even after switching, Palantir expects to maintain roughly equivalent contract revenue. The segment represents only a minor portion of Anthropic's total revenue, which is projected to reach up to $18 billion this year.
On Tuesday, Palantir CEO Alex Karp delivered a pointed critique of Anthropic, without naming the company, at the Washington Defense Tech Summit hosted by Andreessen Horowitz. He warned Silicon Valley against antagonizing the U.S. military: "If you believe you can seize all white-collar jobs and then betray the military, and don't think this will eventually lead to the nationalization of our technology, then you're absolutely retarded. That's exactly where this path ends."
Amodei's internal memo, meanwhile, criticized Palantir from another angle. He revealed that during Pentagon negotiations, Palantir pitched Anthropic a "classifier" system that would purportedly use machine learning to detect when a model crosses ethical red lines. Amodei dismissed it as "about 20% real, 80% safety theater," borrowing a term from security discourse for performative measures that create the illusion of safety without actual efficacy. His reasoning: a model cannot contextualize its own broader operational environment. It cannot tell whether human operators are in the loop (an autonomous-weapons concern), whether the data it analyzes originates from foreign sources or from U.S. citizens, or whether that data was obtained with consent or through gray-market channels (a mass-surveillance concern). Given how frequent and easy jailbreak attacks are, he argued, such safeguards are fundamentally inadequate.
Amodei went further, asserting that Palantir's proposed safety layer is "almost entirely cosmetic," and characterized Palantir's pitch to Anthropic as: "You have some disgruntled employees; we'll give you something to placate them, or make what's happening invisible to them. That's our service." He emphasized that the Pentagon, Palantir, and Anthropic's political advisors all assumed the core issue Anthropic faced was merely managing employee sentiment. Notably, OpenAI has not participated in Pentagon-related initiatives through Palantir.