The Pentagon Wants AI with No Limits. Your Community Should Care.
A company that makes AI told the Pentagon no. The Pentagon responded by threatening to classify that company as a national security risk, the same designation reserved for foreign adversaries.
That's where we are.
Here's what happened and what it means for communities that already know they're on someone's list.
What Actually Happened
The timeline is short. The implications are not.
- July 2025: Anthropic, OpenAI, Google, and Elon Musk's xAI each received contracts worth up to $200M from the Department of Defense. The task: customize AI for military use.
- January 9, 2026: The DoD issued its AI Acceleration Strategy. It mandated "all lawful uses," meaning every contracted AI model must be available for any purpose the military considers legal. No restrictions.
- January 2026: Claude, Anthropic's AI model, was reportedly used in an operation to capture Venezuelan President Nicolás Maduro, routed through defense contractor Palantir. Anthropic said all uses fell within its policies.
- February 15-16: The Pentagon threatened to label Anthropic a "supply chain risk." That designation would require every DoD contractor, thousands of companies, to certify they don't use Claude.
- February 19: Pentagon CTO Emil Michael said Anthropic's usage limits were "not democratic."
- February 23: Anthropic CEO Dario Amodei met with Defense Secretary Pete Hegseth at the Pentagon. Hegseth gave Amodei until Friday to sign a document granting full, unrestricted military access to Claude. Amodei didn't sign.
- xAI agreed to the Pentagon's "all lawful use" terms. OpenAI and Google are still negotiating.
What Are the Red Lines?
Anthropic has two stated limits it won't cross. Both matter to this conversation.
The first is mass surveillance of Americans. Anthropic prohibits using Claude for non-consensual tracking or bulk monitoring of US citizens. No sweeping social media for targets, no aggregating location data on populations, no automated profiling at scale. This is the one the Pentagon most wants removed.
The second is fully autonomous weapons. Anthropic requires human oversight in military targeting decisions. Claude cannot be used to autonomously select and engage targets without a person in the loop. The DoD wants that restriction gone too.
These aren't philosophical positions. They're contractual limits the Pentagon is actively trying to eliminate.
Why Is the Pentagon Pushing Back?
The DoD's stated argument is consistency. They want every contracted AI model available for "all lawful use cases" without having to negotiate restrictions model by model.
Emil Michael's framing is worth sitting with: he said it's "not democratic" for a private company to decide what the military can and cannot do; in his view, Congress sets those limits, not corporations.
That argument has a logic to it. It also conveniently ignores that Congress has not voted to authorize AI-powered mass surveillance of Americans. The "all lawful uses" framing lets the Pentagon define what's lawful in real time.
The Defense Production Act is reportedly on the table. That's the mechanism that lets the government compel companies to produce what it needs during a national emergency. Using it to force an AI company to drop civil liberties guardrails would be new territory.
The "supply chain risk" threat is economic pressure. If every DoD contractor has to avoid Claude, Anthropic loses a substantial share of its revenue base. The goal is to make the cost of saying no too high to sustain.
One note on Anthropic: they took $200M from the Pentagon. They are not a principled tech company standing up to power. They are a company whose self-interest (maintaining credibility with enterprise customers and regulators) currently lines up with a limit that also happens to protect communities from surveillance. Those are different things. Both can be true.
Why This Matters to Communities Like Ours
COINTELPRO ran from 1956 to 1971. The FBI used it to surveil, infiltrate, and disrupt Black civil rights organizations, anti-war groups, socialist and communist organizations, and LGBTQ+ communities. The program didn't require evidence of crimes. It required suspicion of dissent.
LGBTQ+ organizations were monitored for decades on the premise that homosexuality was a security threat. The FBI maintained files on gay rights organizations well into the 1980s. ICE has used facial recognition against immigrant communities, often pulling from databases built without consent. DHS fusion centers have monitored activist groups by aggregating their social media activity and communications.
None of this is conspiracy. It's documented. Congressional investigations, FOIA requests, and court cases have established the record.
Historical surveillance programs were limited by cost and labor. Monitoring a group required people: agents, informants, analysts. That friction didn't stop it, but it constrained scale.
AI removes that friction almost entirely.
A system that can scan social media, cross-reference location data, identify social networks, and flag community members for review can do in seconds what used to take weeks. It doesn't get tired. It doesn't need warrants for data that's already been aggregated. And it can operate at a scale that makes the COINTELPRO era look targeted by comparison.
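To make that lack of friction concrete, here is a minimal sketch in stdlib Python, using entirely hypothetical data, of one of the simplest inferences a surveillance system can make: guessing someone's likely home address from aggregated location pings. No warrant, no analyst, just a frequency count of where a device sits at night.

```python
from collections import Counter

# Hypothetical location pings: (hour_of_day, latitude, longitude).
# In practice, data like this is bought from brokers, not subpoenaed.
pings = [
    (23, 40.7128, -74.0060), (2, 40.7128, -74.0061), (3, 40.7129, -74.0060),
    (1, 40.7128, -74.0060), (14, 40.7580, -73.9855), (15, 40.7580, -73.9855),
    (10, 40.7580, -73.9856), (22, 40.7128, -74.0060),
]

def likely_home(pings, night=(21, 6), precision=3):
    """Return the most frequent coarse location during nighttime hours.

    Rounding coordinates to ~3 decimal places groups pings into blocks
    roughly 100m across, which is enough to identify a residence.
    """
    start, end = night
    nocturnal = [
        (round(lat, precision), round(lon, precision))
        for hour, lat, lon in pings
        if hour >= start or hour < end  # overnight window wraps past midnight
    ]
    return Counter(nocturnal).most_common(1)[0][0]

print(likely_home(pings))  # prints (40.713, -74.006)
```

That is a dozen lines and it runs in milliseconds per person. Scale the same pattern across millions of devices, add social-graph and keyword matching, and the COINTELPRO-era constraint of agent hours simply disappears.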
The communities Tactical Snowflakes exists to serve (LGBTQ+ people, immigrants, people of color, political minorities) are the same communities that domestic surveillance programs have targeted first. Every time. That's not coincidence. It's the pattern.
This Isn't New — It's a Pattern
The surveillance infrastructure already exists. This fight is about whether AI gets bolted onto it with zero friction.
- Facial recognition has been deployed against protesters. Portland, Baltimore, and other cities used it during demonstrations before some imposed local bans.
- DHS fusion centers, joint federal-state intelligence operations, have monitored activists, journalists, and community organizers.
- The FBI's social media monitoring programs scan public posts for keywords tied to domestic extremism. The definition of that category shifts with each administration.
- Private data brokers sell location data, purchase history, and social graphs to government agencies without any requirement that the agency obtain a warrant.
Anthropic's mass surveillance limit is one contractual provision at one company. It is not a structural protection. If it gets removed through Pentagon pressure, regulatory action, or a future contract renegotiation, there is nothing else in place to fill the gap.
What You Can Do With This Information
This is not a call to contact your representative. This is practical.
Physical security and digital security are the same thing. The same communities that benefit from knowing how to protect themselves physically also benefit from understanding how they can be identified, tracked, and targeted before anything physical happens.
Think about what data you generate and who can access it: your location history, your social graph, which apps know where you go and who you're with. That's the same situational awareness you'd apply to any other security question.
Community defense is about what you carry and what they know about you. The communities most likely to need physical self-defense are the same communities most likely to be under digital surveillance. Those two facts are connected.
Anthropic didn't sign. That's good for now. Watch what happens Friday.