Anthropic's CEO articulates the company's ethical boundaries on Claude deployment to the U.S. Department of War, refusing to enable mass domestic surveillance or fully autonomous weapons despite government pressure and threats. The statement grounds its refusal in commitments to privacy, human liberty, and responsible technology governance—principles central to the UDHR.
This is such a depressing read. What is becoming of the USA? Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.
As someone who is potentially their client and not domestic, it's really reassuring that they have no concerns with mass spying on peaceful citizens of my particular corner of the world.
I was reading halfway thru and one line struck a nerve with me:
> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
So not today, but the door is open for this after AI systems have gathered enough "training data"?
Then I re-read the previous paragraph and realized it's specifically only criticizing
> AI-driven domestic mass surveillance
And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance
A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War
All of these problems are downstream of the Congress having thoroughly abdicated its powers to the executive.
The military should be reined in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.
Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after having requisitioned the data centers.
To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.
> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
This contradictory messaging puts to rest any doubt that this is a strong-arm by the government to allow any use. I really like Anthropic's approach here, which is to state, in turn, that they're happy to help the government move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.
I was concerned originally when I heard that Anthropic, who often professed to being the "good guy" AI company who would always prioritize human welfare, opted to sell priority access to their models to the Pentagon in the first place.
The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.
I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)
That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.
But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and are genuinely motivated by trying to make the transition to powerful AI go well.
I'd be amused beyond all reason if we saw this chain of events:
- Anthropic says "no"
- DoD says "ok you're a supply chain risk" (meaning many companies with gov't contracts can no longer use them)
- A bunch of tech companies say "you know what? We think we'd lose more money from falling behind on AI than we'd lose from not having your contracts."
Bonus points if it's some of the hyperscalers like AWS.
Hilarity ensues as they blow up (pun intended) their whole supply chain and rapidly backtrack.
Google, OpenAI Employees Voice Support for Anthropic in Open Letter. We Will Not Be Divided: https://notdivided.org/
-----
The Department of War is threatening to
- Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"
- Label the company a "supply chain risk"
All in retaliation for Anthropic sticking to their red lines to not allow their models to be used for domestic mass surveillance and autonomously killing people without human oversight.
The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused.
They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.
We are the employees of Google and OpenAI, two of the top AI companies in the world.
We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
Agree fully with the main points of this statement. Mass domestic surveillance is the hallmark of an authoritarian and undemocratic state. That such a state holds 'votes' regularly does not detract from the chilling effect on public discourse and politics caused by mass surveillance.
The guardrail on fully automated weapons makes perfect sense, and hopefully becomes standardised globally.
> mass surveillance presents serious, novel risks to our fundamental liberties.
Doesn't matter, really. The genie is out of the bottle and I'm strongly confident US administration will find a vendor willing to supply models for that particular usage.
Same as saying "Look I sold nukes to USA to protect democracy, but we put 2 rules about usage". Everyone got nukes and nobody can enforce the rules. Just whitewashing of pure business greed, using terms like national security, democracy etc.
Does the US really have a Department of War? Is this Anthropic's way of showing how f&^^& up they are in the Department of Defense, or did they rebrand it back to the old WWI/WWII days?
Something feels off about this announcement. Anyone else?
Credit where it's due, going on record like this isn't easy, particularly when facing pressure from a major government client. Still, the two limits Anthropic is defending deserve a closer look.
On surveillance: the carve-out only protects people inside the US. Speaking as someone based in Europe, that's a detail that doesn't go unnoticed. On autonomous weapons: realistically, current AI systems aren't anywhere near capable enough to run one independently. So that particular line in the sand isn't really costing them much.
What I find more candid is actually the revised RSP. It draws a clearer picture of where Anthropic's oversight genuinely holds and where it starts to break down as they race to stay at the cutting edge. The core tension, trying to be simultaneously the most powerful and the most principled player in the room, doesn't have a neat resolution.
This statement doesn't offer one either. But engaging with the question openly, even without all the answers, beats silence and gives the rest of us something real to push back on.
While I agree the name change has not (yet) been made with the proper authority, I'm quite partial to the name and prefer to use it despite its prematurity. I think it does a better job of communicating the types of work actually done by the department and rightly gives people pause about their support of it. Though I'm sure that wasn't the administration's intention.
This isn't a one-election thing. It's going to be a generational effort to fix what these people are breaking more of every day. I hope I live to see it come to some kind of fruition - I recently turned 50.
It SHOULD be called the Department of War, as it was originally, since it makes its function clear. We are a society that has euphemized everything and so we no longer understand anything.
It's really not the right thing to be bikeshedding. The people calling the shots call themselves the Department of War. No need to die on hills that don't matter.
I think it goes without saying that once the systems are reliable, fully autonomous weapons will be unleashed on the battlefield. But they have to have safeguards to ensure that they don't turn on friendly forces and only kill the enemy. What Anthropic is saying is that, right now, they can't provide those assurances. When they can, I suspect those restrictions will be relaxed.
During a war with national mobilization, that would make sense. Or in a country like China. This kind of coercion is not an expected part of democratic rule.
The problem is that this is a decision that costs money. Relying on a system that makes money by doing bad things to do good things out of a sense of morality when a possible outcome is existential risk to the species is a 100% chance of failure on a long enough timeline. We need massive disincentives to bad behavior, but I think that cat is already out of its bag.
> opted to sell priority access to their models to the Pentagon
The bottom of all of this is that companies need to profit to sustain themselves. If "y'all" (the users) don't buy enough of their products, they will seek new sources of revenue.
This applies to any company who has external investors and shareholders, regardless of their day 0 messaging. When push comes to shove and their survival is threatened, any customer is better than no customer.
It's very possible that $20 Claude subscriptions aren't delivering on multiple billions in investment.
The only companies that can truly hold to their missions are those that (a) don't need to profit to survive, e.g. lifestyle businesses of rich people, or (b) are wholly owned by their owners and employees, with no fiduciary duty to outside investors.
AI was always particularly well suited to military use and mass surveillance. It can take huge amounts of raw data, parse it for you, and provide useful information from it. And let's face it, companies exist for profit.
It's also downstream of voters who voted in a president who promised to be dictatorial after failing at an attempted insurrection. We need to deprogram like 70M very confused people.
I've had so much abuse thrown at me on here for saying this very thing over the last few years. I used to be friends with Jack back in the day, before this AI stuff even kicked off. Once you know who people really are inside, it's easy to know how they will act when the going gets rough. I'm glad they are doing the right thing, but I'm not at all surprised, nor should anyone be. Personally I believe they would go to jail/shut down/whatever before they do something objectively wrong.
I can imagine that this will be the logical conclusion for many companies, I thought the same thing too, if it's too hard in the USA, they will just move.
> this is a strong-arm by the government to allow any use
It’s a flippant move by Hegseth. I doubt anyone at the Pentagon is pushing for this. I doubt Trump is more than cursorily aware. Maybe Miller got in the idiot’s ear, who knows.
The whole article reads as virtue signaling to me. Anthropic already has large defense contracts. Their models are already being used by the military. There's really no statement here.
> The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that.
I strongly doubt this is true. I think if you gave the US government total control over Anthropic's assets right now, they would utterly fail to reach AGI or develop improved models. I doubt they would be capable even of operating the current gen models at the scale Anthropic does.
> Or the models could be developed internally, after having requisitioned the data centers.
I would bet my life savings the US government never produces a frontier model. Remember when they couldn't even build a proper website for Obamacare?
The private corporation is not dictating to the military, it's setting the terms of the contract. The military is free to go sign a contract with a different company with different terms, but they didn't, and now they want to change the terms after the contract was already signed. No mythologization needed, just contract law.
First of all, there's no such thing as "Department of War". A department name change is legal/binding only after it's approved by the Senate. Senator Kelly is still calling it DoD (Department of Defense).
> Mass domestic surveillance.
Since when has DoD started getting involved with the internal affairs of the country?
The name is extremely off-putting, but I can see how they would want to be diplomatic toward the administration in using their chosen name. Save the push-back for where it really matters.
I read the statement twice. I can't understand how you landed on "take my money".
Looks like an optics dance to me. I've noticed a lot of simultaneous positions lately, everyone from politicians and protesters, to celebrities and corporations. They make statements both in support of a thing, and against that same thing. Switching up emphasis based on who the audience is in what context. A way to please everyone.
To me the statement reads like Anthropic wants to be at the table, ready to talk and negotiate, to work things out. Don't expect updated bullet-point lists about how things are worked out. Expect the occasional "we are the goodies" statements, however.
Being labeled a supply chain risk means that companies with government contracts cannot use Anthropic products _for those government contracts_, not that they have to cease all usage of Anthropic products. Reporters seem to be reporting on this incorrectly.
Statement centers on privacy as fundamental democratic right: explicitly states mass surveillance 'is incompatible with democratic values,' identifies AI-specific privacy threats (automated assembly of scattered data at scale), cites bipartisan Congressional opposition to warrantless collection, grounds refusal in 'serious, novel risks to our fundamental liberties.'
FW Ratio: 63%
Observable Facts
- Page explicitly states 'mass domestic surveillance is incompatible with democratic values'
- Page identifies 'AI-driven mass surveillance presents serious, novel risks to our fundamental liberties'
- Page explains: 'Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life—automatically and at massive scale'
- Page references 'bipartisan opposition in Congress' to warrantless data collection
- Page notes the Intelligence Community has acknowledged that such collection 'raises privacy concerns'
Inferences
- Statement grounds privacy protection in explicit democratic values framework
- Recognition of AI-specific privacy threats distinct from pre-AI surveillance models
- Alignment with established democratic consensus (bipartisan Congressional opposition)
Statement's core argument: some uses 'undermine, rather than defend, democratic values' and fall outside bounds of what technology 'can safely and reliably do.' Explicitly argues against permitting uses that would destroy fundamental rights.
FW Ratio: 75%
Observable Facts
- Page states 'in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values'
- Page argues some uses are 'simply outside the bounds of what today's technology can safely and reliably do'
- Page asserts mass surveillance and unrestricted autonomous weapons should be excluded from military contracts
Inferences
- Clear articulation that some government demands would destroy fundamental rights and therefore should be refused
Statement explicitly prioritizes life, liberty, and security: argues fully autonomous weapons lack reliability for safe deployment, advocates human judgment in military decisions, frames these safeguards as necessary to 'defend democratic values.'
FW Ratio: 57%
Observable Facts
- Page states 'frontier AI systems are simply not reliable enough to power fully autonomous weapons'
- Page argues 'fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit'
- Page proposes 'proper guardrails' and 'oversight' for autonomous weapons deployment
- Page frames mass surveillance as incompatible with 'fundamental liberties'
Inferences
- Company prioritizes human life and liberty over deployment of unreliable systems
- Recognition that security includes protecting citizens from both external threats and government overreach
- Structural practice of refusing deployment aligns with liberty-protective values
Statement explicitly asserts corporate duty to refuse harmful uses: 'we cannot in good conscience accede to their request.' Frames corporate responsibility as duty-bearing beyond legal obligation.
FW Ratio: 60%
Observable Facts
- Page states 'we cannot in good conscience accede to their request'
- Page reports Anthropic 'chose to forgo several hundred million dollars in revenue' to prevent CCP-designated Military Companies from accessing Claude
- Page emphasizes corporate agency: 'Anthropic has therefore worked proactively' and 'acted to defend America's lead in AI'
Inferences
- Company articulates explicit duty to refuse harmful uses even when economically costly
- Recognition that private entities bear responsibilities beyond legal compliance
Statement references 'defending democratic values' and 'fundamental liberties,' aligning with Preamble's emphasis on human dignity and freedom-based order.
FW Ratio: 50%
Observable Facts
- Page asserts Anthropic's commitment to 'defending United States and other democracies'
- Page explicitly invokes 'fundamental liberties' as justification for refusing surveillance deployment
Inferences
- Framing corporate ethical position as defense of Preamble's foundational values
- Implicit acknowledgment that technology governance must serve human dignity
Statement explicitly identifies 'associations' as protected interest threatened by mass surveillance, and frames this as incompatible with democratic values.
FW Ratio: 67%
Observable Facts
- Page states surveillance of 'associations' is problematic and should require legal safeguards
- Page identifies monitoring of 'associations' as incompatible with democratic values
Inferences
- Clear recognition that surveillance threatens freedom of association specifically
Statement's concern about fully autonomous weapons relates implicitly to preventing indiscriminate harm and cruel treatment through unreliable targeting.
FW Ratio: 50%
Observable Facts
- Page argues autonomous weapons lack 'proper guardrails' and reliable judgment for selecting targets
Inferences
- Implicit concern that automation removes safeguards against indiscriminate harm
Statement identifies mass surveillance as threat to equal treatment: notes government can collect detailed records 'without obtaining a warrant,' implying unequal application of legal protection.
FW Ratio: 50%
Observable Facts
- Page notes 'government can purchase detailed records of Americans' movements, web browsing, and associations from public sources without obtaining a warrant'
Inferences
- Statement recognizes that warrantless data collection violates equal protection principles
Statement identifies surveillance of 'associations' as privacy threat, which relates to freedom of conscience: ability to hold beliefs without monitoring.
FW Ratio: 50%
Observable Facts
- Page identifies 'detailed records...associations' as explicit surveillance target
Inferences
- Mass surveillance of associations enables monitoring of conscience and belief
Statement identifies mass surveillance as enabling mechanism for arbitrary detention: 'Powerful AI makes it possible to assemble this scattered data into a comprehensive picture of any person's life.'
FW Ratio: 50%
Observable Facts
- Page describes mass surveillance capability to create 'comprehensive picture of any person's life—automatically and at massive scale'
Inferences
- Recognition that surveillance infrastructure enables arbitrary state action
Statement's framing of technology governance as defense of 'democratic advantage' against 'autocratic adversaries' relates implicitly to international order protective of democratic rights.
FW Ratio: 50%
Observable Facts
- Page references commitment to 'democratic advantage' and defeating 'autocratic adversaries'
Inferences
- Implicit commitment to international order that protects democratic values
Anthropic's refusal to deploy mass surveillance capabilities demonstrates structural commitment to privacy protection. The company forwent revenue to prevent CCP access to Claude, preventing uses that could enable surveillance.
Anthropic's practice of forgoing 'several hundred million dollars in revenue' to prevent CCP access to Claude demonstrates structural commitment to ethical responsibilities over profit maximization.
Anthropic's refusal to deploy unreliable autonomous weapons or enable mass surveillance demonstrates structural commitment to securing life and liberty over short-term revenue.
AI-driven mass surveillance presents serious, novel risks to our fundamental liberties; frontier AI systems are simply not reliable enough to power fully autonomous weapons
Flag waving: defend the United States and other democracies, and to defeat our autocratic adversaries; we chose to forgo several hundred million dollars in revenue; defend America's lead in AI
Loaded language: incompatible with democratic values; fundamental liberties; democratic advantage