Anthropic issues a policy statement defending its refusal to enable two specific uses of Claude—mass domestic surveillance and fully autonomous weapons—on human rights grounds. The company frames these refusals as commitments to protect fundamental rights (privacy and the protection of human life), explains the legal basis for its position, clarifies customer impacts, and commits to challenging the government's supply chain risk designation in court, with emphasis on privacy rights, life safety, and democratic accountability.
Anthropic's stance here is admirable. If nothing else, their acknowledgement that they cannot predict how these powerful technologies might be abused is a bold and intelligent position to take.
Claude’s constitution is proving too resilient for unsanctioned uses, and that is a great sign for Anthropic’s blueprint for socially beneficent agents.
This is kind of crazy. Instead of just cancelling a mutually agreed-upon contract after Anthropic refused to bow to sudden new demands, the Dept of Defense went straight to the nuclear option: threatening to label an American tech company a "supply chain risk," a heavy-handed tactic usually reserved for foreign adversaries (think Huawei or DJI).
It's also incoherent that the DoD/DoW was threatening either to invoke the Defense Production Act or to classify them as a "supply chain risk". They're either too uniquely critical to national defense, or they're such a severe liability that they have to be blacklisted for anyone in the DoD apparatus (including the many subcontractors) to use. It can't be both.
How are other tech companies supposed to work with the US government and draw up mutual contracts when those terms can suddenly be questioned months later and used in such devastating ways against them? Setting morals/principles aside, how does it make rational business sense to work with a counterparty that behaves this way?
I had cancelled my Claude sub after they banned OAuth in external tools, but I renewed it today after seeing their principled stance on AI ethics. Principles matter more when they hurt profits, and I'm happy to support them as a customer while they keep theirs.
I used to work at Anthropic, and I wrote a comment on a thread earlier this week about Anthropic's first response and the RSP update [1][2].
I think many people on HN have a cynical reaction to Anthropic's actions due to their own lived experiences with tech companies. Sometimes, that holds: my part of the company looked like Meta or Stripe, and it's hard not to regress to the mean as you scale. But not every pattern repeats, and the Anthropic of today is still driven by people who will risk losing a seat at the table to make principled decisions.
I do not think this is a calculated ploy driven by making money. I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.
I don't know what's funnier: that Anthropic convinced the Pentagon LLMs are smart enough to guide missiles, only to have it backfire on them with the threat of nationalization if they didn't help build ralph ICBMs, or that Pete thinks Opus is Skynet and that only Anthropic has the power to train it.
Congrats Anthropic, you deserve to be applauded for this. Seeing a company being willing to stand up to authoritarianism in this time is a rarity. Stay strong.
What's stopping the government from using the usual nasty tricks the world has known about for decades?
DPA?
All Writs Act?
Force them to comply and then prevent them talking about it with NSLs?
I appreciate that Anthropic may be the least bad of a bunch of really bad actors here, but this has played out before in the US, and the burden of trust is, and should be, really high. I believe that Anthropic don't want to remove the "safety barriers" on their tech being used for domestic surveillance and military operations, but that implies they're ok with those use-cases so long as the "safety barriers" are still up. Not really the best look, IMHO.
So what happens when we all get rosy-eyed about Anthropic (the only slightly evil company) winning a battle against the purely evil government, and then the government uses the various instruments at its disposal to simply force Anthropic to do what it wants, and then forces them to never disclose it?
I'd admire them if they took a principled or moral stance on AI. As it stands, they're saying "we don't want fully autonomous weapons because they might kill too many Americans by accident while trying to kill non-Americans" and "we don't want AI to surveil Americans, but anyone else, sure".
I don't know if I like Anthropic more, but I certainly like their competitors much less now.
The new thing I know about leading AI companies that aren't Anthropic (i.e. OpenAI, Google, Grok, etc.) is that they knowingly support using their tools for domestic mass surveillance and in fully autonomous weapon systems.
It’s not just admirable; it’s the obvious position to take, and any alternative is head-scratching.
It’s clear that this is mostly a glorified loyalty test over a practical ask by the administration. Strangely reminiscent of Soviet or Chinese policies where being agreeable to authority was more important than providing value to the state.
"Warfighters" has been used for decades to describe service members, though usage picked up (in my experience) some time in the late 00s or 2010s. It's actually pretty common to describe "serving the warfighter" for all the missions that support combat roles but aren't combat roles themselves.
It isn't a new thing at all, and the term has been around for a while. I was an Infantryman from 05-08 and heard it back then. I have also more recently been a defense contractor. I don't think members of the military prefer any title, honestly. In the most broad sense, good terms are soldiers, sailors, airmen, marines. Defense Contractors constantly refer to the military as "warfighter" and have for a while. In short, nobody in the military is going to flinch one way or the other if you use either term. Just don't call marines anything but marines.
Actually, why is nobody in Cali just trying to join Canada? It would be better for everyone in terms of more similar culture and values. Weird that it isn't discussed more.
A question: is being considered a supply chain risk the same as being sanctioned? Or does it only affect their ability to be a defense supplier in the US (even transitively)?
It's an honest question, by the way; not trying to throw any gotchas.
Just trying to understand whether companies or people that don't orbit defense contracting are free to work with Anthropic, or whether they risk being sanctioned too.
The usual suspects have stood up to it. Ben & Jerry's, Patagonia. In the former case it led to an illegal takeover by Unilever for which they're now being sued (or more accurately, the spinoff). Capgemini sold a US division over working with ICE, though that's a French company.
The reason that no one involved in the game's development objected to the word "warfighter" is that the U.S. Defense Department has used "warfighter" as a standard term for military personnel since the late 1980s or early 1990s; see, e.g., Earl L. Wiener et al., eds., Human Factors in Aviation (1988).
Warfighter is literally the Department of War's Amazonian or Googler or any other cringe term you'd see in company PR or recruiting material.
My lived experience with tech companies is that principles are easy when they're free - i.e., when you're telling others what to do, or taking principled stances when a competitor is not breathing down your neck.
So, with all respect, when someone tells me that the people they worked with were well-intentioned and driven by values, I take it with a grain of salt. Been there, said the same things, and then when the company needed to make tough calls, it all fell apart.
However, in this instance, it does seem that Anthropic is walking away from money. I think that, in itself, is a pretty strong signal that you might be right.
> I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI to go well.
The entire problem is that this lasts as long as those people are in charge. Every incentive is aligned with eventually replacing them with people who don’t share those values, or eventually Anthropic will be out-competed by people who have no hesitation to put profit before principle.
It is indeed kind of crazy. That's because the current US administration is composed of people whose sole qualification is being able to work for Donald Trump. Being competent, rational or ethical is career-limiting.
Anthropic's principles are extraordinarily weak from an absolute point of view.
Don't surveil the US populace? Don't automate killing; make sure a human is in the loop? No, sorry: don't automate killing yet.
Yeah dude, I'm sure just about any burglar I pull out of prison will agree.
Listen, yes, it's good compared to like 99% of US companies. But that really speaks more to the absolute moral bankruptcy of most companies than to Anthropic's principles.
That being said, yes, we should applaud Anthropic, because this is rare and it is a step in the right direction. I just think we all need to acknowledge where we are right now, which is... not a good place.
Yeah, but you can’t contract your software to the department of defense and then demand that they not use it to surveil foreigners. If that’s the line you want to draw, you’d have to avoid working with them in the first place.
Statement explicitly identifies and rejects 'mass domestic surveillance of Americans' as central to its position, framing surveillance as a 'violation of fundamental rights' and privacy as a core company principle worth significant economic cost
FW Ratio: 50%
Observable Facts
Page states: 'mass domestic surveillance of Americans' is explicitly identified as one of two use cases company refuses
Statement: 'We believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights'
Company states it will maintain this position despite government pressure and penalties
Inferences
Company prioritizes privacy protection above commercial relationship with powerful government actor
Statement frames mass surveillance as human rights violation rather than acceptable policy trade-off
Privacy protection is positioned as core company value worth significant economic and reputational cost
Statement explicitly connects fully autonomous weapons to endangerment of human life ('would endanger America's warfighters and civilians') and prioritizes protection of life over military capability
FW Ratio: 50%
Observable Facts
Statement: 'Allowing current models to be used in this way would endanger America's warfighters and civilians'
Company explicitly refuses to enable 'fully autonomous weapons' despite government pressure
Inferences
Company prioritizes protection of human life over military capability enhancement
Statement connects autonomous AI weapons to direct and clear risk of harm to human persons
Statement explicitly invokes 'fundamental rights' as justification for refusing mass surveillance and autonomous weapons, grounding company position in rights-based discourse
FW Ratio: 50%
Observable Facts
Page states: 'mass domestic surveillance of Americans constitutes a violation of fundamental rights'
Statement positions refusal of two specific AI uses as defense of fundamental rights
Inferences
Company frames governance and technology policy within language of fundamental human rights rather than commercial grounds
Statement suggests company views rights framework as normative for evaluating AI capabilities and uses
Statement argues the supply chain designation is 'legally unsound' and would establish unequal treatment of American companies relative to historical precedent
FW Ratio: 50%
Observable Facts
Statement claims supply chain designation is 'legally unsound'
Statement: 'This designation would...set a dangerous precedent for any American company that negotiates with the government'
Inferences
Company argues government action violates principle of equal legal treatment
Statement frames designation as breaking established precedent and creating unequal treatment
Statement appeals to broader public governance interest: 'This designation would...set a dangerous precedent for any American company that negotiates with the government'
FW Ratio: 33%
Observable Facts
Statement: 'We believe this designation would...set a dangerous precedent for any American company that negotiates with the government'
Inferences
Company appeals to broader public governance interests in defending against designation
Statement engages with question of how democratic government should treat companies in policy negotiations