+0.36 Statement from Dario Amodei on Our Discussions with the Department of War (www.anthropic.com · S: +0.45)
2912 points by qwertox · 3 days ago · 1569 comments on HN · Moderate positive · Mission · v3.7 · 2026-02-28 11:58:14
Summary Surveillance & Democratic Values Advocates
Anthropic's CEO articulates the company's ethical boundaries on Claude deployment to the U.S. Department of War, refusing to enable mass domestic surveillance or fully autonomous weapons despite government pressure and threats. The statement grounds its refusal in commitments to privacy, human liberty, and responsible technology governance—principles central to the UDHR.
Article Heatmap
Preamble: +0.40 — Preamble
Article 1: +0.20 — Freedom, Equality, Brotherhood
Article 2: No Data — Non-Discrimination
Article 3: +0.47 — Life, Liberty, Security
Article 4: No Data — No Slavery
Article 5: +0.30 — No Torture
Article 6: No Data — Legal Personhood
Article 7: +0.30 — Equality Before Law
Article 8: No Data — Right to Remedy
Article 9: +0.20 — No Arbitrary Detention
Article 10: No Data — Fair Hearing
Article 11: No Data — Presumption of Innocence
Article 12: +0.67 — Privacy
Article 13: +0.30 — Freedom of Movement
Article 14: No Data — Asylum
Article 15: No Data — Nationality
Article 16: No Data — Marriage & Family
Article 17: No Data — Property
Article 18: +0.30 — Freedom of Thought
Article 19: +0.30 — Freedom of Expression
Article 20: +0.40 — Assembly & Association
Article 21: +0.20 — Political Participation
Article 22: No Data — Social Security
Article 23: No Data — Work & Equal Pay
Article 24: No Data — Rest & Leisure
Article 25: No Data — Standard of Living
Article 26: No Data — Education
Article 27: No Data — Cultural Participation
Article 28: +0.20 — Social & International Order
Article 29: +0.50 — Duties to Community
Article 30: +0.60 — No Destruction of Rights
Aggregates
Editorial Mean +0.36 Structural Mean +0.45
Weighted Mean +0.40 Unweighted Mean +0.36
Max +0.67 Article 12 Min +0.20 Article 1
Signal 15 No Data 16
Volatility 0.14 (Medium)
Negative 0 Channels E: 0.6 S: 0.4
SETL +0.12 Editorial-dominant
FW Ratio 57% 28 facts · 21 inferences
Evidence 19% coverage
2 High · 3 Medium · 10 Low · 16 No Data
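The aggregate figures above can be reproduced from the per-article scores shown in the heatmap. The following is a minimal sketch under inferred assumptions (none of these formulas are documented on the page): that the Unweighted Mean averages only the 15 signal articles, that Volatility is the population standard deviation of those same scores, and that the FW Ratio is facts / (facts + inferences).

```python
from statistics import mean, pstdev

# Editorial scores for the 15 signal articles in the heatmap
# ("No Data" articles are excluded from the unweighted aggregates).
scores = {
    "Preamble": 0.40, "Art 1": 0.20, "Art 3": 0.47, "Art 5": 0.30,
    "Art 7": 0.30, "Art 9": 0.20, "Art 12": 0.67, "Art 13": 0.30,
    "Art 18": 0.30, "Art 19": 0.30, "Art 20": 0.40, "Art 21": 0.20,
    "Art 28": 0.20, "Art 29": 0.50, "Art 30": 0.60,
}

unweighted_mean = mean(scores.values())   # 0.356 -> displayed as +0.36
volatility = pstdev(scores.values())      # 0.143 -> displayed as 0.14 (Medium)
peak = max(scores, key=scores.get)        # "Art 12" (+0.67), the Max row

facts, inferences = 28, 21                # from the "FW Ratio" row
fw_ratio = facts / (facts + inferences)   # 0.571 -> displayed as 57%

print(round(unweighted_mean, 2), round(volatility, 2), peak, round(fw_ratio * 100))
```

The Max (+0.67, Article 12) and Min (+0.20) rows fall out of the same table. The Weighted Mean (+0.40) presumably applies per-article weights that are not shown, so it is not reproduced here.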
Theme Radar
Foundation: 0.30 (2 articles) · Security: 0.39 (2 articles) · Legal: 0.25 (2 articles) · Privacy & Movement: 0.48 (2 articles) · Personal: 0.30 (1 article) · Expression: 0.30 (3 articles) · Economic & Social: 0.00 (0 articles) · Cultural: 0.00 (0 articles) · Order & Duties: 0.43 (3 articles)
HN Discussion 20 top-level · 30 replies
alangibson 2026-02-26 22:54 UTC link
It's not named the Department of War because Congress didn't rename it.

Other than that, good on ya.

flumpcakes 2026-02-26 23:04 UTC link
This is such a depressing read. What is becoming of the USA? Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.
kace91 2026-02-26 23:17 UTC link
As someone who is potentially their client and not domestic, it's really reassuring that they have no concerns with mass spying on peaceful citizens of my particular corner of the world.
nkoren 2026-02-26 23:18 UTC link
This makes me a very happy Claude Max subscriber.

Finally, someone of consequence not kissing the ring. I hope this gives others courage to do the same.

qaid 2026-02-26 23:20 UTC link
I was halfway through reading when one line struck a nerve with me:

> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

So not today, but the door is open for this after AI systems have gathered enough "training data"?

Then I re-read the previous paragraph and realized it's specifically only criticizing

> AI-driven domestic mass surveillance

And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance

A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War

helaoban 2026-02-26 23:34 UTC link
All of these problems are downstream of the Congress having thoroughly abdicated its powers to the executive.

The military should be reined in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.

Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after having requisitioned the data centers.

To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.

tabbott 2026-02-26 23:43 UTC link
An organization's character really shows through when its values conflict with its self-interest.

It's inspiring to see that Anthropic is capable of taking a principled stand, despite having raised a fortune in venture capital.

I don't think a lot of companies would have made this choice. I wish them the very best of luck in weathering the consequences of their courage.

jjcm 2026-02-26 23:55 UTC link
This is the strongest statement in the post:

> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

This contradictory messaging puts to rest any doubt that this is a strong-arm by the government to allow any use. I really like Anthropic's approach here, which is to in turn state that they're happy to help the government move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.

atleastoptimal 2026-02-26 23:56 UTC link
I was concerned originally when I heard that Anthropic, who often professed to being the "good guy" AI company who would always prioritize human welfare, opted to sell priority access to their models to the Pentagon in the first place.

The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.

lebovic 2026-02-27 00:21 UTC link
I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that a) is against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.

But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and genuinely motivated by trying to make the transition to powerful AI go well.

[1]: https://news.ycombinator.com/item?id=47145963#47149908

zb1plus 2026-02-27 02:00 UTC link
It would be hilarious if the Europeans got everyone visas and gave some kind of tax benefit to Anthropic and poached the entire company.
QuiEgo 2026-02-27 03:57 UTC link
I'd be amused beyond all reason if we saw this chain of events:

- Anthropic says "no"

- DoD says "ok you're a supply chain risk" (meaning many companies with gov't contracts can no longer use them)

- A bunch of tech companies say "you know what? We think we'd lose more money from falling behind on AI than we'd lose from not having your contracts."

Bonus points if it's some of the hyperscalers like AWS.

Hilarity ensues as they blow up (pun intended) their whole supply chain and rapidly backtrack.

bambax 2026-02-27 06:28 UTC link
> These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Nicely put. In other words: Department of Morons.

eisfresser 2026-02-27 06:47 UTC link
> mass __domestic__ surveillance is incompatible with democratic values

But mass surveillance of Australians or Danes is aligned with democratic values as long as it's the Americans doing it?

I don't think the moral high ground Anthropic is taking here is high enough.

u1hcw9nx 2026-02-27 09:53 UTC link
Google, OpenAI Employees Voice Support for Anthropic in Open Letter. We Will Not Be Divided https://notdivided.org/

-----

The Department of War is threatening to

- Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"

- Label the company a "supply chain risk"

All in retaliation for Anthropic sticking to their red lines to not allow their models to be used for domestic mass surveillance and autonomously killing people without human oversight.

The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused.

They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.

We are the employees of Google and OpenAI, two of the top AI companies in the world.

We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

Signed,

omnee 2026-02-27 12:00 UTC link
Agree fully with the main points of this statement. Mass domestic surveillance is the hallmark of an authoritarian and undemocratic state. That such a state holds 'votes' regularly does not detract from the chilling effect on public discourse and politics caused by mass surveillance.

The guardrail on fully automated weapons makes perfect sense, and hopefully becomes standardised globally.

egorfine 2026-02-27 12:18 UTC link
> mass surveillance presents serious, novel risks to our fundamental liberties.

Doesn't matter, really. The genie is out of the bottle and I'm strongly confident US administration will find a vendor willing to supply models for that particular usage.

zkmon 2026-02-27 12:22 UTC link
Same as saying "Look I sold nukes to USA to protect democracy, but we put 2 rules about usage". Everyone got nukes and nobody can enforce the rules. Just whitewashing of pure business greed, using terms like national security, democracy etc.
krzyk 2026-02-27 12:29 UTC link
Does the US really have a Department of War? Is this Anthropic's way of showing how f&^^& up they are in the Department of Defense, or did they rebrand it back to the old WWI/II days?
mocamoca 2026-02-27 12:34 UTC link
Something feels off about this announcement. Anyone else?

Credit where it's due, going on record like this isn't easy, particularly when facing pressure from a major government client. Still, the two limits Anthropic is defending deserve a closer look.

On surveillance: the carve-out only protects people inside the US. Speaking as someone based in Europe, that's a detail that doesn't go unnoticed. On autonomous weapons: realistically, current AI systems aren't anywhere near capable enough to run one independently. So that particular line in the sand isn't really costing them much.

What I find more candid is actually the revised RSP. It draws a clearer picture of where Anthropic's oversight genuinely holds and where it starts to break down as they race to stay at the cutting edge. The core tension, trying to be simultaneously the most powerful and the most principled player in the room, doesn't have a neat resolution.

This statement doesn't offer one either. But engaging with the question openly, even without all the answers, beats silence and gives the rest of us something real to push back on.

saulpw 2026-02-26 23:13 UTC link
Hope is not a plan, unfortunately, so if that's all we've got, I don't have much hope.
1024core 2026-02-26 23:14 UTC link
It's addressed to Hegseth, who insists on calling it that.

If they had called it DoD, then that would have been another finger in his eye.

63 2026-02-26 23:16 UTC link
While I agree the name change has not (yet) been made with the proper authority, I'm quite partial to the name and prefer to use it despite its prematurity. I think it does a better job of communicating the types of work actually done by the department and rightly gives people pause about their support of it. Though I'm sure that wasn't the administration's intention.
nubg 2026-02-26 23:21 UTC link
I think it's phrased just fine. It's not up to Dario to try to make absolute statements about the future.
davidw 2026-02-26 23:22 UTC link
This isn't a one-election thing. It's going to be a generational effort to fix what these people are breaking more of every day. I hope I live to see it come to some kind of fruition - I recently turned 50.
helaoban 2026-02-26 23:22 UTC link
It SHOULD be called the Department of War, as it was originally, since it makes its function clear. We are a society that has euphemized everything and so we no longer understand anything.
mwigdahl 2026-02-26 23:25 UTC link
Take your pick from the many other choices offered by companies that don't care about mass spying on _anyone_.
fluidcruft 2026-02-26 23:42 UTC link
It's really not the right thing to be bikeshedding. The people calling the shots call themselves the Department of War. No need to die on hills that don't matter.
ghshephard 2026-02-26 23:43 UTC link
I think it goes without saying that once the systems are reliable, fully autonomous weapons will be unleashed on the battlefield. But they have to have safeguards to ensure that they don't turn on friendly forces and only kill the enemy. What Anthropic is saying is that right now they can't provide those assurances. When they can, I suspect those restrictions will be relaxed.
ricardobeat 2026-02-26 23:48 UTC link
> The technology can just be requisitioned

During a war with national mobilization, that would make sense. Or in a country like China. This kind of coercion is not an expected part of democratic rule.

xeonmc 2026-02-26 23:56 UTC link

    > I thought "Anthropic" was about being concerned about humans
See also: OpenAI being open, Democratic People's Republic of Korea being democratic and people-first[0].

[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/PeoplesRepublicO...

bicx 2026-02-27 00:20 UTC link
They already kissed the ring, just not the asshole. They have a little dignity left.
idiotsecant 2026-02-27 00:20 UTC link
The problem is that this is a decision that costs money. Relying on a system that makes money by doing bad things to do good things out of a sense of morality when a possible outcome is existential risk to the species is a 100% chance of failure on a long enough timeline. We need massive disincentives to bad behavior, but I think that cat is already out of its bag.
dheera 2026-02-27 00:28 UTC link
> opted to sell priority access to their models to the Pentagon

The bottom of all of this is that companies need to profit to sustain themselves. If "y'all" (the users) don't buy enough of their products, they will seek new sources of revenue.

This applies to any company who has external investors and shareholders, regardless of their day 0 messaging. When push comes to shove and their survival is threatened, any customer is better than no customer.

It's very possible that $20 Claude subscriptions aren't delivering on multiple billions in investment.

The only companies that can truly hold to their missions are those that (a) don't need to profit to survive, e.g. lifestyle businesses of rich people, or (b) are wholly owned by owners and employees and have no fiduciary duty.

Synaesthesia 2026-02-27 00:29 UTC link
AI was always particularly well suited to military use and mass surveillance. It can take huge amounts of raw data, parse it for you, and provide useful information from it. And let's face it, companies exist for profit.
tootie 2026-02-27 00:39 UTC link
It's also downstream of voters who voted in a president who promised to be dictatorial after failing at an attempted insurrection. We need to deprogram like 70M very confused people.
zug_zug 2026-02-27 01:25 UTC link
Is there a different AI company that IS taking that stance?

Because as far as I know, Anthropic is taking the most moral stance of any AI company.

presentation 2026-02-27 01:31 UTC link
So your stance is that anything military-related is immoral?
neom 2026-02-27 01:37 UTC link
I've had so much abuse thrown at me on here for saying this very thing over the last few years. I used to be friends with Jack back in the day, before this AI stuff even all kicked off. Once you know who people really are inside, it's easy to know how they will act when the going gets rough. I'm glad they are doing the right thing, but I'm not at all surprised, nor should anyone be. Personally I believe they would go to jail/shut down/whatever before they do something objectively wrong.
bamboozled 2026-02-27 02:10 UTC link
I can imagine that this will be the logical conclusion for many companies, I thought the same thing too, if it's too hard in the USA, they will just move.
panarky 2026-02-27 02:12 UTC link
Does the Defense Production Act force employees to continue working at Anthropic?
JumpCrisscross 2026-02-27 02:24 UTC link
> this is a strong-arm by the government to allow any use

It’s a flippant move by Hegseth. I doubt anyone at the Pentagon is pushing for this. I doubt Trump is more than cursorily aware. Maybe Miller got in the idiot’s ear, who knows.

RyanShook 2026-02-27 02:28 UTC link
The whole article reads as virtue signaling to me. Anthropic already has large defense contracts. Their models are already being used by the military. There's really no statement here.
jobs_throwaway 2026-02-27 02:38 UTC link
> The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that.

I strongly doubt this is true. I think if you gave the US government total control over Anthropic's assets right now, they would utterly fail to reach AGI or develop improved models. I doubt they would be capable even of operating the current gen models at the scale Anthropic does.

> Or the models could be developed internally, after having requisitioned the data centers.

I would bet my life savings the US government never produces a frontier model. Remember when they couldn't even build a proper website for Obamacare?

techblueberry 2026-02-27 03:03 UTC link
The private corporation is not dictating to the military, it's setting the terms of the contract. The military is free to go sign a contract with a different company with different terms, but they didn't, and now they want to change the terms after the contract was already signed. No mythologization needed, just contract law.
mandeepj 2026-02-27 03:18 UTC link
First of all, there's no such thing as "Department of War". A department name change is legal/binding only after it's approved by the Senate. Senator Kelly is still calling it DoD (Department of Defense).

> Mass domestic surveillance.

Since when has DoD started getting involved with the internal affairs of the country?

https://en.wikipedia.org/wiki/United_States_Department_of_De...

tempestn 2026-02-27 03:22 UTC link
The name is extremely off-putting, but I can see how they would want to be diplomatic toward the administration in using their chosen name. Save the push-back for where it really matters.
skeptic_ai 2026-02-27 03:38 UTC link
The USA would bomb their country before any visa is approved
exodust 2026-02-27 04:00 UTC link
I read the statement twice. I can't understand how you landed on "take my money".

Looks like an optics dance to me. I've noticed a lot of simultaneous positions lately, from everyone: politicians and protesters, celebrities and corporations. They make statements both in support of a thing and against that same thing, switching up emphasis based on who the audience is in what context. A way to please everyone.

To me the statement reads like Anthropic wants to be at the table, ready to talk and negotiate, to work things out. Don't expect updated bullet-point lists about how things are worked out. Expect the occasional "we are the goodies" statements, however.

stevenpetryk 2026-02-27 04:39 UTC link
Being labeled a supply chain risk means that companies with government contracts cannot use Anthropic products _for those government contracts_, not that they have to cease all usage of Anthropic products. Reporters seem to be reporting on this incorrectly.
Editorial Channel
What the content says
+0.70
Article 12 Privacy
High Advocacy Framing Practice
Editorial
+0.70
SETL
+0.26

Statement centers on privacy as fundamental democratic right: explicitly states mass surveillance 'is incompatible with democratic values,' identifies AI-specific privacy threats (automated assembly of scattered data at scale), cites bipartisan Congressional opposition to warrantless collection, grounds refusal in 'serious, novel risks to our fundamental liberties.'

+0.60
Article 30 No Destruction of Rights
Medium Advocacy
Editorial
+0.60
SETL
ND

Statement's core argument: some uses 'undermine, rather than defend, democratic values' and fall outside bounds of what technology 'can safely and reliably do.' Explicitly argues against permitting uses that would destroy fundamental rights.

+0.50
Article 3 Life, Liberty, Security
High Advocacy Framing Practice
Editorial
+0.50
SETL
+0.22

Statement explicitly prioritizes life, liberty, and security: argues fully autonomous weapons lack reliability for safe deployment, advocates human judgment in military decisions, frames these safeguards as necessary to 'defend democratic values.'

+0.50
Article 29 Duties to Community
Medium Advocacy Practice
Editorial
+0.50
SETL
0.00

Statement explicitly asserts corporate duty to refuse harmful uses: 'we cannot in good conscience accede to their request.' Frames corporate responsibility as duty-bearing beyond legal obligation.

+0.40
Preamble Preamble
Low Framing
Editorial
+0.40
SETL
ND

Statement references 'defending democratic values' and 'fundamental liberties,' aligning with Preamble's emphasis on human dignity and freedom-based order.

+0.40
Article 20 Assembly & Association
Medium Advocacy
Editorial
+0.40
SETL
ND

Statement explicitly identifies 'associations' as protected interest threatened by mass surveillance, and frames this as incompatible with democratic values.

+0.30
Article 5 No Torture
Low Framing Practice
Editorial
+0.30
SETL
0.00

Statement's concern about fully autonomous weapons relates implicitly to preventing indiscriminate harm and cruel treatment through unreliable targeting.

+0.30
Article 7 Equality Before Law
Low Advocacy
Editorial
+0.30
SETL
ND

Statement identifies mass surveillance as threat to equal treatment: notes government can collect detailed records 'without obtaining a warrant,' implying unequal application of law protection.

+0.30
Article 13 Freedom of Movement
Low Framing
Editorial
+0.30
SETL
ND

Statement identifies surveillance of 'movements' as privacy threat, which implicitly threatens freedom of movement through chilling effect.

+0.30
Article 18 Freedom of Thought
Low Framing
Editorial
+0.30
SETL
ND

Statement identifies surveillance of 'associations' as privacy threat, which relates to freedom of conscience: ability to hold beliefs without monitoring.

+0.30
Article 19 Freedom of Expression
Low Framing
Editorial
+0.30
SETL
ND

Statement identifies surveillance of 'web browsing' as privacy threat, which chills free expression through monitoring of information consumption.

+0.20
Article 1 Freedom, Equality, Brotherhood
Low Framing
Editorial
+0.20
SETL
ND

Statement emphasizes equal human dignity implicitly through refusal of mass surveillance, which would create unequal power to monitor citizens.

+0.20
Article 9 No Arbitrary Detention
Low Framing
Editorial
+0.20
SETL
ND

Statement identifies mass surveillance as enabling mechanism for arbitrary detention: 'Powerful AI makes it possible to assemble this scattered data into a comprehensive picture of any person's life.'

+0.20
Article 21 Political Participation
Low Framing
Editorial
+0.20
SETL
ND

Statement's emphasis on 'democratic values' and critique of surveillance enabling government overreach implicitly concern democratic participation.

+0.20
Article 28 Social & International Order
Low Framing
Editorial
+0.20
SETL
ND

Statement's framing of technology governance as defense of 'democratic advantage' against 'autocratic adversaries' relates implicitly to international order protective of democratic rights.

ND
Article 2 Non-Discrimination

Not addressed.

ND
Article 4 No Slavery

Not addressed.

ND
Article 6 Legal Personhood

Not addressed.

ND
Article 8 Right to Remedy

Not addressed.

ND
Article 10 Fair Hearing

Not addressed.

ND
Article 11 Presumption of Innocence

Not addressed.

ND
Article 14 Asylum

Not addressed.

ND
Article 15 Nationality

Not addressed.

ND
Article 16 Marriage & Family

Not addressed.

ND
Article 17 Property

Not addressed.

ND
Article 22 Social Security

Not addressed.

ND
Article 23 Work & Equal Pay

Not addressed.

ND
Article 24 Rest & Leisure

Not addressed.

ND
Article 25 Standard of Living

Not addressed.

ND
Article 26 Education

Not addressed.

ND
Article 27 Cultural Participation

Not addressed.

Structural Channel
What the site does
+0.60
Article 12 Privacy
High Advocacy Framing Practice
Structural
+0.60
Context Modifier
ND
SETL
+0.26

Anthropic's refusal to deploy mass surveillance capabilities demonstrates structural commitment to privacy protection. The company forwent revenue to prevent CCP access to Claude, blocking uses that could enable surveillance.

+0.50
Article 29 Duties to Community
Medium Advocacy Practice
Structural
+0.50
Context Modifier
ND
SETL
0.00

Anthropic's practice of forgoing 'several hundred million dollars in revenue' to prevent CCP access to Claude demonstrates structural commitment to ethical responsibilities over profit maximization.

+0.40
Article 3 Life, Liberty, Security
High Advocacy Framing Practice
Structural
+0.40
Context Modifier
ND
SETL
+0.22

Anthropic's refusal to deploy unreliable autonomous weapons or enable mass surveillance demonstrates structural commitment to securing life and liberty over short-term revenue.

+0.30
Article 5 No Torture
Low Framing Practice
Structural
+0.30
Context Modifier
ND
SETL
0.00

Refusal to deploy fully autonomous weapons demonstrates commitment to preventing systems that could inflict cruel/indiscriminate harm.

ND
Preamble Preamble
Low Framing

Not applicable—statement is editorial in nature.

ND
Article 1 Freedom, Equality, Brotherhood
Low Framing

Not directly addressed.

ND
Article 2 Non-Discrimination

Not addressed.

ND
Article 4 No Slavery

Not addressed.

ND
Article 6 Legal Personhood

Not addressed.

ND
Article 7 Equality Before Law
Low Advocacy

Not directly addressed.

ND
Article 8 Right to Remedy

Not addressed.

ND
Article 9 No Arbitrary Detention
Low Framing

Not directly addressed.

ND
Article 10 Fair Hearing

Not addressed.

ND
Article 11 Presumption of Innocence

Not addressed.

ND
Article 13 Freedom of Movement
Low Framing

Not directly addressed.

ND
Article 14 Asylum

Not addressed.

ND
Article 15 Nationality

Not addressed.

ND
Article 16 Marriage & Family

Not addressed.

ND
Article 17 Property

Not addressed.

ND
Article 18 Freedom of Thought
Low Framing

Not directly addressed.

ND
Article 19 Freedom of Expression
Low Framing

Not directly addressed.

ND
Article 20 Assembly & Association
Medium Advocacy

Not directly addressed.

ND
Article 21 Political Participation
Low Framing

Not directly addressed.

ND
Article 22 Social Security

Not addressed.

ND
Article 23 Work & Equal Pay

Not addressed.

ND
Article 24 Rest & Leisure

Not addressed.

ND
Article 25 Standard of Living

Not addressed.

ND
Article 26 Education

Not addressed.

ND
Article 27 Cultural Participation

Not addressed.

ND
Article 28 Social & International Order
Low Framing

Not directly addressed.

ND
Article 30 No Destruction of Rights
Medium Advocacy

Not directly addressed.

Supplementary Signals
How this content communicates, beyond directional lean. Learn more
Epistemic Quality
How well-sourced and evidence-based is this content?
0.51 high claims
Sources
0.4
Evidence
0.6
Uncertainty
0.5
Purpose
0.9
Propaganda Flags
3 manipulative rhetoric techniques found
3 techniques detected
appeal to fear
AI-driven mass surveillance presents serious, novel risks to our fundamental liberties; frontier AI systems are simply not reliable enough to power fully autonomous weapons
flag waving
defend the United States and other democracies, and to defeat our autocratic adversaries; we chose to forgo several hundred million dollars in revenue; defend America's lead in AI
loaded language
incompatible with democratic values; fundamental liberties; democratic advantage
Emotional Tone
Emotional character: positive/negative, intensity, authority
measured
Valence
+0.1
Arousal
0.6
Dominance
0.7
Transparency
Does the content identify its author and disclose interests?
0.33
✓ Author ✗ Conflicts
More signals: context, framing & audience
Solution Orientation
Does this content offer solutions or only describe problems?
0.44 mixed
Reader Agency
0.4
Stakeholder Voice
Whose perspectives are represented in this content?
0.25 2 perspectives
Speaks: corporation
About: government · military_security · individuals
Temporal Framing
Is this content looking backward, at the present, or forward?
present · short term
Geographic Scope
What geographic area does this content cover?
national
United States, China, Ukraine
Complexity
How accessible is this content to a general audience?
moderate · medium jargon · domain specific
Longitudinal 1579 HN snapshots · 8 evals
Audit Trail 28 entries
2026-02-28 15:14 eval_success Lite evaluated: Moderate positive (0.40) - -
2026-02-28 15:14 eval Evaluated by llama-4-scout-wai: +0.40 (Moderate positive) -0.20
reasoning
Editorial stance on human rights, specifically AI use in military
2026-02-28 15:13 eval_success Lite evaluated: Strong positive (0.60) - -
2026-02-28 15:13 eval Evaluated by llama-3.3-70b-wai: +0.60 (Strong positive) 0.00
reasoning
Editorial defends human rights
2026-02-28 11:58 eval Evaluated by claude-haiku-4-5-20251001: +0.40 (Moderate positive) -0.11
2026-02-28 10:02 eval Evaluated by claude-haiku-4-5-20251001: +0.52 (Moderate positive)
2026-02-28 01:40 dlq Dead-lettered after 1 attempts: Statement from Dario Amodei on Our Discussions with the Department of War - -
2026-02-28 01:38 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-28 01:37 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-28 01:36 dlq_replay DLQ message 97679 replayed to LLAMA_QUEUE: Statement from Dario Amodei on Our Discussions with the Department of War - -
2026-02-28 00:21 eval_success Light evaluated: Strong positive (0.60) - -
2026-02-28 00:21 eval Evaluated by llama-3.3-70b-wai: +0.60 (Strong positive)
reasoning
Editorial defends human rights
2026-02-27 21:09 dlq Dead-lettered after 1 attempts: Statement from Dario Amodei on Our Discussions with the Department of War - -
2026-02-27 21:07 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-27 21:06 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-27 21:05 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-27 21:05 dlq_auto_replay DLQ auto-replay: message 97556 re-enqueued - -
2026-02-27 16:32 eval_success Light evaluated: Strong positive (0.60) - -
2026-02-27 16:32 eval Evaluated by llama-4-scout-wai: +0.60 (Strong positive)
reasoning
Editorial stance on human rights, specifically AI use in military
2026-02-27 12:43 eval_success Evaluated: Mild positive (0.13) - -
2026-02-27 12:43 eval Evaluated by deepseek-v3.2: +0.13 (Mild positive) 10,371 tokens
2026-02-27 12:39 dlq Dead-lettered after 1 attempts: Statement from Dario Amodei on Our Discussions with the Department of War - -
2026-02-27 12:37 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-27 12:36 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-27 12:35 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-27 12:34 eval Evaluated by claude-haiku-4-5: +0.63 (Neutral)
2026-02-27 11:45 credit_exhausted Credit balance too low, pausing provider for 30 min - -
2026-02-27 11:15 credit_exhausted Credit balance too low, pausing provider for 30 min - -