189 points by lukeplato 4 days ago | 125 comments on HN
Moderate positive
Contested
Editorial · v3.7 · 2026-02-26 03:29:19
Summary: Corporate Autonomy & State Coercion
This article advocates for principled constraints on AI use and criticizes governmental coercion in contract negotiations. The content reports on Pentagon threats against Anthropic over refusal to guarantee non-use for mass surveillance or autonomous weapons, framing the company's human rights demands as legitimate and the government's unprecedented enforcement threats as an abuse of power. The author advocates for freedom from coercion, privacy protection, and transparent governance while acknowledging the legitimacy of some constraints on corporate freedom when grounded in human rights protection.
Point blank, one of the most nakedly evil things the government has ever tried to do. Apparently Anthropic's sticking points were not using the model for autonomous kill orders and no mass surveillance...
So the Pentagon is strongarming a company into cooperation? That reminds me of how my alcoholic neighbor used to treat his family. It's almost as if someone let a mean drunk be in charge of the Pentagon.
Using the "supply chain risk" designation against a domestic AI company is wild. Not sure that tool was designed with vendors who won't rewrite their ToS on demand in mind.
Meanwhile the Pentagon could just build its own capacity. Commercial AI outspends federal science R&D 75:1 right now.
Imagine a world where in order to do business in the US you must grant the government control of your company. This sounds worse than even the most alarmist China takes.
I understand that Anthropic has one of the most popular products in the market.
But no one, especially the government, should get in bed with them when Anthropic's leadership has a track record of trying to use their early-mover advantage to effectively create an AI cartel [1].
I'm glad Anthropic is getting a taste of their own medicine.
Might be a long stretch, but every analyst I've heard talking about this is concerned about mass surveillance of US citizens again, and the Wyden siren is hinting at illegal activities by the CIA.
Add to that the US military's public acknowledgement that it used Anthropic's products in some form during the Venezuela operation, and Hegseth's apparent willingness to put the boot on Anthropic's neck given the options presented to them. That's a lot of interesting things happening in a very short amount of time in an environment usually known to run as frictionlessly as possible.
Even for Hegseth, this is a lot of public eyes on something the Pentagon of previous administrations would probably have handled with the same willingness to drown Anthropic in its own tears, but completely out of public sight.
But the Pentagon works in mysterious ways, so there might be a very good reason for this kind of pressure, one worth the people responsible for national security risking a public fuss over, that we peasants simply don't see.
I also can't wait to see how the US military is going to mess this whole AI-superiority softporn up. It's not a matter of if but only of when.
They have a track record of mishandling weapons of mass destruction.
To be fair tho, for the amount of nuclear weapons they are handling overall they are doing a pretty good job. But no more open blast doors for the pizza delivery guy, ok?
The real question is how many broken-arrow events can we even have with AI? Is it "better luck next time, baby Skynet" serious, or "we fucked up, sir, everyone is going to die" burnt-matchsticks bad, if whatever system they use decides every problem thrown at it can be solved by removing the human from the equation, all of them preferably?
I can't help but compare what happened with nuclear physics to what will happen with ASI/AGI. We could have used nuclear energy to provide abundant, clean energy. Instead we used it for warfare, to kill people. All of the brightest minds and frontier technology were directed towards killing people.
We could use AI for medical advances and to create a communist utopia without serfdom. But it's already looking like we're getting killer robots and more oppression.
Hope I'm thinking about this wrong. I fear very soon the government will begin nationalizing AI resources and forcing AI researchers to direct their efforts towards weapons systems. Similar to what happened in physics. "We have to be first to have autonomous robot armies" basically.
I'm really not understanding this. Doesn't the typical path for advanced technology making it into the hands of civilians start with military applications and end with it being modified for civilian use?
If the Pentagon wants Anthropic's technology because it has desirable characteristics, can it not just train its own AI models? Why can't the Pentagon build data centers full of GPUs and hire some smart people like the commercial AI providers did?
Why, in this case, has the usual path for technology been flipped? Starting out as commercial tech for civilians and then being repurposed for military use feels unusual to me. Maybe Hegseth's "War department" has a recruiting problem.
this pairs nicely with the finding of the supreme court:
Under our constitutional structure of separated powers, the nature of Presidential power entitles a former President to absolute immunity from criminal prosecution for actions within his conclusive and preclusive constitutional authority. And he is entitled to at least presumptive immunity from prosecution for all his official acts. There is no immunity for unofficial acts.
"Needless to say, I support Anthropic here. I'm a sensible moderate on the killbot issue (we'll probably get them eventually, and I doubt they'll make things much worse compared to AI 'only' having unfettered access to every Internet-enabled computer in the world). But AI-enabled mass surveillance of US citizens seems like the sort of thing we should at least have a chance to think over, rather than demanding it from the get-go."
Why would killbots be a sensible moderate position with the number of hallucinations LLMs have right now?
They just need one rm -rf bug somewhere to do something disastrous, and at least Anthropic's CEO understands the limitations of the software.
This is going to be a controversial take but I don't agree with Anthropic on this one. My gut instinct says that the Pentagon should back down, but my gut is wrong because of political bias. I can't claim to be serious about AI governance if Anthropic is able to sidestep the interests of the Pentagon, whoever might be in charge. Anthropic is not stronger than the US government, and it would set a dangerous precedent if they don't comply.
At the end of the rabbit hole, it's all about enforcement, regardless of the contract. Who's going to enforce Anthropic's terms and conditions if they betray the Pentagon?
There's a lot of talk about "Future Claude", even Karpathy has mentioned something similar. But does anyone stop to think about how utterly dystopian this is?
We are creating a worse version of the Panopticon than was originally designed. A Panopticon that could have entirely devastating consequences. Not only is "the guard" able to see what any given "prisoner" is doing at any time, but they can look into the past. The self-regulation happens because the prisoners could be being watched. It is Orwellian. But this thing we're building? It can look at the prisoners' actions before it was even completed.
I think people don't think about this enough. Culture changes and in that time what is considered morally justifiable or even reasonable changes. Sometimes it is easy to judge people in the past by our current standards but other times it is not. Other times there is context needed, which is lost not only by time but in what is never recorded. How do prisoners self-regulate to future values that they do not know they are supposed to align to?
This creates a terrible machine where whoever controls it will likely have the power to prosecute anyone arbitrarily. Get the morals to change just slightly, or just take things out of context, and you have the public demanding prosecution. People think this seems far-fetched, but I'm willing to bet every single person on HN has fallen for some disinformation campaign. Be it "carrots help you see in the dark," people's misunderstanding of paper vs. plastic vs. canvas tote bags, a wide variety of topics related to environmentalism, and on and on. Even if you believe you have never fallen for such a disinformation (or malinformation) campaign, you'll have to concede that it is common for others to. That's all that is needed for someone in power to execute on this Panopticon, and it is a strategy people with power have been refining for thousands of years.
I really do support Anthropic pushing back here, but the discussions about "Future Claude" really are unsettling. It is like we are treating this as an inevitability, as if we have no choice in the matter. If that is true, then we are the mindless automata, and then what does the military need killer-bots for? They would already have them.
Obviously, domestic surveillance of U.S. citizens is bad but before even getting to that, the thing that doesn't make sense is: it's illegal for the DoD to do that (unless the citizens are military or DoD employees).
And, does anyone seriously think developing autonomous kill-bots without a human in the loop in the next 3 years is something the DoD should be unilaterally doing now without congressional review? Personally, I think autonomous kill bots with a human in the loop, with congressional review, and even 10 years from now are categorically a terrible idea.
However, I can imagine some reasonable people perhaps quibbling over saying never by citing things like "sufficient safeguards", "congressional oversight" and at a future time where AIs don't hallucinate constantly. But none of that is in contention here. The DoD is publicly proclaiming their need to do things right now which are either A. illegal, or B. no serious person thinks is sane.
My strong initial reaction to even the idea of "fully autonomous AI killbots" made me miss a subtle distinction about what the real danger is. We already have a variety of non-AI killbots. Conceptually, any area denial weapon like a proximity-triggered Claymore mine is a non-AI "killbot". And just tying one or more sensors to trigger a gun or explosive already works today without AI. So what's gained by adding full AI?
Such non-AI automatic triggering and targeting can already be constrained by location, range, time frame, remote-control, etc using fairly sophisticated non-AI heuristics. If non-AI devices can already <always pull trigger if X, Y and Z conditions = TRUE>, this is really about not pulling the trigger based on more complex judgements. That really only enables leaving such systems armed and active in far larger, less constrained contexts where 'friend or foe' judgements exceed basic true/false sensor conditions. That the military feels such urgent need for that capability is much more worrying to me.
What, Dario is just going to turn on unlimited-token-CEO-mode and ask Claude to devise a plan to outmaneuver the military and intelligence services? It’s not AGI yet, and this request would be far outside the training distribution: it would just hallucinate something based on Tom Clancy novels.
The voters and Congress tell the military how to use technology, not Anthropic. Shifting the decision to Anthropic takes away power from the citizenry.
Edit: The point is, go vote if you don't agree with what the administration is doing. Somebody will sell the DoD whatever they want no matter what Anthropic does.
You're smoking something funny. They have just shown they are willing to designate a US company as essentially a foreign spy agency because they wanted to try and renegotiate a contract and didn't get what they wanted and that's your reaction?
We know that the current administration functions like a cabal of sex-trafficking mobsters, so none of this is surprising; strong-arming is the norm, not the exception. I expect this to get ugly, and I hope Anthropic has the financial and legal resources to respond accordingly.
As if governments throughout history haven't constantly used threats to gain leverage? No need to take a personal shot at the guy in charge when this is SOP throughout the administration.
Without reading every word of every embedded tweet, a part missing from the conversation is HOW they are strongarming.
It isn't in private. It's a public threat in the court of public opinion to apply societal pressure on the company. They are attempting to reshape Anthropic's decision into a tribal one, and hurt the brand's reputation within the tribe unless it capitulates.
The whole government 'strong-arms' many of its counter-parties in a variety of situations; this is unfortunately nothing new, and far from an innovation by Hegseth. A more clearly illegal example (because the government was acting as a regulator, not a purchaser) is Operation Choke Point, though there are many others: https://en.wikipedia.org/wiki/Operation_Choke_Point
Like...governments pressuring social media companies to censor/ban/deamplify unapproved views and making up an Orwellian term like "misinformation" to justify it?
This is exactly America’s path. All this time we were “fighting” regimes like China’s and Russia’s, and now it’s “can’t beat them, join them” banana republic.
Step 1.5 is also the one being ignored by 95% of comments here: the leverage the Pentagon is using is the lucrative contract Anthropic signed with them. The only threat here is Anthropic sucking up less money from the DoD.
I don't doubt that Claude is capable of mass surveillance, but surely it is not too much of a stretch to say it may not be suitable for automated killbots?
Our government notably derives its power from the rights we delegate to said government. We have not given our government the right to just tear up contracts willy-nilly.
I don't even understand why it is thought that letting a small non-elected clique run economically important infrastructure and control the lives of thousands of employees isn't considered dystopian. Public ownership at least has democratic legitimacy.
Look, you can't have a (working, democratic) government where one party can send the other to jail as soon as they get into power. If presidents could go to jail for doing their job, their opposing party would absolutely try to send them there.
This would then ultimately handicap the president: anything they do that the opposition can find a legal justification against could land them in jail, so they won't do anything that comes close to that. We do not want our chief executive making key decisions for the country based on fear of political retribution!
The Supreme Court has failed, miserably and repeatedly lately, and some of their decisions run directly counter to the law (often they even contradict past decisions!) But deciding the president won't face political retribution for trying to do his job was not a mistake.
The old path of 'military invents it, civilians eventually get it' (like the Space Race or early ARPANET) hasn't been true for decades. Today, almost all major technological leaps, like the modern internet, search engines, smartphones, commercial drones, etc., start in the commercial consumer sector first. The global consumer market dwarfs the defense market, which means the private sector has vastly more capital for R&D. The government pay scale caps out at ~$190k-$200k/year for specialized roles without some congressional workaround, while top AI researchers at OpenAI, Anthropic, Google, etc. make ~$1m-$5m+/year in total compensation. The government couldn't afford to hire the right talent, and the right talent would likely refuse on moral, ethical, and rational grounds under the current government.
Content exemplifies freedom of expression by publishing critical analysis of government overreach. The article presents factual reporting with clear opinion, enabling readers to evaluate the situation independently. Author identifies concerns about governmental coercion and uses reasoned argument.
FW Ratio: 50%
Observable Facts
Article provides detailed factual account of contract negotiations, Pentagon demands, and threatened consequences.
Content is published freely without paywall, enabling broad access to information about government-corporate conflicts.
The article is structured as opinion/analysis rather than disguised advocacy, clearly signaling the author's interpretive framing.
Inferences
Publishing critical analysis of government action exemplifies freedom of expression and the right to impart information of public importance.
Free access to the content supports the right to receive information without economic barriers.
The transparent, reasoned argument supports informed public discourse about government power.
Content advocates for human dignity and rational discourse by questioning government overreach and demanding principled constraints on AI use. The framing treats fundamental constraints (no mass surveillance, no autonomous killbots) as non-negotiable human rights protections.
FW Ratio: 50%
Observable Facts
The article reports that the Pentagon demanded Anthropic remove usage policy constraints and refused to guarantee non-use for mass surveillance or autonomous weapons.
The author characterizes the Pentagon's use of 'supply chain risk' designation as threatening a domestic company and calls this designation 'unprecedented.'
Inferences
The framing advocates for recognition of human dignity by insisting on constraints against mass surveillance and autonomous killing, core preamble principles.
The article's critique of governmental coercion suggests alignment with the preamble's emphasis on freedom and rational order.
Content affirms the right to life by criticizing the Pentagon's refusal to guarantee constraints on 'no-human-in-the-loop killbots,' implying the author supports safeguards against autonomous lethal weapons that threaten the right to life.
FW Ratio: 50%
Observable Facts
Anthropic demanded a guarantee that their AIs would not be used for 'no-human-in-the-loop killbots,' and the Pentagon refused this guarantee.
The article presents this demand for constraints on autonomous weapons as a principled position without criticism of Anthropic.
Inferences
By presenting Anthropic's demand for human oversight in lethal decisions as reasonable, the author advocates for protection of life against autonomous killing systems.
The framing treats constraints on autonomous weapons as a legitimate human rights concern, not an unreasonable constraint.
Content advocates for freedom of association and peaceful assembly by implicitly supporting Anthropic's right to refuse an unwanted contract modification and criticizing governmental coercion to compel agreement. The author frames Anthropic's principled stance as legitimate.
FW Ratio: 50%
Observable Facts
Article reports that Anthropic 'demurred' and 'refused' the Pentagon's demands, and the Pentagon responded with threats of 'consequences.'
Author characterizes the Pentagon's use of legal tools to force compliance as an unprecedented abuse of power.
Inferences
By framing Anthropic's refusal as legitimate, the author advocates for the right to refuse unwanted contracts without coercion.
The critique of governmental threats as an abuse of power supports the principle of freedom from forced association or compelled action.
Content advocates against governmental abuse of power by criticizing the Pentagon's use of precedent-breaking coercion. The author implicitly argues that rights protections in the UDHR should not be undermined by governmental assertions of power or national security concerns.
FW Ratio: 50%
Observable Facts
Article describes Pentagon's use of supply chain risk designation as 'unprecedented,' implying the author views this as an improper expansion of governmental power.
Author frames the threat to Anthropic as an abuse of legal authority, not a legitimate exercise of power.
Inferences
The critique of unprecedented governmental coercion advocates against interpretations of national security that would undermine human rights protections.
By framing legitimate corporate rights as compatible with community safety constraints, the author argues against subordinating all individual/corporate rights to security claims.
Content indirectly affirms equality by criticizing the Pentagon's threat to apply unprecedented enforcement mechanisms to a domestic company, suggesting unequal treatment and power abuse.
FW Ratio: 33%
Observable Facts
The article states the 'supply chain risk' designation has 'previously only been used for foreign companies like Huawei' and notes its use against a domestic company in contract negotiations is 'unprecedented.'
Inferences
The emphasis on unprecedented and unequal application of enforcement mechanisms implies a concern that formal equality is being violated through discriminatory governmental power.
By highlighting the disparity in treatment (foreign vs. domestic), the author advocates for equal protection under law.
Content implicitly advocates for equal protection under law by criticizing governmental overreach and the unprecedented use of enforcement mechanisms against a domestic company, suggesting the author views this as violating equal protection principles.
FW Ratio: 33%
Observable Facts
The article criticizes the Pentagon for using supply chain risk designation 'as a bargaining chip to threaten a domestic company' in a way that is 'unprecedented.'
Inferences
The critique of unequal application of legal enforcement mechanisms advocates for equal protection and non-discrimination.
The author frames the Pentagon's actions as an abuse of power that violates principles of equal legal protection.
Content advocates for privacy by criticizing the Pentagon's demand for 'all lawful purposes' use and by endorsing Anthropic's requirement for guarantees against 'mass surveillance of American citizens.' This affirms the right to privacy from government surveillance.
FW Ratio: 33%
Observable Facts
Anthropic asked for 'a guarantee that their AIs would not be used for mass surveillance of American citizens,' and the Pentagon refused.
Inferences
The author's sympathetic framing of Anthropic's privacy concern suggests alignment with privacy rights as fundamental.
The critique of the Pentagon's refusal to constrain mass surveillance advocates for protection of privacy from governmental intrusion.
Content critiques governmental processes by highlighting the Pentagon's use of threats and coercion in contract negotiations, implicitly advocating for transparent, fair governmental decision-making rather than extralegal pressure tactics.
FW Ratio: 50%
Observable Facts
Article reports the Pentagon 'threatening "consequences"' and describes threatened actions as 'bargaining chips,' implying coercive rather than legitimate governmental processes.
Author notes the supply chain risk designation is being used in an 'unprecedented' manner, suggesting departure from normal governmental procedure.
Inferences
The critique of coercive tactics advocates for legitimate, transparent governmental processes rather than extrajudicial pressure.
The emphasis on precedent violation suggests the author expects government to follow established procedures, supporting the right to fair and legitimate governance.
Content implicitly advocates for a social order protecting human rights by criticizing governmental actions that violate human rights principles (no mass surveillance, no autonomous killing). The author frames principled constraints as necessary for legitimate social order.
FW Ratio: 50%
Observable Facts
Article treats Anthropic's demand for constraints on mass surveillance and autonomous weapons as legitimate human rights protections, not as unreasonable restrictions.
Author frames the Pentagon's refusal to guarantee these constraints as a failure of principled governance.
Inferences
By affirming constraints on surveillance and autonomous weapons as legitimate, the author advocates for social order grounded in human rights protections.
The critique of governmental refusal to guarantee human rights protections supports the principle that legitimate governance must respect fundamental rights.
Content advocates for freedom balanced against community rights by criticizing governmental coercion while supporting legitimate constraints on AI use. The author frames constraints on mass surveillance and autonomous weapons as necessary limitations on freedom in service of community and human rights.
FW Ratio: 50%
Observable Facts
Article presents Anthropic's demand for constraints on AI use as legitimate, suggesting the author views some limitations on corporate freedom as necessary for community protection.
Author does not argue against government oversight of AI, only against coercive and unprecedented enforcement.
Inferences
By supporting constraints on AI use while criticizing coercive enforcement, the author advocates for balancing corporate freedom with community safety.
The framing suggests legitimate limits on freedom are acceptable when grounded in rights protection and transparent process.
Content critiques governmental discrimination and unequal application of law, implicitly advocating for non-discriminatory enforcement of rules and constraints.
FW Ratio: 33%
Observable Facts
Article states the supply chain risk designation 'has previously only been used for foreign companies' but is now being 'used as a bargaining chip to threaten a domestic company.'
Inferences
The criticism of selective application of enforcement mechanisms suggests the author views this as a discriminatory use of government power.
The article advocates for consistent application of legal tools, implying a commitment to non-discriminatory treatment.
Publication tagline 'P(A|B) = [P(A)*P(B|A)]/P(B), all the rest is commentary' suggests rationalist, analytical mission aligned with evidence-based discourse. Minor positive modifier.
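The tagline's formula is just Bayes' rule. As a minimal illustration (the numbers below are invented for the example, not drawn from the article or the evaluation), a quick numeric check in Python:

```python
# Bayes' rule: P(A|B) = P(A) * P(B|A) / P(B)
# Toy probabilities, purely illustrative.
p_a = 0.3          # prior P(A)
p_b_given_a = 0.8  # likelihood P(B|A)
p_b = 0.5          # evidence P(B)

# Posterior: how belief in A updates after observing B.
p_a_given_b = p_a * p_b_given_a / p_b
print(p_a_given_b)  # 0.48
```

The posterior (0.48) exceeds the prior (0.3) because B is much more likely under A than overall, which is the whole "update on evidence" ethos the tagline gestures at.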
Editorial Code
—
No explicit editorial code or standards observed on-page.
Ownership
—
Published on Substack by Scott Alexander; no corporate ownership conflicts apparent from page content.
Access & Distribution
Access Model
+0.10
Article 25 · Article 26 · Article 27
Article marked 'isAccessibleForFree: true' in schema.org markup. Free access aligns with democratic information principles. Minor positive modifier.
Ad/Tracking
—
No ad or tracking content observable on provided page content.
Accessibility
+0.10
Article 19 · Article 26 · Article 27
Substack platform provides basic accessibility features; article is text-based and accessible via screen readers. Modest positive modifier.
Article is marked 'isAccessibleForFree' in schema markup, enabling free distribution. Substack platform provides basic accessibility for screen readers. These structural features support the right to receive and impart information without interference.
Phrases like 'nuclear option' and 'threatening "consequences"' use emotionally charged language to frame Pentagon actions negatively.
appeal to fear
Emphasis on the Pentagon's 'nuclear option' designation and description of it as 'potentially fatal to their business' may invoke fear about governmental power.
build 1ad9551+j7zs · deployed 2026-03-02 09:09 UTC · evaluated 2026-03-02 13:57:54 UTC