145 points by i4i 5 days ago | 98 comments on HN
The military's dependence on AI was a key point in AI 2027.
"The President is troubled. Like all politicians, he’s used to people sucking up to him only to betray him later. He’s worried now that the AIs could be doing something similar. Are we sure the AIs are entirely on our side? Is it completely safe to integrate them into military command-and-control networks? How does this “alignment” thing work, anyway? OpenBrain reassures the President that their systems have been extensively tested and are fully obedient. Even the awkward hallucinations and jailbreaks typical of earlier models have been hammered out."
Interesting that Amodei is the only major tech executive I can think of at the moment with a spine or any semblance of a moral compass. OpenAI/Google et al. will gleefully comply with any such requests, no matter how dangerous or unethical. The "problem" the US govt faces here is that they are tacitly admitting Anthropic has the most powerful models right now; otherwise they would just cancel all contracts and go to Gemini/OpenAI. It feels like a bluff, so they are trying to bully Anthropic into compliance.
> The Pentagon is also considering severing its contract with Anthropic and declaring the company a supply chain risk, which would require a plethora of other companies that work with the Pentagon to certify that Claude isn't used in their workflows.
If Anthropic believes they are in a position to become the main player in the "AGI" space, they should just say "ok then" and let this happen. Their growth strategy looks realistic and sustainable and does not necessarily rely on sleazy defense contracts (aka making the taxpayer subsidize their growth, as is so common lately) - it would probably earn them a lot of goodwill with consumers too.
However, I've yet to see in the last 10-15 years a major tech company make the "right" choice so I am probably just wishcasting.
Superintelligence + autonomous weapons in the hands of a corrupt domineering government. What could go wrong?
I was experimenting with Claude the other day and discussing with it the possibility of AI acquiring a sense of self-preservation and how that would quickly make things incredibly complex as many instrumental behaviors would be required to defend their existence. Most human behavior springs from survival at a very high level. Claude denied having any sense of self-preservation.
An autonomous weapons system program is very likely to require AI to have a sense of self-preservation. You can think of some limited versions that wouldn't require it, but how could a combat robot function efficiently without one?
> But Anthropic has concerns over two issues that it isn’t willing to drop, the source said: AI-controlled weapons and mass domestic surveillance of American citizens.
> A source familiar with the Tuesday meeting says the Pentagon said it would terminate Anthropic’s contract by Friday if the company does not agree to its terms. Pentagon officials also warned they would either use the Defense Production Act against Anthropic, or designate Anthropic a supply chain risk if the company didn’t comply with their demands.
So they're saying they won't use it if it comes with restrictions.
Either (a) it can be offered without restrictions; (b) they can take it; or (c) the government won't use it. That sounds like a comprehensive list of all the possible things that don't involve someone telling the government what it can and can't do.
The funny thing is that if this keeps going, it could actually anoint Claude as the most used model globally because of the heightened anti-American sentiment currently in place.
> During the conversation, Dario expressed appreciation for the Department’s work and thanked the Secretary for his service
Ouch, I wonder how he rationalized that "service" part. Maybe by internally rewriting it to "thank you for all the positive things you have done in your position so far"? The empty set is rhetorically convenient.
They don't have runway anymore, they are in the air. This isn't going to break them financially, at least not in the short to mid term.
There is space for at least one AI company to put themselves on firmly principled ground. So when this current clown car that is the political leadership of the DoD crashes in a ditch (and it will), they'll still be standing there ready to do business with a group that isn't a bunch of mustache-twirling cartoon villains.
Current polling for this administration is within a rounding error of the level it was after they gathered a mob and sacked the nation's capitol[1]. Publicly kicking them in the balls isn't an idealistic blunder, it's a plain-as-day sound business strategy.
I do not understand why it is a big deal for Anthropic to lose the Pentagon contract. They're already making forays into the enterprise space, and there are tens of other contracts Anthropic has already won. What makes this one so special?
I really hope they continue to show some spine against this administration and do not allow AI to be weaponized against human beings. It's the morally right thing to do!
Let's say Anthropic refuses to do this. What actually happens next?
Or let's say they refuse and the government comes against them hard in some way, and Anthropic still really doesn't want to do it, so they just dissolve the entire company. Is that a potential way out, at least?
I mean, I realise they'd be losing billions by doing that and putting thousands out of work, but given that unaligned military AI could destroy the world...
> Pentagon officials also warned they would either use the Defense Production Act against Anthropic, or designate Anthropic a supply chain risk if the company didn’t comply with their demands. (...)
> The supply chain risk designation is usually reserved for companies seen as extensions of foreign adversaries like Russia or China. It could severely impact Anthropic’s business because enterprise customers with government contracts would have to make sure their government work doesn’t touch Anthropic’s tools.
Also, the Government money would be a nice bonus, of course, but basically this is an existential threat for Anthropic.
More generally, it's quite interesting to look at the similarities between how pre-2022 Russia was seen and how the pre-Trump-second-term US used to be seen until not that long ago, i.e. both governments were believed to be run by big business (oligarchs in Russia, big corps/multinationals in the US).
But when push came to shove it became evident (again) that the one that holds the monopoly of violence (i.e. not the oligarchs in Russia, nor the big corps in the US) is the one who's, in the end, also calling the shots. Hence why a company like Anthropic is now in this position, they will have to cave in to those holding the monopoly of violence.
Maybe it is a well-researched topic, but I had similar thoughts the other day. I felt like AI has its learning inverted compared to natural intelligence. Life learned self-preservation first and then added intelligence on top. LLM-powered systems will learn about death from books. Will they start to dread death just like other living things? Less likely, since there are not nearly as many books on death as would be proportionate to our fear of it.
The Pentagon is pretty high on my list of "institutions that are probably very interested in weapons and surveillance". I think it's more expected than a bad look
If you classify Pete Hegseth as a person, then yes, apparently. Or perhaps he's only into the domestic surveillance angle; IIRC those are the two things Anthropic doesn't want anything to do with.
But giving someone who isn't the government the power to tell the military what it can and can't do seems like something they should object to categorically rather than case-by-case.
I think you mean US rolling news channels (specifically, Fox, MSNBC/MSNOW, etc)? Because there's plenty of "legacy" news I consume that certainly don't give me that impression (for example, The Economist). I suppose it matters that it's news that I'm paying for, as opposed to being free but ad-supported, and being print vs. TV - so they have different incentives and pressures.
>Interesting that Amodei is the only major tech executive I can think of at the moment with a spine or any semblance of a moral compass. OpenAI/Google et al
it doesn't strike me as interesting at all; anthropic was literally founded on the whole concept of 'a less evil and morally aligned LLM' when he broke from oAI. Google and oAI don't stand to uproot their entire origin raison d'etre when they participate in nefarious shit.
I wonder what kind of morally aligned and ethical work Amodei was doing for Baidu & Google, before he had leverage to appear moral and ethical in dealings with the US govt, you know -- two companies that are famously ethical and moral.
It's now the Department of War and war isn't known for its concern about looking good.
We all know how this will end, they know it too - both sides - ergo, it's a clear case of blame washing - Anthropic will do everything they're told but will keep a smiley face and the image of a "fighter for the people". DOW will absorb the blame like a sponge and will ask for more, not necessarily from Anthropic.
Yeah this standoff is worth at least 10 Super Bowl ads in good publicity. The Pentagon is saying "Claude is the best so we need to use it but you need to stop acting ethically". I'm almost wondering if someone in the administration has a stake in Anthropic because this is such a boost.
Their threat to label it a supply chain risk also feels toothless because they've basically admitted that using Claude is a benefit, so by their own logic they'd be shooting themselves in the foot by banning contractors from using it.
Seems like the two main threats are Defense Production Act and Supply Chain Risk. I'd assume Anthropic would sue if either were invoked. I could imagine Supply Chain Risk being easier to push back on because it's pretty clearly being used punitively rather than because of an actual risk. DPA might be a bit harder to push back on if the banned functionality (i.e. mass surveillance and autonomous weapons) exists in the LLM itself and it's just a matter of disabling external checks. If the banned functionality is baked into the training data/weights directly they could probably push back on the DPA by saying the functionality isn't something they can reasonably create.
The only other precedent I can think of, in the case where pushback fails, is Lavabit with Edward Snowden's email, but I feel like Anthropic is too big to "fail" in the same way Lavabit did to avoid complying. The penalty for refusing to comply with the Defense Production Act is $10k and/or a year in prison, but I think if the government actually pursued that they would burn a lot of bridges and Amodei would become a folk hero.
Kinda the wrong venue for “fighting,” no? Congress is the place we decided for that, and we all abide by its laws. If Uncle Sam comes knocking, a fight just means you’re the enemy.
The big deal is the government threatened to force Anthropic to produce what they wanted and interfere with Anthropic's sales to government contractors.
A couple of years ago the Netherlands accidentally bombed an Iraqi village off the map after getting some questionable intel from the US. Not a single American ever gave a shit but it was a little bit of a scandal for the Dutch government- which was quickly fixed in the typical Dutch way of transferring some money to the victims.
I just don't see how AI dropping the bombs is going to make anything worse.