722 points by cwwc 5 days ago | 683 comments on HN
Neutral Editorial · v3.7 · 2026-02-26 04:42:32
Summary
Free Expression & Information Access: Acknowledges
TIME's exclusive reporting on Anthropic's AI safety policy change represents a journalistic exercise of free expression and information dissemination on matters of public interest. The reporting demonstrates a commitment to editorial scrutiny of corporate AI practices, though the site's underlying infrastructure includes behavioral-advertising tracking and consent management frameworks that constrain reader privacy. Engagement with UDHR principles is concentrated in Article 19 (free expression), with more limited engagement with Articles 12 (privacy) and 25 (public welfare).
What an interesting week to drop the safety pledge.
This is how all of these companies work. They'll follow some ethical code or register as a PBC until that undermines profits.
These companies are clearly aiming at cheapening the value of white collar labor. Ask yourself: will they steward us into that era ethically? Or will they race to transfer wealth from American workers to their respective shareholders?
This headline unfortunately offers more smoke than light. This article has nothing to do with the current tête-à-tête with the Pentagon. It is discussing one specific change to Anthropic's "Responsible Scaling Policy" that the company publicly released today as version "3.0".
TBH I am sad that Anthropic is changing its stance, but in the current world, if you care about LLM safety at all, I feel that this is the right choice — there are too many model providers, and they probably don't consider safety as high a priority as Anthropic does. (Yes, that might change; they can get pressured by the govt, yada yada, but they literally created their own company because of AI safety, so I do think they actually care for now.)
If we need safety, we need Anthropic to be not too far behind (at least for now, before Anthropic possibly becomes evil), and that might mean releasing models that are safer and more steerable than others (even if, unfortunately, they are not 100% up to Anthropic's goals).
Dogmatism, while great, has its time and place, and with a thousand bad actors in the LLM space, pragmatism wins out.
I don't blame Anthropic here. The government literally threatened their existence publicly. Either they agreed, or their business would be nationalized.
I guess this is Anthropic's DRM moment. (Mozilla resisted allowing Firefox to play DRM-limited media for a long time, until it finally had to give in to stay relevant.)
I don't know enough to evaluate this or other decisions. I'm just glad someone is trying to care, because the default in today's world is to aggressively reject the larger picture in favor of more more more. I don't know how effective Anthropic's attempts to maintain some level of responsibility can be, but they've at least convinced me that they're trying. In the same way that OpenAI, for example, have largely convinced me that they're not. (Neither of those evaluations is absolute; OpenAI could be much worse than it is.)
Developments like this make me less interested in building a "successful" tech company.
It increasingly feels like operating at that scale can require compromises I’m not comfortable making. Maybe that’s a personal limitation—but it’s one I’m choosing to keep.
I’d genuinely love to hear examples of tech companies that have scaled without losing their ethical footing. I could use the inspiration.
Only well-written legislation, backed by effective enforcement and severe, personal criminal penalties, will prevent large corporate entities from behaving badly.
Pledges are a cynical marketing strategy aimed at fomenting a base politics that works to prevent such a regulatory regime.
> “We felt that it wouldn't actually help anyone for us to stop training AI models,”
How magnanimous! They are only thinking of others, you see. They are rejecting their safety pledge for you.
> “We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”
Oops, said the quiet part out loud that it’s all about money. “I mean, if all of our competitors are kicking puppies in the face, it doesn’t make sense for us to not do it too. Maybe we’ll also kick kittens while we’re at it”.
For all of you who thought Anthropic were “the good guys”, I hope this serves as a wake up call that they were always all the same. None of them care about you, they only care about winning.
I used to work at Anthropic. I fully believe that the folks mentioned in the article, like Jared Kaplan, are well-intentioned and concerned about the relationship between safety research and frontier capabilities – not purely profit.
That said, I'm not thrilled about this. I joined Anthropic with the impression that the responsible scaling policy was a binding pre-commitment for exactly this scenario: they wouldn't set aside building adequate safeguards for training and deployment, regardless of the pressures.
This pledge was one of many signals that Anthropic was the "least likely to do something horrible" of the big labs, and that's why I joined. Over time, the signal of those values has weakened; they've sacrificed a lot to get and keep a seat at the table.
Decision points where principles conflict with keeping their position at the frontier seem like they'll only become more common. I hope they're willing to risk losing their seat at the table to be guided by values.
1. AI is military/surveillance technology in essence, like many other information technologies,
2. Any guarantee given by AI companies is void since it can be changed in a day,
3. Tech companies have no real control over how their technology will be used,
4. AI companies may seem overvalued with low profits if you think of AI as a civilian technology. But their investors probably see them as part of the defense (war) industry.
Who could've seen that one coming? Honestly, if you want to do profit-maximising AI research at the cost of humanity, go for it. It's all this fake preaching about how they want to save the world from all the other bad AI companies that really irks me.
I think the US gov't is basically forcing them, and while it sounds nice to be all safe… if we were involved in WW3, would an organization like Anthropic really not support the Western side?
The race is on for military supremacy in an AI world. The safest thing to do is to race ahead, lest your geopolitical adversary lead the way. This is similar to the nuclear arms race: in the ideal universe, nobody does it, but in the real world, per game theory, you do not have a choice (a sketch of the payoff structure follows below).
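A minimal sketch of the game-theoretic structure behind that "no choice" claim. The payoffs are assumptions, arbitrary numbers chosen only so that racing is the dominant strategy (higher is better for each side):

                     Rival restrains   Rival races
        Restrain         (3, 3)          (0, 4)
        Race             (4, 0)          (1, 1)

Whatever the rival does, racing pays more (4 > 3 against restraint, 1 > 0 against racing), so both sides race and land at (1, 1), even though mutual restraint at (3, 3) would leave both better off.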
Any pledges/values/principles that are abandoned as soon as it becomes difficult to keep them, are just marketing. This is just the next item on the list.
> “We felt that it wouldn't actually help anyone for us to stop training AI models,”
Is the implication here that Anthropic admits they already can't meet their own risk and safety guidelines? Why else would they have to stop training models?
Google adopted "Don't be evil" shortly after founding and held onto it for about 15 years before Alphabet quietly dropped it in 2015. (Google the subsidiary technically kept it until 2018).
Anthropic's Responsible Scaling Policy, the hard commitment to never train a model unless safety measures were guaranteed adequate in advance, lasted roughly 2.5 years (Sept 2023 to Feb 2026).
The half-life of idealism in AI is compressing fast. Google at least had the excuse of gradualism over a decade and a half.
> Then something went wrong, and no one knew how to stop it,
This is the problem with every AI safety scenario like this. It has a level of detachment from reality that is frankly stark.
If linemen stop showing up to work for a week, the power goes out. The US has shown that people with "high-powered" rifles can shut down the grid.
We are far, far away from the sort of world where turning AI off is a problem. There isn't going to be a HAL- or Terminator-style situation while the world still runs on "I, Pencil".
A lot of what safety amounts to is politics (national, not internal; e.g., whether Taiwan is a country). And a lot more of it is cultural.
I'm genuinely curious why they are so holy to you, when to me I see just another tech company trying to make cash.
Edit: Reading some of the linked articles, I can see that Anthropic's CEO is refusing to allow their product to be used for warfare (killing humans), which is probably a good thing and a reason to support them.
> This article has nothing to do with the current tête-à-tête with the Pentagon.
The article itself, yes; but we cannot be sure about its subject, the policy change. We definitely cannot claim that the two are unrelated. We don't know. It's possible that they have nothing to do with each other. It's also possible that Anthropic wanted to preempt worse requests and this was a preventive measure.
> If we need safety, we need Anthropic to be not too far behind (at least for now, before Anthropic possibly becomes evil)
If this doesn't already raise alarm bells for you that this is the plan, I don't think it's going to be as easy as you expect to tell that they're becoming evil before it's too late.
Once they are a dominant market leader they will go back to asking the government to regulate based on policy suggestions from non-profits they also fund.
The world would be so much nicer if there were just fewer pragmatists shitting up the place for everyone. We might actually handle half our externalities.
Maybe this is a weird arena to state the obvious, but you don't need to build a multi-billion-dollar VC-backed or public company. Build a smaller revenue-generating company without outside funding, and it's up to you.
No, they either agreed or fought the government. You’re allowed to fight governments. Mahatma Gandhi and Reverend King Jr did it, and they wrote about how to do it. You might lose sometimes, but my god, you can at least fight.
It's not like that happened out of the blue. (Which could've also been the case in today's day and age.) Anthropic shouldn't have gotten involved in government contracts to begin with.
They inserted themselves into the supply chain, and then the government told them that they'll be classified as a supply chain risk unless they get unfettered access to the tech. They knew what they were getting into, but didn't want the competitors to get their slice of the pie.
The government didn't pursue them, Anthropic actively pursued government and defense work.
Talk about selling out. Dario's starting to feel more and more like a swindler by the day.
> The meeting between Hegseth and Amodei was confirmed by a defense official who was not authorized to comment publicly and spoke on condition of anonymity.
It's not just AI, replace "safe" with "open" and you will find a close match with many companies. I guess the difference is that after the initial phase, we are continuously being gaslighted by companies calling things "open" when they are most definitely not.
If you want to be able to retain ethics, among other things, make sure not to take the company public. Once public, you're basically legally required to drop ethics in favor of profits.
Also don’t take investment from anyone who isn’t fully aligned ethically. Be skeptical of promises from people you don’t personally know extremely well.
That may limit you to slower growth, or cap your growth (fine if you want to run a company and take home $2M/yr from it; not fine if you want to be acquired for $100M and retire). It may also limit you to taking out loans to fund growth you can't bootstrap to, which is a different kind of risky.
> I joined Anthropic with the impression that the responsible scaling policy was a binding pre-commitment for exactly this scenario
Pledges are generally non-binding (you can pledge to do no evil and still do it), but they fulfill an important function as a signal: actively removing your public pledge to do "no evil", when you could have acted as you wished anyway, changes the signal you're sending to the market. That's the most worrying part IMO.
> Oops, said the quiet part out loud that it’s all about money. “I mean, if all of our competitors are kicking puppies in the face, it doesn’t make sense for us to not do it too. Maybe we’ll also kick kittens while we’re at it”.
I mean, yes, that is actually how the world works. That is why we need safety, environmental, and other anti-fraud regulations. Because without them, competition ensures that every successful company will defraud, hurt, and harm. Those who won't will be taken over by those who do.
Indeed, Anthropic can't afford to be the one to impose any kind of sense on the market; that's supposed to be the government's job, by creating policy and regulations and installing watchdogs to monitor things.
But lucky for the AI companies, most of them are based in a place that only has a government on paper, and everyone has forgotten where that paper is.
Article 19 protects freedom of opinion and expression. The exclusive reporting on Anthropic's safety pledge represents journalism exercising editorial judgment to report on matters of public interest regarding AI policy. The headline and exclusive framing suggest TIME's commitment to investigative reporting and information dissemination.
FW Ratio: 50%
Observable Facts
Article is published as an exclusive TIME report, indicating editorial selection and verification of reported information.
Content focuses on corporate AI safety policy changes, a matter of significant public interest regarding AI governance.
Page metadata includes navigation and content discovery features supporting access to TIME's broader news coverage.
Inferences
The exclusive reporting format demonstrates TIME's exercise of editorial judgment in deciding what information merits public attention.
Reporting on Anthropic's safety pledge change reflects journalistic scrutiny of corporate practices affecting public interests.
Professional journalism standards embedded in exclusive reporting support informed public discourse on AI policy.
Article 25 addresses the right to an adequate standard of living, including health and welfare. The exclusive reporting on AI safety policies has indirect relevance to public health and safety considerations related to AI systems, but does not directly engage welfare or living standards.
FW Ratio: 50%
Observable Facts
Reporting addresses Anthropic's safety commitments, which relate to AI system safety and public protection.
Content is distributed via online platform enabling broad potential audience reach.
Inferences
Exclusive reporting on AI safety policies indirectly relates to public welfare by informing citizens about corporate practices affecting societal safety.
Potential paywalling may restrict access to information relevant to understanding AI safety governance.
Article 12 protects privacy. The article does not explicitly address privacy rights, though it reports on corporate AI safety practices. The mention of exclusive reporting suggests selective disclosure.
FW Ratio: 57%
Observable Facts
Page code includes TCF (Transparency and Consent Framework) API initialization for cookie consent management.
Page code includes GPP (Global Privacy Platform) stub implementation for privacy signal handling.
Page code contains iasPET tracking pixel configuration with publisher ID 931641 for behavioral advertising purposes.
Content is tagged with 'data-purposes="behavioral_advertising"', indicating monetization through behavioral tracking (a sketch of what such page code conventionally looks like follows this list).
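The page's actual source is not reproduced in this report. As a hedged illustration, stubs and configuration like those described above conventionally look something like the following JavaScript; the queueing details (the `.a` and `.queue` arrays) are assumptions based on common integration patterns for these frameworks, and only the publisher ID 931641 comes from the observed facts.

    // Illustrative reconstruction only; not the page's actual source.

    // TCF v2 stub: queues consent commands until the real CMP script loads.
    window.__tcfapi = window.__tcfapi || function () {
      (window.__tcfapi.a = window.__tcfapi.a || []).push(arguments);
    };

    // GPP stub: the same queueing pattern for Global Privacy Platform signals.
    window.__gpp = window.__gpp || function () {
      (window.__gpp.queue = window.__gpp.queue || []).push(arguments);
    };

    // IAS Publisher Edge Tag (iasPET) setup with the publisher ID observed above.
    window.__iasPET = window.__iasPET || {};
    window.__iasPET.queue = window.__iasPET.queue || [];
    window.__iasPET.pubId = '931641';

Consent-management and ad-verification vendors typically replace these stubs at load time and drain the queued calls, which is why their mere presence in page code is read here as evidence of active tracking integration.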
Inferences
The presence of consent management frameworks suggests acknowledgment of privacy obligations, but active behavioral tracking contradicts user privacy protection.
The iasPET tracking infrastructure enables profiling of readers, constraining their right to privacy during content consumption.
Exclusive reporting format may limit reader autonomy in accessing information about AI safety policies.
The Preamble's framing of human dignity and the inherent rights of all members of the human family is not directly engaged by this exclusive tech-policy reporting.
TIME's digital publishing platform enables distribution of news and opinion; exclusive reporting model represents editorial gatekeeping that mediates information access while supporting professional journalism standards.
Online publishing platform provides accessible information about corporate practices affecting public safety and welfare; however, potential paywall restrictions may limit universal access to this reporting.
Site infrastructure includes TCF consent management, GPP privacy framework, and IAS behavioral advertising tracking (iasPET), demonstrating active behavioral tracking and data collection practices that constrain privacy.
build 1ad9551+j7zs · deployed 2026-03-02 09:09 UTC · evaluated 2026-03-02 10:41:39 UTC
Support HN HRCB
Each evaluation uses real API credits. HN HRCB runs on donations — no ads, no paywalls.
If you find it useful, please consider helping keep it running.