87 points by ej88 5 days ago | 60 comments on HN
Moderate positive · Contested
Editorial · v3.7 · 2026-02-26 04:45:46
Summary: Scientific Integrity & Research Transparency Advocates
This technical blog post documents METR's redesign of a developer productivity study following discovery of significant methodological flaws. The content exemplifies scientific transparency and intellectual honesty by openly acknowledging selection biases, measurement unreliability, and data limitations that undermine conclusions—prioritizing epistemic integrity over organizational credibility. The article supports human rights frameworks through open data access, transparent reporting, and commitment to building reliable evidence infrastructure for informed governance of AI systems.
Really interesting updates to their 2025 experiment.
Repeat devs from the original experiment went from a 0–40% slowdown to a −10% to 40% speedup, and METR estimates this as a 'lower bound'
more devs saying they don't even want to do 50% of their work without AI, even for $50/hr
30–50% of devs decided not to submit certain tasks without AI, missing the tasks with the highest uplift
it also seems like there is a skill gap: repeat devs from the first study are more productive with AI tools than newly recruited ones with variable experience
overall it seems like devs' strong preference for AI is actually hurting METR's ability to judge the speedup, due to a refusal to do tasks without it. imo this is indirectly quite supportive of AI coding's productivity claims.
I'm a bit perplexed by the developer selection effects.
I get that developers want to use AI. But are they also claiming there's not still a no/low-AI population of developers? Or that their means of selection don't find these developers?
Are they worried that by splitting devs into groups of AI experience they might be measuring some confounder that causes people to choose AI / not AI in their careers?
This is very interesting because I see a lot of AI detractors point to the original study as proof that AI is overhyped and nothing to worry about. In this new study the findings are essentially reversed (20% slowdown to 20% speedup).
Unless this measures the entire SDLC longitudinally (say, over a year) I'm not interested. I too can tell Claude Code to do things all day every day, but unless we have data on the defect rate it doesn't matter at all.
It's kind of funny that METR is known primarily for both the most bearish study on AI progress (the original 20% slowdown one), and the most bullish one on AI progress (the long-task horizon study showing exponential increase in duration of tasks AI models can accomplish with respect to date of release).
In either case, it seems people ended up bolstering their preexisting views on AI with whichever study most affirmed them (for the former, that AI coding models didn't actually help and created a mirage of productivity that required more work to fix than it was worth; for the latter, that AI models are improving at an exponential rate and will invariably eclipse SWEs in all tasks within a deterministic amount of time).
I think the truth is somewhere in the middle. Just anecdotally, we've seen multi-million-dollar fortunes minted by small teams building with 90% AI-assisted coding. Anthropic claims they solely use agents to code and don't modify any code manually.
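(To make the horizon study's "exponential with respect to release date" claim concrete: a minimal sketch of the fit, using invented data points rather than METR's actual measurements. Exponential growth in task horizon is a straight line in log space, so the slope gives a doubling time.)

    import math

    # Invented (release_year, task_horizon_minutes) points -- illustration
    # only, not METR's actual data.
    data = [(2023.0, 4), (2023.5, 8), (2024.0, 18), (2024.5, 35), (2025.0, 75)]

    # Least-squares fit of log(horizon) = a + b * year; log(2)/b is the
    # doubling time in years.
    xs = [yr for yr, _ in data]
    ys = [math.log(h) for _, h in data]
    n = len(data)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    print(f"doubling time: {math.log(2) / b:.2f} years")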
"I don't want to do this without AI" sounds like we're already well into the brain atrophy stage of this. Now what? (I'd think about it myself but....)
> When surveyed, 30% to 50% of developers told us that they were choosing not to submit some tasks because they did not want to do them without AI. This implies we are systematically missing tasks which have high expected uplift from AI.
In fact, one of the developers in the original study later revealed on Twitter that he had already done exactly that during the study, i.e. filtered out tasks he preferred not to do without AI: https://xcancel.com/ruben_bloom/status/1943536052037390531
While this was only one developer (that we know of), given the N was 16 and he seems to have been one of the more AI-experienced devs, this could have had a non-trivial effect on the results.
The original study gets a lot of air-time from AI naysayers, let's see how much this follow-up gets ;-)
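A back-of-the-envelope way to see why this filtering pushes the measured number down, i.e. why METR calls its estimate a lower bound. Minimal simulation with a made-up uplift distribution, not their data:

    import random

    random.seed(0)

    # Hypothetical per-task speedup factors from AI, mean ~1.2x -- a made-up
    # lognormal distribution, purely illustrative.
    uplift = [random.lognormvariate(0.18, 0.35) for _ in range(10_000)]

    # Suppose devs decline to submit the 40% of tasks they most want AI for,
    # i.e. the highest-uplift tasks never enter the study.
    submitted = sorted(uplift)[: int(0.6 * len(uplift))]

    mean = lambda xs: sum(xs) / len(xs)
    print(f"true mean speedup:     {mean(uplift):.2f}x")
    print(f"measured mean speedup: {mean(submitted):.2f}x  (biased low)")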
> The subjects are using ChatGPT 2.5 and copy-pasting code.
The reason AI hype seems to be so bipolar is that "AI" isn't one thing. Hundreds of models, dozens of tools. And to get something done well, a seasoned engineer needs to master half a dozen at a time.
What worries me is that LLMs are becoming a crutch for overworked engineers. But instead of reducing the workload, they have also raised expectations and, with them, brought more aggressive deadlines, making it all worse overall.
The study was designed to have devs who are comfortable with AI perform 50% of tasks with AI and 50% without. So the problem is the population of "Developers who use AI regularly but are willing to do tasks without AI" is shrinking.
>> Are they worried that by splitting devs into groups of AI experience they might be measuring some confounder that causes people to choose AI / not AI in their careers?
The developer sample size was small (16 people in the original study) while the task sample size was larger (~250 tasks). I think the worry is that variance in developer productivity would totally wash out any signal.
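A minimal sketch with assumed numbers of how badly a between-group comparison would fare at n=16: the spread of the estimate ends up larger than the effect you're trying to detect, which is why the study randomizes tasks within each developer instead.

    import random
    import statistics

    random.seed(1)

    TRUE_EFFECT = 0.20  # hypothetical 20% speedup from AI (log-time units)
    DEV_SD = 0.50       # assumed spread in baseline developer productivity

    def between_group_estimate(n_devs=16):
        # Naive design: split the 16 devs into an AI group and a no-AI group
        # and compare group means -- developer-level variance goes straight
        # into the estimate.
        ai = [TRUE_EFFECT + random.gauss(0, DEV_SD) for _ in range(n_devs // 2)]
        no_ai = [random.gauss(0, DEV_SD) for _ in range(n_devs // 2)]
        return statistics.mean(ai) - statistics.mean(no_ai)

    estimates = [between_group_estimate() for _ in range(2000)]
    print(f"sd of estimate across replications: {statistics.stdev(estimates):.2f}")
    print(f"true effect:                        {TRUE_EFFECT:.2f}")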
Developers are refusing to complete the survey or selecting themselves out because they (apparently) don’t want to complete the non-AI task.
They also saw selection effects from a large reduction in the pay for the study (which is an unfortunate confounder here): $150/hr -> $50/hr.
They guess this makes their estimates lower bounds, but the selection effect is complicated (which they acknowledge).
Overall this is a hard problem for them in the current state. It will be challenging to produce convincing year over year analysis under these conditions.
I think their old findings were hard to treat as gospel just due to the kind of comparison + the sample, but this new result is probably much noisier.
It’s hard to make reliable, directional assumptions about the kind of self-selection and refusal they saw, even without worrying about the reward dropping 66%.
At this point the AI labs would pretty much have to form an illegal price fixing cartel in order to jack the prices up, they've been competing to drive down prices for so long.
They'd have to get the Chinese AI labs to go along with that price fixing too.
AI detractors loved that previous study so much. It seems to have been brought up in the majority of conversations about AI productivity over the past six months.
(Notable to me was how few other studies they cited, which I think is because studies showing AI productivity loss are quite uncommon.)
The finding of the first study was that people cannot judge their own performance with these tools. So I don't think the scarcity of individuals willing to work without them is indicative of productivity improvements. I think it's indicative of the tools being enjoyable to use.
"I avoid issues like AI can finish things in just 2 hours, but I have to spend 20 hours. I will feel so painful if the task is decided as AI-disallowed."
Which really doesn't sound like the results they got, where developers may get up to twice as productive in the best scenario.
There's surely something scary there. And the lack of people ambivalent about AI isn't a sure indication that it's as well accepted as they think; it can just as easily be caused by polarization.
> Anthropic claims they solely use agents to code and don't modify any code manually.
Have you used CC? It shows. They did not make their fortune off this, and it’s at least lost me a customer because of how sloppy it is. The model is good, and it’s why they have to gate access to it. I’d much rather use a different harness.
I do think you're on to something though. As societal wealth further concentrates among the few, we're going to get more and more slop for the rest of us, because we have no money (relatively speaking). Agentic coding is here to stay because we as a society are fed more and more slop. It's already rampant; this is just automating it.
I'm pretty sure that this was exactly the response to the first generation of devs who insisted on coding with a terminal instead of submitting punch cards like "real programmers".
> 3. Regarding me specifically, I work on the LessWrong codebase which is technically open-source. I feel like calling myself an "open-source developer" has the wrong connotations, and makes it more sound like I contribute to a highly-used Python library or something as an upper-tier developer which I'm not
That’s very interesting! This kinda matches what I see at work:
- low performers love it. it really does make them output more (which includes bugs, etc. it’s causing some contention that’s yet to be resolved)
- some high performers love it. these were guys who are more into greenfield stuff and ok with 90% good. very smart, but just not interested in anything outside of going fast
- everyone else seems to be finding use out of it, but reviews are painful
fwiw i think the interesting part about the original study wasn't so much the slowdown part, but the discrepancy between perceived and measured speedup/slowdown (which is the part i used to bring up frequently when talking to other devs)
I don't want to do work around the house without a fully charged battery for my Ryobi either. I don't want to go on a grocery run without my car. Using tools is not brain atrophy.
I really am quite in awe of Claude Code recently, so definitely not a naysayer, but this is a really important point. It's so easy to create code, but am I shipping that much more to prod than I used to? A bit.
Obviously this highly depends on your company and your setup and risk tolerance and what not.
As one of the naysayers who talked a lot about the original study, I enthusiastically endorse any attempt at all to actually measure AI productivity. An increase from 20% slowdown to 20% speedup over the past year seems broadly consistent with my understanding of how things have gone. I think I remain classified as a "naysayer", though, because the "booster" case has gone from "I'm multiple times more productive" to "I never have to look at code my AI agents just handle everything" over the same period.
Keep in mind that they make a large profit on inference. Not enough to make up for losses on training, but that won't be a problem for Chinese labs, which will just steal their weights.
There are some people participating in the study who will fire & forget instructions to Claude/Codex running in parallel worktrees, but would really struggle if they were required to work on their project without AI assistance.
So while some study participants probably are seeing an actual speedup because of the discipline with which they manage their codebase's structure & documentation, other study participants are actually getting worse at non-AI coding.
...and METR's study can't tell which is which because METR's study isn't using any sort of codebase quality metrics for grounding.
For the thousandth time - they. make. a. profit. Inference margin is over 60%, today.
They are spending that money training ever-larger models, so they are cashflow negative, but under almost any sane GAAP treatment that does not allow one to write down all R&D upfront (capital costs of model training), they are profitable.
Should this matter to you? Only if you're making financial decisions that assume that somehow one day the "jig will be up" - i.e. please don't short these stocks when they float, or at least do so very judiciously.
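A toy illustration of the expensed-vs-capitalized point, with invented numbers (not any lab's actual financials):

    # Toy annual P&L. Shows how "cashflow negative" and "profitable" can
    # both hold if training costs are capitalized rather than expensed
    # upfront. All figures are made up.
    inference_revenue = 100.0
    inference_cogs = 40.0  # ~60% gross margin on serving, per the claim above
    training_spend = 90.0  # cash spent this year training the next model
    amort_years = 3        # assumed useful life of the model for amortization

    cash_flow = inference_revenue - inference_cogs - training_spend
    accounting_profit = (inference_revenue - inference_cogs
                         - training_spend / amort_years)

    print(f"cash flow:         {cash_flow:+.0f}")          # -30: burning cash
    print(f"accounting profit: {accounting_profit:+.0f}")  # +30: profitable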
Content exemplifies freedom of expression through transparent reporting of research findings, methodological critiques, and honest acknowledgment of data limitations. Publication of contradictory or unfavorable results demonstrates editorial commitment to disclosure and intellectual integrity.
FW Ratio: 57%
Observable Facts
The article explicitly details flaws in study design, selection biases, and measurement unreliability.
Raw datasets from both the early and current studies are made publicly available for independent review.
Content exemplifies effort to establish social order enabling human rights through rigorous research on AI capabilities. Study attempts to create reliable evidence base for informed decision-making about AI governance, directly supporting Article 28's right to social and international order enabling rights.
FW Ratio: 57%
Observable Facts
Study explicitly measures 'how well [AI systems] can perform complex tasks autonomously' to inform risk assessment.
Organization's stated mission is to 'scientifically measure whether and when AI systems might threaten catastrophic harm.'
Publication transparently identifies data limitations to prevent misinformed policy conclusions.
Content demonstrates engagement with cultural and scientific advancement through research methodology designed to measure AI capabilities and impact. Commitment to open science and transparent reporting supports Article 27's right to share in scientific progress.
FW Ratio: 60%
Observable Facts
Organization publishes research findings and datasets publicly without commercial restrictions.
Study design involves collaboration with developer communities in co-creating knowledge about AI impacts.
Article transparently documents methodological evolution and scientific learning.
Inferences
Open publication and data sharing enable broader participation in scientific discovery and understanding.
Iterative methodology and transparent error correction reflect commitment to scientific integrity and communal knowledge advancement.
Content addresses working conditions through study design: researchers paid developers for participation ($50-150/hour) and allowed task selection. However, the article identifies how lower pay rates contributed to selection bias, implying awareness of work compensation's importance.
FW Ratio: 60%
Observable Facts
The study explicitly paid developers $50/hour in the current iteration, down from $150/hour in the original study.
Article identifies that reduced pay rate ($50/hour down from $150/hour) contributed to recruitment problems.
Developers were allowed to work on their own projects rather than assigned tasks, suggesting some agency in work selection.
Inferences
Discussion of how payment levels affect developer participation acknowledges that work compensation relates to freedom to choose employment.
The structure allowing developers to select tasks recognizes some dimension of work autonomy, relevant to Article 23.
Content implicitly affirms values of scientific integrity, transparency about methodological flaws, and commitment to understanding AI's societal impacts—aligned with Preamble's emphasis on reason and justice as foundations for human rights.
FW Ratio: 60%
Observable Facts
The article transparently documents methodological failures and selection biases in the study design.
Authors acknowledge limitations prevent reliable conclusions about AI productivity effects.
The publication prioritizes epistemic honesty over defending initial findings.
Inferences
The transparent acknowledgment of research limitations suggests commitment to principles of accountability central to rights-based governance.
Publishing corrections and redesigns supports the Preamble's vision of human dignity through informed discourse.
Content indirectly relates to education through focus on developer technical skill and knowledge-building. Study methodology emphasizes understanding and documenting AI capabilities, relevant to educational advancement.
FW Ratio: 60%
Observable Facts
Content is published openly and made accessible to general audience without paywalls.
Datasets are made publicly available for educational and research purposes.
Study recruits experienced developers (median 10 years experience), implying engagement with skilled knowledge communities.
Inferences
Public data availability supports educational access to research materials and methodologies.
Focus on developer communities implies engagement with technical knowledge and skill development.
Website structure enables open access to research data and full publication record; datasets are publicly available, supporting reader access to underlying evidence for independent verification.
Organization operates as nonprofit research entity; open-access publishing model supports collective knowledge infrastructure. Transparent documentation of limitations enables informed public discourse.
Organization structure as nonprofit research entity and open data access support participation in scientific advancement; website provides research outputs and datasets freely.
Study structure involves researcher-developer payment relationships; article documents how compensation levels affect study participation, suggesting structural engagement with labor conditions.
Website includes 'Research' and 'Notes' sections suggesting knowledge dissemination; full datasets publicly available supports educational access, though article does not explicitly address education rights.
Website includes a 'Donate' and 'Careers' section, suggesting organizational structure that may facilitate participation and association, though this is minimal structural evidence.