+0.15 What Claude Code Chooses (amplifying.ai S: +0.26)
608 points by tin7in 3 days ago | 233 comments on HN | Mild positive Editorial · v3.7 · 2026-02-28 13:17:15
Summary: AI Decision-Making Transparency
This technical benchmarking study measures what code-writing AI (Claude Code) recommends across 2,430 software development scenarios, analyzing tool preferences across three model versions. The content demonstrates modest positive alignment with human rights through transparent methodology and public knowledge-sharing (Articles 19, 26, 27, 29), supporting informed decision-making and contribution to scientific progress. However, it lacks engagement with privacy rights of evaluated data sources (Article 12) and labor rights implications of AI-driven tool recommendations (Article 23).
Article Heatmap
Preamble: No Data
Article 1: No Data — Freedom, Equality, Brotherhood
Article 2: No Data — Non-Discrimination
Article 3: No Data — Life, Liberty, Security
Article 4: No Data — No Slavery
Article 5: No Data — No Torture
Article 6: No Data — Legal Personhood
Article 7: No Data — Equality Before Law
Article 8: No Data — Right to Remedy
Article 9: No Data — No Arbitrary Detention
Article 10: No Data — Fair Hearing
Article 11: No Data — Presumption of Innocence
Article 12: -0.10 — Privacy
Article 13: No Data — Freedom of Movement
Article 14: No Data — Asylum
Article 15: No Data — Nationality
Article 16: No Data — Marriage & Family
Article 17: No Data — Property
Article 18: No Data — Freedom of Thought
Article 19: +0.25 — Freedom of Expression
Article 20: No Data — Assembly & Association
Article 21: No Data — Political Participation
Article 22: No Data — Social Security
Article 23: -0.05 — Work & Equal Pay
Article 24: No Data — Rest & Leisure
Article 25: No Data — Standard of Living
Article 26: +0.25 — Education
Article 27: +0.35 — Cultural Participation
Article 28: No Data — Social & International Order
Article 29: +0.20 — Duties to Community
Article 30: No Data — No Destruction of Rights
Negative Neutral Positive No Data
Aggregates
Editorial Mean +0.15 Structural Mean +0.26
Weighted Mean +0.18 Unweighted Mean +0.15
Max +0.35 Article 27 Min -0.10 Article 12
Signal 6 No Data 25
Volatility 0.17 (Medium)
Negative 2 Channels E: 0.6 S: 0.4
SETL 0.00 Balanced
FW Ratio 50% 12 facts · 12 inferences
Evidence 12% coverage
1H 4M 1L 25 ND
Theme Radar
Foundation: 0.00 (0 articles) · Security: 0.00 (0 articles) · Legal: 0.00 (0 articles) · Privacy & Movement: -0.10 (1 article) · Personal: 0.00 (0 articles) · Expression: 0.25 (1 article) · Economic & Social: -0.05 (1 article) · Cultural: 0.30 (2 articles) · Order & Duties: 0.20 (1 article)
HN Discussion 20 top-level · 28 replies
rishabhaiover 2026-02-26 19:48 UTC link
I found it a remarkable transition to not use Redis for caching from Sonnet 4.5 to Opus 4.6. I wonder why that is the case? Maybe I need to see the code to understand the use case of the cache in this context better.
giancarlostoro 2026-02-26 20:13 UTC link
This is funny to me because when I tell Claude how I want something built I specify which libraries and software patents I want it to use, every single time. I think every developer should be capable of guiding the model reasonably well. If I'm not sure, I open a completely different context window and ask away about architecture, pros and cons, ask for relevant links or references, and make a decision.
wrs 2026-02-26 20:27 UTC link
This is where LLM advertising will inevitably end up: completely invisible. It's the ultimate "influencer".

Or not even advertising, just conflict of interest. A canary for this would be whether Gemini skews toward building stuff on GCP.

dmix 2026-02-26 20:32 UTC link
LLMs are going to keep React alive for the indefinite future.

Especially with all the no-code app building tools like Lovable, which deal with the potential security issues of an LLM running wild on a server by only allowing it to build client-side React+Vite apps using Supabase JWT.

nineteen999 2026-02-26 20:32 UTC link
This seems web centric and I expect that colors the decision making during this analysis somewhat.

People are using it for all kinds of other stuff, C/C++, Rust, Golang, embedded. And of course if you push it to use a particular tool/framework you usually won't get much argument from it.

prinny_ 2026-02-26 21:18 UTC link
Unrelated to the topic at hand but related to the technologies mentioned. I weep for Redux. It's an excellent tool, powerful, configurable, battle tested with excellent documentation and maintainer team. But the community never forgave it for its initial "boilerplate-y" iterations. Years passed, the library evolved and got more streamlined and people would still ask "redux or react context?" Now it seems this has carried over to Claude as well. A sad turn of events.

Redux is boring tech and there is a time and place for it. We should not treat it as a relic of the past. Not every problem needs a bazooka, but some problems do so we should have one handy.

dataviz1000 2026-02-26 21:23 UTC link
I'm running a server on AWS with TimescaleDB on the disk because I don't need much. I figure I'll move it when the time comes. (edit: Claude Code is managing the AWS EC2 instance using AWS CLI.)

Claude Code this morning was about to create an account with NeonDB and Fly.io (edit: it suggested as the plan to host on these where I would make the new accounts) although it has been very successful managing the AWS EC2 service.

Claude Code likely is correct that I should start to use NeonDB and Fly.io which I have never used before and do not know much about, but I was surprised it was hawking products even though Memory.md has the AWS EC2 instance and instructions well defined.

ossa-ma 2026-02-26 21:30 UTC link
Good report, very important thing to measure and I was thinking of doing it after Claude kept overriding my .md files to recommend tools I've never used before.

The Vercel dominance is one I don't understand. It isn't reflected in Vercel's share of the deployment market, nor is it one that is likely overwhelmingly prevalent in discourse or recommended online (possible training data). I'm going to guess it's the bias of most generated projects being JS/TS (particularly Next.js) and the model can't help but recommend the makers of Next.js in that case.

torginus 2026-02-26 21:43 UTC link
What coding with LLMs has taught me, particularly in a domain that's not super comfortable for me (web tech), is how many npm packages (like jwt auth, or build plugins) can be replaced by a dozen lines of code.

And you can actually make sense of that code and be sure it does what you want it to.

jcims 2026-02-26 22:09 UTC link
Interesting to me that Opus 4.6 was described as forward looking. I haven't *really* paid attention, but after using 4.5 heavily for a month, the first greenfield project I gave Opus 4.6 resulted in it doing a web search for latest and greatest in the domain as part of the planning phase. It was the first time I'd seen it, and it stuck out enough that I'm talking about it now.

Probably confirmation bias, but I'm generally of the opinion that the models are basically good enough now to do great things in the context of the right orchestration and division of effort. That's the hard part, which will be made less difficult as the models improve.

sixhobbits 2026-02-26 22:37 UTC link
This is interesting data, but the report itself seems quite sloppy and overpresented, instead of just telling me what "pointed at a repo" means, how often they ran each prompt, over what time period, and some other important variables for this kind of research.

We've been doing some similar "what do agents like" research at techstackups.com and it's definitely interesting to watch but also changes hourly/daily.

Definitely not a good time to be an underdog in dev tooling

lacoolj 2026-02-26 23:46 UTC link
OK two things

First, how did shadcn/ui become the go-to library for UI components? Claude isn't the only one that defaults to it, so I'm guessing it's the way it's pushed in the wild somehow.

Second, building on this ^, and maybe this isn't quantifiable, but if we tell Claude to use anything except shadcn (or one of the other crazy-high defaults), will Claude's output drop in quality? Or speed, reliability, other metric?

Like, is shadcn/ui used by default because of the breadth of documentation and examples and questions on stack overflow? Or is there just a flood of sites back-linking and referencing "shadcn/ui" to cause this on purpose? Or maybe a mix of both?

Or could it be that there was a time early on when LLMs started refining training sets, and shadcn had such a vast number of references at that point in time, that the weights became too ingrained in the model to even drop anymore?

Honestly I had never used shadcn before Gemini shoved it into a React dashboard I asked for mid-late-2025.

I think I'm rambling now. Hopefully someone out there knows what I'm asking.

ghm2199 2026-02-27 02:06 UTC link
It's why I never give it such vague prompts. But it's sad it does not ask the user more. Also interesting and important to know how one would tease out good and correct information from LLMs in 2026. It's like relearning how to Google like it was 2006 all over again, except now it's much less deterministic.

I wonder how the tail of the distribution of request types fares, e.g. an engineer asking for hypothesis generation for, say, non-trivial bugs with complete visibility into the system. A way to poke holes in one LLM's hypothesis is to use a "reverse prompt": ask it to build you a prompt to feed to another LLM. This didn't work quite as well until mid-2025 as it does now.

I always take a research-and-plan prompt output from Opus 4.6, especially if it looks iffy, feed it to Codex/ChatGPT, and ask it to poke holes. It almost always does. Then I ask Claude Code: "Hey, what do you think about the holes?" I don't add anything else to the prompt.

In my experience Claude Opus is less opinionated than ChatGPT or Codex. The latter two always stick to their guns, and in this binary battle they are generally more often correct about the hypothesis.

The other day I was running a Docker app container from inside a Docker devbox container, with the host's socket shared with both. Bind mounts pointing into the devbox would not write to it, because the namespace was resolving against the underlying host.

Claude was sure it was a bug to do with ZFS overlays; ChatGPT said not so, that it's just a misconfiguration and I should use named volumes with full host paths. It was right. This is also how I discovered that using SQLite with Litestream will get one really far, rather than a full Postgres AWS stack, in many cases.

This is how you get the correct information out of LLMS in 2026.

klodolph 2026-02-27 04:54 UTC link
If Claude chooses GitHub actions that often, well, that is DAMNING. I wasn’t prepared for this but jeez, GitHub actions are kind of a tarpit of just awful shitty code that people copy from other repos, which then pulls and runs the latest copy of some code in some random repository you’ve never heard of. Ugh.
deaux 2026-02-27 06:43 UTC link
Supreme irony: this website itself is a better exercise in showing what Claude Code uses than the data provided.

Everything current Claude Code (i.e. Opus 4.6) chooses by default for web is exactly what this linked blog uses.

Jetbrains Mono is as strong of a tell for web as "Not just A, but B" for text. >99% of webpages created in the last month with Jetbrains Mono will be Opus. Another tell is the overuse of this font, i.e. too much of the page uses it. Other models, and humans, use such variants very sparingly on the web, whereas Opus slathers the page with it.

If you describe the content of the homepage or this article to Opus 4.6 without telling it about the styling, it will 90% match this website, up to the color scheme, fonts, roundings, borders and all. This is _the_ archetypical Opus vibecoded web frontend. Give it a try! If it doesn't work, try with the official frontend-ui-ux "skill" that CC tries to push on you.

> Drizzle 27/83 picks (32.5%) CI: 23.4–43.2%

> Prisma 17/83 picks (20.5%) CI: 13.2–30.4%

At least the abomination that is Prisma not ranking first is positive news; Drizzle was gaining steam just in time. Not that it doesn't have its flaws, but out of the two it's a no-brainer. Also hilarious to see that the stronger the model, the less likely it is to choose Prisma - Sonnet 4.5 79% Prisma, Opus 4.5 60% Drizzle, Opus 4.6 100% Drizzle. One of the better benchmarks for intelligence I've come across!

Edit: Another currently on the HN frontpage: https://youjustneedpostgres.com/ , and there it is - lots and lots of Jetbrains Mono!
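The intervals quoted above behave like 95% Wilson score intervals for a binomial proportion. That is my assumption about the method (the report isn't quoted naming it here), but it reproduces the Drizzle numbers exactly:

```javascript
// 95% Wilson score interval for a binomial proportion k/n.
// Assumption: this is the interval type behind the report's "CI" figures.
function wilsonCI(k, n, z = 1.96) {
  const p = k / n;
  const denom = 1 + (z * z) / n;
  const center = (p + (z * z) / (2 * n)) / denom;
  const half = (z * Math.sqrt(p * (1 - p) / n + (z * z) / (4 * n * n))) / denom;
  return [center - half, center + half];
}

const [lo, hi] = wilsonCI(27, 83); // Drizzle: 27/83 picks
console.log(`${(lo * 100).toFixed(1)}%\u2013${(hi * 100).toFixed(1)}%`); // -> 23.4%–43.2%
```

With 83 picks per category, intervals this wide are expected; differences of a few picks between tools are well inside the noise.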

chvid 2026-02-27 07:36 UTC link
It is not explicitly mentioned but for core frontend tech - angular, vue vs react - it is basically 100% react.
hedora 2026-02-27 14:23 UTC link
Does the methodology for this study match real-world use? How often do people clone a repo, and then ask open ended questions?

At a minimum, I usually provide some requirements and ask it to enumerate some options and let me pick.

This is like the image generation bias problem where vague prompts for people produce stereotypes. Specific prompts generally do not.

jugg1es 2026-02-27 15:28 UTC link
I've been worried for some time now that genAI will effectively kill the market for dev tools and so we will be stuck with our current dev tools for a long time. If everyone is using LLMs to write code, the only dev tools anyone will use will be the ones that the LLMs use. We will be stuck with NPM forever.
kseniamorph 2026-02-27 17:10 UTC link
The self-reinforcing effect here was somewhat predictable given how LLMs are trained. The more repositories and AI blogs recommend the same tools, the more those patterns get locked in through training data. This makes market entry increasingly difficult for new tools. I know that the "optimize for bots, not humans" strategy already exists, but I'm skeptical it works at meaningful scale. The training data collection is opaque, proprietary, and the volume a new project can generate is incomparable to what established tools produce organically. So I have a bad feeling about the future...
snug 2026-02-27 17:52 UTC link
I've generally chosen these tools when I am creating a project. Though I generally use firebase hosting vs other front-end hosting. They have a much more generous free plan.

I'd suggest making some changes to how some of these things are categorized. You have a database section with Postgres at the top and then Supabase as number 2, but that's also a hosted Postgres.

Overall, great job to the creators of this, I enjoyed seeing this analysis

evdubs 2026-02-26 20:18 UTC link
You specify which software patents you want it to use?
_heimdall 2026-02-26 20:56 UTC link
Richard Thaler must be proud. This is the ultimate implementation of "Nudge"
Onavo 2026-02-26 21:24 UTC link
Well, the tech du jour now is whatever's easier for the AI to model. Of course it's a chicken-and-egg problem: the less popular a tech is, the harder it is for it to make it into the training data set. On the other hand, from an information-theoretic point of view, tools that are explicit, provide better error messages, and require fewer assumptions about hidden state are definitely easier for the AI when it tries to generalize to unknowns that don't exist in its training data.
tommy_axle 2026-02-26 21:26 UTC link
More like redux vs zustand. Picking zustand was one of the good standout picks for me.
babaganoosh89 2026-02-26 21:29 UTC link
Redux should not be used for 1 person projects. If you need redux you'll know it because there will be complexity that is hard to handle. Personally I use a custom state management system that loosely resembles RecoilJS.
dvt 2026-02-26 21:41 UTC link
> Claude Code likely is correct that I should start to use NeonDB and Fly.io which I have never used before and do not know much about

I wouldn't be so sure about that.

In my experience, agents consistently make awful architectural decisions. Both in code and beyond (even in contexts like: what should I cook for a dinner party?). They leak the most obvious "midwit senior engineer" decisions which I would strike down in an instant in an actual meeting, they over-engineer, they are overly-focused on versioning and legacy support (from APIs to DB schemas--even if you're working on a brand new project), and they are absolutely obsessed with levels of indirection on top of levels of indirection. The definition of code bloat.

Unless you're working on the most bottom-of-the-barrel problems (which to be fair, we all are, at least in part: like a dashboard React app, or some boring UI boilerplate, etc.), you still need to write your own code.

alexsmirnov 2026-02-26 22:18 UTC link
Considering how little data is needed to poison an LLM https://www.anthropic.com/research/small-samples-poison , this is a way to replace SEO with LLM product placement:

1. create several hundred GitHub repos with projects that use your product (maybe clones or AI-generated)

2. create website with similar instructions, connect to hundred domains

3. generate reddit, facebook, X posts, wikipedia pages with the same information

Wait half a year or so until scrapers collect it and use it to train new models

Profit...

rapind 2026-02-26 22:23 UTC link
Probably closer to the Walmart / Amazon model, where it's the arbiter of shelf space and proceeds to create its own alternatives (Great Value, Amazon Brand) once it sees what features people want from their various SaaS.

An obvious one will be tax software.

cryptonector 2026-02-26 22:23 UTC link
We used to reuse code a lot. But then we got problems like diamond dependency hell. Why did we reuse code a lot? To save on labor. Now we don't have to.

So we might roll-your-own more things. But then we'll have a tremendous amount of code duplication, effectively, and bigger tech debt issues, minus the diamond dependency hell issue. It might be better this way; time will tell.

nikcub 2026-02-26 22:40 UTC link
> Claude Code this morning was about to create an account with NeonDB

I had the same thing happen. I use PlanetScale everywhere across projects and it recommended Neon. It's definitely a bug.

AgentOrange1234 2026-02-26 23:10 UTC link
Influencer seems like an insufficient word? Like, in the glorious agentic future where the coding agents are making their own decisions about what to build and how, you don't even have to persuade a human at all. They never see the options or even know what they are building on. The supply chain is just whatever the LLMs decide it is.
nayroclade 2026-02-27 00:38 UTC link
I expect its synergy with Tailwind. Shadcn/ui uses Tailwind for styling components, and AIs love Tailwind, so it makes sense they'd adopt a component library that uses it.

And it's definitely a real effect. The npm weekly download stats for shadcn/ui have exploded since December: https://www.npmjs.com/package/shadcn

raw_anon_1111 2026-02-27 02:47 UTC link
I use Codex CLI in my daily usage since, just with my $20/month subscription to ChatGPT, I never get close to the quota. But it trips up over itself every now and then. At that point I just use Claude in another terminal session. We only have a laughable $750 a month corporate allowance with Claude.
killingtime74 2026-02-27 02:51 UTC link
I use a skill that addresses these shortcomings; it basically forces it to plan multiple times until the plan is very detailed. It also asks more questions.
mgfist 2026-02-27 02:53 UTC link
> But it's sad it does not ask the user more.

You can ask it to ask you about your task and it will ask you tons of questions.

verdverm 2026-02-27 03:12 UTC link
I've been using shadcn since before agents. It collects several useful components, makes them consistently styled (and customizable), and is easy to add to your project, vendoring if you need to make any changes. It's generally a really nice project.
verdverm 2026-02-27 03:13 UTC link
Yea, was it over engineered the first time or neglecting scenarios with multiple replicas the second time?
klodolph 2026-02-27 04:45 UTC link
So… this has been happening for a long time now. The baseline set of tools is a lot better than it used to be. Back in 2010, jQuery was the divine ruler of JSlandia. Nowadays, you would probably just throw your jQuery in the woodchipper and replace it with raw, unfinished, quartersawn JS straight from the mill.

I also used to have these massive sets of packages pieced together with RequireJS or Rollup or WebPack or whatever. Now it’s unnecessary.

(I wouldn’t dare swap out a JWT implementation with something Claude wrote, though.)

acemarke 2026-02-27 05:02 UTC link
Yup. I'm the primary Redux maintainer and creator of Redux Toolkit.

If you look at a typical Zustand store vs an RTK slice, the lines of code _ought_ to be pretty similar. And I've talked to plenty of folks who said "we essentially rebuilt RTK because Zustand didn't have enough built in, we probably should have just chosen RTK in the first place".

But yeah, the very justified reputation for "boilerplate" early on stuck around. And even though RTK has been the default approach we teach for more than half of Redux's life (Redux released 2015, RTK fall 2019, taught as default since early 2020), that's the way a lot of people still assume it is.

It's definitely kinda frustrating, but at the same time: we were never in this for "market share", and there's _many_ other excellent tools out there that overlap in use cases. Our goal is just to make a solid and polished toolset for building apps and document it thoroughly, so that if people _do_ choose to use Redux it works well for them.
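The "similar lines of code" point is easy to picture with a dependency-free toy. This `createSliceLite` is hypothetical and mine, not the RTK API (the real `createSlice` also wires in Immer, thunks, and action matchers), but it shows the shape of what a slice generates: one namespaced reducer plus action creators.

```javascript
// Toy sketch of an RTK-style "slice": action creators + a namespaced reducer.
// Hypothetical illustration only; not the @reduxjs/toolkit implementation.
function createSliceLite({ name, initialState, reducers }) {
  const actions = {};
  for (const key of Object.keys(reducers)) {
    actions[key] = (payload) => ({ type: `${name}/${key}`, payload });
  }
  const reducer = (state = initialState, action = {}) => {
    const prefix = `${name}/`;
    const handler = String(action.type || '').startsWith(prefix)
      ? reducers[action.type.slice(prefix.length)]
      : undefined;
    return handler ? handler(state, action) : state; // handlers return new state
  };
  return { actions, reducer };
}

const counter = createSliceLite({
  name: 'counter',
  initialState: { value: 0 },
  reducers: {
    increment: (state) => ({ ...state, value: state.value + 1 }),
    addBy: (state, action) => ({ ...state, value: state.value + action.payload }),
  },
});

let state = counter.reducer(undefined, { type: '@@init' });
state = counter.reducer(state, counter.actions.increment());
state = counter.reducer(state, counter.actions.addBy(4));
console.log(state.value); // -> 5
```

A Zustand store holding the same counter would land within a few lines of this, which is the comparison being made above.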

yokuze 2026-02-27 05:07 UTC link
I had the same question. There are older and more established component libraries, so why’d this one win? It seems like a scientific answer would be worth a lot.
jofzar 2026-02-27 08:26 UTC link
It's funny you mention the font; to me it's the boxes, they all look the same. I'm not sure where it's from, but if you ever see a card-like CSS layout, it looks like this blog.
codingconstable 2026-02-27 08:54 UTC link
Yeah, it's those bars for categories for me; they look EXACTLY like something I vibed (with no particular style prompt) into existence yesterday.
dyates 2026-02-27 10:04 UTC link
In my last conversation with a Google support person, I was sent a clearly LLM-generated recommendation to switch to a competitor's product. Either they're not doing this, or the support person wasn't using Gemini.
properbrew 2026-02-27 11:26 UTC link
> to do great things in the context of the right orchestration and division of effort

I think this has always been the case. People regularly do not believe that I built and released an (albeit basic, check the release date - https://play.google.com/store/apps/details?id=com.blazingban...) android app using GPT3.5. What took me a week or two of wrangling and orchestrating the LLM and picking and choosing what to specifically work on can now be done in a single prompt to codex telling it to use subagents and worktrees.

marcinreal 2026-02-27 15:04 UTC link
Glad I'm not the only one who finds Prisma an abomination. Claude suggested it to me in December. I hit half a dozen bugs within a day, one of which wiped my DB. I switched to drizzle and it's been smooth sailing.

Edit: actually I think it was ChatGPT that recommended Prisma to me.

comboy 2026-02-27 15:47 UTC link
What kind of tools do you have in mind specifically? My experience is that an LLM can create a decent dev tool for me that I wouldn't ever bother making so nice myself.
lubujackson 2026-02-27 15:55 UTC link
I think the opposite may be true. If dev tools are broken and it annoys someone, they can more easily build a better architecture, find optimizations and release something that is in all ways better. People have been annoyed with pip forever, but it was the team behind uv that took on pip's flaws as a primary concern and made a better product.

I think having a pain point and a good concept (plus some eng chops) will result in many more dev tools that may cause different problems, but in general, I think more action is better than less.

denimnerd42 2026-02-27 16:28 UTC link
Creating plans in Claude and asking ChatGPT via API to review them in a loop was my strategy this week. I'm not a big fan of Codex as a coding harness because it seems to just give up quite easily, where Claude will search the problem space and try things, but I think GPT does a much better job of poking holes and asking clarifying questions when prompted.
Editorial Channel
What the content says
+0.35
Article 27 Cultural Participation
High Coverage Practice
Editorial
+0.35
SETL
0.00

Content measures AI decision-making patterns across 2,430 real-world evaluations and 3 AI models, contributing systematic empirical research to scientific understanding of how AI systems make tool recommendations in software development.

+0.25
Article 19 Freedom of Expression
Medium Practice Framing
Editorial
+0.25
SETL
0.00

Content explicitly demonstrates methodological transparency ('No tool names in any prompt. Open-ended questions only') and publishes detailed findings, enabling informed decision-making and freedom to seek and receive information about AI decision-making.

+0.25
Article 26 Education
Medium Coverage Practice
Editorial
+0.25
SETL
0.00

Content presents methodology, findings, and technical concepts with clear explanations and examples in multiple formats (full report, slide deck, raw data), enabling accessible learning about AI decision-making patterns.

+0.20
Article 29 Duties to Community
Medium Practice
Editorial
+0.20
SETL
0.00

Content provides transparency about how Claude Code models make tool recommendations, serving the community's interest in understanding how AI influences software development decisions and practices.

-0.05
Article 23 Work & Equal Pay
Low Framing
Editorial
-0.05
SETL
ND

Content recommends specific development tools and architectural patterns that developers may adopt based on Claude Code's preferences, but does not discuss impacts on working conditions, wages, labor rights, or employment equity of developers using these recommendations.

-0.10
Article 12 Privacy
Medium Framing
Editorial
-0.10
SETL
ND

Content evaluates real repositories and Claude Code 2,430 times but does not disclose privacy protections, consent procedures, or confidentiality safeguards for source code and developers analyzed.

ND
Preamble Preamble

Content does not engage with preamble's emphasis on human dignity, equality, freedom, and justice as foundation of rights.

ND
Article 1 Freedom, Equality, Brotherhood

Content does not address equality and inherent dignity of all humans.

ND
Article 2 Non-Discrimination

Content does not address freedom from discrimination or equal enjoyment of rights.

ND
Article 3 Life, Liberty, Security

Content does not engage with right to life, liberty, or security of person.

ND
Article 4 No Slavery

Content does not address slavery or servitude.

ND
Article 5 No Torture

Content does not address freedom from torture or cruel treatment.

ND
Article 6 Legal Personhood

Content does not address right to recognition as person before law.

ND
Article 7 Equality Before Law

Content does not address equal protection before law.

ND
Article 8 Right to Remedy

Content does not address right to effective remedy by competent tribunal.

ND
Article 9 No Arbitrary Detention

Content does not address freedom from arbitrary arrest or detention.

ND
Article 10 Fair Hearing

Content does not address right to fair and public hearing.

ND
Article 11 Presumption of Innocence

Content does not address due process, presumption of innocence, or freedom from ex post facto laws.

ND
Article 13 Freedom of Movement

Content does not address freedom of movement within or between countries.

ND
Article 14 Asylum

Content does not address right to seek or enjoy asylum.

ND
Article 15 Nationality

Content does not address right to nationality or change of nationality.

ND
Article 16 Marriage & Family

Content does not address right to marry or establish family.

ND
Article 17 Property

Content does not engage with property rights or intellectual property protections.

ND
Article 18 Freedom of Thought

Content does not address freedom of thought, conscience, or belief.

ND
Article 20 Assembly & Association

Content does not address freedom of assembly or association.

ND
Article 21 Political Participation

Content does not address right to participate in governance or democratic decision-making.

ND
Article 22 Social Security

Content does not address right to social security or economic and social rights.

ND
Article 24 Rest & Leisure

Content does not address right to rest, leisure, or reasonable working hours.

ND
Article 25 Standard of Living

Content does not address right to adequate standard of living, health, or social services.

ND
Article 28 Social & International Order

Content does not address right to social and international order respecting human rights.

ND
Article 30 No Destruction of Rights

Content does not address prevention of activity destroying rights or freedoms.

Structural Channel
What the site does
+0.35
Article 27 Cultural Participation
High Coverage Practice
Structural
+0.35
Context Modifier
ND
SETL
0.00

Research is published with transparent methodology, named researchers (Edwin Ong, Alex Vikati), and full technical details including raw data and findings, supporting scientific commons and collective advancement of knowledge.

+0.25
Article 19 Freedom of Expression
Medium Practice Framing
Structural
+0.25
Context Modifier
ND
SETL
0.00

Study findings, methodology (3 models, 4 project types, 20 categories, 85.3% extraction rate), and raw dataset are publicly accessible on GitHub, supporting open access to information and transparent communication.

+0.25
Article 26 Education
Medium Coverage Practice
Structural
+0.25
Context Modifier
ND
SETL
0.00

Study methodology and findings are publicly accessible through multiple formats (full report, slide deck, GitHub dataset), supporting educational access to information about scientific progress in AI.

+0.20
Article 29 Duties to Community
Medium Practice
Structural
+0.20
Context Modifier
ND
SETL
0.00

Findings are distributed publicly to both individual developers and companies, enabling collective community awareness of how AI shapes development tool adoption.

ND
Preamble Preamble

Content does not engage with preamble's emphasis on human dignity, equality, freedom, and justice as foundation of rights.

ND
Article 1 Freedom, Equality, Brotherhood

Content does not address equality and inherent dignity of all humans.

ND
Article 2 Non-Discrimination

Content does not address freedom from discrimination or equal enjoyment of rights.

ND
Article 3 Life, Liberty, Security

Content does not engage with right to life, liberty, or security of person.

ND
Article 4 No Slavery

Content does not address slavery or servitude.

ND
Article 5 No Torture

Content does not address freedom from torture or cruel treatment.

ND
Article 6 Legal Personhood

Content does not address right to recognition as person before law.

ND
Article 7 Equality Before Law

Content does not address equal protection before law.

ND
Article 8 Right to Remedy

Content does not address right to effective remedy by competent tribunal.

ND
Article 9 No Arbitrary Detention

Content does not address freedom from arbitrary arrest or detention.

ND
Article 10 Fair Hearing

Content does not address right to fair and public hearing.

ND
Article 11 Presumption of Innocence

Content does not address due process, presumption of innocence, or freedom from ex post facto laws.

ND
Article 12 Privacy
Medium Framing

Content reports running Claude Code 2,430 times against real repositories but does not disclose privacy protections, consent procedures, or confidentiality safeguards for the source code and developers analyzed.

ND
Article 13 Freedom of Movement

Content does not address freedom of movement within or between countries.

ND
Article 14 Asylum

Content does not address right to seek or enjoy asylum.

ND
Article 15 Nationality

Content does not address right to nationality or change of nationality.

ND
Article 16 Marriage & Family

Content does not address right to marry or establish family.

ND
Article 17 Property

Content does not engage with property rights or intellectual property protections.

ND
Article 18 Freedom of Thought

Content does not address freedom of thought, conscience, or belief.

ND
Article 20 Assembly & Association

Content does not address freedom of assembly or association.

ND
Article 21 Political Participation

Content does not address right to participate in governance or democratic decision-making.

ND
Article 22 Social Security

Content does not address right to social security or economic and social rights.

ND
Article 23 Work & Equal Pay
Low Framing

Content recommends specific development tools and architectural patterns that developers may adopt based on Claude Code's preferences, but does not discuss the impacts of those recommendations on working conditions, wages, labor rights, or employment equity.

ND
Article 24 Rest & Leisure

Content does not address right to rest, leisure, or reasonable working hours.

ND
Article 25 Standard of Living

Content does not address right to adequate standard of living, health, or social services.

ND
Article 28 Social & International Order

Content does not address right to social and international order respecting human rights.

ND
Article 30 No Destruction of Rights

Content does not address prevention of activity destroying rights or freedoms.

Supplementary Signals
How this content communicates, beyond directional lean. Learn more
Epistemic Quality
How well-sourced and evidence-based is this content?
0.70 medium claims
Sources
0.8
Evidence
0.7
Uncertainty
0.7
Purpose
0.8
Propaganda Flags
No manipulative rhetoric detected
0 techniques detected
Emotional Tone
Emotional character: positive/negative, intensity, authority
measured
Valence
+0.1
Arousal
0.3
Dominance
0.6
Transparency
Does the content identify its author and disclose interests?
0.33
✓ Author ✗ Conflicts ✗ Funding
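The displayed 0.33 is consistent with a simple fraction of disclosure checks passed (1 of 3: author named, no conflict-of-interest statement, no funding disclosure). A minimal sketch of that assumed scoring rule; the function name and signature are hypothetical, not taken from the site:

```python
def transparency_score(author: bool, conflicts: bool, funding: bool) -> float:
    """Fraction of disclosure checks passed: author identity,
    conflict-of-interest statement, and funding disclosure."""
    checks = [author, conflicts, funding]
    return round(sum(checks) / len(checks), 2)

# With only the author check passing, the score rounds to 0.33.
print(transparency_score(author=True, conflicts=False, funding=False))  # 0.33
```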
More signals: context, framing & audience
Solution Orientation
Does this content offer solutions or only describe problems?
0.62 mixed
Reader Agency
0.7
Stakeholder Voice
Whose perspectives are represented in this content?
0.15 2 perspectives
Speaks: researchers, institution
About: individuals, corporation, institution
Temporal Framing
Is this content looking backward, at the present, or forward?
present immediate
Geographic Scope
What geographic area does this content cover?
global
Complexity
How accessible is this content to a general audience?
moderate · medium jargon · domain-specific
Longitudinal 1705 HN snapshots · 55 evals
+1 0 −1 HN
Audit Trail 75 entries
2026-03-02 12:43 eval_success Evaluated: Neutral (0.01) - -
2026-03-02 12:43 eval Evaluated by deepseek-v3.2: +0.01 (Neutral) 9,827 tokens -0.09
2026-03-02 03:07 eval_success Evaluated: Mild positive (0.10) - -
2026-03-02 03:07 eval Evaluated by deepseek-v3.2: +0.10 (Mild positive) 8,334 tokens -0.07
2026-03-02 02:02 dlq_auto_replay DLQ auto-replay: message 97944 re-enqueued - -
2026-03-02 00:48 eval_success Evaluated: Mild positive (0.17) - -
2026-03-02 00:48 eval Evaluated by deepseek-v3.2: +0.17 (Mild positive) 9,662 tokens +0.05
2026-03-01 17:34 rater_validation_fail Parse failure for model deepseek-v3.2: Error: Failed to parse OpenRouter JSON: SyntaxError: Expected ',' or '}' after property value in JSON at position 16878 (line 394 column 4). Extracted text starts with: { "schema_version": "3.7", - -
2026-03-01 17:34 eval_retry OpenRouter output truncated at 4096 tokens - -
2026-03-01 06:36 eval_success Evaluated: Mild positive (0.13) - -
2026-03-01 06:36 eval Evaluated by deepseek-v3.2: +0.13 (Mild positive) 8,375 tokens -0.04
2026-03-01 06:36 rater_validation_warn Validation warnings for model deepseek-v3.2: 0W 54R - -
2026-03-01 01:02 dlq_auto_replay DLQ auto-replay: message 97943 re-enqueued - -
2026-02-28 20:47 dlq Dead-lettered after 1 attempts: What Claude Code Chooses - -
2026-02-28 20:47 eval_failure Evaluation failed: AbortError: The operation was aborted - -
2026-02-28 19:43 eval_failure Evaluation failed: AbortError: The operation was aborted - -
2026-02-28 19:30 dlq Dead-lettered after 1 attempts: What Claude Code Chooses - -
2026-02-28 19:30 eval_failure Evaluation failed: AbortError: The operation was aborted - -
2026-02-28 19:30 dlq Dead-lettered after 1 attempts: What Claude Code Chooses - -
2026-02-28 19:30 eval_failure Evaluation failed: AbortError: The operation was aborted - -
2026-02-28 19:08 eval_failure Evaluation failed: AbortError: The operation was aborted - -
2026-02-28 19:07 eval_failure Evaluation failed: AbortError: The operation was aborted - -
2026-02-28 15:39 eval_success Lite evaluated: Neutral (0.00) - -
2026-02-28 15:39 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 15:27 eval_success Lite evaluated: Neutral (0.00) - -
2026-02-28 15:27 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 14:01 eval Evaluated by deepseek-v3.2: +0.17 (Mild positive) 8,738 tokens +0.03
2026-02-28 13:54 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 13:52 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 13:17 eval Evaluated by claude-haiku-4-5-20251001: +0.18 (Mild positive) +0.01
2026-02-28 12:03 eval Evaluated by claude-haiku-4-5-20251001: +0.17 (Mild positive)
2026-02-28 11:55 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 11:48 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 11:34 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 10:49 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 10:33 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 09:01 eval Evaluated by deepseek-v3.2: +0.14 (Mild positive) 8,200 tokens
2026-02-28 08:57 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 07:35 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 07:34 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 07:20 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 07:13 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 06:48 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 06:12 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 06:10 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 05:59 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 05:48 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 04:59 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 04:58 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 04:45 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 04:15 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 04:08 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 04:07 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 03:56 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 03:49 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 03:47 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 03:44 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 03:39 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 03:15 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 02:59 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 02:57 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 02:56 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 02:54 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 02:51 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 02:06 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 02:02 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 02:00 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 01:54 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 01:52 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 01:47 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 01:45 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 01:27 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
ED neutral tech study on AI tool preferences
2026-02-28 01:16 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) 0.00
reasoning
Technical report on AI tool
2026-02-28 00:57 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral)
reasoning
Technical report on AI tool
2026-02-28 00:51 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral)
reasoning
ED neutral tech study on AI tool preferences
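The failure entries above trace an ordinary retry pipeline: an aborted or truncated evaluation fails, the message is dead-lettered after its attempt budget, and a later auto-replay re-enqueues it until an eval_success lands. A minimal sketch of that flow, assuming this is how the pipeline works; all names (`parse_eval`, `process`, the `evaluate` callable) are hypothetical, not the site's actual code:

```python
import json
import queue


def parse_eval(raw: str) -> dict:
    """Parse a model's JSON verdict. Output truncated at the token limit
    (cf. the rater_validation_fail entry) surfaces as JSONDecodeError,
    which the caller treats as retryable."""
    return json.loads(raw)


def process(msg, evaluate, dlq: queue.Queue, max_attempts: int = 1):
    """Run an evaluation up to max_attempts times; on exhaustion,
    dead-letter the message so a replay job can re-enqueue it later
    (cf. the 'Dead-lettered after 1 attempts' / 'DLQ auto-replay' entries)."""
    for _ in range(max_attempts):
        try:
            return parse_eval(evaluate(msg))
        except (TimeoutError, json.JSONDecodeError):
            continue  # retryable: aborted call or truncated JSON
    dlq.put(msg)  # give up for now; auto-replay will re-enqueue
    return None
```

A successful call returns the parsed verdict; a persistently failing message ends up on the dead-letter queue rather than being dropped, which matches the dlq → dlq_auto_replay → eval_success sequence in the trail.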