Model Comparison
| Model | Editorial | Structural | Class | Conf | SETL | Theme |
|---|---|---|---|---|---|---|
| @cf/meta/llama-4-scout-17b-16e-instruct lite | ND | ND | | 0.80 | | |
| @cf/meta/llama-4-scout-17b-16e-instruct lite | 0.00 | +0.20 | Neutral | 0.90 | -0.20 | AI Development |
| claude-haiku-4-5-20251001 | +0.13 | +0.09 | Mild positive | 0.26 | 0.06 | Free Expression & Labor Rights |
| @cf/meta/llama-3.3-70b-instruct-fp8-fast lite | ND | ND | | 0.80 | | |
| @cf/meta/llama-3.3-70b-instruct-fp8-fast lite | 0.00 | +0.20 | Neutral | 0.80 | -0.20 | AI research |
| @cf/qwen/qwen3-30b-a3b-fp8 lite | ND | ND | | | | |
| @cf/qwen/qwen3-30b-a3b-fp8 lite | ND | ND | | | | |
| openai/gpt-oss-120b:free lite | ND | ND | | | | |
| google/gemma-3-27b-it:free lite | ND | ND | | | | |
| qwen/qwen3-coder:free lite | ND | ND | | | | |
| Section | @cf/meta/llama-4-scout-17b-16e-instruct lite | @cf/meta/llama-4-scout-17b-16e-instruct lite | claude-haiku-4-5-20251001 | @cf/meta/llama-3.3-70b-instruct-fp8-fast lite | @cf/meta/llama-3.3-70b-instruct-fp8-fast lite | @cf/qwen/qwen3-30b-a3b-fp8 lite | @cf/qwen/qwen3-30b-a3b-fp8 lite | openai/gpt-oss-120b:free lite | google/gemma-3-27b-it:free lite | qwen/qwen3-coder:free lite |
|---|---|---|---|---|---|---|---|---|---|---|
| Preamble | ND | ND | 0.13 | ND | ND | ND | ND | ND | ND | ND |
| Article 1 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 2 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 3 | ND | ND | 0.00 | ND | ND | ND | ND | ND | ND | ND |
| Article 4 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 5 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 6 | ND | ND | 0.08 | ND | ND | ND | ND | ND | ND | ND |
| Article 7 | ND | ND | 0.07 | ND | ND | ND | ND | ND | ND | ND |
| Article 8 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 9 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 10 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 11 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 12 | ND | ND | 0.10 | ND | ND | ND | ND | ND | ND | ND |
| Article 13 | ND | ND | 0.16 | ND | ND | ND | ND | ND | ND | ND |
| Article 14 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 15 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 16 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 17 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 18 | ND | ND | 0.19 | ND | ND | ND | ND | ND | ND | ND |
| Article 19 | ND | ND | 0.27 | ND | ND | ND | ND | ND | ND | ND |
| Article 20 | ND | ND | 0.11 | ND | ND | ND | ND | ND | ND | ND |
| Article 21 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 22 | ND | ND | 0.07 | ND | ND | ND | ND | ND | ND | ND |
| Article 23 | ND | ND | 0.16 | ND | ND | ND | ND | ND | ND | ND |
| Article 24 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 25 | ND | ND | 0.07 | ND | ND | ND | ND | ND | ND | ND |
| Article 26 | ND | ND | 0.10 | ND | ND | ND | ND | ND | ND | ND |
| Article 27 | ND | ND | 0.14 | ND | ND | ND | ND | ND | ND | ND |
| Article 28 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
| Article 29 | ND | ND | 0.05 | ND | ND | ND | ND | ND | ND | ND |
| Article 30 | ND | ND | ND | ND | ND | ND | ND | ND | ND | ND |
Something is afoot in the land of Qwen (simonwillison.net) · E: +0.13 · S: +0.09
783 points by simonw, 11 days ago · 360 comments on HN · Mild positive · High agreement (3 models) · Editorial · v3.7 · 2026-03-16 00:10:30
Summary: Free Expression & Labor Rights Advocates
This blog post advocates for researchers' freedom to make autonomous career decisions and celebrates scientific achievement through open, knowledge-sharing models. The content champions multiple interconnected rights: freedom of expression and thought (through the author's analysis and opinion), freedom of association (through recognition of research teams), right to work and employment choice (through respectful reporting of resignations), and participation in scientific advancement (through detailed documentation of technical achievements). The narrative implicitly frames open-source AI development as a human rights matter, supporting broad access to knowledge and technology while protecting researchers' labor dignity and creative agency.
Rights Tensions (2 pairs)
Art 23 ↔ Art 27: Researchers' freedom to leave employment (Article 23) potentially conflicts with continuity of scientific advancement (Article 27); content resolves this by advocating that departing researchers' talents should be preserved through new ventures or institutional roles rather than lost entirely.
Art 19 ↔ Art 12: The author's free expression of opinion about internal Alibaba reorganization details (Article 19) potentially encroaches on privacy of affected individuals (Article 12); content navigates this by reporting only publicly disclosed information and avoiding speculation about private motives.
Article Heatmap
Preamble: +0.13 — Preamble
Article 1: ND — Freedom, Equality, Brotherhood
Article 2: ND — Non-Discrimination
Article 3: 0.00 — Life, Liberty, Security
Article 4: ND — No Slavery
Article 5: ND — No Torture
Article 6: +0.08 — Legal Personhood
Article 7: +0.07 — Equality Before Law
Article 8: ND — Right to Remedy
Article 9: ND — No Arbitrary Detention
Article 10: ND — Fair Hearing
Article 11: ND — Presumption of Innocence
Article 12: +0.10 — Privacy
Article 13: +0.16 — Freedom of Movement
Article 14: ND — Asylum
Article 15: ND — Nationality
Article 16: ND — Marriage & Family
Article 17: ND — Property
Article 18: +0.19 — Freedom of Thought
Article 19: +0.27 — Freedom of Expression
Article 20: +0.11 — Assembly & Association
Article 21: ND — Political Participation
Article 22: +0.07 — Social Security
Article 23: +0.16 — Work & Equal Pay
Article 24: ND — Rest & Leisure
Article 25: +0.07 — Standard of Living
Article 26: +0.10 — Education
Article 27: +0.14 — Cultural Participation
Article 28: ND — Social & International Order
Article 29: +0.05 — Duties to Community
Article 30: ND — No Destruction of Rights
Legend: Negative · Neutral · Positive · No Data
Aggregates
E (Editorial): +0.13
S (Structural): +0.09
Weighted Mean: +0.13 · Unweighted Mean: +0.11
Max: +0.27 (Article 19) · Min: 0.00 (Article 3)
Signal: 15 articles · No Data: 16 articles
Volatility: 0.06 (Low)
Negative: 0 · Channel weights: E 0.6, S 0.4
SETL: +0.06 (Editorial-dominant)
FW Ratio: 53% (37 facts · 33 inferences)
Agreement: High (3 models · spread ±0.027)
Evidence: 26% coverage
Confidence: 4 High · 5 Medium · 6 Low · 16 No Data
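The unweighted aggregates can be reproduced directly from the per-article scores in the heatmap. A minimal sketch: the weighted mean and the SETL blend depend on weights not shown in this report, so only the unweighted statistics are computed here, and "volatility" is assumed to be the population standard deviation, which matches the displayed value.

```python
from statistics import pstdev

# Editorial scores for the 15 signal articles, copied from the heatmap above.
scores = {
    "Preamble": 0.13, "Article 3": 0.00, "Article 6": 0.08, "Article 7": 0.07,
    "Article 12": 0.10, "Article 13": 0.16, "Article 18": 0.19, "Article 19": 0.27,
    "Article 20": 0.11, "Article 22": 0.07, "Article 23": 0.16, "Article 25": 0.07,
    "Article 26": 0.10, "Article 27": 0.14, "Article 29": 0.05,
}

mean = sum(scores.values()) / len(scores)  # unweighted mean -> 0.11
hi = max(scores, key=scores.get)           # article with the highest score
lo = min(scores, key=scores.get)           # article with the lowest score
vol = pstdev(scores.values())              # "volatility" as population std dev -> 0.06

print(f"mean={mean:.2f} max={hi} min={lo} volatility={vol:.2f}")
```

Running this yields mean 0.11, max at Article 19, min at Article 3, and volatility 0.06, matching the Aggregates panel.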
Theme Radar
Foundation: 0.13 (1 article)
Security: 0.00 (1 article)
Legal: 0.07 (2 articles)
Privacy & Movement: 0.13 (2 articles)
Personal: 0.19 (1 article)
Expression: 0.19 (2 articles)
Economic & Social: 0.10 (3 articles)
Cultural: 0.12 (2 articles)
Order & Duties: 0.05 (1 article)
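Each radar value appears to be the plain mean of its member articles' heatmap scores. The article-to-theme mapping is not stated in the report, so the grouping below is an inference (an assumption on my part) that reproduces four of the displayed values; a sketch:

```python
# Heatmap scores for the articles involved. The mapping of articles to themes
# below is inferred from the displayed counts and values, not given by the report.
scores = {19: 0.27, 20: 0.11, 12: 0.10, 13: 0.16, 22: 0.07, 23: 0.16,
          25: 0.07, 26: 0.10, 27: 0.14}

themes = {
    "Expression": [19, 20],
    "Privacy & Movement": [12, 13],
    "Economic & Social": [22, 23, 25],
    "Cultural": [26, 27],
}

# Mean score per theme, rounded to two decimals as in the radar panel.
means = {name: round(sum(scores[a] for a in arts) / len(arts), 2)
         for name, arts in themes.items()}
print(means)
```

Under this grouping the computed means agree with the radar: Expression 0.19, Privacy & Movement 0.13, Economic & Social 0.10, Cultural 0.12.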
HN Discussion 19 top-level · 31 replies
raffael_de 2026-03-04 16:08 UTC link
> me stepping down. bye my beloved qwen.

the qwen is dead, long live the qwen.

airstrike 2026-03-04 16:10 UTC link
I'm hopeful they will pick up their work elsewhere and continue on this great fight for competitive open weight models.

To be honest, it's sort of what I expected governments to be funding right now, but I suppose Chinese companies are a close second.

zoba 2026-03-04 16:16 UTC link
I tried the new Qwen model in Codex CLI and in Roo Code and found it to be pretty bad. For instance, I told it I wanted a new Vite app and it just started writing all the files from scratch (which didn't work) rather than using the Vite CLI tool.

Is there a better agentic coding harness people are using for these models? Based on my experience I can definitely believe the claims that these models are overfit to evals and not broadly capable.

sosodev 2026-03-04 16:22 UTC link
I really hope this doesn't hinder development too much. As Simon says, Qwen3.5 is very impressive.

I've been testing Qwen3.5-35B-A3B over the past couple of days and it's a very impressive model. It's the most capable agentic coding model I've tested at that size by far. I've had it writing Rust and Elixir via the Pi harness and found that it's very capable of handling well defined tasks with minimal steering from me. I tell it to write tests and it writes sane ones ensuring they pass without cheating. It handles the loop of responding to test and compiler errors while pushing towards its goal very well.

skeeter2020 2026-03-04 16:24 UTC link
Getting a bit of whiplash going from "AI is replacing people" to "AI is dead without (these specific) people." Surely we're far enough ahead that AI can take it from here?

Wild times!

softwaredoug 2026-03-04 16:24 UTC link
I wonder how a US lab hasn't dumped truckloads of cash into various laps to ensure these researchers have a place at their lab
vonneumannstan 2026-03-04 16:32 UTC link
Were they kneecapped by Anthropic blocking their distillation attempts?
ilaksh 2026-03-04 17:00 UTC link
Does anyone know when the small Qwen 3.5 models are going to be on OpenRouter?
quantum_state 2026-03-04 17:14 UTC link
I would second that Qwen3.5 is exceptionally good. In a calibration run, the 35B variant was running locally on a 24GB Ada NextGen card, doing the same tasks with easy-llm-cli as gemini-cli + Gemini 3 Pro, and they were on par … really impressive, and it ran pretty fast …
hintymad 2026-03-04 17:54 UTC link
There has been tension between Qwen's research team and Alibaba's product team, reportedly over the Qwen App. And recently, Alibaba tried to impose DAU as a KPI. It's understandable that a company like Alibaba would force a change of product strategy for any number of reasons. What puzzles me is why they would push out the key members of their research team. Doesn't the industry have a shortage of model researchers and builders?
nurettin 2026-03-04 19:09 UTC link
I am singularly impressed by 35B/A3, hope that is not the reason he had to leave.
lacoolj 2026-03-04 19:26 UTC link
I wonder if an American company poached one or all of them. They've been pretty much at the bleeding edge of open models, and it would not surprise me if Amazon or Google snatched them up.
lzaborowski 2026-03-04 19:50 UTC link
One thing I’ve noticed with local models is that people tolerate a lot more trial and error behavior. When a hosted model wastes tokens it feels expensive, but when a local model loops a bit it just feels like it’s “thinking.”

If models like Qwen can get good enough for coding tasks locally, the real shift might be economic rather than purely capability.

w10-1 2026-03-04 20:05 UTC link
It sounds like the lead was demoted to attract new talent, quit as a result, and the rest of the team also resigned to force management to change their minds.

If so, I'm happy that the team held together, and I hope that endogenous tech leads get to control their own career and tech destiny after hard work leads to great products. (It's almost as inspiring as tank man, and the tank commanders who tried to avoid harming him...)

(ducking the downvote for challenging the primacy of equity...)

vicchenai 2026-03-04 20:51 UTC link
Been running the 32B locally for a few days and honestly surprised how well it handles agentic coding stuff. Definitely punches above its weight. Only complaint is it sometimes decides to ignore half your prompt when instructions get long, but at this size I guess that's the tradeoff.
qwenverifier 2026-03-04 23:17 UTC link
As a mathematician, I have lately experimented a lot with Qwen to produce professional summaries and relations between articles that are as good as possible, and in one case even a verification of misattribution claims that was used in an arXiv article.

All is collected in https://imar.ro/~mbuliga/ai-talks.html

RandyOrion 2026-03-05 04:12 UTC link
First, thank you Junyang and Qwen team for your incredible work. You deserve better.

This is sad for the local LLM community. First we lost WizardLM, Yi and others, then we lost Llama and others, and now we have lost Qwen...

nopurpose 2026-03-05 08:09 UTC link
How do those companies make money? Qwen, GLM, Kimi, etc. are all released for free. I have no experience in the field, but from reading HN alone my impression was that training is exceptionally costly and inference can barely be made profitable. How and why do they fund ongoing development of these models? I'd understand if they released some of their less capable models for street cred, but they release all their work for free.
vidarh 2026-03-04 16:26 UTC link
Who is suggesting "AI is dead without (these specific) people"? People are wondering what it means specifically for the Qwen model family.
mft_ 2026-03-04 16:27 UTC link
Indeed; or, Europe badly needs a competitive model to hedge against US political nonsense.
mhitza 2026-03-04 16:29 UTC link
We've gone from AGI goals to short-term thinking via Ads. That puts things better in perspective, I think.
sosodev 2026-03-04 16:30 UTC link
I've noticed that open weight models tend to hesitate to use tools or commands unless they appeared often in the training data or you tell them very explicitly to do so in your AGENTS.md or prompt.

They also struggle at translating very broad requirements to a set of steps that I find acceptable. Planning helps a lot.

Regarding the harness, I have no idea how much they differ but I seem to have more luck with https://pi.dev than OpenCode. I think the minimalism of Pi meshes better with the limited capabilities of open models.

bilbo0s 2026-03-04 16:31 UTC link
They probably have tried, but you'd have to offer more cash than those researchers feel they can make by starting their own lab. When you consider the fact that their new startup lab would have the entire nation of China as, in effect, a captive market, you start to see how almost any amount of money would be too little to convince them not to make a run at that new startup. If money is their aim.

I think Alibaba needs to just give these guys a blank check. Let them fill it in themselves. Absent that, I'm pretty sure they'll make their own startup.

I do think it'd be a big loss for the rest of the world though if they close whatever model their startup comes up with.

velcrovan 2026-03-04 16:49 UTC link
What the US has done is dumped truckloads of cash to make it likely that as a legal immigrant you will be abducted and sent to a camp.
Twirrim 2026-03-04 16:55 UTC link
I've been testing the same with some Rust, and it has spent a fair bit of time going through a seemingly infinite loop before finally unjamming itself. It seems a little more likely to jam up than some other models I've experimented with.

It's also driving itself crazy with the deadpool & deadpool-r2d2 crates it chose during the planning phase.

That said, it does seem to be doing a very good job in general, the code it has created is mostly sane other than this fuss over the database layer, which I suspect I'll have to intervene on. It's certainly doing a better job than other models I'm able to self-host so far.

armanj 2026-03-04 17:09 UTC link
gaoshan 2026-03-04 17:29 UTC link
ICE has been detaining Chinese people in my area (and going door to door in at least one neighborhood where a lot of Chinese and Indians live). I was hearing about this just last week as word spread amongst the Chinese community here (Ohio) to make sure you have some legal documentation beyond just your driver's license on you at all times for protection. People will hear about this through the grapevine and it has a massive (and rightly so) chilling effect. US labs can try but with US government behaving like it is I don't think they will have much luck.

*edit: not that it matters, but since MAGA can't help but assume, these are all US citizens and green card holders that I am referring to.

abhikul0 2026-03-04 17:34 UTC link
Are you running it locally with llama.cpp? If so, is it working without any tweaking of the chat template? The tool calls fail for me when using the default chat template, however it seems to work a whole lot better with this: https://huggingface.co/Qwen/Qwen3.5-35B-A3B/discussions/9#69...
nu11ptr 2026-03-04 17:38 UTC link
What hardware do you have it running on? Do you feel you could replace the frontier models with it for everyday coding? Would/will you?
misnome 2026-03-04 17:52 UTC link
I've been playing with 3.5:122b on a GH200 the past few days for rust/react/ts, and while it's clearly sub-Sonnet, with tight descriptions it can get small-medium tasks done OK - as well as Sonnet if the scope is small.

The main quirk I've found is that it has a tendency to decide halfway through following my detailed instructions that it would be "simpler" to just... not do what I asked, and I find it has stripped all the preliminary support infrastructure for the new feature out of the code.

dude250711 2026-03-04 17:54 UTC link
Claude is incapable of producing a native application for itself, and is bad enough with web ones to justify Anthropic acquiring Bun.
janalsncm 2026-03-04 18:39 UTC link
Anthropic has one nine of uptime right now. One.

https://status.claude.com/

If AI could effectively replace people, you wouldn’t need CEOs to keep trying to convince people.

vardalab 2026-03-04 18:56 UTC link
A q4 quant gives you 175 tok/s generation (tg) and 7K tok/s prompt processing (pp), which beats most cloud providers.
cmrdporcupine 2026-03-04 18:59 UTC link
Perhaps they wanted future Qwen models to be closed and proprietary, and the authors couldn't abide by that.
zozbot234 2026-03-04 19:05 UTC link
What Anthropic was complaining about is training on mass-elicited chat logs. It is very much a ToS violation (you aren't allowed to exploit the service for the purpose of building a competitor), so the complaint is well-founded, but (1) it's not "distillation" properly understood: because you have no access to the actual weights, it can only feasibly extract the same kind of narrow knowledge you'd read out of chat logs, perhaps including primitive "let's think step by step" output (which is not true fine-tuned reasoning tokens); and (2) it's something Western AI firms are widely believed to do to one another and to Chinese models all the time anyway. Hence the brouhaha about Western models claiming to be DeepSeek when they answer in Chinese.
vardalab 2026-03-04 19:11 UTC link
Have the frontier lab do the plan, which is the most time-consuming part anyway, and then have a local LLM do the implementation. The frontier model can orchestrate your tickets, write a plan for them, and dispatch local LLM agents to implement at about 180 tokens/s; vLLM can probably manage something like 25 concurrent sessions on an RTX 6000. Do it all in worktrees, then have the frontier model do the review and merge. I am just a retired hobbyist, but that's my approach: I run everything through Gitea issues, each issue gets launched by the orchestrator in a new tmux window, and the two main agents (implementer and reviewer) get their own panes so I can see what's going on. I think Claude Code now has this aspect somewhat streamlined too, but I have seen no need to change my approach yet, since I am just tinkering on my personal projects. Right now I use Claude Code subagents, but I have been thinking of replacing them with some of these Qwen 3.5 models, because they do seem capable and I have the hardware to run them.
ferfumarma 2026-03-04 19:37 UTC link
It would surprise me if they're willing to come to the US in the setting of the current DHS and ICE situation.
anana_ 2026-03-04 19:46 UTC link
I've had even better results using the dense 27B model -- less looping and churning on problems
lreeves 2026-03-04 19:54 UTC link
In my experience Qwen3.5/Qwen3-Coder-Next perform best in their own harness, Qwen-Code. You can also crib the system prompt and tool definitions from there. One caveat, though: despite the Qwen models being state of the art for local models, they are about a year behind anything you can pay for commercially, so asking one to build a new app from scratch might be a bit much.
Tepix 2026-03-04 20:11 UTC link
What is "the new qwen model"? There are a dozen and you can get them in a dozen different quantizations (or more) which are of different quality each.
seanmcdirmid 2026-03-04 22:01 UTC link
They already kind of do, but I think anyone who was into US money has already left for it, and the money China is throwing at the problem is pretty good also. You can also have a lot more influence in a Chinese company without having to adopt a weird new American corporate culture.
trvz 2026-03-04 22:09 UTC link
Wasted tokens are preferred for local models, I need the GPU mainframe in my bedroom to heat it as I live in a third world country with unreliable heating (Switzerland).
mmis1000 2026-03-05 01:38 UTC link
> Only complaint is it sometimes decides to ignore half your prompt when instructions get long

This sounds like your context is too big and getting cut off.

ramgine 2026-03-05 03:54 UTC link
Running 32b on what hardware?
indrora 2026-03-05 08:26 UTC link
Ostensibly, a mix of VC funding and the fact that they host an endpoint that lets customers run the big (200+ GB) models on their infrastructure, rather than having to build machines with hundreds of gigs of LLM-dedicated memory.
rwmj 2026-03-05 09:02 UTC link
The small spend may be worth it to destroy US proprietary AI companies.
gdiamos 2026-03-05 09:24 UTC link
Results as good as Qwen has been posting would seem to trigger a power struggle.

I think companies that don’t navigate these correctly eventually lose.

theshrike79 2026-03-05 13:12 UTC link
Chinese companies don't always operate on purely capitalistic principles, there is sometimes government direction in the background.

For China, the country, it's a good thing if American AI companies have to scramble to compete with Chinese open models. It might not be massively profitable for the companies producing said models, but that's only a part of the equation

gmerc 2026-03-05 14:28 UTC link
How do US tech companies make money? They don't until the competition has been starved.
Editorial Channel
What the content says
Article 19: Freedom of Expression
Confidence: High · A: Freedom of opinion and expression · A: Freedom to seek and receive information · A: Freedom to impart information and ideas
Editorial: +0.25 · SETL: +0.13

Core function of content. Article is a detailed opinion piece and analysis of emerging technology news. Author: (1) Expresses personal judgments ('truly remarkable,' 'exceptionally good,' 'real tragedy'), (2) Synthesizes information from multiple sources (tweet, 36kr article, Wikipedia, technical experimentation), (3) Imparts findings and analysis to readers, (4) Cites sources transparently, (5) Engages in public reasoning about industry implications. The article models all dimensions of Article 19: forming and expressing opinions freely, seeking diverse information sources, and sharing knowledge with an audience.

Article 18: Freedom of Thought
Confidence: High · A: Freedom of thought and conscience regarding AI development · A: Freedom of belief in open-source model philosophy
Editorial: +0.22 · SETL: +0.12

Content advocates for open thought and diverse perspectives in AI research. Implicitly supports freedom of conscience by: (1) reporting on individuals' decisions to leave organizations, (2) valuing Qwen's open-weight model philosophy without demanding conformity, (3) respecting researchers' internal convictions about how to advance the field. The author explicitly celebrates the Qwen team's intellectual contributions: 'It would be a real tragedy if the Qwen team were to disband now, given their proven track record in continuing to find new ways to get high quality results.'

Article 13: Freedom of Movement
Confidence: Medium · A: Freedom of movement (implied through international research mobility)
Editorial: +0.18 · SETL: +0.10

Content implicitly supports researchers' freedom to move between organizations and roles. Speculation about what departing talent 'might do next' respects their freedom to pursue new opportunities: 'If those core Qwen team members either start something new or join another research lab I'm excited to see what they do next.'

Article 23: Work & Equal Pay
Confidence: High · A: Right to work and free choice of employment · A: Just and favorable working conditions
Editorial: +0.18 · SETL: +0.10

Content directly engages Article 23 through the lens of researcher autonomy and labor dignity. The entire narrative centers on researchers' freedom to make employment decisions: Junyang Lin's resignation, multiple team members' departures, and the author's respect for 'If those core Qwen team members either start something new or join another research lab.' Implicitly supports: (1) right to leave employment freely, (2) value of creative and intellectual work, (3) concern for working conditions that enable good outcomes ('Given far fewer resources than competitors, Junyang's leadership is one of the core factors in achieving today's results'). The author frames the reorganization as a potential threat to favorable working conditions.

Preamble
Confidence: Medium · F: Recognition that talented researchers should have dignity and agency
Editorial: +0.15 · SETL: +0.09

Content implicitly respects the principle that human dignity and freedom of choice are inherent. The framing of researchers' departures acknowledges their agency and accomplishments without diminishing their worth.

Article 27: Cultural Participation
Confidence: High · A: Participation in cultural and scientific advancement · A: Protection of scientific and cultural achievement
Editorial: +0.15 · SETL: +0.07

Article explicitly celebrates and protects scientific achievement through detailed analysis of Qwen's research contributions. Author advocates for continued scientific advancement: 'It would be a real tragedy if the Qwen team were to disband now, given their proven track record in continuing to find new ways to get high quality results out of smaller and smaller models.' The piece documents scientific progress (model evolution from 397B to 0.8B), acknowledges researcher contributions by name, and expresses concern about potential loss of scientific capability. This supports the right to participate in scientific advancement and protection of scientific achievement.

Article 12: Privacy
Confidence: Medium · F: Privacy of reputation and personal relationships · A: Protection of individual research contributions
Editorial: +0.12 · SETL: +0.07

Content respectfully reports on personal resignations and emotional ties ('bye my beloved qwen'). Acknowledges relationships and personal motivations without invasive speculation. Reports facts while protecting dignity of affected individuals.

Article 20: Assembly & Association
Confidence: Medium · A: Freedom of peaceful assembly and association (implied through research community solidarity)
Editorial: +0.12 · SETL: +0.05

Content implicitly supports collective action and association by: (1) Reporting on organized resignations as legitimate group action, (2) Noting emotional bonds within research teams ('bye my beloved qwen'), (3) Suggesting future collective work ('If those core Qwen team members either start something new or join another research lab'). The framing respects researchers' right to associate and coordinate decisions.

Article 6: Legal Personhood
Confidence: Low · F: Recognition of individuals as legal persons with rights
Editorial: +0.10 · SETL: +0.07

Content treats researchers (Lin Junyang, Bowen Yu, etc.) as individuals with names, agency, and specific professional identities. Implicitly recognizes them as rights-bearing persons.

Article 26: Education
Confidence: Medium · A: Right to education · A: Education directed toward human dignity and understanding
Editorial: +0.10 · SETL: 0.00

Content functions as educational material about AI development, researcher careers, and model capabilities. The detailed technical explanation of model sizes and capabilities ('They started with Qwen3.5-397B-A17B on February 17th—an 807GB model—and then followed with a flurry of smaller siblings in 122B, 35B, 27B, 9B, 4B, 2B, 0.8B sizes') educates readers. The narrative also implicitly teaches about research organization, career autonomy, and global collaboration in knowledge creation.

Article 7: Equality Before Law
Confidence: Low · F: Equal treatment under institutional frameworks
Editorial: +0.08 · SETL: +0.05

Content implies concerns about equal treatment through the framing of reorganization and departures. Notes that Junyang Lin was 'one of Alibaba's youngest P10 employees,' suggesting recognition of rank and status distinctions.

Article 22: Social Security
Confidence: Low · F: Social security and welfare through knowledge access
Editorial: +0.08 · SETL: +0.04

Content supports social protection implicitly by advocating for open-source AI development and knowledge sharing. Open models and smaller quantized versions support broader access to AI benefits: 'I've tried the 9B, 4B and 2B models and found them to be notably effective considering their tiny sizes. That 2B model is just 4.57GB—or as small as 1.27GB quantized—and is a full reasoning and multi-modal (vision) model.' This accessibility supports inclusive social participation.

Article 25: Standard of Living
Confidence: Low · F: Social welfare and adequate standard of living
Editorial: +0.08 · SETL: +0.05

Content indirectly supports welfare through advocacy for open and accessible AI models. By championing models that 'fit on a 32GB/64GB Mac' and quantized versions 'as small as 1.27GB,' the author advocates for technology that supports broader participation and improved living standards across economic groups.

Article 3: Life, Liberty, Security
Confidence: Low · A: Implicit support for right to life through discussion of research continuity
Editorial: +0.05 · SETL: 0.00

Content expresses concern about potential disbanding of a research team, framing this as loss of human potential and contribution capacity.

Article 29: Duties to Community
Confidence: Low · F: Duties toward community through knowledge sharing
Editorial: +0.05 · SETL: 0.00

Content implicitly performs duties toward community by sharing knowledge, providing analysis, and documenting important research developments. The author's work contributes to collective understanding of AI advancement.

Article 1: ND — Freedom, Equality, Brotherhood

Content does not directly address equal rights or dignity of all humans.

Article 2: ND — Non-Discrimination

No discussion of discrimination.

Article 4: ND — No Slavery

No discussion of slavery or servitude.

Article 5: ND — No Torture

No discussion of torture or cruel treatment.

Article 8: ND — Right to Remedy

No discussion of legal remedies or institutional recourse.

Article 9: ND — No Arbitrary Detention

No discussion of arbitrary detention.

Article 10: ND — Fair Hearing

No discussion of fair public hearing or impartial tribunal.

Article 11: ND — Presumption of Innocence

No discussion of criminal liability or presumption of innocence.

Article 14: ND — Asylum

No discussion of asylum or refuge.

Article 15: ND — Nationality

No discussion of nationality or citizenship.

Article 16: ND — Marriage & Family

No discussion of marriage or family.

Article 17: ND — Property

No discussion of property rights or arbitrary deprivation.

Article 21: ND — Political Participation

No discussion of political participation or democratic governance.

Article 24: ND — Rest & Leisure

No discussion of rest, leisure, or reasonable working hours.

Article 28: ND — Social & International Order

No discussion of social and international order.

Article 30: ND — No Destruction of Rights

No discussion of restrictions on rights or freedoms.

Structural Channel
What the site does
| Element | Modifier | Affects | Note |
|---|---|---|---|
| br_tracking | +0.05 | Preamble ¶5, Article 12, Article 19 | No third-party trackers detected |
| br_security | -0.05 | Article 3, Article 12 | Security headers: HTTPS |
| br_accessibility | 0.00 | Article 26, Article 27 ¶1 | Accessibility: lang attr, 100% alt text |
| br_consent | 0.00 | Article 12, Article 19, Article 20 ¶2 | No cookie consent banner detected |
Article 19: Freedom of Expression
Confidence: High · A: Freedom of opinion and expression · A: Freedom to seek and receive information · A: Freedom to impart information and ideas
Structural: +0.18 · Context Modifier: +0.05 · SETL: +0.13

Site provides multiple channels for information sharing: blog post, newsletter subscription, social media links (Mastodon, Bluesky, Twitter). Content is freely accessible without paywall. Archive structure (tags, chronological organization) facilitates information discovery. No cookie tracking or forced identification required to read. RSS-friendly structure supports syndication and reuse.

+0.15
Article 18 Freedom of Thought
High · A: Freedom of thought and conscience regarding AI development · A: Freedom of belief in open-source model philosophy
Structural +0.15 · Context Modifier 0.00 · SETL +0.12

Site hosts diverse viewpoints and encourages independent thought through open commentary and link-sharing. Subscription model respects user choice to engage or not. No content policing or ideological gatekeeping apparent.

+0.12
Article 13 Freedom of Movement
Medium · A: Freedom of movement (implied through international research mobility)
Structural +0.12 · Context Modifier 0.00 · SETL +0.10

Site is publicly accessible without geographic restrictions. Content about global AI research community respects cross-border knowledge exchange.

+0.12
Article 23 Work & Equal Pay
High · A: Right to work and free choice of employment · A: Just and favorable working conditions
Structural +0.12 · Context Modifier 0.00 · SETL +0.10

Site employment model respects workers: sponsorship is offered as an option rather than mandatory advertising, the business model is transparent ('Sponsor me for $10/month'), and the author maintains editorial independence. The structure supports free choice and dignified work conditions.

+0.12
Article 27 Cultural Participation
High · A: Participation in cultural and scientific advancement · A: Protection of scientific and cultural achievement
Structural +0.12 · Context Modifier 0.00 · SETL +0.07

Site hosts scientific discourse and documents research progress through archival structure. Multiple distribution channels (newsletter, social media, RSS-compatible structure) amplify scientific knowledge sharing. Open discussion format encourages participation in scientific conversation.

+0.10
Preamble
Medium · F: Recognition that talented researchers should have dignity and agency
Structural +0.10 · Context Modifier 0.00 · SETL +0.09

Site architecture respects user agency through theme choice controls and clear content organization. No manipulative design patterns observed.

+0.10
Article 20 Assembly & Association
Medium · A: Freedom of peaceful assembly and association (implied through research community solidarity)
Structural +0.10 · Context Modifier 0.00 · SETL +0.05

Site supports community through multiple association mechanisms: newsletter subscription, social media linking, tagging system (articles tagged 'ai', 'generative-ai', 'llms', 'qwen' to build communities of interest). Weblog format inherently facilitates readers forming communities around shared interests.

+0.10
Article 26 Education
Medium · A: Right to education · A: Education directed toward human dignity and understanding
Structural +0.10 · Context Modifier 0.00 · SETL 0.00

Site is fully accessible (per DCP: '100% alt text'). Language attribute enables assistive technology. Clear reading structure supports comprehension. Tagging system ('ai', 'generative-ai', 'llms', 'qwen') facilitates educational discovery and learning paths.

+0.08
Article 12 Privacy
Medium · F: Privacy of reputation and personal relationships · A: Protection of individual research contributions
Structural +0.08 · Context Modifier 0.00 · SETL +0.07

No third-party tracking detected (per DCP). Site uses local storage only for user preferences. HTTPS encryption protects communication privacy. No cookie consent needed, indicating minimal data collection.

+0.06
Article 22 Social Security
Low · F: Social security and welfare through knowledge access
Structural +0.06 · Context Modifier 0.00 · SETL +0.04

Free public access to knowledge about AI development supports broader welfare through information access. No paywalls or exclusive barriers.

+0.05
Article 3 Life, Liberty, Security
Low · A: Implicit support for right to life through discussion of research continuity
Structural +0.05 · Context Modifier -0.05 · SETL 0.00

HTTPS security protocol maintains basic user safety. No malicious code or exploitation vectors apparent.

+0.05
Article 6 Legal Personhood
Low · F: Recognition of individuals as legal persons with rights
Structural +0.05 · Context Modifier 0.00 · SETL +0.07

Site respects users as individual agents through account-optional design and local storage of preferences without forced identification.

+0.05
Article 7 Equality Before Law
Low · F: Equal treatment under institutional frameworks
Structural +0.05 · Context Modifier 0.00 · SETL +0.05

No differential access to content based on user identity. Public information equally available.

+0.05
Article 25 Standard of Living
Low · F: Social welfare and adequate standard of living
Structural +0.05 · Context Modifier 0.00 · SETL +0.05

Free public information access supports welfare knowledge dissemination.

+0.05
Article 29 Duties to Community
Low · F: Duties toward community through knowledge sharing
Structural +0.05 · Context Modifier 0.00 · SETL 0.00

Site structure supports community duties through open access and multiple sharing channels without exploitation.

ND
Article 1 Freedom, Equality, Brotherhood

No structural barriers to access based on status, but no explicit equal opportunity mechanisms.

ND
Article 2 Non-Discrimination

No discriminatory barriers apparent in site design.

ND
Article 4 No Slavery

No evidence of exploitative practices in site structure.

ND
Article 5 No Torture

No harmful or cruel design patterns.

ND
Article 8 Right to Remedy

No recourse mechanisms visible on page.

ND
Article 9 No Arbitrary Detention

No content custody or lockdown patterns.

ND
Article 10 Fair Hearing

No judicial or dispute-resolution function on site.

ND
Article 11 Presumption of Innocence

No criminal judgment or liability framework present.

ND
Article 14 Asylum

No asylum or protection mechanisms relevant to this content.

ND
Article 15 Nationality

No nationality-based restrictions on access.

ND
Article 16 Marriage & Family

No family-related restrictions or protections.

ND
Article 17 Property

No property-related mechanisms on site.

ND
Article 21 Political Participation

No voting, political engagement, or governance mechanisms.

ND
Article 24 Rest & Leisure

No rest or leisure mechanisms on site.

ND
Article 28 Social & International Order

No social order governance mechanisms.

ND
Article 30 No Destruction of Rights

No harmful restrictions on rights apparent.

Psychological Safety
experimental
How safe this content is to read, independent of its rights stance. Scores are ordinal (rank-order only).
PSQ
+0.4
Per-model PSQ
L4P +0.4 L3P +0.5
Supplementary Signals
How this content communicates, beyond directional lean.
Epistemic Quality
How well-sourced and evidence-based is this content?
0.77 medium claims
Sources
0.8
Evidence
0.8
Uncertainty
0.7
Purpose
0.8
Propaganda Flags
1 manipulative rhetoric technique found
1 technique detected
appeal to emotion
'bye my beloved qwen' and 'It would be a real tragedy if the Qwen team were to disband' use emotional language to frame departures and potential team dissolution as personally significant losses.
Emotional Tone
Emotional character: positive/negative, intensity, authority
measured
Valence
+0.3
Arousal
0.5
Dominance
0.6
Transparency
Does the content identify its author and disclose interests?
0.75
✓ Author ✓ Funding
More signals: context, framing & audience
Solution Orientation
Does this content offer solutions or only describe problems?
0.63 mixed
Reader Agency
0.7
Stakeholder Voice
Whose perspectives are represented in this content?
0.68 5 perspectives
Speaks: individuals, workers, institution, corporation
About: corporation, institution
Temporal Framing
Is this content looking backward, at the present, or forward?
present · immediate
Geographic Scope
What geographic area does this content cover?
global
China, Beijing, United States, Singapore
Complexity
How accessible is this content to a general audience?
moderate · medium jargon · domain-specific
Longitudinal 1219 HN snapshots · 185 evals
Audit Trail 205 entries
2026-03-16 02:34 eval_success PSQ evaluated: g-PSQ=0.440 (3 dims) - -
2026-03-16 02:34 eval Evaluated by llama-4-scout-wai-psq: +0.44 (Moderate positive) 0.00
2026-03-16 02:32 eval_success Lite evaluated: Neutral (0.08) - -
2026-03-16 02:32 eval Evaluated by llama-4-scout-wai: +0.08 (Neutral) 0.00
reasoning
Article discusses Qwen AI model developments and team changes, no explicit human rights discussion.
2026-03-16 02:32 rater_validation_warn Lite validation warnings for model llama-4-scout-wai: 1W 0R - -
2026-03-16 00:10 eval_success Evaluated: Mild positive (0.13) - -
2026-03-16 00:10 eval Evaluated by claude-haiku-4-5-20251001: +0.13 (Mild positive) 16,500 tokens
2026-03-08 19:21 eval_success PSQ evaluated: g-PSQ=0.440 (3 dims) - -
2026-03-08 19:20 eval Evaluated by llama-4-scout-wai-psq: +0.44 (Moderate positive) 0.00
2026-03-08 19:05 eval_success PSQ evaluated: g-PSQ=0.450 (3 dims) - -
2026-03-08 19:05 eval Evaluated by llama-3.3-70b-wai-psq: +0.45 (Moderate positive) 0.00
2026-03-08 18:52 eval_success Lite evaluated: Neutral (0.08) - -
2026-03-08 18:52 eval Evaluated by llama-4-scout-wai: +0.08 (Neutral) 0.00
reasoning
Article discusses Qwen AI model developments and team changes, no explicit human rights discussion.
2026-03-08 18:51 rater_validation_warn Lite validation warnings for model llama-4-scout-wai: 1W 0R - -
2026-03-08 18:02 eval_success Lite evaluated: Neutral (0.08) - -
2026-03-08 18:02 eval Evaluated by llama-3.3-70b-wai: +0.08 (Neutral) 0.00
reasoning
Technical blog post on Qwen AI models
2026-03-08 18:02 rater_validation_warn Lite validation warnings for model llama-3.3-70b-wai: 1W 0R - -
2026-03-08 17:57 eval_success Lite evaluated: Neutral (0.08) - -
2026-03-08 17:57 eval Evaluated by llama-3.3-70b-wai: +0.08 (Neutral) 0.00
reasoning
Technical blog post on Qwen AI models
2026-03-08 17:57 rater_validation_warn Lite validation warnings for model llama-3.3-70b-wai: 1W 0R - -
2026-03-08 16:31 eval_success PSQ evaluated: g-PSQ=0.440 (3 dims) - -
2026-03-08 16:31 eval Evaluated by llama-4-scout-wai-psq: +0.44 (Moderate positive) 0.00
2026-03-08 16:12 eval_success PSQ evaluated: g-PSQ=0.450 (3 dims) - -
2026-03-08 16:12 eval Evaluated by llama-3.3-70b-wai-psq: +0.45 (Moderate positive) 0.00
2026-03-08 15:55 eval_success Lite evaluated: Neutral (0.08) - -
2026-03-08 15:55 eval Evaluated by llama-4-scout-wai: +0.08 (Neutral) 0.00
reasoning
Article discusses Qwen AI model developments and team changes, no explicit human rights discussion.
2026-03-08 15:55 rater_validation_warn Lite validation warnings for model llama-4-scout-wai: 1W 0R - -
2026-03-08 15:38 eval_success Lite evaluated: Neutral (0.08) - -
2026-03-08 15:38 eval Evaluated by llama-3.3-70b-wai: +0.08 (Neutral) +0.02
reasoning
Technical blog post on Qwen AI models
2026-03-08 15:38 rater_validation_warn Lite validation warnings for model llama-3.3-70b-wai: 1W 0R - -
2026-03-07 19:28 eval_success PSQ evaluated: g-PSQ=0.450 (3 dims) - -
2026-03-07 19:28 eval Evaluated by llama-3.3-70b-wai-psq: +0.45 (Moderate positive) 0.00
2026-03-07 19:23 eval_success PSQ evaluated: g-PSQ=0.450 (3 dims) - -
2026-03-07 19:23 eval Evaluated by llama-3.3-70b-wai-psq: +0.45 (Moderate positive) 0.00
2026-03-05 04:19 → 2026-03-07 19:08: 128 further PSQ eval entries, alternating llama-3.3-70b-wai-psq +0.45 (Moderate positive) and llama-4-scout-wai-psq +0.44 (Moderate positive), delta 0.00 throughout
2026-03-05 04:13 eval Evaluated by llama-4-scout-wai: +0.08 (Neutral) 0.00
reasoning
Article discusses Qwen AI model developments and team changes, no explicit human rights discussion.
2026-03-05 04:07 eval Evaluated by llama-3.3-70b-wai: +0.06 (Neutral) 0.00
reasoning
Technical blog post on Qwen AI models
2026-03-04 16:39 → 2026-03-05 03:39: 41 further Lite eval entries, alternating llama-4-scout-wai +0.08 (Neutral) and llama-3.3-70b-wai +0.06 (Neutral), each repeating the same reasoning as the entries above