Can I Run AI locally? (www.canirun.ai) · E: +0.10 · S: +0.17
1457 points by ricardbejarano 2 days ago · 345 comments on HN · Mild positive · Moderate agreement (2 models) · Product · v3.7 · 2026-03-15 22:23:53
Summary · Digital Access & Technological Empowerment
CanIRun.ai is a free, browser-based AI model compatibility tool that detects hardware capabilities and identifies which AI models users can run locally. The application demonstrates modest positive signals toward human rights through universal free access, privacy-protective client-side architecture, and support for technological education and informed decision-making. While the tool does not explicitly advocate for human rights, its design choices—particularly no-cost access and information transparency—align with principles of equality, privacy, and educational access.
Article Heatmap
Preamble: ND
Article 1: ND — Freedom, Equality, Brotherhood
Article 2: ND — Non-Discrimination
Article 3: ND — Life, Liberty, Security
Article 4: ND — No Slavery
Article 5: ND — No Torture
Article 6: ND — Legal Personhood
Article 7: ND — Equality Before Law
Article 8: ND — Right to Remedy
Article 9: ND — No Arbitrary Detention
Article 10: ND — Fair Hearing
Article 11: ND — Presumption of Innocence
Article 12: ND — Privacy
Article 13: ND — Freedom of Movement
Article 14: ND — Asylum
Article 15: ND — Nationality
Article 16: ND — Marriage & Family
Article 17: ND — Property
Article 18: ND — Freedom of Thought
Article 19: +0.15 — Freedom of Expression
Article 20: ND — Assembly & Association
Article 21: ND — Political Participation
Article 22: ND — Social Security
Article 23: ND — Work & Equal Pay
Article 24: ND — Rest & Leisure
Article 25: ND — Standard of Living
Article 26: +0.35 — Education
Article 27: +0.13 — Cultural Participation
Article 28: ND — Social & International Order
Article 29: +0.07 — Duties to Community
Article 30: ND — No Destruction of Rights
Negative Neutral Positive No Data
Aggregates
E: +0.10 · S: +0.17
Weighted Mean +0.19 Unweighted Mean +0.17
Max +0.35 Article 26 Min +0.07 Article 29
Signal 4 No Data 27
Volatility 0.10 (Medium)
Negative 0 Channels E: 0.6 S: 0.4
SETL -0.11 Structural-dominant
FW Ratio 56% 28 facts · 22 inferences
Agreement Moderate 2 models · spread ±0.093
Evidence 12% coverage
4M 15L 27 ND
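The aggregate figures above can be sanity-checked against the four scored articles in the heatmap (19: +0.15, 26: +0.35, 27: +0.13, 29: +0.07). A minimal sketch, assuming the unweighted mean is a plain average and "Volatility" is roughly a population standard deviation (both are assumptions about the dashboard's definitions):

```python
import statistics

# Scored articles from the heatmap; the other 27 are No Data
scores = {"Article 19": 0.15, "Article 26": 0.35,
          "Article 27": 0.13, "Article 29": 0.07}

mean = sum(scores.values()) / len(scores)   # plain average
vol = statistics.pstdev(scores.values())    # population std dev

print(f"{mean:.3f}")  # 0.175, shown as "Unweighted Mean +0.17"
print(f"{vol:.3f}")   # 0.105, consistent with "Volatility 0.10"
```

The weighted mean presumably applies per-article or per-channel weights not shown on this page, so it is not reproduced here.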
Theme Radar
Foundation: 0.00 (0 articles)
Security: 0.00 (0 articles)
Legal: 0.00 (0 articles)
Privacy & Movement: 0.00 (0 articles)
Personal: 0.00 (0 articles)
Expression: 0.15 (1 article)
Economic & Social: 0.00 (0 articles)
Cultural: 0.24 (2 articles)
Order & Duties: 0.07 (1 article)
HN Discussion 20 top-level · 30 replies
sxates 2026-03-13 16:05 UTC link
Cool thing!

A couple suggestions:

1. I have an M3 Ultra with 256GB of memory, but the options list only goes up to 192GB. The M3 Ultra supports up to 512GB.

2. It'd be great if I could flip this around: choose a model, then see the performance across all the different processors. That would help with buying decisions!

phelm 2026-03-13 16:13 UTC link
This is awesome. It would be great to cross-reference some intelligence benchmarks so that I can understand the trade-off between RAM consumption, token rate, and how good the model is.
twampss 2026-03-13 16:24 UTC link
Is this just a web version of llmfit?

https://github.com/AlexsJones/llmfit

LeifCarrotson 2026-03-13 16:47 UTC link
This is missing a whole lot of mobile GPUs. It also doesn't account for sharing CPU memory with the GPU, or for the various KV-cache offloading strategies that work around memory limits.

It says I have an Arc 750 with 2 GB of shared RAM, because that's the GPU that renders my browser... but I actually have an RTX 1000 Ada with 6 GB of GDDR6. It's kind of like an RTX 4050 (not listed in the dropdowns) but with lower thermal limits. I also have 64 GB of LPDDR5 main memory.

It works: Qwen3 Coder Next, Devstral Small, Qwen3.5 4B, and others can run locally on my laptop in near real-time. They're not quite as good as the latest models, and I've tried some bigger ones (up to 24GB; they produce tokens about half as fast as I can type, which is disappointingly slow) that are slower but smarter.

But I don't run out of tokens.

carra 2026-03-13 17:08 UTC link
Having a rating of how well a model will run for you is cool. I'd also like some rating of each model's capabilities (even if that's tricky). There are way too many to choose from, and just looking at the parameter count or the memory used is not always a good indication of actual performance.
meatmanek 2026-03-13 17:25 UTC link
This seems to be estimating based on memory bandwidth / size of model, which is a really good estimate for dense models, but MoE models like GPT-OSS-20b don't involve the entire model for every token, so they can produce more tokens/second on the same hardware. GPT-OSS-20B has 3.6B active parameters, so it should perform similarly to a 3-4B dense model, while requiring enough VRAM to fit the whole 20B model.

(In terms of intelligence, they tend to score similarly to a dense model that's as big as the geometric mean of the full model size and the active parameters, i.e. for GPT-OSS-20B, it's roughly as smart as a sqrt(20b*3.6b) ≈ 8.5b dense model, but produces tokens 2x faster.)
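Both rules of thumb above can be sketched numerically. A rough model, assuming decode is memory-bandwidth-bound (each token reads every active weight once); the 400 GB/s bandwidth figure and 4-bit weights are illustrative assumptions, not measurements:

```python
import math

def est_decode_tps(active_params_b, bandwidth_gbs, bits=4):
    """Bandwidth-bound decode estimate: tokens/s ~ bandwidth divided by
    the bytes of active weights read per token."""
    bytes_per_token = active_params_b * 1e9 * bits / 8
    return bandwidth_gbs * 1e9 / bytes_per_token

def dense_equivalent_b(total_b, active_b):
    """Quality heuristic from the comment: geometric mean of total and
    active parameter counts."""
    return math.sqrt(total_b * active_b)

# GPT-OSS-20B (3.6B active) on a hypothetical 400 GB/s machine
print(round(est_decode_tps(3.6, 400)))       # ~222 t/s upper bound
print(round(dense_equivalent_b(20, 3.6), 1)) # 8.5, as in the comment
```

Real throughput lands below the bandwidth bound once compute, KV-cache reads, and framework overhead are included, but the ranking between models usually survives.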

cafed00d 2026-03-13 18:09 UTC link
Open it with multiple browsers (Safari vs. Chrome) to get more "accurate + glanceable" rankings.

It's using WebGPU as a proxy to estimate system resources. Chrome tends to claim as much of the machine (compute + memory) as the OS makes available; Safari tends to be more conservative.

Maybe this was obvious to everyone else, but it's worth reiterating for those of us who skim HN :)

mark_l_watson 2026-03-13 18:20 UTC link
I have spent a HUGE amount of time the last two years experimenting with local models.

A few lessons learned:

1. small models like the new qwen3.5:9b can be fantastic for local tool use, information extraction, and many other embedded applications.

2. For coding tools, just use Google Antigravity and gemini-cli, or, Anthropic Claude, or...

Now, to be clear, I have spent perhaps 100 hours in the last year configuring local models for coding using Emacs, Claude Code (configured for local), etc. However, I am retired, and this time was a lot of fun for me: lots of effort trying to maximize local-only results. I don't recommend it for others.

I do recommend getting very good at using embedded local models in small practical applications. Sweet spot.

andy_ppp 2026-03-13 18:20 UTC link
Is it correct that there's zero performance improvement between the M4 (+Pro/Max) and the M5 (+Pro/Max)? The data looks identical. Also, memory doesn't seem to improve performance on larger models, when I thought it would.

Love the idea though!

EDIT: Okay, the whole thing is nonsense: just rough guesswork, or an LLM asked to estimate the values. You should use real data (I'm sure people here can help) and put ESTIMATE next to any combination you're guessing.

mopierotti 2026-03-13 18:44 UTC link
This and llmfit are great attempts, but I've been generally frustrated by how hard it is to find any guidance on what I'd expect to be the most straightforward/common question:

"What is the highest-quality model that I can run on my hardware, with tok/s greater than <x>, and context limit greater than <y>"

(My personal approach has just devolved into guess-and-check, which is time-consuming.) When using TFA/llmfit, I'm immediately skeptical because I already know that Qwen 3.5 27B Q6 @ 100k context works great on my machine, yet it's buried behind relatively obsolete suggestions like the Qwen 2.5 series.

I'm assuming this is because the tok/s is much higher, but I don't really get much marginal utility out of tok/s speeds beyond ~50 t/s, and there's no way to sort results by quality.

mmaunder 2026-03-13 19:17 UTC link
OP, can you please make it less dark and slightly larger? Super useful otherwise. Qwen 3.5 9B is going to get a lot of love out of this.
azmenak 2026-03-13 19:21 UTC link
From my personal testing, running various agentic tasks with a bunch of tool calls on an M4 Max 128GB, I've found that running quantized versions of larger models produces the best results, which this site completely ignores.

Currently, Nemotron 3 Super using Unsloth's UD Q4_K_XL quant is running nearly everything I do locally (replacing Qwen3.5 122b)

amdivia 2026-03-13 20:45 UTC link
I found this to be inaccurate: I can run GPT-OSS 120B (4-bit quant) on my 5090 + 64GB RAM system at around 40 t/s, yet the site claims it won't work.
torginus 2026-03-13 21:00 UTC link
Huh, I never knew my browser just volunteers my exact hardware specs to any website without so much as even notifying me about it.
dxxvi 2026-03-14 00:55 UTC link
Not sure if there's anybody like me. I use AI for only two purposes: to replace Google Search when learning something, and to generate images. I wonder why there aren't many models that do only one thing and do it well. For example, there's this one https://huggingface.co/Fortytwo-Network/Strand-Rust-Coder-14... for Rust coding. I haven't used it yet, so I don't know how it compares to the free models that Kilo Code provides.
rahimnathwani 2026-03-14 02:02 UTC link
This site presents models in an incomplete and misleading way.

When I visit the site with an Apple M1 Max with 32GB RAM, the first model that's listed is Llama 3.1 8B, which is listed as needing 4.1GB RAM.

But the weights for Llama 3.1 8B are over 16GB. You can see that here in the official HF repo: https://huggingface.co/meta-llama/Llama-3.1-8B/tree/main

The model this site calls 'Llama 3.1 8B' is actually a 4-bit quantized version (Q4_K_M) available on ollama.com/library: https://ollama.com/library/llama3.1:8b

If you're going to recommend a model to someone based on their hardware, you have to recommend not only a specific model, but a specific version of that model (either the original, or some specific quantized version).

This matters because different quantized versions of the model will have different RAM requirements and different performance characteristics.

Another thing I don't like is that the model names are sometimes misleading. For example, there's a model with the name 'DeepSeek R1 1.5B'. There's only one architecture for DeepSeek R1, and it has 671B parameters. The model they call 'DeepSeek R1 1.5B' does not use that architecture. It's a qwen2 1.5B model that's been finetuned on DeepSeek R1's outputs. (And it's a Q4_K_M quantized version.)
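The RAM gap described above is easy to estimate: weight size is roughly parameter count times bits per weight divided by 8. A sketch, assuming ~4.8 effective bits/weight for Q4_K_M and a 5% overhead factor (both are approximations; the exact figures vary by tensor mix):

```python
def weight_size_gb(params_b, bits_per_weight, overhead=1.05):
    """Approximate weight size in GB: parameters x bits / 8, with a small
    factor for tensors kept at higher precision."""
    return params_b * bits_per_weight / 8 * overhead

# Llama 3.1 8B: original bf16 weights vs the Q4_K_M build Ollama ships
print(round(weight_size_gb(8.0, 16), 1))   # ~16.8 GB, matching the >16GB HF repo
print(round(weight_size_gb(8.0, 4.8), 1))  # ~5.0 GB, same ballpark as the 4.1GB listing
```

This is why "which quantization" matters as much as "which model" for hardware recommendations: the same architecture spans a 3-4x range of memory footprints.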

StefanoC 2026-03-14 14:40 UTC link
Can anybody share their setup using 64GB macs? I have an M2 Ultra studio and I'm trying Qwen 3.5 MLX models hosting them from the CLI, but I'm a bit stuck picking bigger models, more context, 4/8 bits, Opus-Reasoning-Distilled, coder... There are a bit too many permutations between mlx CLI flags, env variables, and models.

At the moment I'm exploring:

- nightmedia/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-qx64-hi-mlx

- BeastCode/Qwen3.5-27B-Claude-4.6-Opus-Distilled-MLX-4bit

- mlx-community/Qwen3-Coder-Next-4bit

dexterlagan 2026-03-15 08:09 UTC link
Same as top comment, have spent a lot of time on local models. IMHO, qwen3.5 is the very first model that is actually usable for serious work, ever - and I've tried them all. The 35B 3B is very smart. It understands things no other local model I've ever used does, it's that good. The 9B runs on my slow Mac, and it's also very 'smart'. I can say with confidence that 2026 is the year of the local model, at last.
Western0 2026-03-15 08:20 UTC link
I tried using agents on a small Orange Pi and other small machines, and I’ve come to the conclusion that, unfortunately, it’s not feasible to do this in a practical way. Of course, I started writing an agent that could run in such an environment (long timeouts, retries, etc.), but it’s a real pain. For this to make sense, you need much more powerful hardware (a 5-year-old Mac mini is fine); the other issue is power consumption. Unfortunately, until you can run Mavericks 2 on your own hardware, it’s pretty expensive.
rurban 2026-03-15 15:18 UTC link
No, not yet. Up to a single H100, there's no local model that doesn't make your code worse at implementing features and fixing bugs (excluding trivial stuff like Ruby, Python, TypeScript).

Right now we've started experimenting with 2 H100s and ~160GB models. But even a single H100 is way out of most people's league.

deanc 2026-03-13 16:31 UTC link
Yes. But llmfit is far more useful as it detects your system resources.
lambda 2026-03-13 17:44 UTC link
Yeah, I looked up some models I have actually run locally on my Strix Halo laptop, and it's saying I should get much lower performance than I actually get on models I've tested.

For MoE models, it should be using the active parameters in memory bandwidth computation, not the total parameters.

littlestymaar 2026-03-13 17:46 UTC link
While your remark is valid, there are two small inaccuracies here:

> GPT-OSS-20B has 3.6B active parameters, so it should perform similarly to a 3-4B dense model, while requiring enough VRAM to fit the whole 20B model.

First, the token generation speed is going to be comparable, but not the prefill speed (context processing is going to be much slower on a big MoE than on a small dense model).

Second, without speculative decoding, it is correct to say that a small dense model and a bigger MoE with the same number of active parameters are going to be roughly as fast. But with a small dense model you will see token-generation improvements from speculative decoding (up to 3x the speed), whereas you probably won't gain much from speculative decoding on a MoE model (because two consecutive tokens won't trigger the same “experts”, so you'd need to load more weights into the compute units, using more bandwidth).
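The speculative-decoding asymmetry can be quantified with the standard expected-acceptance formula: with draft length k and per-token acceptance rate a, each target-model pass commits (1 - a^(k+1)) / (1 - a) tokens on average. The acceptance rates below are illustrative, not measured:

```python
def tokens_per_target_pass(accept_rate, draft_len):
    """Expected tokens committed per target-model forward pass with
    speculative decoding: (1 - a**(k+1)) / (1 - a)."""
    a, k = accept_rate, draft_len
    return (1 - a ** (k + 1)) / (1 - a)

# A well-matched draft for a dense target vs a MoE target where expert
# routing lowers the draft's acceptance rate (illustrative numbers)
print(round(tokens_per_target_pass(0.8, 4), 2))  # ~3.36x for the dense model
print(round(tokens_per_target_pass(0.4, 4), 2))  # ~1.65x for the MoE
```

The speedup compounds with the bandwidth argument: each accepted draft token skips a full read of the target's active weights.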

pbronez 2026-03-13 17:50 UTC link
The docs page addresses this:

> A Mixture of Experts model splits its parameters into groups called "experts." On each token, only a few experts are active — for example, Mixtral 8x7B has 46.7B total parameters but only activates ~12.9B per token. This means you get the quality of a larger model with the speed of a smaller one. The tradeoff: the full model still needs to fit in memory, even though only part of it runs at inference time.

> A dense model activates all its parameters for every token — what you see is what you get. A MoE model has more total parameters but only uses a subset per token. Dense models are simpler and more predictable in terms of memory/speed. MoE models can punch above their weight in quality but need more VRAM than their active parameter count suggests.

https://www.canirun.ai/docs

rootusrootus 2026-03-13 18:00 UTC link
That's super handy, thanks for sharing the link. Way more useful than the web site this post is about, to be honest.

It looks like I can run more local LLMs than I thought; I'll have to give some of those a try. I have decent memory (96GB), but my M2 Max MBP is a few years old now and I figured it would be getting inadequate for the latest models. But llmfit thinks it's a really good fit for the vast majority of them. Interesting!

GeekyBear 2026-03-13 18:31 UTC link
> Is it correct that there's zero improvement in performance between M4 (+Pro/Max) and M5 (+Pro/Max)

Preliminary testing did not come to that conclusion.

> Apple’s New M5 Max Changes the Local AI Story

https://www.youtube.com/watch?v=XGe7ldwFLSE

tommy_axle 2026-03-13 18:36 UTC link
I'm guessing this also calculates based on the full context size the model supports, which can be misleading depending on your use case. Even on a small consumer card with Qwen 3 30B-A3B, you probably don't need 128K context; a smaller context and some tensor overrides will help. llama.cpp's llama-fit-params is helpful in those cases.
J_Shelby_J 2026-03-13 18:52 UTC link
It’s a hard problem. I’ve been working on it for the better part of a year.

Well, granted, my project is trying to do this in a way that works across multiple devices and supports multiple models, to find the best "quality" and the best allocation, and that makes the search space exponential.

But “quality” is the hard part. In this case I’m just choosing the largest quants.

johnmaguire 2026-03-13 18:54 UTC link
I'd love to know how you fit smaller models into your workflow. I have an M4 Macbook Pro w/ 128GB RAM and while I have toyed with some models via ollama, I haven't really found a nice workflow for them yet.
utopcell 2026-03-13 19:12 UTC link
Unfortunately, Apple retired the 512GiB models.
ProllyInfamous 2026-03-13 19:28 UTC link
I'm not usually one to whine, but agreed; additionally, add contrast to the modifiers (e.g. the processor select). The first thing I did when I visited was scale the website to 150%.

Super impressive comparisons, and it correlates with my perception, having three separate generations of GPU (from your list pulldown). Thanks for including the "old AMD" Polaris chipsets, which are actually still much faster than lower-spec Apple silicon. I run Llama 3.1 (via Ollama) on a VEGA64 and it really is twice as fast as an M2 Pro...

----

For anybody who thinks installing a local LLM is complicated: it's not, so long as you have more than one computer (don't tinker on your primary workhorse). I am a blue-collar electrician (admittedly geeky); it's no more difficult than installing Linux. I used an online LLM to help me install both =D

downrightmike 2026-03-13 19:29 UTC link
LLMs are just special-purpose calculators, as opposed to normal calculators, which just do numbers and MUST be accurate. There aren't very good ways of knowing what you want, because the people making the models can't read your mind and have different goals.
comboy 2026-03-13 20:17 UTC link
What is the $/Mtok at which you'd choose your time over the savings of running stuff locally?

Just to be clear, this may sound like a snarky comment, but I'm genuinely curious how you or others see it. There are some long-running batch tasks where, ignoring electricity, it's kind of free, but usually local generation is slower (and lower quality), and we all want some stuff to just get done.

Or is it not about the cost at all, just about not pushing your data into the cloud?

sdrinf 2026-03-13 20:35 UTC link
Just want to echo the recommendation for qwen3.5:9b. This is a smol, thinking, agentic, tool-using, text-image multimodal creature with very good internal chains of thought. The CoT can sometimes be excessive, but it leads to a very stable decision-making process, even across very large contexts; something we haven't seen from models of this size before.

What's also new here is the VRAM/context-size trade-off: for 25% of its attention network they use the regular KV cache for global coherency, but for the other 75% they use a new KV cache with linear(!!!!) memory growth in context size. That means ~100K tokens -> 1.5GB of VRAM use, meaning for the first time you can do extremely long conversations / document processing on, e.g., a 3060.

Strong, strong recommend.
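For scale, a conventional KV cache grows with context as 2 (K and V) x layers x KV heads x head dim x element size per token. A sketch with hypothetical 9B-class dimensions (36 layers, 8 KV heads, head dim 128, fp16; all assumed, not published specs):

```python
def kv_cache_gb(n_layers, n_kv_heads, head_dim, ctx_tokens, bytes_per_elt=2):
    """Standard-attention KV cache size: K and V vectors stored for every
    layer, KV head, and token in the context."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_tokens * bytes_per_elt / 1e9

full = kv_cache_gb(36, 8, 128, 100_000)
print(round(full, 1))  # ~14.7 GB at 100K tokens with a conventional cache
```

Against a baseline of that order, ~1.5GB at 100K tokens is roughly a 10x reduction, which is what makes long contexts plausible on a 12GB card.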

ricardbejarano 2026-03-13 20:50 UTC link
OP here, it's not mine though!
aanet 2026-03-13 21:00 UTC link
+1

The website is super useful. That theme though... low-contrast text on a too-dark background is, uh, barely readable for me.

Jaxan 2026-03-13 21:03 UTC link
It doesn't really. The website thinks I'm on an iPhone 19 Pro, although I'm actually on a 1st-gen iPhone SE. So it's off by roughly a decade.
ebbi 2026-03-13 21:03 UTC link
I thought that's how airlines do the whole trickery of showing different pricing depending on whether you access the site from Windows or a Mac...
DanielHB 2026-03-13 21:06 UTC link
This stuff is used a lot in browser fingerprinting for tracking purposes. More privacy-focused browsers usually feed randomized info.
dataflow 2026-03-13 21:21 UTC link
Thanks for sharing this, it's super helpful. I have a question if you don't mind: I want a model that I can feed, say, my entire email mailbox to, so that I can ask it questions later. (Just the text content, which I can clean and preprocess offline for its use.) Have any offline models you've dealt with seemed suitable for that sort of use case, with that volume of content?
adamkittelson 2026-03-13 21:49 UTC link
Anecdotal but for some reason I had a pretty bad time with qwen3.5 locally for tool usage. I've been using GPT-OSS-120B successfully and switched to qwen so that I could process images as well (I'm using this for a discord chat bot).

Everything worked fine on GPT, but Qwen, as often as not, preferred to pretend to call a tool rather than actually call it. After much aggravation I wound up setting my bot / llama-swap to use GPT for chat and only load Qwen when someone posts an image, processing/responding to the image with Qwen, then popping back over to GPT when the next chat comes in.

0xbadcafebee 2026-03-13 22:17 UTC link
Too generic a question. Gotta be more specific:

   "what is the best open weight model for high-quality coding that fits in 8GB VRAM and 32GB system RAM with t/s >= 30 and context >= 32768" -> Qwen2.5-Coder-7B-Instruct

   "what is the best open weight model for research w/web search that fits in 24GB VRAM and 32GB system RAM with t/s >= 60 and context >= 400k" -> Qwen3-30B-A3B-Instruct-2507

   "what is the best open weight embedding model for RAG on a collection of 100,000 documents that fits in 40GB VRAM and 128GB system RAM with t/s >= 50 and context >= 200k" -> Qwen3-Embedding-8B
Specific models & sizes for specific use cases on specific hardware at specific speeds.
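The query shape above is just constrained maximization: hard-filter on VRAM, speed, and context, then pick the best by quality. A minimal sketch; the catalog entries and quality numbers are made up for illustration, not measurements:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    vram_gb: float   # memory footprint at the chosen quant + context
    est_tps: float   # estimated tokens/s on the target hardware
    ctx: int         # maximum context length
    quality: float   # any benchmark aggregate; scale is arbitrary

def best_model(models, max_vram, min_tps, min_ctx):
    """Highest-quality model that satisfies every hard constraint."""
    fits = [m for m in models if m.vram_gb <= max_vram
            and m.est_tps >= min_tps and m.ctx >= min_ctx]
    return max(fits, key=lambda m: m.quality, default=None)

catalog = [  # illustrative numbers only
    Model("Qwen2.5-Coder-7B-Q4", 5.2, 55.0, 32_768, 0.62),
    Model("Qwen3-30B-A3B-Q4", 18.0, 70.0, 262_144, 0.74),
]
pick = best_model(catalog, max_vram=8, min_tps=30, min_ctx=32_768)
print(pick.name)
```

The hard part, as several commenters note, is populating `quality` and `est_tps` with honest numbers per quantization and per hardware target; the selection logic itself is trivial.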
flutetornado 2026-03-13 23:16 UTC link
My experience with qwen3.5 9b has not been the same. It's definitely good at agentic responses, but it hallucinates a lot. 30-50% of the content it generated for a research task (local code-repo exploration) turned out to be plain wrong, to the extent of made-up file names and function names. I ran its output through Kimi K2 and asked it to verify the output, and it found that much of what qwen had figured out after agentic exploration was plain wrong. So use smaller models, but be very cautious about how much you depend on their output.
hotsalad 2026-03-13 23:24 UTC link
The latest Librewolf prompted me to allow the site permission to make a WebGL context. That's what it used for hardware detection.
zargon 2026-03-14 02:26 UTC link
They appear to be using Ollama as a data source. Ollama does that sort of thing regularly.
dyauspitr 2026-03-14 04:47 UTC link
For learning and general searching I find ChatGPT to be the best.

Nano Banana Pro for anything image and video related.

Grok Imagine for pretty decent porn generation.

gentleman11 2026-03-14 05:03 UTC link
ask apple to graciously allow you to install your own ram in the computer you "own"
ActorNightly 2026-03-14 06:36 UTC link
> I have an M3 Ultra with 256GB of memory

I'm sorry, but spending this kind of money when you could have built yourself a dual-3090 workstation that would have been better for pretty much everything, including local models, is just plain stupid.

Hell, even one 3090 can now run Gemma 3 27B QAT very fast.

duskdozer 2026-03-14 11:04 UTC link
Have to disagree, in part at least. The text is pretty small, which isn't good, but I'm glad when sites don't succumb to the make-dark-mode-lighter trend.
DrAwdeOccarim 2026-03-14 14:46 UTC link
Totally doing this today! Have you tried OpenJarvis or NemoClaw (is it out yet?). I want to use my computer “through” the LLM.
Editorial Channel
What the content says
+0.15 · Article 26 Education · Medium Practice · Editorial +0.15 · SETL -0.16

Application provides technical information supporting informed technology education and capability assessment; empowers users to understand AI model requirements.

+0.10 · Article 19 Freedom of Expression · Medium Practice · Editorial +0.10 · SETL -0.14

Application does not restrict or curate user access to information; technical information presented factually without editorial bias.

+0.10 · Article 27 Cultural Participation · Medium Practice · Editorial +0.10 · SETL -0.09

Application supports participation in cultural and scientific advancement by providing access to information about AI model capabilities.

+0.05 · Article 29 Duties to Community · Low Practice · Editorial +0.05 · SETL -0.07

Application does not impose ideological restrictions or community limitations on user conduct.

ND
Preamble
Low Practice

No editorial content addressing human dignity or freedom principles observed.

ND
Article 1 Freedom, Equality, Brotherhood
Low Practice

No editorial content addressing equality or freedom from discrimination observed.

ND
Article 2 Non-Discrimination
Low Practice

No editorial statement on discrimination grounds observed.

ND
Article 3 Life, Liberty, Security
Low Practice

No editorial content addressing life, liberty, or security of person.

ND
Article 4 No Slavery

No observable editorial content concerning slavery or forced labor.

ND
Article 5 No Torture

No observable editorial content concerning torture or cruel treatment.

ND
Article 6 Legal Personhood

No observable editorial content concerning right to recognition as a person.

ND
Article 7 Equality Before Law

No observable editorial content concerning equality before law.

ND
Article 8 Right to Remedy

No observable editorial content concerning legal remedies.

ND
Article 9 No Arbitrary Detention

No observable editorial content concerning arbitrary arrest or detention.

ND
Article 10 Fair Hearing

No observable editorial content concerning fair trial.

ND
Article 11 Presumption of Innocence

No observable editorial content concerning criminal liability.

ND
Article 12 Privacy
Medium Practice

No editorial commentary on privacy or family life.

ND
Article 13 Freedom of Movement
Low Practice

No editorial content addressing freedom of movement.

ND
Article 14 Asylum
Low Practice

No editorial content addressing asylum or refuge.

ND
Article 15 Nationality

No observable editorial content concerning right to nationality.

ND
Article 16 Marriage & Family

No observable editorial content concerning marriage or family protection.

ND
Article 17 Property
Low Practice

No editorial content regarding property rights.

ND
Article 18 Freedom of Thought
Low Practice

No editorial content addressing freedom of thought, conscience, or religion.

ND
Article 20 Assembly & Association
Low Practice

No editorial content addressing freedom of peaceful assembly or association.

ND
Article 21 Political Participation
Low Practice

No editorial content addressing participation in government.

ND
Article 22 Social Security
Low Practice

No editorial content addressing social security or welfare.

ND
Article 23 Work & Equal Pay
Low Practice

No editorial content addressing right to work or labor conditions.

ND
Article 24 Rest & Leisure

No observable editorial content concerning rest, leisure, or working hours.

ND
Article 25 Standard of Living
Low Practice

No editorial content addressing healthcare or standard of living.

ND
Article 28 Social & International Order
Low Practice

No editorial content addressing social and international order.

ND
Article 30 No Destruction of Rights

No observable editorial content concerning prevention of right destruction.

Structural Channel
What the site does
Element Modifier Affects Note
Legal & Terms
Privacy
No privacy policy or privacy notice observed in provided content.
Terms of Service
No terms of service observed in provided content.
Identity & Mission
Mission
Creator identified as 'midudev' with URL midu.dev; no comprehensive mission statement observed.
Editorial Code
No editorial code or editorial standards observed.
Ownership
Creator identified as individual (midudev); ownership structure clear but minimal corporate governance visible.
Access & Distribution
Access Model +0.15
Article 26
Application advertises free access (price: 0 USD); no paywall or access restrictions observed, supporting equitable access to technology capability information.
Ad/Tracking
No advertising or tracking infrastructure observed in provided page content.
Accessibility
No accessibility statement observed in provided content.
+0.25 · Article 26 Education · Medium Practice · Structural +0.25 · Context Modifier +0.15 · SETL -0.16

Free, public-access tool supports universal education by removing cost barriers; enables self-directed learning about AI model compatibility.

+0.20 · Article 19 Freedom of Expression · Medium Practice · Structural +0.20 · Context Modifier 0.00 · SETL -0.14

Tool provides unrestricted access to hardware and model compatibility information; no content filtering or information suppression observed.

+0.15 · Article 27 Cultural Participation · Medium Practice · Structural +0.15 · Context Modifier 0.00 · SETL -0.09

Free tool enables users to participate in AI development ecosystem by understanding model compatibility; supports access to scientific culture.

+0.10 · Article 29 Duties to Community · Low Practice · Structural +0.10 · Context Modifier 0.00 · SETL -0.07

Tool supports individual freedom within technical constraints; no community restrictions on use observed.

ND
Preamble
Low Practice

Free, public-access tool supporting individual agency and capability assessment; minimal structural signals re: fundamental freedoms.

ND
Article 1 Freedom, Equality, Brotherhood
Low Practice

Application designed to be universally available without authentication or status-based restrictions; neutral on discriminatory practices.

ND
Article 2 Non-Discrimination
Low Practice

Application appears to function identically regardless of user characteristics; no apparent discrimination based on protected characteristics.

ND
Article 3 Life, Liberty, Security
Low Practice

Tool provides capability assessment that could inform user decisions about technology use, indirectly supporting informed choice about personal security.

ND
Article 4 No Slavery

No structural signals regarding slavery or forced labor.

ND
Article 5 No Torture

No structural signals regarding torture or cruel treatment.

ND
Article 6 Legal Personhood

No structural signals regarding legal personhood.

ND
Article 7 Equality Before Law

No structural signals regarding legal equality.

ND
Article 8 Right to Remedy

No structural signals regarding remedy or appeal mechanisms.

ND
Article 9 No Arbitrary Detention

No structural signals regarding arrest or detention.

ND
Article 10 Fair Hearing

No structural signals regarding judicial processes.

ND
Article 11 Presumption of Innocence

No structural signals regarding criminal procedure.

ND
Article 12 Privacy
Medium Practice

Application processes hardware information locally in browser; no server transmission of personal data observed; architecture supports privacy protection.

ND
Article 13 Freedom of Movement
Low Practice

Application does not restrict user movement or constrain physical/digital mobility.

ND
Article 14 Asylum
Low Practice

Tool does not discriminate based on national origin or residency; open to all users.

ND
Article 15 Nationality

No structural signals regarding nationality recognition.

ND
Article 16 Marriage & Family

No structural signals regarding family or marriage rights.

ND
Article 17 Property
Low Practice

Application does not interfere with user property or device ownership; supports user control over their hardware assessment.

ND
Article 18 Freedom of Thought
Low Practice

Application does not monitor or restrict user thought, belief, or expression; neutral tool.

ND
Article 20 Assembly & Association
Low Practice

Application does not prevent or restrict user collective action or association.

ND
Article 21 Political Participation
Low Practice

Application does not provide mechanisms for political participation or governance input.

ND
Article 22 Social Security
Low Practice

Free tool supports economic empowerment and capability assessment, indirectly aligning with social-security principles.

ND
Article 23 Work & Equal Pay
Low Practice

Tool helps users understand AI model compatibility, indirectly supporting informed decisions about work-related technology.

ND
Article 24 Rest & Leisure

No structural signals regarding rest or leisure rights.

ND
Article 25 Standard of Living
Low Practice

Tool does not interfere with access to healthcare or living standards; free access removes financial barriers.

ND
Article 28 Social & International Order
Low Practice

Tool supports informational equity by providing universal access to technical capability information.

ND
Article 30 No Destruction of Rights

No structural signals regarding prevention of rights violation.

Supplementary Signals
How this content communicates, beyond directional lean.
Epistemic Quality
How well-sourced and evidence-based is this content?
0.59 low claims
Sources
0.5
Evidence
0.6
Uncertainty
0.5
Purpose
0.8
Propaganda Flags
No manipulative rhetoric detected
0 techniques detected
Emotional Tone
Emotional character: positive/negative, intensity, authority
measured
Valence
+0.2
Arousal
0.3
Dominance
0.4
Transparency
Does the content identify its author and disclose interests?
0.50
✓ Author
More signals: context, framing & audience
Solution Orientation
Does this content offer solutions or only describe problems?
0.62 solution oriented
Reader Agency
0.7
Stakeholder Voice
Whose perspectives are represented in this content?
0.20 1 perspective
Speaks: creator
Temporal Framing
Is this content looking backward, at the present, or forward?
present · immediate
Geographic Scope
What geographic area does this content cover?
global
Complexity
How accessible is this content to a general audience?
moderate · medium jargon · general audience
Longitudinal 1519 HN snapshots · 147 evals
Audit Trail 167 entries
2026-03-15 22:23 eval_success Evaluated: Mild positive (0.19) - -
2026-03-15 22:23 eval Evaluated by claude-haiku-4-5-20251001: +0.19 (Mild positive) 18,819 tokens
2026-03-15 22:23 rater_validation_warn Validation warnings for model claude-haiku-4-5-20251001: 12W 27R - -
2026-03-15 21:43 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-15 21:43 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 21:43 rater_validation_warn Lite validation warnings for model llama-4-scout-wai: 1W 0R - -
2026-03-15 21:31 eval_success PSQ evaluated: g-PSQ=0.600 (3 dims) - -
2026-03-15 21:31 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 21:04 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-15 21:04 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 21:04 rater_validation_warn Lite validation warnings for model llama-4-scout-wai: 1W 0R - -
2026-03-15 20:51 eval_success PSQ evaluated: g-PSQ=0.600 (3 dims) - -
2026-03-15 20:51 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 20:26 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-15 20:26 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 20:26 rater_validation_warn Lite validation warnings for model llama-4-scout-wai: 1W 0R - -
2026-03-15 20:14 eval_success PSQ evaluated: g-PSQ=0.600 (3 dims) - -
2026-03-15 20:14 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 19:51 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-15 19:51 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 19:51 rater_validation_warn Lite validation warnings for model llama-4-scout-wai: 1W 0R - -
2026-03-15 19:39 eval_success PSQ evaluated: g-PSQ=0.600 (3 dims) - -
2026-03-15 19:39 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 19:13 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-15 19:13 rater_validation_warn Lite validation warnings for model llama-4-scout-wai: 1W 0R - -
2026-03-15 19:13 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 19:01 eval_success PSQ evaluated: g-PSQ=0.600 (3 dims) - -
2026-03-15 19:01 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 18:28 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-15 18:28 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 18:28 rater_validation_warn Lite validation warnings for model llama-4-scout-wai: 1W 0R - -
2026-03-15 18:08 eval_success PSQ evaluated: g-PSQ=0.600 (3 dims) - -
2026-03-15 18:08 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 17:18 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 16:59 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 16:07 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 15:51 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 15:31 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 15:11 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 14:55 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 14:34 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 14:19 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 13:56 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 13:42 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 13:19 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 13:06 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 12:40 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 12:26 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 12:02 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 11:49 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 11:22 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 11:09 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 10:40 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 10:29 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 10:02 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 09:49 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 09:21 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 09:06 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 08:36 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 08:26 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 07:53 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 07:44 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 07:12 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 07:02 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 06:35 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 06:26 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 06:00 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 05:51 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 05:25 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 05:17 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 04:50 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 04:41 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 04:15 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 04:06 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 03:39 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 03:29 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 03:00 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 02:52 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 02:25 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 02:16 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 01:46 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 01:40 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 01:15 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 01:12 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-15 00:47 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 00:44 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 23:47 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 23:40 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 23:08 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 23:02 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 22:07 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 22:01 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 21:07 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 21:01 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 19:54 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 19:51 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 19:13 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 19:11 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 18:08 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 18:08 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 16:33 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 16:32 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 15:23 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 15:19 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 14:44 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 14:39 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 14:07 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 14:02 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 13:32 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 13:25 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 12:56 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 12:49 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 12:21 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 12:15 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 11:46 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 11:39 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 11:11 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 11:04 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 10:36 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 10:25 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 09:58 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 09:44 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 09:16 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 09:05 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 08:37 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 08:26 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 07:56 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 07:42 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 07:14 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 07:00 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 06:35 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 06:21 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 05:56 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 05:39 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 05:18 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 04:59 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 04:39 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 04:20 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 04:04 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 03:44 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 03:23 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 03:02 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 02:47 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 02:23 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 02:09 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 01:45 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 01:31 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 01:07 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 01:03 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 00:40 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-14 00:38 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-14 00:05 eval Evaluated by llama-3.3-70b-wai-psq: +0.46 (Moderate positive)
2026-03-14 00:01 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral)
reasoning
Technical content, zero rights discussion
2026-03-13 23:45 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-13 23:38 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-13 22:40 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-13 22:34 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-13 21:33 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-13 21:21 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-13 20:07 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-13 20:04 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-13 18:46 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-13 18:45 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-13 17:31 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-13 17:31 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Technical content, no human rights discussion
2026-03-13 16:04 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive)
2026-03-13 16:03 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral)
reasoning
Technical content, no human rights discussion