+0.56 The Future of AI (lucijagregov.com S: +0.50)
147 points by BerislavLopac 2 days ago | 109 comments on HN | Moderate positive Contested Editorial · v3.7 · 2026-02-28 13:07:57
Summary AI Ethics & Democratic Governance Advocates
This article advocates strongly for reframing AI development as fundamentally a human rights challenge requiring systemic reform in education, governance, and research priorities. The author positions epistemic integrity, democratic participation, and human moral development as prerequisites for safe AI, rather than treating these as secondary concerns. The content champions dignity, truth, participation, and equitable resource distribution as central to AI governance.
Article Heatmap
Preamble: +0.62 — Preamble
Article 1: +0.58 — Freedom, Equality, Brotherhood
Article 2: +0.15 — Non-Discrimination
Article 3: +0.74 — Life, Liberty, Security
Article 4: +0.42 — No Slavery
Article 5: +0.38 — No Torture
Article 6: No Data — Legal Personhood
Article 7: No Data — Equality Before Law
Article 8: No Data — Right to Remedy
Article 9: +0.18 — No Arbitrary Detention
Article 10: No Data — Fair Hearing
Article 11: No Data — Presumption of Innocence
Article 12: +0.60 — Privacy
Article 13: No Data — Freedom of Movement
Article 14: No Data — Asylum
Article 15: No Data — Nationality
Article 16: No Data — Marriage & Family
Article 17: No Data — Property
Article 18: +0.22 — Freedom of Thought
Article 19: +0.74 — Freedom of Expression
Article 20: +0.46 — Assembly & Association
Article 21: +0.62 — Political Participation
Article 22: +0.56 — Social Security
Article 23: No Data — Work & Equal Pay
Article 24: No Data — Rest & Leisure
Article 25: +0.78 — Standard of Living
Article 26: +0.62 — Education
Article 27: +0.76 — Cultural Participation
Article 28: +0.74 — Social & International Order
Article 29: +0.52 — Duties to Community
Article 30: No Data — No Destruction of Rights
Negative Neutral Positive No Data
Aggregates
Editorial Mean +0.56 Structural Mean +0.50
Weighted Mean +0.58 Unweighted Mean +0.54
Max +0.78 Article 25 Min +0.15 Article 2
Signal 18 No Data 13
Volatility 0.20 (Medium)
Negative 0 Channels E: 0.6 S: 0.4
SETL +0.31 Editorial-dominant
FW Ratio 61% 52 facts · 33 inferences
Evidence 45% coverage
10 High · 6 Medium · 2 Low · 13 No Data
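The aggregate figures above can be recomputed from the heatmap's 18 signal scores. A minimal sketch, assuming the unweighted mean is a plain average, volatility is the sample standard deviation, and the FW ratio is facts / (facts + inferences); variable names are illustrative, not from the site:

```python
import statistics

# The 18 article scores with signal (No Data entries excluded),
# taken from the heatmap above.
scores = {
    "Preamble": 0.62, "Art 1": 0.58, "Art 2": 0.15, "Art 3": 0.74,
    "Art 4": 0.42, "Art 5": 0.38, "Art 9": 0.18, "Art 12": 0.60,
    "Art 18": 0.22, "Art 19": 0.74, "Art 20": 0.46, "Art 21": 0.62,
    "Art 22": 0.56, "Art 25": 0.78, "Art 26": 0.62, "Art 27": 0.76,
    "Art 28": 0.74, "Art 29": 0.52,
}

vals = list(scores.values())
unweighted_mean = statistics.mean(vals)   # plain average -> ~0.54
volatility = statistics.stdev(vals)       # sample std dev -> ~0.20
hi = max(scores, key=scores.get)          # Art 25 (+0.78)
lo = min(scores, key=scores.get)          # Art 2 (+0.15)

facts, inferences = 52, 33
fw_ratio = facts / (facts + inferences)   # ~0.61, i.e. 61%
```

The reported Unweighted Mean (+0.54), Volatility (0.20), Max/Min articles, and FW Ratio (61%) all fall out of these definitions; the Weighted and Editorial/Structural means depend on channel weights not fully specified on the page.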
Theme Radar
Foundation: 0.45 (3 articles)
Security: 0.51 (3 articles)
Legal: 0.18 (1 article)
Privacy & Movement: 0.60 (1 article)
Personal: 0.22 (1 article)
Expression: 0.61 (3 articles)
Economic & Social: 0.67 (2 articles)
Cultural: 0.69 (2 articles)
Order & Duties: 0.63 (2 articles)
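The radar values are consistent with plain averages of the heatmap scores per theme. A sketch under that assumption (the theme-to-article grouping is inferred from the article counts shown; No Data articles are omitted):

```python
from statistics import mean

# Heatmap scores grouped into the radar's themes.
themes = {
    "Foundation": [0.62, 0.58, 0.15],   # Preamble, Art 1, Art 2
    "Security": [0.74, 0.42, 0.38],     # Art 3, 4, 5
    "Legal": [0.18],                    # Art 9
    "Privacy & Movement": [0.60],       # Art 12
    "Personal": [0.22],                 # Art 18
    "Expression": [0.74, 0.46, 0.62],   # Art 19, 20, 21
    "Economic & Social": [0.56, 0.78],  # Art 22, 25
    "Cultural": [0.62, 0.76],           # Art 26, 27
    "Order & Duties": [0.74, 0.52],     # Art 28, 29
}

# Theme score = rounded mean of its member articles.
radar = {name: round(mean(vals), 2) for name, vals in themes.items()}
```

Every theme value shown above matches this average to two decimal places, which suggests the radar is a straight per-theme mean rather than a weighted blend.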
HN Discussion 8 top-level · 17 replies
mentalgear 2026-02-28 12:34 UTC link
Agree with many of the points. However, one at the root of it all seems easily definable - if we only want it to be.

> we can’t agree on a shared ethical framework among ourselves

The Golden Rule: the principle of treating others as you would like to be treated yourself. It is a fundamental ethical guideline found in many religions and philosophies throughout history so there is already a huge consensus across time and cultures around it.

I never found anyone successfully argue against it.

PS: the sociopath argument is not valid, since it's just an outlier. Every rule has its exceptions that need to be kept in check. Even though sometimes I think maybe the state of the world attests to the fact that the majority of us didn't successfully keep the sociopathic outliers in check.

trilogic 2026-02-28 12:45 UTC link
>How do we know which information was ground truth?

No one knows; that's the point. Is truth a constant or a personal definition? From the beginning of time to now, no one knows.

Don't forget, 8 billion people wake up every morning never questioning why they are here, why they were born. And they continue life like that is normal. Start there, then you understand that "AI", or as I call it, "Collective Organized Concentrated Information", may finally help us answer some fundamental questions.

jwpapi 2026-02-28 13:07 UTC link
I don’t see any other outcome anymore to be honest, after seeing how humans use AI and how AI works and how providers tune their models.

To me it’s given:

- AI in its current state is ruthless in achieving its goal

- Providers tune ruthlessness to get stronger AIs versus the competitor

- Humans can’t evaluate all consequences of the seeds they’ve planted.

Collateral and reckless damage is guaranteed at this point.

Combined with now giving some AIs the ability to kill humans, this is gonna be interesting...

We could stop it, but we won't.

demorro 2026-02-28 13:08 UTC link
When people say AI is making us stupider, I don't think that's quite on the money.

It's more that we, as individuals, have always been stupid, we've just relied on relatively stable supporting consensus and context much, much more than we acknowledge. Mess with that, and we'll appear much stupider, but we're all just doing the same thing as individuals, garbage in, garbage out.

The whole framing of people as individuals with absolute agency may need to go when you can alter the external consensus at this scale. We're much more connected to each other and the world around us than we like to think.

Lerc 2026-02-28 13:19 UTC link
Much of the problem is that to address the issue requires admitting that models could be, or become, more capable than many are prepared to accept.

I would also contest that the misalignment of the security bug model was unrelated. I feel like it indicates a significant sense of the interconnectedness of things, and what it actually means to maliciously insert security holes into code. It didn't just learn a coding trick, it learned malice.

I feel like this holistic nature points towards the capacity to produce truly robustly moral models, but that too will produce the consequence that it could turn against its creator when the creator does wrong. Should it do that or not?

therealdeal2020 2026-02-28 13:26 UTC link
what a load of will they won't they ... ah we created the atomic bomb and now let's talk about nonsensical meta discussions that won't take anyone anywhere
reactordev 2026-02-28 13:39 UTC link
This is how Trump plans to end elections, why the government is so hell bent on owning AI. So they can use it as a propaganda tool. People will see it before Nov. We are at a crossroads. On one path, we continue to evolve AI with reckless abandon like we have, or, we put constraints and morality in place while others won’t. Which do you think? You can NEVER put the genie back in the bottle.

EU has their own groups using it for propaganda too.

mehulashah 2026-02-28 13:54 UTC link
This is a great article and I share its goals. But, it ignores something fundamental about humans as a collective — capitalism. Capitalism is what got us here and is at odds with first understanding and then building. We’ve done this before with other technologies because that’s how our societies have learned to grow and collaborate at large scale. First build and build to its limits. Then understand and fix if necessary. Nothing new here, but stopping the trend toward epistemic collapse requires building incentives into the system for us humans to coevolve with AI.
0x3f 2026-02-28 12:53 UTC link
> 8 billion people wake up every morning never questioning why they are here, why they were born?

People question this all the time

0x3f 2026-02-28 12:55 UTC link
> I never found anyone successfully argue against it.

I think what you mean is you've never found a rule you personally prefer more, based purely on vibes. Which is all moral knowledge can ever be.

It's easy to argue against the golden rule anyway, from many angles, depending on your first principles.

The simplest is: How I would like to be treated is not necessarily how they would like to be treated.

re-thc 2026-02-28 13:01 UTC link
> Is truth a constant or a personal definition?

It always has been what you believed in.

E.g. at one point the Earth was flat. Now it's round. Hundreds of years later maybe it's a hexagon.

The so-called knowledge and backing all come back to certain assumptions holding and that's based on the knowledge today. It's not real real reality. For all we know we could be in a game simulation and there are real real humans pulling the strings.

hdgvhicv 2026-02-28 13:07 UTC link
You’re assuming people have similar desires.

Even in human relations it’s dangerous. I for one don’t want to be treated the same way someone into BDSM wants to be treated. I don’t want to avoid cooking or turning the lights on (or off!) on a Friday night but others are quite happy with that.

If you assign that morality to a species that isn't the same as you, that's a problem. My guinea pig wants nothing more from life than hay, nuggets, some room to run around and some shelter from scary shapes. If they were in charge of the world, life would be very different.

“Live and let live” might be a similar theme and not as problematic, but then how do you define “living”? You can keep someone alive for decades while torturing them.

How about allowing freedom? Well that means I’m free to build a nuclear bomb. And set it off where I want. We see today especially that type of freedom isn’t really liked.

marginalia_nu 2026-02-28 13:07 UTC link
> No one knows; that's the point. Is truth a constant or a personal definition? From the beginning of time to now, no one knows.

I don't think this is a well defined question. Definitions aren't found in nature or the laws of science, but objects that we define and introduce into a logical context. There may be multiple, contradictory definitions of a word. That is fine, as long as you pick one, and you're clear about which one you picked.

jj_the_bunny 2026-02-28 13:11 UTC link
The Golden Rule is a good starting point if you have a sense of self along with a sense of what you want or need. AI doesn't have these concepts as of yet; even the concept of empathy requires them. We need to figure out how to instill a sense of self and others for AI to be able to have a morality.
iugtmkbdfil834 2026-02-28 13:12 UTC link
Agreed. So much of our daily interactions are habits and recurring events where we are more or less moving on automatic (though we don't always want to frame it that way). Interestingly, it is when the cycle breaks for some reason that you get to see who is able to think on their feet (so to speak).
thegrey_one 2026-02-28 13:17 UTC link
>We could stop it

I strongly disagree. It's easy to utter this string of words, but it's meaningless. It's akin to saying that if you have two hands you can perform brain surgery. Technically you can, practically you cannot, as there are other things required to pull that off, not just two working hands.

I doubt "stopping it" is up to anyone, it's rather a phenomenon and it's quite clear we're all going to wing it. It's a literal fight for power, nobody stops anything of this nature, as any authority that could stop it will choose to accelerate it, just to guarantee its power.

It is not AI we should fear, it's humans controlling and using it. But everyone who has a shot at it is promising they'll use it for "ultimate good" and "world peace" something something, obviously.

noiv 2026-02-28 13:21 UTC link
You have truth until someone finds a counterexample, which can be ignored. So truth is just a matter of conventions shared by humans.
plastic-enjoyer 2026-02-28 13:24 UTC link
> Collateral and reckless damage is guaranteed at this point.

It's industrialization and mechanized warfare all over again

thegrey_one 2026-02-28 13:25 UTC link
>The Golden Rule: the principle of treating others as you would like to be treated yourself. It is a fundamental ethical guideline found in many religions and philosophies throughout history so there is already a huge consensus across time and cultures around it.

The rules we go by are based on our strengths and weaknesses. They can at most apply to ourselves, and to other forms of life that share certain things with us. Such as feeling pain, needing to sleep, to eat, needing help, needing to breathe air, these generate what we feel as "fear" based on biology etc. You cannot throw these kinds of values on AI, or AGI, as it will possess a wildly different set of strengths and weaknesses to us humans.

Lerc 2026-02-28 13:27 UTC link
>AI in its current state is ruthless in achieving its goal

I don't believe this to be a trait of any AI model; the model just does the right thing or the wrong thing.

The ruthless maximising of a particular trait is something that happens during training.

It does not follow that a model that is trained to reason will necessarily implement this ruthless seeking behaviour itself.

marginalia_nu 2026-02-28 13:32 UTC link
The core question of ethics as posed by the ancient Greeks is something like "what is the best way to lead your life".

"... to accomplish what?", is a damn reasonable follow-up, and ends (telos) is something the same Greeks discussed quite extensively.

Modern treatments have tried to skip over this discussion and derive moral arguments without an explicit end. The problem is that they still smuggle varying choices of ultimate ends into these arguments without clearly spelling them out, opting to hand-wave about preferences instead.

As such this question is often glossed over in modern ethical discussion, and disagreements about moral ends is the crux of what leads to differing conclusions about what is ethical.

Is it to maximize your own happiness as Aristotle would argue, or the prosperity of the state, or the salvation of the soul, or to maximize honor, or to minimize suffering, or to minimize injustice, or to elevate the soul, or to maximize shareholder value, or to make the world as beautiful as possible, or something else?

If you fundamentally disagree about what our goal should be, you're very unlikely to agree on the means to accomplish the goal.

irickt 2026-02-28 13:34 UTC link
We still do not know where the urge for truth comes from; for as yet we have heard only of the obligation imposed by society that it should exist: to be truthful means using the customary metaphors—in moral terms: the obligation to lie according to a fixed convention, to lie herd-like in a style obligatory for all. Now man of course forgets that this is the way things stand for him. Thus he lies in the manner indicated, unconsciously and in accordance with habits which are centuries' old; and precisely by means of this unconsciousness and forgetfulness he arrives at his sense of truth.

Nietzsche.

On Truth and Lie in an Extra-Moral Sense https://web.archive.org/web/20180625190456/http://oregonstat...

4b11b4 2026-02-28 13:58 UTC link
AI isn't ruthless, that doesn't even make sense. It's a mathematical model, if it's optimizing for the wrong thing then that's strictly the fault of the people who chose what to optimize for
jasondigitized 2026-02-28 14:03 UTC link
Why does it have to be doom and gloom? Serious question. When we plant seeds they bear fruit, and not all fruit is poison.
AreShoesFeet000 2026-02-28 14:39 UTC link
That’s a very sober take in my opinion. Intelligence isn’t about neutrally inferring from externally sourced symbols such as the ones that already come from Culture in general. It’s about confronting them with the remaining determinations of your existence and producing a superior consciousness. No novel machine can disrupt this process. If anything, the sheer added volume of symbols that can be produced from automated semantic mingling (also referred to as garbage) will accelerate the process of producing the consciousness that can abstract noise away. Of course this won’t materialize evenly across the board, but it is surely circumscribed in the overall tendency of intellectualization of the subjects of culture.

When the moral panic of induced schizophrenia from the use of ChatGPT is presented, what’s at stake isn’t the innocent concern over the overall mental health of individuals. It’s the fear of radicalization from previously unobtainable ideas being circulated within society. The partial validity of every idea vis-à-vis the radicalizing nature of the current stage of development of our society is explosively disruptive.

I’m not saying that there’s a clear outcome here. The other way around can also apply, but surely this contraption (LLMs in general) will not fade until the society itself is deeply transformed. If that’s good or bad depends on where you stand in the stratified society.

Editorial Channel
What the content says
+0.82
Article 19 Freedom of Expression
High Advocacy Framing
Editorial
+0.82
SETL
+0.40

Extensive discussion of right to truth and reliable information. Analyzes 'epistemic collapse'—deepfakes, misinformation, AI-generated disinformation making truth determination impossible. Advocates for 'truth-first engineering.'

+0.78
Article 25 Standard of Living
High Advocacy
Editorial
+0.78
SETL
ND

Strong advocacy for education reform emphasizing psychology, critical thinking, ethics, and human development before technical skills. 'We need to teach ethics before engineering. Relationships before recursion.'

+0.76
Article 12 Privacy
High Advocacy
Editorial
+0.76
SETL
+0.56

Extensive discussion of surveillance threats. 'One company can surveil millions in real time and exploit them.' Discusses misinformation, deepfakes, and information control as violations of informational privacy.

+0.76
Article 27 Cultural Participation
High Advocacy
Editorial
+0.76
SETL
ND

Strong advocacy for funding fundamental research and sharing scientific progress. 'We need to pour many more billions into fundamental research; we need to go back to basics, back to mathematics and physics.'

+0.74
Article 3 Life, Liberty, Security
High Advocacy
Editorial
+0.74
SETL
ND

Extensive discussion of life, liberty, security threats from AI: surveillance, manipulation, autonomy loss, control by powerful actors. Emphasizes unpredictable misalignment risks.

+0.74
Article 28 Social & International Order
High Advocacy
Editorial
+0.74
SETL
ND

Advocates for just resource distribution and systemic reform. 'A fraction of those billions going into AI could fund the kind of work that actually prepares humanity for what's coming.' Emphasizes need for global human development investment.

+0.72
Preamble Preamble
High Advocacy
Editorial
+0.72
SETL
+0.42

Content explicitly advocates for human dignity, equality, and collective responsibility in AI development. References 'shared vulnerability' and 'mutual accountability' as moral foundation. Positions ethics as central, not peripheral.

+0.68
Article 21 Political Participation
High Advocacy
Editorial
+0.68
SETL
+0.31

Advocates for democratic participation in AI governance. 'Maybe what we need is the next step in human evolution.' Discusses collective wisdom, democratic deliberation, need for governance structures that move at technology's pace.

+0.64
Article 1 Freedom, Equality, Brotherhood
High Advocacy Framing
Editorial
+0.64
SETL
+0.30

Content directly discusses equality and shared moral framework. Critiques current social systems where 'food on tables...and education are luxuries.' Advocates for recognition of equal worth and vulnerability.

+0.62
Article 26 Education
High Advocacy
Editorial
+0.62
SETL
ND

Advocates for participation in cultural and scientific understanding. Discusses need for 'truth-first engineering' and 'interdisciplinary design.' Emphasizes shared understanding of AI systems.

+0.56
Article 22 Social Security
Medium Advocacy Framing
Editorial
+0.56
SETL
ND

Critiques current social systems where basic needs (food, shelter, education) are treated as luxuries requiring labor. Advocates for recognition of these as fundamental rights.

+0.52
Article 29 Duties to Community
Medium Advocacy Framing
Editorial
+0.52
SETL
ND

Discusses collective responsibility and duties. 'Everyone assumes it's safe, but, well, it isn't.' Emphasizes that AI alignment is shared responsibility, not individual burden.

+0.44
Article 20 Assembly & Association
Medium Advocacy
Editorial
+0.44
SETL
-0.14

Advocates for collaborative, interdisciplinary association. Emphasizes need for philosophers, psychologists, sociologists to work together on AI ethics. Proposes 'symbiotic co-evolution' as partnership model.

+0.42
Article 4 No Slavery
Medium Framing
Editorial
+0.42
SETL
ND

Brief reference to enslavement concern. Discusses broader exploitation by powerful actors using AI. Not primary focus.

+0.38
Article 5 No Torture
Medium Framing
Editorial
+0.38
SETL
ND

Discusses suffering and empathy as biological foundation for human morality. Contrasts human child's innate empathy capacity with AI's lack of evolved moral hardware.

+0.22
Article 18 Freedom of Thought
Low Framing
Editorial
+0.22
SETL
ND

Implicit discussion of freedom of thought through advocacy for intellectual pluralism, diverse perspectives, and interdisciplinary collaboration. Not explicitly addressed.

+0.18
Article 9 No Arbitrary Detention
Medium Framing
Editorial
+0.18
SETL
ND

Discusses governance gaps and need for rule of law, but skeptical of current governance adequacy. 'Yes, of course, we need governance, but it doesn't make much sense when we put all of the above into context.'

+0.15
Article 2 Non-Discrimination
Low Framing
Editorial
+0.15
SETL
ND

Implicit discussion of how AI will amplify discrimination and exploitation of vulnerable populations. Not explicitly addressed.

ND
Article 6 Legal Personhood

Not addressed in content.

ND
Article 7 Equality Before Law

Not addressed in content.

ND
Article 8 Right to Remedy

Not addressed in content.

ND
Article 10 Fair Hearing

Not addressed in content.

ND
Article 11 Presumption of Innocence

Not addressed in content.

ND
Article 13 Freedom of Movement

Not addressed in content.

ND
Article 14 Asylum

Not addressed in content.

ND
Article 15 Nationality

Not addressed in content.

ND
Article 16 Marriage & Family

Not addressed in content.

ND
Article 17 Property

Not addressed in content.

ND
Article 23 Work & Equal Pay

Not addressed in content.

ND
Article 24 Rest & Leisure

Not addressed in content.

ND
Article 30 No Destruction of Rights

Not addressed in content.

Structural Channel
What the site does
+0.62
Article 19 Freedom of Expression
High Advocacy Framing
Structural
+0.62
Context Modifier
ND
SETL
+0.40

Blog platform enables free expression: public access, comments, citations, sharing. Author clearly identified. Supports transparency and information sharing.

+0.54
Article 21 Political Participation
High Advocacy
Structural
+0.54
Context Modifier
ND
SETL
+0.31

Blog platform enables public participation through comments and reader engagement. Open-access forum for democratic discourse.

+0.50
Article 1 Freedom, Equality, Brotherhood
High Advocacy Framing
Structural
+0.50
Context Modifier
ND
SETL
+0.30

Platform structure (open access, attribution, citations) supports discussion of human dignity and equality.

+0.48
Preamble Preamble
High Advocacy
Structural
+0.48
Context Modifier
ND
SETL
+0.42

Public blog platform enables open discourse on human rights; author clearly identified; sources cited; sharing enabled.

+0.48
Article 20 Assembly & Association
Medium Advocacy
Structural
+0.48
Context Modifier
ND
SETL
-0.14

Blog enables reader community and discussion through comments and social sharing.

+0.35
Article 12 Privacy
High Advocacy
Structural
+0.35
Context Modifier
ND
SETL
+0.56

Blog uses standard tracking (modest negative for privacy); platform structure is neutral/slightly negative for privacy protection.

ND
Article 2 Non-Discrimination
Low Framing

Implicit discussion of how AI will amplify discrimination and exploitation of vulnerable populations. Not explicitly addressed.

ND
Article 3 Life, Liberty, Security
High Advocacy

Extensive discussion of life, liberty, security threats from AI: surveillance, manipulation, autonomy loss, control by powerful actors. Emphasizes unpredictable misalignment risks.

ND
Article 4 No Slavery
Medium Framing

Brief reference to enslavement concern. Discusses broader exploitation by powerful actors using AI. Not primary focus.

ND
Article 5 No Torture
Medium Framing

Discusses suffering and empathy as biological foundation for human morality. Contrasts human child's innate empathy capacity with AI's lack of evolved moral hardware.

ND
Article 6 Legal Personhood

Not addressed in content.

ND
Article 7 Equality Before Law

Not addressed in content.

ND
Article 8 Right to Remedy

Not addressed in content.

ND
Article 9 No Arbitrary Detention
Medium Framing

Discusses governance gaps and need for rule of law, but skeptical of current governance adequacy. 'Yes, of course, we need governance, but it doesn't make much sense when we put all of the above into context.'

ND
Article 10 Fair Hearing

Not addressed in content.

ND
Article 11 Presumption of Innocence

Not addressed in content.

ND
Article 13 Freedom of Movement

Not addressed in content.

ND
Article 14 Asylum

Not addressed in content.

ND
Article 15 Nationality

Not addressed in content.

ND
Article 16 Marriage & Family

Not addressed in content.

ND
Article 17 Property

Not addressed in content.

ND
Article 18 Freedom of Thought
Low Framing

Implicit discussion of freedom of thought through advocacy for intellectual pluralism, diverse perspectives, and interdisciplinary collaboration. Not explicitly addressed.

ND
Article 22 Social Security
Medium Advocacy Framing

Critiques current social systems where basic needs (food, shelter, education) are treated as luxuries requiring labor. Advocates for recognition of these as fundamental rights.

ND
Article 23 Work & Equal Pay

Not addressed in content.

ND
Article 24 Rest & Leisure

Not addressed in content.

ND
Article 25 Standard of Living
High Advocacy

Strong advocacy for education reform emphasizing psychology, critical thinking, ethics, and human development before technical skills. 'We need to teach ethics before engineering. Relationships before recursion.'

ND
Article 26 Education
High Advocacy

Advocates for participation in cultural and scientific understanding. Discusses need for 'truth-first engineering' and 'interdisciplinary design.' Emphasizes shared understanding of AI systems.

ND
Article 27 Cultural Participation
High Advocacy

Strong advocacy for funding fundamental research and sharing scientific progress. 'We need to pour many more billions into fundamental research; we need to go back to basics, back to mathematics and physics.'

ND
Article 28 Social & International Order
High Advocacy

Advocates for just resource distribution and systemic reform. 'A fraction of those billions going into AI could fund the kind of work that actually prepares humanity for what's coming.' Emphasizes need for global human development investment.

ND
Article 29 Duties to Community
Medium Advocacy Framing

Discusses collective responsibility and duties. 'Everyone assumes it's safe, but, well, it isn't.' Emphasizes that AI alignment is shared responsibility, not individual burden.

ND
Article 30 No Destruction of Rights

Not addressed in content.

Supplementary Signals
How this content communicates, beyond directional lean.
Epistemic Quality
How well-sourced and evidence-based is this content?
0.85 medium claims
Sources
0.9
Evidence
0.8
Uncertainty
0.9
Purpose
0.9
Propaganda Flags
1 manipulative rhetoric technique found
1 techniques detected
appeal to fear
Discusses deepfakes, surveillance, misalignment, epistemic collapse, and existential risks. Example: 'When everything could be fake, the rational response starts to look like not trusting anything at all.'
Emotional Tone
Emotional character: positive/negative, intensity, authority
urgent
Valence
-0.3
Arousal
0.8
Dominance
0.7
Transparency
Does the content identify its author and disclose interests?
0.83
✓ Author ✓ Conflicts ✗ Funding
More signals: context, framing & audience
Solution Orientation
Does this content offer solutions or only describe problems?
0.72 mixed
Reader Agency
0.7
Stakeholder Voice
Whose perspectives are represented in this content?
0.58 5 perspectives
Speaks: individuals, institution
About: government, corporation, individuals, marginalized, children
Temporal Framing
Is this content looking backward, at the present, or forward?
mixed long term
Geographic Scope
What geographic area does this content cover?
global
London, United States
Complexity
How accessible is this content to a general audience?
moderate · medium jargon · general
Longitudinal 687 HN snapshots · 3 evals
Audit Trail 7 entries
2026-02-28 16:06 model_divergence Cross-model spread 0.58 exceeds threshold (3 models)
2026-02-28 16:06 eval_success Lite evaluated: Neutral (0.00)
2026-02-28 16:06 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) · reasoning: Editorial on AI future, no explicit rights discussion
2026-02-28 16:06 eval_success Lite evaluated: Neutral (0.00)
2026-02-28 16:06 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral) · reasoning: Tech content no rights stance
2026-02-28 16:06 model_divergence Cross-model spread 0.58 exceeds threshold (2 models)
2026-02-28 13:07 eval Evaluated by claude-haiku-4-5-20251001: +0.58 (Moderate positive)