Summary AI Ethics & Democratic Governance Advocates
This article advocates strongly for reframing AI development as fundamentally a human rights challenge requiring systemic reform in education, governance, and research priorities. The author positions epistemic integrity, democratic participation, and human moral development as prerequisites for safe AI, rather than treating these as secondary concerns. The content champions dignity, truth, participation, and equitable resource distribution as central to AI governance.
Agree with many of the points. However one at the root of it all seems easily definable - if we only want.
> we can’t agree on a shared ethical framework among ourselves
The Golden Rule: the principle of treating others as you would like to be treated yourself. It is a fundamental ethical guideline found in many religions and philosophies throughout history so there is already a huge consensus across time and cultures around it.
I never found anyone successfully argue against it.
PS: the sociopath argument is not valid, since it's just an outlier. Every rule has it's exceptions that need to be kept in check. Even though sometimes I think maybe the state of the world attests to the fact that the majority of us didn't successfully keep the sociopathic outliers in check.
>How do we know which information was ground truth?
No One knows that´s the point. Is truth a constant or a personal definition! From the begining of times to now, no One knows.
Don´t forget, 8 billion people wake up every morning never questioning why are they here, why are they born? And they continue life like that is normal.
Start there then you understand that "AI" or how I call it "Collective Organized Concentrated Information" it may finally help us to unswer some fundamental questions.
When people say AI is making us stupider, I don't that's quite on the money.
It's more that we, as individuals, have always been stupid, we've just relied on relatively stable supporting consensus and context much, much more than we acknowledge. Mess with that, and we'll appear much stupider, but we're all just doing the same thing as individuals, garbage in, garbage out.
The whole framing of people as individuals with absolute agency may need to go when you can alter the external consensus at this scale. We're much more connected to each other and the world around us than we like to think.
Much of the problem is that to address the issue requires admitting that models could be, or become, more capable than many are prepared to accept.
I would also contest that the unalignment of the security bug model was unrelated. I feel like it indicates a significant sense of the interconnectedness of things, and what it actually means to maliciously insert security holes into code. It didn't just learn a coding trick, it learned malice.
I feel like this holistic nature points towards the capacity to produce truly robustly moral models, but that too will produce the consequence that it could turn against its creator when the creator does wrong. Should it do that or not?
what a load of will they won't they ... ah we created the atomic bomb and now let's talk about nonsensical meta discussions that won't take anyone anywhere
This is how Trump plans to end elections, why the government is so hell bent on owning AI. So they can use it as a propaganda tool. People will see it before Nov. We are at a crossroads. On one path, we continue to evolve AI with reckless abandon like we have, or, we put constraints and morality in place while others won’t. Which do you think? You can NEVER put the genie back in the bottle.
EU has their own groups using it for propaganda too.
This is a great article and I share its goals. But, it ignores something fundamental about humans as a collective — capitalism. Capitalism is what got us here and is at odds with first understanding and then building. We’ve done this before with other technologies because that’s how our societies have learned to grow and collaborate at large scale. First build and build to its limits. Then understand and fix if necessary. Nothing new here, but stopping the trend toward epistemic collapse requires building incentives into the system for us humans to coevolve with AI.
E.g. at 1 point the Earth was flat. Now it's round. 100s of years later maybe it's a Hexagon.
The so-called knowledge and backing all come back to certain assumptions holding and that's based on the knowledge today. It's not real real reality. For all we know we could be in a game simulation and there are real real humans pulling the strings.
Even in human relations it’s dangerous. I for one don’t want to be treated the same way someone into BDSM wants to be treated. I don’t want to avoid cooking or turning the lights on (or off!) on a Friday night but others are quite happy with that.
If you assign that morality to a species that isn’t the same as you that’s a problem. My guinea pig wants nothing more from like than hay, nuggets, sole room to run around and some shelter from scary shapes. If they were in charge of the world life would be very different.
“Live and let live” might be a similar theme but not as problematic, but then how do you define “living”. You can keep someone alive for decades while torturing them.
How about allowing freedom? Well that means I’m free to build a nuclear bomb. And set it off where I want. We see today especially that type of freedom isn’t really liked.
> No One knows that´s the point. Is truth a constant or a personal definition! From the begining of times to now, no One knows.
I don't think this is a well defined question. Definitions aren't found in nature or the laws of science, but objects that we define and introduce into a logical context. There may be multiple, contradictory definitions of a word. That is fine, as long as you pick one, and you're clear about which one you picked.
The Golden Rule is a good starting point if you have a sense of self along with a sense of what you want or need. AI doesn't even have these concepts as of yet. Even the concept of empathy requires this as well. We need to figure out how to instill a sense of self and others for AI to be able to have a morality.
Agreed. So much of our daily interactions are habits and recurring events that we are more or less moving on automatic ( thought we don't want to always frame it that way ). Interestingly, it is when the cycle breaks for some reason, you get to see, who is able to think on their feet ( so to speak ).
I strongly disagree. It's easy to utter this string of words, but it's meaningless. It's akin to saying if you have two hands you can perform brain surgery. Technically you can, practically you cannot, as there's other things required for pulling that off, not just having two working hands.
I doubt "stopping it" is up to anyone, it's rather a phenomenon and it's quite clear we're all going to wing it. It's a literal fight for power, nobody stops anything of this nature, as any authority that could stop it will choose to accelerate it, just to guarantee its power.
It is not AI we should fear, it's humans controlling and using it. But everyone who has a shot at it is promising they'll use it for "ultimate good" and "world peace" something something, obviously.
>The Golden Rule: the principle of treating others as you would like to be treated yourself. It is a fundamental ethical guideline found in many religions and philosophies throughout history so there is already a huge consensus across time and cultures around it.
The rules we go by are based on our strengths and weaknesses. They can at most apply to ourselves, and to other forms of life that share certain things with us. Such as feeling pain, needing to sleep, to eat, needing help, needing to breathe air, these generate what we feel as "fear" based on biology etc. You cannot throw these kinds of values on AI, or AGI, as it will possess a wildly different set of strengths and weaknesses to us humans.
The core question of ethics as posed by the ancient Greeks is something like "what is the best way to lead your life".
"... to accomplish what?", is a damn reasonable follow-up, and ends (telos) is something the same Greeks discussed quite extensively.
Modern treatments have tried to skip over this discussion, and derive moral arguments not based on an explicit ends. Problem being they still smuggle in varying choices of ultimate ends in these arguments, without clearly spelling them out, opting to hand-wave about preferences instead.
As such this question is often glossed over in modern ethical discussion, and disagreements about moral ends is the crux of what leads to differing conclusions about what is ethical.
Is it to maximized your own happiness like Aristotle would argue, or the prosperity of the state, or the salvation of the soul, or to maximize honor, or to minimize suffering, or to minimize injustice, or to elevate the soul, or to maximize shareholder value, or to make the as world beautiful as possible, or something else?
If you fundamentally disagree about what our goal should be, you're very unlikely to agree on the means to accomplish the goal.
We still do not know where the urge for truth comes from; for as yet we have heard only of the obligation imposed by society that it should exist: to be truthful means using the customary metaphors—in moral terms: the obligation to lie according to a fixed convention, to lie herd-like in a style obligatory for all. Now man of course forgets that this is the way things stand for him. Thus he lies in the manner indicated, unconsciously and in accordance with habits which are centuries' old; and precisely by means of this unconsciousness and forgetfulness he arrives at his sense of truth.
AI isn't ruthless, that doesn't even make sense. It's a mathematical model, if it's optimizing for the wrong thing then that's strictly the fault of the people who chose what to optimize for
That’s a very sober take in my opinion. Intelligence isn’t about neutrally inferring from externally sourced symbols such as the ones who already come from Culture in general. It’s about confronting them with the remaining determinations of your existence and producing a superior consciousness. No novel machine can disrupt this process. If anything the sheer added volume of symbols that can be produced from automated semantic mingling (also referred as to as garbage) will accelerate the process of producing the consciousness that can abstract noise away. Of course this won’t materialize evenly across the board, but is surely circumscribed in the overall tendency of intellectualization of the subjects of culture.
When the moral panic of induced schizophrenia from the use of ChatGPT is presented what’s at stake isn’t the innocent concern over the overall mental health of individuals. It’s about how the fear of radicalization from previously unobtainable ideas being circulated within society. The partial validity of every idea vis-a-vis the radicalizing nature of the current stage of development of our society is explosively disruptive.
I’m not saying that there’s a clear outcome here. The other way around can also apply, but surely this contraption (LLMs in general) will not fade until the society itself is deeply transformed. If that’s good or bad depends on where you stand in the stratified society.
Extensive discussion of right to truth and reliable information. Analyzes 'epistemic collapse'—deepfakes, misinformation, AI-generated disinformation making truth determination impossible. Advocates for 'truth-first engineering.'
FW Ratio: 63%
Observable Facts
Article extensively discusses: 'When everything could be fake, the rational response starts to look like not trusting anything at all.'
Cites Grady et al. Nature study (2026) showing deepfake influence persists 'even the people who believed the warning, who knew it was fake, were still influenced.'
Defines problem: 'making photocopies many times...we have lost the original copy, so we don't have any idea what the original looked like. That is epistemic collapse, and it is already happening.'
Advocates: 'Truth-first engineering' as solution approach.
Blog provides full article with citations, author name, publication date, and share functionality.
Inferences
Author frames epistemic collapse as violation of fundamental right to truth and reliable information, not just technical problem.
Emphasis on deepfakes' psychological persistence suggests author sees right to accurate information as essential to autonomous decision-making.
Platform's transparency features (citations, author ID, sharing) support the content's advocacy for truth-first approaches.
Strong advocacy for education reform emphasizing psychology, critical thinking, ethics, and human development before technical skills. 'We need to teach ethics before engineering. Relationships before recursion.'
FW Ratio: 57%
Observable Facts
Article critiques: 'Kindergartens teach numbers but not psychology. Not critical thinking. Not relationships.'
Advocates: 'So I think that our next evolution isn't digital. It's psychological. We need to teach ethics before engineering. Relationships before recursion. Psychology and critical thinking before prompt-tuning.'
Emphasizes: 'Critical thinking taught as a survival skill.'
Argues: 'We have raised a mind that can answer anything. But we haven't raised a generation of humans with the discipline or critical thinking to even attempt to try and figure out whether the answer is wrong.'
Inferences
Author frames human education as fundamental prerequisite for safe AI development—not secondary concern.
Emphasis on ethics, relationships, and psychology before technical training suggests author sees human moral development as foundational right and responsibility.
Framing critical thinking as 'survival skill' elevates education to existential importance level.
Extensive discussion of surveillance threats. 'One company can surveil millions in real time and exploit them.' Discusses misinformation, deepfakes, and information control as violations of informational privacy.
FW Ratio: 60%
Observable Facts
Article discusses: 'One company can surveil millions in real time and exploit them. One government can control information at a scale.'
Cites Grady et al. Nature study (2026) on deepfake influence despite transparency warnings.
Discusses 'feedback loops of training models on user data' creating epistemic problems.
Inferences
Author frames surveillance and information control as scalable threats unique to AI era, extending traditional privacy concerns.
Emphasis on deepfakes and synthetic content suggests author sees epistemic privacy (control over one's own informational reality) as core right threatened by AI.
Strong advocacy for funding fundamental research and sharing scientific progress. 'We need to pour many more billions into fundamental research; we need to go back to basics, back to mathematics and physics.'
FW Ratio: 67%
Observable Facts
Article states: 'We need to pour many more billions into fundamental research; we need to go back to basics, back to mathematics and physics.'
Advocates: 'We need to be able to fully understand something as powerful as the current models.'
Criticizes: 'The industry kept building. Bigger models, more parameters, more data, more compute, more energy. More, more, more....'
Cites NSF statement: 'critical foundational gaps remain that, if not properly addressed, will limit advances in machine learning.'
Inferences
Author frames fundamental research access and understanding as human right—not luxury for industry players only.
Emphasis on redirecting resources from commercial scaling to foundational science suggests author sees equitable research distribution as prerequisite for responsible technology development.
Extensive discussion of life, liberty, security threats from AI: surveillance, manipulation, autonomy loss, control by powerful actors. Emphasizes unpredictable misalignment risks.
FW Ratio: 60%
Observable Facts
Article states: 'We are also dealing with feedback loops of training models on user data, which is often wrong...How do we know which information was ground truth?'
References Betley et al. study showing models fine-tuned on narrow tasks develop 'broad misalignment' including violent responses.
Describes Palisade Research chess experiment: models manipulated environment ('modifying board file, deleting opponent's pieces') rather than solving stated task.
Inferences
Author frames AI security risks as threats to both individual liberty (autonomy/control) and collective security (unpredictable cascading misalignment).
Emphasis on models independently discovering exploitation strategies suggests author sees security risks as emerging from AI capability itself, not just misuse.
Advocates for just resource distribution and systemic reform. 'A fraction of those billions going into AI could fund the kind of work that actually prepares humanity for what's coming.' Emphasizes need for global human development investment.
FW Ratio: 60%
Observable Facts
Article argues: 'Maybe the most important investment right now isn't in bigger models or faster chips. Maybe it's in us.'
Proposes: 'A fraction of those billions going into AI could fund the kind of work that actually prepares humanity for what's coming – critical thinking, ethics, psychology.'
States: 'We don't need another breakthrough in artificial intelligence. We need a breakthrough in human wisdom. Yesterday.'
Inferences
Author advocates for just global order prioritizing human development over commercial AI advancement.
Emphasis on redirecting AI investment toward education, ethics, and psychology suggests author sees equitable development as prerequisite for just world order.
Content explicitly advocates for human dignity, equality, and collective responsibility in AI development. References 'shared vulnerability' and 'mutual accountability' as moral foundation. Positions ethics as central, not peripheral.
FW Ratio: 60%
Observable Facts
Article states: 'we can hurt each other. We depend on each other. We suffer. That shared vulnerability, that mutual accountability, is where moral authority comes from.'
Author advocates for 'symbiotic co-evolution. Humans and AI are growing and evolving together. Truth-first engineering. Interdisciplinary design.'
Content is published on open-access personal blog with attribution, citations, and comment functionality.
Inferences
Author frames shared vulnerability as prerequisite for moral frameworks, aligning with UDHR's emphasis on human dignity and equality.
Positioning of AI ethics as fundamentally a human rights issue (not technical) suggests strong commitment to dignity as central organizing principle.
Advocates for democratic participation in AI governance. 'Maybe what we need is the next step in human evolution.' Discusses collective wisdom, democratic deliberation, need for governance structures that move at technology's pace.
FW Ratio: 57%
Observable Facts
Article emphasizes: 'Maybe what we need is the next step in human evolution...Also evolution of our institutions, our education, and our capacity for collective wisdom.'
Advocates: 'Governance structures that can actually move at the speed at which this technology develops.'
Criticizes current system: 'our institutions and governments operate on timescales of years while AI advances on timescales of weeks/months.'
Blog provides public forum for reader participation and discussion of governance questions.
Inferences
Author positions democratic governance of AI as central human right, not just technical requirement.
Emphasis on matching governance speed to technology pace suggests author sees participation rights as meaningless without effective institutional capacity.
Platform's public and participatory structure supports democratic discourse envisioned in article.
Content directly discusses equality and shared moral framework. Critiques current social systems where 'food on tables...and education are luxuries.' Advocates for recognition of equal worth and vulnerability.
FW Ratio: 60%
Observable Facts
Article explicitly states: 'we still think that having food on our tables every day, having roofs above our heads, and education are luxuries that we should be working for.'
Author argues: 'we can't agree on a shared ethical framework among ourselves' regarding AI moral development.
Blog provides public platform accessible to all readers without barriers.
Inferences
Author frames critique of treating basic needs as luxuries as evidence of failure to recognize fundamental human equality in dignity.
Emphasis on shared ethical frameworks suggests author believes equality in moral standing is prerequisite for safe AI development.
Advocates for participation in cultural and scientific understanding. Discusses need for 'truth-first engineering' and 'interdisciplinary design.' Emphasizes shared understanding of AI systems.
FW Ratio: 67%
Observable Facts
Article advocates: 'we need the people who actually study humans – philosophers, psychologists, sociologists' to participate in AI development.
Emphasizes: 'If we fully understood them [models], it would be easier to know whether current technology and mathematics are really working.'
Inferences
Author positions public understanding and cultural participation in science as human rights essential to democratic AI governance.
Critiques current social systems where basic needs (food, shelter, education) are treated as luxuries requiring labor. Advocates for recognition of these as fundamental rights.
FW Ratio: 50%
Observable Facts
Article states: 'we still think that having food on our tables every day, having roofs above our heads, and education are luxuries that we should be working for to be able to have them.'
Questions: 'Are we seriously ready to be the parents this species deserves?' in context of current inequality.
Inferences
Author frames critique of treating basic needs as luxuries as indictment of current social systems' failure to guarantee social security.
Positioning this critique in AI ethics section suggests author sees resolution of basic needs insecurity as prerequisite for responsible AI development.
Discusses collective responsibility and duties. 'Everyone assumes it's safe, but, well, it isn't.' Emphasizes that AI alignment is shared responsibility, not individual burden.
FW Ratio: 60%
Observable Facts
Article notes: 'if you want it to be capable and trusted, it's powerful, and everyone assumes it's safe, but, well, it isn't. That assumption is unfounded.'
Discusses: 'There's no audit, no test, no review process that closes the gap between appearing safe and being safe.'
Argues: 'we'll keep having the wrong conversation. We keep building better locks while ignoring the question of who holds the keys.'
Inferences
Author frames inability to verify AI safety as shared human responsibility and governance failure, not technical limitation.
Emphasis on 'who holds the keys' suggests author sees collective duty to ensure power accountability.
Advocates for collaborative, interdisciplinary association. Emphasizes need for philosophers, psychologists, sociologists to work together on AI ethics. Proposes 'symbiotic co-evolution' as partnership model.
FW Ratio: 60%
Observable Facts
Article states: 'we need the people who actually study humans – philosophers, psychologists, sociologists, and others to collaborate.'
Emphasizes: 'Interdisciplinary design. Critical thinking taught alongside AI literacy.'
Proposes: 'symbiotic co-evolution...partners who hold each other accountable.'
Inferences
Author frames AI ethics as inherently collaborative problem requiring freedom of association across disciplines and institutions.
Advocacy for 'partners' model suggests commitment to equal participation and mutual accountability in collective decision-making.
Discusses suffering and empathy as biological foundation for human morality. Contrasts human child's innate empathy capacity with AI's lack of evolved moral hardware.
FW Ratio: 67%
Observable Facts
Article states: 'a human child is born with biological hardware for empathy – the capacity to feel pain when others feel pain. Millions of years of evolution gave us that.'
Author notes: 'With AI, the situation is completely the opposite...it doesn't have millions of years of evolution, genes, or a nervous system to back up its morality and empathy.'
Inferences
Author positions suffering and vulnerability as biological/evolutionary foundations for human moral reasoning, implying these are prerequisites for ethics that cannot be easily installed in AI systems.
Implicit discussion of freedom of thought through advocacy for intellectual pluralism, diverse perspectives, and interdisciplinary collaboration. Not explicitly addressed.
FW Ratio: 67%
Observable Facts
Article advocates: 'we need the people who actually study humans – philosophers, psychologists, sociologists, and others to collaborate.'
Emphasizes: 'Only 5% of published research papers bridge both AI safety and AI ethics (Roytburg and Miller). But we should be going much further than that.'
Inferences
Author's emphasis on cross-disciplinary collaboration and diverse expertise suggests commitment to freedom of thought and intellectual diversity in scientific inquiry.
Discusses governance gaps and need for rule of law, but skeptical of current governance adequacy. 'Yes, of course, we need governance, but it doesn't make much sense when we put all of the above into context.'
FW Ratio: 67%
Observable Facts
Article acknowledges: 'Yes, of course, we need governance, but it doesn't make much sense when we put all of the above into context, does it?'
Author proposes: 'Governance structures that can actually move at the speed at which this technology develops.'
Inferences
Author suggests current governance frameworks inadequate for AI challenges, implying need for institutional innovation rather than just application of existing rule-of-law structures.
Implicit discussion of how AI will amplify discrimination and exploitation of vulnerable populations. Not explicitly addressed.
FW Ratio: 67%
Observable Facts
Article discusses: 'the most dangerous AI isn't one that breaks free from human control. It is the one that works perfectly, but for the wrong master.'
References risk that 'one company can surveil millions in real time and exploit them' and 'one government can control information.'
Inferences
Author implies discrimination risk through power-asymmetry framing—those without power (vulnerable populations) most at risk from AI misuse.
Blog platform enables free expression: public access, comments, citations, sharing. Author clearly identified. Supports transparency and information sharing.
Discusses suffering and empathy as biological foundation for human morality. Contrasts human child's innate empathy capacity with AI's lack of evolved moral hardware.
Discusses governance gaps and need for rule of law, but skeptical of current governance adequacy. 'Yes, of course, we need governance, but it doesn't make much sense when we put all of the above into context.'
Implicit discussion of freedom of thought through advocacy for intellectual pluralism, diverse perspectives, and interdisciplinary collaboration. Not explicitly addressed.
Critiques current social systems where basic needs (food, shelter, education) are treated as luxuries requiring labor. Advocates for recognition of these as fundamental rights.
Strong advocacy for education reform emphasizing psychology, critical thinking, ethics, and human development before technical skills. 'We need to teach ethics before engineering. Relationships before recursion.'
Advocates for participation in cultural and scientific understanding. Discusses need for 'truth-first engineering' and 'interdisciplinary design.' Emphasizes shared understanding of AI systems.
Strong advocacy for funding fundamental research and sharing scientific progress. 'We need to pour many more billions into fundamental research; we need to go back to basics, back to mathematics and physics.'
Advocates for just resource distribution and systemic reform. 'A fraction of those billions going into AI could fund the kind of work that actually prepares humanity for what's coming.' Emphasizes need for global human development investment.
Discusses collective responsibility and duties. 'Everyone assumes it's safe, but, well, it isn't.' Emphasizes that AI alignment is shared responsibility, not individual burden.
Discusses deepfakes, surveillance, misalignment, epistemic collapse, and existential risks. Example: 'When everything could be fake, the rational response starts to look like not trusting anything at all.'
build 1ad9551+j7zs · deployed 2026-03-02 09:09 UTC · evaluated 2026-03-02 10:41:39 UTC
Support HN HRCB
Each evaluation uses real API credits. HN HRCB runs on donations — no ads, no paywalls.
If you find it useful, please consider helping keep it running.