414 points by hubraumhugo 4 days ago | 305 comments on HN
Mild positive · Low agreement (3 models)
Editorial · v3.7 · 2026-03-16 00:43:05
Summary
Free Expression & Digital Commons Advocates
This blog post advocates for authentic human expression and communication in digital spaces, documenting how AI-generated content is degrading shared internet communities. The author catalogs examples from major platforms (HackerNews, Reddit, LinkedIn, GitHub) where automated systems and bots are flooding discourse, leading platforms to implement restrictions protecting human-to-human conversation. While the content critiques the loss of human-centered digital commons and expresses pessimism about reversing the trend, it champions Articles 19, 20, and 27 (free expression, assembly, and cultural participation) by documenting their erosion.
Rights Tensions — 1 pair
Art 19 ↔ Art 27 — Content frames AI-generated expression as a legitimate free speech exercise but resolves the tension against it by arguing that authentic human cultural participation (Article 27) requires protecting digital spaces from AI content, subordinating machine expression rights to community cultural benefit.
I think the next step will be an isolated version of an invite-only internet where you have to be physically present with your invitee to give them access. There will be a beautiful navigation widget where you can access a unified "addon" to any page: a community-moderated comment section, version history of that page, backlinks, a carefully curated "related" section (so that you can continue browsing beautiful human-written content on 1910-era steam locomotives, similar to 90s-era webrings), a donate button so that you can support the author, and much more! Oh, the dream.
I only see two outcomes for this problem: an internet of verified identities (start by uploading your ID card), or a paid internet, where it doesn't matter who you are, but since you're going to pay for that email or that Reddit account, the probability that it's AI spam is greatly reduced.
Maybe the only part of a future internet people will actually hang out in is going to be one where any profit-making is completely de-incentivized. No recommendations. No product reviews. No opinions on companies or services. More slow web. Maybe we'll slowly head back to what websites used to look like when Yahoo was the biggest search engine.
Just yesterday in a local non profit organization's Signal groupchat a user who had just offered to take meeting minutes the day prior emitted an open claw error message to the chat. They are now banned from the organization.
I see many, many startups that promise to be an automated marketing agent that will do this exact thing: scour sites for conversations and post links to your product.
Obviously that burns down the human Internet, but it’s also a business that will have a short lifespan and bring about its own demise.
I guess they don’t care about anything enduring as long as they can grab some quick cash on the way out.
do you think small, invite-only communities will end up being the last holdout for genuine human conversation online? or will bots eventually infiltrate those too?
The only place that reminds me of the old Internet is VRChat, funny enough. You're guaranteed to be interacting with a nerdy, culturally similar human who's present in the moment.
Why is it being called dead internet theory when, as far as I can tell, what's really happening is that big centralized systems are being overrun with bots? The internet existed and was pretty great before these large centralized systems came into being.
Anyone can still run a blog/website, and/or their own discourse server. There's no need to mourn for these centralized systems that largely existed only to exploit us in some way. Let's celebrate "small internet theory", an internet where exploitation is effectively impossible because every company that tries it is overrun with AI bots. That sounds awesome to me personally, but I was also up late last night watching clips of Conan O'Brien from 1999 and the nostalgia for that era / what the internet was like back then hit me so hard it was almost painful.
The Internet was always full of bots. Not chatbots, but bots like crawlers, scrapers, automated scripts. That was fine.
What the OP is talking about is bots that participate in public discourse. That's the actual problem.
I think it can be handled to a degree though. Private communities, private Internet on top of existing Internet, and social media platforms without public APIs and with strict, enforceable ToS would all help.
Reddit in particular is overwhelmed by bots. There are small niche communities where it’s mostly people talking to people, but the vast majority of popular posts are made by bots, voted on by bots and commented on by bots.
It’s not even like commercial astroturfing, it’s just karma farming and public sentiment manipulation.
Everyone here is so far from a normie it is almost painful. The dead internet is an outcome of supply and demand.
The fundamental issue is that a plurality of humans prefer the direction things have gone and are moving in. Is it a good direction? By this crowd's standards, no.
To be clear, I don't like it either, but when I watch the speed with which kids swap between 5 Insta accounts and 3 Reddit accounts, it seems the majority are happy with it.
I just searched for a video game tip: "Bannerlord II where to sell clay?" and Google's top result was an AI-generated page FOR THIS GAME that directed me to eBay.
Also, I forgot to mention: Google's AI overview included the AI garbage page as its answer.
I think that we are going to see more and more of this. To the point where most interactions you have online will likely be with bots. So I started building something that actually has a chance of fixing it: a social network for only humans.
People need to look long and hard at how they are using technology, and ask how technology should be used. Every single technological trend for the past 10 years has been smoke and mirrors, promising the utility of an iPhone but with deliverables closer to a blockchain full of links to jpegs.
For a while video was a holdout of sorts - e.g. if someone posted video content of themselves or their voice you could trust a real person was behind it.
But now convincing fake video generation is easily accessible, so one more holdout stands to fall.
It does seem like some kind of ID system is going to be the only way. Sucky but inevitable.
I often have the following thought: technological advancement, for all its boons, inevitably leads down destructive roads in the long run. Sooner or later we open a pandora's box.
You know.. I keep thinking this might be a good thing in some ways. AI spam could save us from the worst of the current social media status quo, the toxicity of the attention "economy", by flooding it so thoroughly that nobody wants to engage with it anymore. Maybe the world can collectively "wake up" and "go outside" by turning towards local and more intimate communities for social interaction.
It's a shame though that this is gonna kill so many sites and projects. Sure, we have ChatGPT, but with things like Google's AI summary getting so much better, traffic to sites is going to plummet. Without people visiting, I think the incentive, heck even the motivation, for a ton of sites is gone. We've seen it with sites like Stack Overflow, but it's probably going to happen to just about everything.
Things are definitely going to change in significant ways. The internet of the past is definitely dead, it just doesn't know it yet.
This post's title is hyperbolic at best. The author is noticing what most people have known for a long time: there are bots on the internet. Most interactions I have online are with real people. Maybe we will end up with a dead internet, but moderation is still possible currently.
The elephant in the room is that a lot of social media companies have a conflict of interest. They can juice their user metrics by not moderating bots as well as they could be.
Optional decentralized hosting, a unified cryptocurrency as payment tokens, a single open LLM as a summary and search-indexing tool, specialized toolkits for journals and social networks (LiveJournal, early Twitter, early FB). Most importantly: you can post anonymously where it's allowed (there could be areas where it's disallowed entirely, like a public square), but your account will take the punishment, so no edgy shitposting behind throwaways.
Someone, somewhere, is salivating at the idea of combining both ideas: a paid-for digital ID service that you can use as authentication for the web.
Actually, now that I think about it: social media platforms already started this with the paid blue badge for verification, which is also a monthly subscription. But it's for their respective platform only, not universal.
I want cool cryptography where I can, e.g. verify where I'm writing from and what my age is without giving away any other information.
Or if I want, I can verify that I'm myself, and eschew anonymity, and certain platforms should only accept contributions from people who don't hide their identity.
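The "cool cryptography" described above exists in the form of selective-disclosure credentials (real systems use schemes like SD-JWT or BBS+ signatures). Below is a minimal, hypothetical sketch of the core idea: an issuer commits to salted hashes of all attributes, and the holder reveals only one attribute plus its salt, so the verifier learns nothing else. An HMAC with a shared key stands in for a real issuer signature (e.g. Ed25519); all names here are illustrative, not any real API.

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # stand-in for an issuer signing key

def issue(attributes: dict) -> dict:
    # Salt each attribute individually so unrevealed values stay hidden
    # even if the verifier sees all the hashes.
    salted = {k: (secrets.token_hex(16), v) for k, v in attributes.items()}
    digests = sorted(
        hashlib.sha256(f"{k}|{salt}|{v}".encode()).hexdigest()
        for k, (salt, v) in salted.items()
    )
    # HMAC stands in for a real digital signature over the digest list.
    sig = hmac.new(ISSUER_KEY, json.dumps(digests).encode(),
                   hashlib.sha256).hexdigest()
    return {"salted": salted, "digests": digests, "sig": sig}

def present(cred: dict, attr: str) -> dict:
    # Reveal only the chosen attribute and its salt; the rest remain hashes.
    salt, value = cred["salted"][attr]
    return {"attr": attr, "salt": salt, "value": value,
            "digests": cred["digests"], "sig": cred["sig"]}

def verify(p: dict) -> bool:
    # Check the issuer's commitment, then check the revealed attribute
    # hashes to one of the committed digests.
    expected = hmac.new(ISSUER_KEY, json.dumps(p["digests"]).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, p["sig"]):
        return False
    d = hashlib.sha256(
        f"{p['attr']}|{p['salt']}|{p['value']}".encode()).hexdigest()
    return d in p["digests"]

cred = issue({"age_over_18": "true", "country": "CH", "name": "Alice"})
proof = present(cred, "age_over_18")
assert verify(proof)  # verifier learns only this one attribute
```

A production scheme would replace the HMAC with public-key signatures so any verifier can check the credential without the issuer's secret, but the privacy property is the same: the verifier confirms "age over 18" without ever seeing the name or country.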
Bots will absolutely infiltrate them eventually, but I think it's the only solution.
The Internet promised the ability to connect with anyone, anywhere around the world. It felt limitless and infinite.
Turns out in an infinite world, the loudest voices are the ragebaits, the algorithmically-amplified, or the outright scammers.
The human social brain doesn't work in an infinite world; it works for a Dunbar's-number world. And we all like our pseudo-anonymous soapboxes (I'm standing on one right now), but the trick will be to realize that the glitter of infinite quantity isn't the same as small-scale connection.
“A social networking system simulates a user using a language model trained using training data generated from user interactions performed by that user. The language model may be used for simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased” [1].
Back in the day, Yahoo was a manually curated index of submitted & verified sites with search capabilities.
Wild-ass business idea: what if Yahoo 2026 recreated Yahoo 1996 and also any of the video sites it bought up back in the day get relaunched as deshittified ad-selling mechanisms to fund the whole thing… there’s gotta be Yahoo 1996 money in whatever scraps YouTube is missing.
It used to be faster and easier to follow actual content.
> I guess they don’t care about anything enduring as long as they can grab some quick cash on the way out.
As far as I can tell, that is basically all AI-related businesses. Including those non-AI ones jumping on the bandwagon to throw all their employees in the bin and expect 10x productivity somehow: if they are right and these tools do become that good, well the economy as we know it is over as white collar knowledge work disappears.
I think most small communities will stay bot-free because there's little incentive to have bots engage with them.
But I wonder if there's a size of conversation after which people will still choose AI-assisted summaries. Discord had (has?) a feature where it used LLMs to summarize and then notify you about a discussion happening.
Let’s not kid ourselves: Every day, multiple “I just asked the LLM to clean up my notes” posts are voted up to the front page here, often with highly engaged, appreciative comment sections.
LLM’s for all their faults are well-trained to produce what we want.
No, the old internet wasn't that great. There were so many problems. Finding things was hard, buying things was hard, integrating things was hard, compatibility was hard, everything was super fractured. It felt great at the time because you discovered all these random things and it was all novel. Centralized services (or decentralized collaborative ones like IRC or Usenet) really unlocked the power of the internet.
In some ways it might be positive. My girlfriend had a small addiction to Instagram reels. The flood of AI-generated videos on there just killed the magic for her, and she stopped using it.
>Anyone can still run a blog/website, and/or their own discourse server.
And those will also get choked with fake bot "members" and bot comments.
Plus, if "anyone can still run a blog/website", this includes bots. AI created and operated blogs/websites, luring in people who think they're reading actual human posts.
The technology isn't inherently evil. The actual problem is the way our societies are set up, ironically incentivizing sociopathic behaviour even among members of a single nation, nevermind when geopolitics get involved.
Paid option doesn't really deter this behavior, it encourages it - a botter will see a price tag on a "real" account (see what happened to twitter's blue checkmark sub) and go oh goody, I can pay for people to think I'm real.
If you make the price high enough sure, but I'm unsure you can find the right price to simultaneously 1) deter bot traffic and 2) be appealing to actual users.
> convincing fake video generation is easily accessible, so one more holdout stands to fall.
Is it though? I have absolutely no doubt we'll get there but I haven't seen any evidence of this in the wild. My Youtube feed is becoming overrun with content with clearly generated scripts and often generated narration. But I haven't seen a single instance (that I'm aware of) of generated video being passed off as real.
Yes I have seen hundreds of tweets and reddit posts showcasing game-changing video technologies like AI face replacement and yes they look incredible in the 45 second demo reels, but every instance I have seen of real-world usage was comically bad.
> It's a shame though that this is gonna kill so many sites and projects. Sure we have ChatGPT, but also with things like Google AI summary getting so much better traffic to sites is going to plummet. Without people visiting I think the incentive, heck even motivation, for a ton of the sites is gone. We've seen it with sites like Stack Overflow, but it's probably going to happen to just about everything..
As I see it, this is just an extra step in a long series of tools to just serve information more quickly. Search snippets for search results have always (?) been displayed for each link/page returned. If the information you were looking for was included in those snippets, then you wouldn't need to visit the actual site.
Then at some point there were knowledge cards/panels. Again, if the information you were looking for was in those cards/panels, then you didn't need to click on the links.
Now with LLMs/Gemini, the information is sometimes summarized at the top of the page. You have even less need to visit the search results.
Google has always been a kind of cache for the Internet. It's just way more efficient at extracting and displaying information from that cache now.
So, yes, traffic keeps going down. But new knowledge will still need to be produced, right?
I don't know that the influx of AI spam would necessarily result in people disengaging and choosing to seek out real content, though. Social media feeds have been serving up less and less content from our actual real life contacts for a while now (partly because people seem to be posting less). As long as it's engaging I think a significant chunk of people aren't going to care whether it's AI
(anecdotally, my mother loves AI generated videos, perhaps it's just novelty at the moment and it will wear off)
I see a simpler outcome: smaller communities where you can verify humans are human. I've already started doing this, mostly with people who already live in my community.
The corporate internet was never good to begin with, it was just forced on the masses.
That just reminded me of Chat Roulette for some reason. It seems that one is still around, as well. I'd guess not many bots on there, either (though potentially plenty of other unpleasant things).
Content strongly advocates for human expression and authentic human communication as foundation of meaningful discourse. Critiques AI-generated content as degrading freedom of expression by flooding platforms and drowning out human voices. Implicitly argues that meaningful expression requires human agency and authenticity.
FW Ratio: 57%
Observable Facts
HackerNews updated guidelines to explicitly state 'Don't post generated comments or AI-edited comments. HN is for conversation between humans.'
Author describes LinkedIn timeline as 'mostly AI-generated slop among very few actually interesting professional updates,' contrasting authentic human expression with machine-generated content.
Author's own blog platform displays no paywalls, registration requirements, or tracking barriers to reading.
Inferences
The framing of AI-generated content as 'slop' and 'nonsensical' reflects advocacy position that human expression has greater value and authenticity than machine-generated alternatives.
Documenting platform policy changes to restrict AI content reveals implicit endorsement of human-centered expression policies.
The accessible structure of author's own blog mirrors advocacy position by removing barriers to human expression.
Content strongly advocates for cultural participation and community benefit in digital spaces. Mourns loss of shared internet spaces where humans could authentically participate in culture and knowledge-sharing (referenced through examples of GitHub, HackerNews, Reddit, LinkedIn as communities). Implicitly argues AI-generated content denies communities the benefit of authentic human cultural participation.
FW Ratio: 60%
Observable Facts
Content documents open-source community (GitHub) being invaded by AI-generated pull requests, disrupting peer review and knowledge-sharing culture.
HackerNews community rules updated to explicitly protect conversation 'between humans,' indicating effort to preserve community cultural space.
Author references returning to earlier internet model, implicitly viewing shared community spaces as cultural good that has been lost.
Inferences
The strong critique of AI-generated content flooding communities reflects advocacy for preserving cultural commons and authentic community participation.
Specific focus on GitHub and open-source community suggests belief in communities' right to benefit from authentic peer knowledge-sharing.
Content advocates for education in digital literacy and awareness of AI-generated content, implicitly arguing that meaningful participation in digital commons requires understanding threats to authentic communication. References to 'AI slop detection' suggest belief in educating users about identifying machine-generated content.
FW Ratio: 50%
Observable Facts
Author mentions 'my AI slop detection didn't go off' regarding job applicant CV, suggesting author has developed or uses detection methods.
Content lists specific examples from multiple platforms with evidence of problems, providing educational documentation of AI-generated content proliferation.
Inferences
The detailed cataloging of AI problems across platforms serves educational purpose, raising reader awareness of threats to digital commons.
Reference to personal 'slop detection' suggests importance of user education in recognizing AI-generated content.
Content implicitly affirms equality and dignity by treating AI-generated bot content as unequal participation — human commenters and contributors should receive equal voice. Concern about astroturfing and fake profiles suggests belief in equal standing in discourse.
FW Ratio: 67%
Observable Facts
Author notes Reddit bots 'astroturfing a SaaS product' with hundreds of similar hidden comments, framing this as anomalous and manipulative.
Content criticizes AI-edited and AI-generated comments as violations of platforms' stated purposes (e.g., 'HN is for conversation between humans').
Inferences
The concern about bot manipulation and fake profiles reflects implicit commitment to equal participation and non-discrimination in digital spaces.
Content implicitly addresses work and choice of employment through example of job applicant responding with AI-generated text rather than human engagement. Suggests concern that AI automation undermines meaningful human participation in employment process and right to work free from deceptive practices.
FW Ratio: 67%
Observable Facts
Author invited job applicant to interview but received automated AI-generated response instead of human reply.
Author describes this experience as catalyst for realizing 'dead Internet arrived faster than expected.'
Inferences
The concern about AI-generated job responses reflects implicit belief that employment interactions should involve genuine human agency and authentic commitment.
Content critiques discrimination against human contributors by AI systems flooding platforms with low-quality content. Implicitly argues for non-discrimination based on origin (human vs. machine).
FW Ratio: 67%
Observable Facts
Author describes 'influx of vibe-coded and low-quality ShowHN submissions' and HackerNews restricting new accounts in response.
GitHub example notes 'nonsensical PRs' from AI spamming open-source repositories.
Inferences
Concern about quality discrimination suggests implicit belief that participation should be equally accessible and that platforms should protect against unfair participation methods.
Content advocates for freedom of assembly and association by critiquing platforms where bot activity and AI-generated content undermine genuine community gathering and peer communication. References to HackerNews, Reddit, GitHub, and LinkedIn suggest these as communities where authentic association is being compromised.
FW Ratio: 67%
Observable Facts
Author describes astroturfing on Reddit as organized bot activity manipulating community discourse.
GitHub community described as facing invasion of AI-generated pull requests, disrupting peer review community practice.
Inferences
Concern about bot astroturfing and AI manipulation reflects belief in authentic community association free from artificial interference.
Content describes deterioration of internet commons and human-to-human communication spaces. Implicitly critiques loss of human dignity and agency in digital spaces through framing of AI-generated content as degradation of discourse quality.
FW Ratio: 60%
Observable Facts
Author describes inviting job applicant to interview, receiving AI-generated reply instead of human response.
Content catalogs examples from multiple platforms (HackerNews, Reddit, LinkedIn, GitHub) where AI-generated content displaces human communication.
Author concludes 'we can't' return to earlier internet, expressing pessimism about reversing trend.
Inferences
The framing treats AI-generated content as inherently degrading to human discourse, reflecting concern for human autonomy and authentic communication.
The resignation in 'Can we go back...? I guess we can't' suggests loss of agency and hope for restoring human-centered digital spaces.
Content addresses interference with privacy implicitly through concern about astroturfing and hidden bot accounts. Reddit example of bots hiding comments on their accounts suggests unauthorized surveillance and interference with personal communication spaces.
FW Ratio: 50%
Observable Facts
Author notes Reddit bot profiles 'hide their comments on their accounts' while deploying hundreds of identical comments across platform.
Author describes receiving automated AI response to job interview invitation, indicating automated engagement with personal communication.
Inferences
The concern about hidden bot accounts operating at scale suggests worry about privacy violation and deceptive interference in personal communication channels.
AI-generated reply to job invitation exemplifies unauthorized automated intrusion into personal employment communication.
Content critiques breakdown of social and international order necessary for human rights realization. The 'dead Internet' represents failure of institutional order to maintain spaces where human rights (particularly Articles 19, 20, 27) can be exercised. Resignation at conclusion ('Can we go back...? I guess we can't') suggests author views current order as unable to maintain conditions for rights realization.
FW Ratio: 50%
Observable Facts
Content documents failures of platform governance (HN, Reddit, GitHub, LinkedIn) to maintain spaces for authentic human participation.
Author concludes article expressing pessimism: 'Can we go back to an internet like this? I guess we can't.'
Inferences
The documentation of platform governance failures reflects concern that social and institutional order is breaking down.
The resignation in the conclusion suggests loss of faith in ability of institutions to restore conditions supporting human rights exercise.
Privacy Policy
—
No privacy policy linked or visible on content page.
Terms of Service
—
No terms of service visible on content page.
Identity & Mission
Mission
—
No mission or values statement visible on content page.
Editorial Code
—
No editorial guidelines or code of conduct visible.
Ownership
—
Author Adrian Krebs identified; ownership structure not disclosed.
Access & Distribution
Access Model
+0.15
Article 19 Article 26
Content freely accessible with no paywall, registration requirement, or tracking barriers observed. Supports universal access to expression.
Ad/Tracking
—
No advertising or tracking pixels visible in provided content.
Accessibility
+0.10
Article 2 Article 26
Site implements sr-only utility class and semantic HTML structure, indicating accessibility awareness. Responsive design supports multiple device types.
Site structure and access model support free expression: content freely accessible without paywall, registration, or tracking; no editorial restrictions visible; author identified and retains full editorial control of platform.
Site's accessible design and clear navigation support general public education access. Author's documentation of platform-specific problems (HN, Reddit, LinkedIn, GitHub) serves educational function for readers learning about widespread AI infiltration.
Use of terms like 'AI slop,' 'dead Internet,' and 'nonsensical' to describe AI-generated content carries negative valence that frames rather than neutrally describes the phenomenon.
Appeal to Fear
The overall framing that AI-generated content is rapidly degrading internet communities and that 'we can't' reverse the trend appeals to reader concern about loss of authentic communication spaces.