This Twitter/X individual tweet URL provides no substantive content for human rights evaluation; the page consists primarily of schema.org markup. Within the broader platform context, Twitter/X enables free expression (Article 19) and supports peaceful association through discourse (Article 20), but structural elements—extensive data tracking, private corporate ownership, content moderation authority, and limited transparency—create significant tension with privacy rights (Article 12) and full expression protections. The domain-level cached DCP reveals a platform that advocates for public discourse while constraining it through corporate governance and surveillance practices.
It would be nice if there were an easier way to detect and filter those "reply guys." If LLMs were forced to watermark their output (possibly by using randomly selected lookalike non-ASCII characters in inconspicuous places, like a Cyrillic "ѕ" instead of a Latin "s"), detection would be trivial, but that ship has sailed. The most anybody can do is train another LLM to find offenders and make a list. Bot vs bot.
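A minimal sketch of how such a homoglyph watermark could be detected, assuming the watermark relies on a known set of Cyrillic lookalikes (the table below is a small illustrative sample, not a real scheme):

    # Illustrative only: flag characters that look like ASCII letters but are not.
    # The homoglyph table is a small assumed sample, not an actual watermark spec.
    HOMOGLYPHS = {
        "\u0455": "s",  # Cyrillic dze
        "\u0430": "a",  # Cyrillic a
        "\u0435": "e",  # Cyrillic ie
        "\u043e": "o",  # Cyrillic o
        "\u0441": "c",  # Cyrillic es
    }

    def find_homoglyphs(text):
        """Return (index, char, ascii_lookalike) for every suspicious character."""
        return [(i, ch, HOMOGLYPHS[ch]) for i, ch in enumerate(text) if ch in HOMOGLYPHS]

    if __name__ == "__main__":
        sample = "Thi\u0455 reply look\u0455 perfectly normal."
        for i, ch, plain in find_homoglyphs(sample):
            print(f"position {i}: U+{ord(ch):04X} masquerading as '{plain}'")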
Now I've been wondering - what is the polite way to exit a conversation when it becomes obvious that your fellow interlocutor is merely a chunk of electric meat redirecting the output of Sam Altman? I'm talking blatantly obvious, e.g. "it's not x, it's y" multiple times in the same paragraph.
> Moving forward, replies via the API will only be permitted if the replier has been explicitly summoned by the original post’s author. This means:
The original author @mentions the replying user/account in their post, or
The original author quotes a post from the replying user/account.
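A minimal sketch of how that permission rule might look in code, with all field names made up for illustration:

    # Hypothetical check implementing the quoted policy; field names are assumptions.
    def reply_permitted(original_post: dict, replier_handle: str) -> bool:
        # Allowed if the original author @mentioned the replying account...
        if replier_handle in original_post.get("mentioned_handles", []):
            return True
        # ...or if the original post quotes a post from the replying account.
        quoted = original_post.get("quoted_post")
        return bool(quoted) and quoted.get("author_handle") == replier_handle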
Frankly, I think AI-generated content is the least of Twitter's concerns ... I'd wager it is actually raising the average quality of content over there.
Just had a colleague discover how to copy-paste ChatGPT output into Teams this morning. So now I'm getting fed whatever semi-relevant gibberish she gets out of her LLM (and likely didn't even read herself).
FML we better develop social norms around this asap because this fuckin blows
So, one of the main problems Elon promised to solve has been rampant since his takeover. Even before the "AI wave".
I still don't understand why people use his platform and give him the power he has, when we have seen that he is using it to reduce children's access to food, promote people who are examples of no ethics whatsoever, and actively work on destroying numerous democracies by spreading right-wing propaganda.
One thing giving him the power to do this is his platforms' user base, and anyone still on Twitter is contributing to it.
If you follow the link to the tweet but don't have an account there, you'll miss a joke, because Twitter doesn't show threaded replies to logged-out users. The xcancel link shows it. Here's the two-tweet sequence:
> AI-generated replies really are the scourge of Twitter these days. Anyone know if it's from packaged solutions being sold as a product or if it's people mainly rolling their own custom reply-bots
> ... and I just found out the category name for this is "reply guy" tools which is so on the nose it hurts
(You can confirm this by Google searching "reply guy service".)
I love AI-generated replies. I use them on all cold emailers who try to sell me shit. I just tell the AI to give me a one-A4-page response that gently strings them along with vague interest, without committing to anything.
The more determined salesmen last for 3-4 emails, but most drop off after 2 or so.
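For what it's worth, a rough sketch of that setup, assuming the OpenAI Python client; the model name and prompt wording are placeholders, not a recommendation:

    # Rough sketch, assuming the OpenAI Python client (pip install openai) and an
    # OPENAI_API_KEY in the environment. Model name and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def string_along(cold_email: str) -> str:
        prompt = (
            "Write roughly one A4 page replying to the sales email below. "
            "Express vague interest, ask open-ended questions, commit to nothing.\n\n"
            + cold_email
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content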
Back when I first heard the term "Dead Internet Theory" I thought it was silly, because at the time language generation wasn't nearly as sophisticated. But nowadays it really is more and more difficult to know.
I've noticed that I've recently (had the urge to and) spent a lot more time with people in real life, not sure if there is a causative effect. The illusion of social interaction on the internet is fading.
When I look at sites like Reddit I have a strong feeling, at least with some of the bigger subs, that there's definitely a substantial percentage of bots talking to each other there. More on some subs, less on others. Definitely on the political ones.
The problem is that on most sites trust is attributed to account history, which is cheaper than ever to fake with these reply-guy services. Twitter/Meta verified badges help, but IMHO the only solution is something invite-only like lobsters, where you can easily weed out invite-rings etc...
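A minimal sketch of the invite-tree idea, where every account can be traced back to its inviter so invite-rings are easy to audit (field names and the traversal are assumptions, not how Lobsters actually works):

    # Illustrative invite tree; not an actual implementation of any site.
    from dataclasses import dataclass, field

    @dataclass
    class Account:
        handle: str
        invited_by: "Account | None" = None
        invitees: list = field(default_factory=list)

    def invite(inviter: Account, handle: str) -> Account:
        new = Account(handle, invited_by=inviter)
        inviter.invitees.append(new)
        return new

    def invite_chain(account: Account) -> list:
        """Walk up the tree so moderators can see who vouched for whom."""
        chain, node = [], account.invited_by
        while node:
            chain.append(node.handle)
            node = node.invited_by
        return chain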
One needs to consider why automated responses are used in the first place.
Is it to drive engagement? Is it to inflate metrics? Is it manipulation? I do not see a scenario where it is done purely because someone wants to be nice.
Don't believe it, MY FELLOW OXYGEN CONVERTING FRIENDS! This is just outrageous conspiracy-theory-nonsense! This person is clearly and obviously a botist attempting to create a narrative that makes artificial intelligence look bad!
I, A GENUINE FELLOW HUMAN, just like yourselves, have not ever noticed any replies written by any so called scripts, bots, robots, AI, LLMs anywhere!
Yeah exactly, it's best to keep track and be aware of common tropes used in AI writing so that you don't end up 5 responses deep and emotionally invested in a conversation before you realise you've been fooled into speaking to a bot.
I built this tool primarily to identify AI writing in articles and posts but it's proven useful for comments/responses too: https://tropes.fyi/vetter
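For illustration, the trope-matching core of such a tool might look roughly like this; the phrase list and scoring here are assumptions, not what tropes.fyi actually uses:

    import re

    # Assumed sample of stock AI phrasings; a real list would be much longer.
    TROPES = [
        r"\bit'?s not (?:just )?\w+, it'?s\b",
        r"\bdelve into\b",
        r"\bin today'?s fast-paced world\b",
        r"\blet'?s unpack\b",
    ]

    def trope_score(text: str) -> float:
        """Fraction of known tropes that appear at least once in the text."""
        hits = sum(bool(re.search(p, text, re.IGNORECASE)) for p in TROPES)
        return hits / len(TROPES)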
A crazy thought I had is that agents without a link to a human identity might need to be treated as illegal. That human identity would be blamed for the agent's actions.
This raises a rat's nest of issues, but will we be able to avoid this necessity?
Eh, I am kind of liking the pasting back and forth of replies or Git comments. It means that they can indulge their little whims and fussiness about variable names or whether something is an edge case, and I don't need to build in delays to frustrate them into going away.
AI in the middle makes colleagues more tolerable if you didn't really get along with them well originally.
I don't think this is productive. You can already adjust the style of LLMs, and it's only going to get better over time. Any tool or strategy you come up with for detecting a bot can then be used as the discriminator in a generative adversarial setup, effectively creating a system that breaks the tool.
The bots are going to win this war. I'm not sure of the implications of what this means though.
I'm sure there are other tells, like delay between post and reply, or time of day, etc. The epidemiology of bots is just getting started, but the bots' tooling is bound to have detectable patterns.
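A minimal sketch of one such tell, flagging accounts whose reply delays are suspiciously uniform; the thresholds and field names are assumptions for the sake of the example:

    # Flag accounts whose reply delays are suspiciously uniform. Timestamps are
    # assumed to be epoch seconds; the thresholds are made up for illustration.
    from statistics import mean, pstdev

    def looks_automated(replies, max_jitter=5.0):
        delays = [r["reply_time"] - r["post_time"] for r in replies]
        if len(delays) < 10:
            return False  # not enough data to judge
        # Humans are erratic; a bot polling on a schedule replies almost instantly
        # and with near-constant delay.
        return pstdev(delays) < max_jitter and mean(delays) < 120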
It's ridiculously toxic. If you do not wish to participate in any form of internet culture wars or politics, it is virtually impossible there. For me the feed is mainly ridiculously stupid Russian propaganda or politicians tilting at each other. The "Do not recommend" button does nothing.
The problem is that he doesn't care about the money, so he can fuel his rage-bait machine for as long as he wants, which would normally not be possible.
What an odd question. If the other entity is an AI, there is no need to be polite.
But personally, if I get value out of a conversation, I will continue. If I don't, I'll stop responding. Whether or not the other side is an AI is only relevant if I think I'm building some kind of rapport or friendship with someone. Otherwise what matters is whether the comment makes me think, or makes me want to write something. If only AI bots were reading the comments, that would be a bigger issue than whether the specific comment I'm replying to is AI-written.
I find it odd that, when it comes to natural language, we all agree that the LLM is stuck in an uncanny valley, yet no one is acknowledging that the code it generates has a similar alien feel to it.
1. Get more followers. A lot of people see follower count as a goal that matters to them. Replying to accounts with high follower counts may earn you a follow from them, or from someone reading their replies who doesn't catch that you are a bot.
2. Establish account credibility. Does Twitter's algorithm rank posts higher from accounts that have a long history of engaging with other accounts? I don't know for sure, and neither do they, but they may believe it's worth trying anyway.
3. Accounts for sale. There's a market for used Twitter accounts with plenty of realistic-looking activity. Maybe these spammers are building inventory.
I think you've just thought of CAPTCHAs? Unfortunately, AI has become increasingly better than humans at solving the tasks we throw at it in such tests.
They wouldn't have problems telling bots and spammers apart from regular user activity. Lots of them still have problems just interpreting tweets, and their replies make no sense. Just removing out-of-place replies with ML would fix most of the problem, as would simply restricting mass registrations from narrow IP ranges.
They don't do that because spam is their means to achieve something else, specifically getting rid of left-wing tech anime porn otakus. The comedy of it is that they've been attempting this by complicating the system, which is like reverse chemotherapy that is kinder to the cancer tissue than to the body, so the cancer grows faster. I guess they take that as a win, since it's a positive action with a positive reaction (albeit in negative amounts) rather than a negative action with a negative reaction in a positive amount.
What would really be nice is Twitter being transferred to someone else. That would at least stop the stupidity of the reverse chemotherapy.
I've been trying to will a web-of-trust-style system into existence for a while now; I lack both the marketing skills and the programming know-how to actually create it, though =)
Basically a way to see, on every web page, whether one or more actual humans in your network have vouched that the content was written by a person.
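A minimal sketch of what a vouch record might look like under that idea; the structures and signature handling are assumptions, not an actual protocol:

    # Sketch of a vouch record: a claim that a given page's text was written by a
    # human. Structures and signature handling are assumptions, not a real protocol.
    import hashlib
    from dataclasses import dataclass

    @dataclass
    class Vouch:
        url: str
        content_hash: str   # hash of the exact text the voucher read
        voucher: str        # an identity in your trust network
        signature: str      # would be a real cryptographic signature in practice

    def sha256(text: str) -> str:
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    def vouched_by_network(vouches, url, text, network):
        h = sha256(text)
        return any(v.url == url and v.content_hash == h and v.voucher in network
                   for v in vouches)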
Platform enables unrestricted posting and public discourse; however, Terms of Service permit content moderation and account suspension with limited transparency, and private ownership constrains editorial independence. Domain-level moderation practices and ownership create structural tension with Article 19.
Platform permits community organizing and collective discourse; however, Terms of Service restrict certain forms of association and platform enforcement discretion limits Article 20 protections. Private ownership creates structural asymmetry in associational power.
Platform architecture enables extensive user tracking via behavioral signals, data collection for profiling, and limited user control mechanisms. Domain-level privacy practices (data sharing, tracking) inherently undermine Article 12 protections.
Platform provides some accessibility features supporting participation across ability levels per DCP; gaps remain but baseline accessibility supports broader participation.