This blog post critiques AI-aided development as producing intellectually shallow work that undermines original thinking and creative quality. The author argues that relying on LLMs for cognitive work prevents the deep problem engagement necessary for intellectual development (Article 26), autonomous thought (Article 18), and high-quality cultural production (Article 27). While the piece itself exercises free expression, its core thesis frames AI-assisted intellectual participation as incompatible with human intellectual flourishing.
I've seen a few people use AI to rewrite things, and the change from their writing style to a more "polished", generic LLM style feels very strange. A great averaging and evening out of future writing seems like a bad outcome to me.
That may be, but it's also exposing a lot of gatekeeping: the implication that what was interesting about a "Show HN" post was that someone had the technical competence to put something together, regardless of how intrinsically interesting that thing is. It wasn't the idea that was interesting; it was, well, the hazing ritual of having to bloody your forehead getting it to work.
As for AI for actual prose writing: no question. Don't let a single word an LLM generates land in your document; even if you like it, kill it.
The more interesting question is whether AI use causes the shallowness, or whether shallow people simply reach for AI more readily because deep engagement was never their thing to begin with.
Most ideas people have are not original. I have epiphanies multiple times a day; the chance that they are something no one has come up with before is basically zero. They are original to me, and that feels like an insightful moment, and that's about it. There is a huge case for having good taste to drive the LLMs toward a good result, and an original voice is quite valuable, but I would say most people don't hit those two things in a meaningful way (with or without LLMs).
AI writing will make people who write worse than average better writers. It'll also make people who write better than average worse writers. Know where you stand, and have the taste to use it wisely.
EDIT: It's the same principle as creating AGENT.md files to help AI write code your way for your projects. If you're going to be doing much writing, you should have your own prompt that helps preserve your voice and style. Don't be lazy just because you're leaning on LLMs.
I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write", and I think that pretty much sums it up. Writing and programming are both forms of working at a problem through text, and when it goes well, other practitioners of the form can appreciate its shape and direction. With AI you can get a lot of 'function' on the page (so to speak), but it's inelegant and boring. I do think AI is great at letting you skip the dumb boilerplate we all could crank out if we needed to but don't want to. It just won't help you do the innovative thing, because it is not innovative itself.
> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had
Honestly, I agree, but the rash of "check out my vibe-coded solution for perceived $problem I have no expertise in, built in an afternoon" posts, and the flurry of domain experts responding with "wtf, no one needs this", is kind of schadenfreude, and I feel a little guilty for enjoying it.
It used to be that all bad writing was uniquely bad, in that a clear line could be drawn from the work to the author. Similarly, good writing has a unique style that typically identifies the author within a few lines of prose.
Now all bad writing will look like something generated by an LLM, grammatically correct (hopefully!) but very generic, lacking all punch and personality.
The silver lining is that good authors could also use LLMs to hide their identity while expressing controversial opinions. In an internet that's increasingly deanonymized, a potential new privacy-enhancing technique for public discourse is a welcome addition.
We don't know if the causality flows that way. It could be that AI makes you boring, but it could also be that boring people were too lazy to make blogs and Show HNs and such before, and AI simply lets a new cohort of people produce boring content more lazily.
And the irony is it tries to make you feel like a genius while you're using it. No matter how dull your idea is, it's "absolutely the right next thing to be doing!"
We are going to have to find new ways to correct for low-effort work.
I have a report that I made with AI on how customers leave our firm…The first pass looked great but was basically nonsense. After eight hours of iteration, the resulting report is better than I could’ve made on my own, by a lot. But it got there because I brought a lot of emotional energy to the AI party.
As workers, we need to develop instincts for “plausible but incomplete” and as managers we need to find filters that get rid of the low-effort crap.
Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.
The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
While I agree overall, I'm going to do some mild pushback here: I'm working on a "vibe" coded project right now. I'm about 2 months in (not a weekend), and I've "thought about" the project more than any other "hand coded" project I've built in the past. Instead of spending time trying to figure out a host of "previously solved issues" AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.
> Original ideas are the result of the very work you’re offloading on LLMs. Having humans in the loop doesn’t make the AI think more like people, it makes the human thought more like AI output.
There was also a comment [1] here recently that "I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!"
Both of them reminded me of Picasso saying in 1968 that "Computers are useless. They can only give you answers."
Of course computers are useful. But he meant that they are useless for creative work. That's still true.
We are in this transition period where we'll see a lot of these, because the effort of creating "something impressive" is dramatically reduced. But once it stabilizes (which I think is already starting to happen; this post is an example), and people are trained to recognize the real effort, even with AI help, behind creating something, the value of that final work will shine through. In the end, anything that is valuable is measured by the human effort needed to create it.
One of the downsides of Vibe-Coded-Everything that I am seeing is that it reinforces the "just make it look good" culture.
Just create the feature that the user wants and move on. It doesn't matter if next time you need to fix a typo on that feature it will cost 10x as much as it should.
That has always been a problem in software shops. Now it might be even more frequent because of LLMs' ubiquity.
Maybe that's how it should be, maybe not. I don't really know.
I was once told by people in the video game industry that games were usually buggy because they were short lived.
Not sure if I truly buy that, but if anything vibe-coded becomes throwaway, I wouldn't be surprised.
Totally agree with this. Smart creators know that inspiration comes from doing the work, not the other way around; i.e., you don't wait for inspiration and then go do the work, you start doing the work and eventually you become inspired. You rarely just "have a great idea"; it comes from immersing yourself in a problem, being surrounded by constraints, and finding a way to solve it. AI completely short-circuits that process. Constraints are a huge part of creativity, and removing them doesn't mean you become some unstoppable creative force; it probably just means you run out of ideas, or your ideas kind of suck.
This issue exists in art and I want to push back a little. There has always been automation in art even at the most micro level.
Take for example (an extreme example) the paintbrush. Do you care where each bristle lands? No of course not. The bristles land randomly on the canvas, but it’s controlled chaos. The cumulative effect of many bristles landing on a canvas is a general feel or texture. This is an extreme example, but the more you learn about art the more you notice just how much art works via unintentional processes like this. This is why the Trickster Gods, Hermes for example, are both the Gods of art (lyre, communication, storytelling) and the Gods of randomness/fortune.
We used to assume that we could trust the creative to make their own decisions about how much randomness/automation was needed. The quality of the result was proof of the value of a process: when Max Ernst used frottage (rubbing paper over textured surfaces) to create interesting surrealist art, we retroactively re-evaluated frottage as a tool with artistic value, despite its randomness/unintentionality.
But now we’re in a time where people are doing the exact opposite: they find a creative result that they value, but they retroactively devalue it if it’s not created by a process that they consider artistic. Coincidentally, these same people think the most “artistic” process is the most intentional one. They’re rejecting any element of creativity that’s systemic, and therefore rejecting any element of creativity that has a complexity that rivals nature (nature being the most systemic and unintentional art.)
The end result is that the creative has to hide their process. They lie about how they make their art, and gatekeep the most valuable secrets. Their audiences become prey for creative predators. They idolize the art because they see it as something they can’t make, but the truth is there’s always a method by which the creative is cheating. It’s accessible to everyone.
Using AI to write your code doesn't mean you have to let your code suck, or not think about the problem domain.
I review all the code Claude writes and I don't accept it unless I'm happy with it. My coworkers review it too, so there is real social pressure to make sure it doesn't suck. I still make all the important decisions (IO, consistency, style) - the difference is I can try it out 5 different ways and pick whichever one I like best, rather than spending hours on my first thought, realizing I should have done it differently once I can see the finished product, but shipping it anyways because the tickets must flow.
The vibe coding stuff still seems pretty niche to me though - AI is still too dumb to vibe code anything that has consequences, unless you can cheat with a massive externally defined test suite, or an oracle you know is correct
Yeah, if anything it might make sense to do the opposite. Use LLMs to do research, ruthlessly verify everything, validate references, and let them help guide your structure, but then actually write your own words manually, with your own little fingers and your own brain.
Most ideas aren't interesting. Implementations are interesting. I don't care if you worked hard on your implementation or not, but I do care if it solves the problem in a novel or especially efficient way. These are not the hallmarks of AI solutions.
Highly doubt that, since it's the complete opposite for coding. What's missing for people of all skill levels is that writing helps you organize your thoughts, but maybe that can happen at prompt time?
> That may be, but it's also exposing a lot of gatekeeping
"Gatekeeping" became a trendy term for a while, but in the post-LLM world people are recognizing that "gatekeeping" is not the same as "having a set of standards or rules by which a community abides".
If you have a nice community where anyone can come in and do whatever they want, you no longer have a community, you have a garbage dump. A gate to keep out the people who arrive with bags of garbage is not a bad thing.
I had to write a difficult paragraph that I talked through with Copilot. I think it made one sentence I liked, but I found that GPTZero caught it. I wound up with 100% sentences I wrote myself, but reviewed extensively with Copilot and two people.
Most ideas people have aren't original, but the original ideas people do have come after struggling with a lot of unoriginal ideas.
> They are original to me, and that feels like an insightful moment, and thats about it.
The insight is that good ideas (whether wholly original or otherwise) are the result of many of these insightful moments over time, and when you bypass those insightful moments and the struggle of "recreating" old ideas, you're losing out on that process.
> AI writing will make people who write worse than average, better writers.
Maybe it will make them output better text, but it doesn’t make them better writers. That’d be like saying (to borrow the analogy from the post) that using an excavator makes you better at lifting weights. It doesn’t. You don’t improve, you don’t get better, it’s only the produced artefact which becomes superficially different.
> If you're going to be doing much writing, you should have your own prompt that can help with your voice and style.
The point of the article is the thinking. Style is something completely orthogonal. It’s irrelevant to the discussion.
AI enables the stereotypical "idea guy" to suddenly be a "builder". Of course, they are learning in realtime that having the idea was always the easy part...
> I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up.
Amen to that. I am currently cc'd on a thread between two third parties, each hucking LLM-generated emails at each other that are getting longer and longer. I don't think either of them is reading or thinking about the responses they are writing at this point.
i think a lot of people that use AI to help them write want it specifically BECAUSE it makes them boring and generic.
and that's because people have a weird sort of stylistic cargo-culting that they use to evaluate their writing, rather than asking "does this communicate my ideas efficiently?"
for example, young grad students will always write the most opaque and complicated science papers. from their novice perspective, EVERY paper they read is a little opaque and complicated so they try to emulate that in their writing.
office workers do the same thing. every email from corporate is bland and boring and uses far too many words to say nothing. you want your style to match theirs, so you dump it into an AI machine and you're thrilled that your writing has become just as vapid and verbose as your CEO.
Don't you think there is an opposite effect too?
I feel like I can breeze past the easy, time-consuming infrastructure phase of projects and spend MUCH more time getting to the high-level, interesting problems.
While at first glance LLMs do help expose and even circumvent gatekeeping, often it turns out that gatekeeping might have been there for a reason.
We have always relied on superficial cues to tell us about some deeper quality (good faith, willingness to comply with a code of conduct, and so on). This is a useful and necessary shortcut: if we had to assess everyone and everything from first principles every time, things would grind to a halt. Once a cue becomes unviable, the "gate" is not eliminated (except perhaps briefly); the cue is just replaced with something else that is more difficult to circumvent.
I think that brief time after Internet enabled global communication and before LLMs devalued communication signals was pretty cool; now it seems like there’s more and more closed, private or paid communities.
This is precisely it. If anything, AI gives me more freedom to think about more novel ideas, both on the implementation and the final design level, because I'm not stuck looking up APIs and dealing with already solved problems.
> Writing and programming are both a form of working at a problem through text…
Whoa whoa whoa hold your horses, code has a pretty important property that ordinary prose doesn’t have: it can make real things happen even if no one reads it (it’s executable).
I don’t want to read something that someone didn’t take the time to write. But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do). Really elegant code is cool to read, but many tools I use daily are closed source, so I have no idea if their code is elegant or not. I only care if it works.
I think that having some difficulty and having to "bloody your forehead" acts as a filter that you cared enough to put a lot of effort into it. From a consumer side, someone having spent a lot of time on something certainly isn't a guarantee that it is good, but it provides _some_ signal about the sincerity of the producer's belief in it. IMO it's not gatekeeping to only want to pay attention to things that care went into: it's just normal human behavior to avoid unreasonable asymmetries of effort.
Yesterday I had two hours to work on a side project I've been dreaming about for a week. I knew I had to build some libraries and that it would be a major pain. I started with AI first, which created a script to download, extract, and build what I needed. Even with the script I did indeed encounter problems, but I blitzed through each one until the libraries were built and I could focus on my actual project, which was not building libraries! I actually reached a satisfying conclusion instead of being halfway through compiling something I do not care about.
What's interesting is how AI makes this problem worse but not actually "different", especially if you want to go deep on something. Listicles were always plentiful, even before AI, but inferior to someone on Substack going deep on a topic. AI-generated music will be the same way: there has always been an excessive abundance of crap music, and now we'll just have more of it. The weird thing is how it will hit the uncanny valley: potentially "better" than the crap that came before it, but significantly worse than what someone who cares will produce.
DJing is an interesting example. Compared with, say, composition, beatmatching is relatively easy to learn, but it was solved by CD turntables that can beatmatch themselves, and it has nothing to do with the taste you have to develop to be a good DJ.
you can prompt it to stop doing that, and to behave exactly how you need it. my prompts say "no flattery, no follow up questions, PhD level discourse, concise and succinct responses, include grounding, etc"
You don't fit the profile OP is complaining about. You might not even be "vibe" coding in the strictest sense of that word.
For every person like you who puts in actual thought into the project, and uses these tools as coding assistants, there are ~100 people who offload all of their thinking to the tool.
It's frightening how little collective thought is put into the ramifications of this trend not only on our industry, but on the world at large.
> The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
I remember in the early days of LLMs this was the joke meme. But now, seeing it happen in real life is more than just alarming; it's ridiculous. It's like the opposite of compressing a payload over the wire: we're taking our output, expanding it, transmitting it over the wire, and then compressing it again on input. Why do we do this?
It actually makes a lot more sense to share the LLM prompt you used than the output because it is less data in most cases and you can try the same prompt in other LLMs.
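As a toy illustration of the size argument above (everything here is an invented placeholder, not from the thread; `expand` merely stands in for an LLM call), the prompt is typically a small fraction of the text a model inflates it into, so sharing the prompt moves far fewer bytes:

```python
# Toy sketch of the expand-then-summarize email loop described above.
# The strings are made-up placeholders; expand() stands in for an LLM call.
def expand(prompt: str, factor: int = 20) -> str:
    """Pretend-LLM: pad a short request into long corporate prose."""
    filler = "I hope this message finds you well. " * factor
    return filler + prompt

prompt = "Please reset my VPN password."
email = expand(prompt)          # person A: one sentence becomes paragraphs
print(len(prompt), len(email))  # the prompt is a small fraction of the email
```

The point being made: transmitting `prompt` instead of `email` sends less data, and the recipient could reproduce (or vary) the expansion with any model they like.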
The author distinguishes deeply engaged thinkers from AI-assisted users, implicitly suggesting unequal intellectual contribution and quality.
FW Ratio: 50%
Observable Facts
The author states: 'They generally don't have a lot of work put into them, and as a result, the author hasn't generally thought too much about the problem space.'
Inferences
The framing distinguishes between high-engagement and low-engagement intellectual participation, treating them as unequal in capacity and value.
The content critiques shallow expression and reduced originality in AI-aided work, framing expression quality as degraded and unoriginal.
FW Ratio: 50%
Observable Facts
The page publishes the author's critical perspective without editorial gatekeeping.
The author explicitly states: 'The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had.'
Inferences
The editorial framing treats AI-aided expression as shallow and derivative.
The structural practice of unrestricted publication supports freedom of expression for critical analysis.
The author explicitly argues that genuine education requires immersion, articulation through writing, and deep thinking—presented as incompatible with AI-assisted work.
FW Ratio: 67%
Observable Facts
The author states: 'The way human beings tend to have original ideas is to immerse in a problem for a long period of time.'
The author writes: 'This is why we make students write essays. It's also why we make professors teach undergraduates.'
Inferences
The author frames AI-assisted intellectual work as fundamentally incompatible with genuine educational development.
The piece directly critiques AI-assisted thinking as preventing deep problem immersion necessary for original thought development, thus constraining intellectual autonomy.
FW Ratio: 67%
Observable Facts
The author states: 'Original ideas are the result of the very work you're offloading on LLMs' and 'AI models are extremely bad at original thinking.'
The author writes: 'Prompting an AI model is not articulating an idea. You get the output, but in terms of ideation the output is discardable.'
Inferences
The author frames using AI for thinking as fundamentally incompatible with genuine intellectual freedom and autonomous thought development.
The piece argues AI produces shallow, derivative culture rather than original creative output, undermining human cultural participation.
FW Ratio: 67%
Observable Facts
The author states: 'AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original.'
The author writes: 'I feel like this is what AI has done to the programming discussion. It draws in boring people with boring projects.'
Inferences
The author frames AI-aided cultural and creative production as necessarily shallow, thus diminishing cultural participation quality.
The piece directly critiques AI-assisted thinking as preventing deep problem immersion necessary for original thought development, thus constraining intellectual autonomy.
The author explicitly argues that genuine education requires immersion, articulation through writing, and deep thinking—presented as incompatible with AI-assisted work.
Repeated emotionally charged terms: 'boring,' 'vibe coded,' 'shallow,' 'fatal flaw,' 'boring people with boring projects' applied throughout.
false dilemma
Presents original intellectual work as possible only through deep unmediated immersion, implying AI-assisted work cannot produce originality by definition.
causal oversimplification
The title and central claim, 'AI makes you boring,' suggest direct causation without acknowledging confounding factors (user skill, application context, problem domain).
appeal to authority
References educational practices (student essays, professor-undergraduate teaching) as implicit proof that deep work requires unmediated human effort.
build 1ad9551+j7zs · deployed 2026-03-02 09:09 UTC · evaluated 2026-03-02 11:31:12 UTC