This technical essay on software development history documents repeated industry promises to automate programming work, showing that employment demand has actually increased despite multiple waves of simplification tools. While primarily a historical and technical analysis rather than human rights advocacy, the article implicitly supports freedom of expression, employment continuity, and informed participation in cultural discourse through its accessible, critical examination of technology industry narratives and persistent automation hype cycles.
The article talks about how 'software development will be democratized', but the current LLM hype is quite the opposite. The LLMs are owned by large companies and are practically impossible for any individual to train, if only because of energy costs. The situation where I am typing my code on my Linux machine is much more democratic.
All the other attempts failed because they were just mindless conversions of formal languages to formal languages. Basically glorified compilers. Either the formal language wasn't capable enough to express all situations, or it was capable and thus it was as complex as the one thing it was designed to replace.
AI is different. You tell it in natural language, which can be ambiguous and not cover all the bases. And people are familiar with natural language. And it can fill in the missing details and disambiguate the others.
This has been known to be possible for decades, since (simplifying a bit) a (non-technical) manager can tell an engineer what to do in natural, ambiguous language, and the engineer will do it. Now the AI takes the place of the engineer.
Also, before AI I personally never believed that programming would disappear, so the argument that "this has been hyped before" doesn't touch my soul.
I have no idea why this is so hard to understand. I'd like people to reply to me in addition to downvoting.
Until a year ago I believed as the author did. Then LLMs got to the point where they sit in meetings like I do, make notes like I do, have a memory like I do, and their context window is expanding.
Only issue I saw after a month of building something complex from scratch with Opus 4.6 is poor adherence to high-level design principles and consistency. This can be solved with expert guardrails, I believe.
It won’t be long before AI employees are going to join daily standup and deliver work alongside the team with other users in the org not even realizing or caring that it’s an AI “staff member”.
It won’t be much longer after that when they will start to tech lead those same teams.
Developers are “unwanted overhead” until the customer money threatens to walk out the door. They’re going to damage their future products, probably shrink their customer base (fewer consumers), and then sit there looking like gaffed fish when the budget ink turns red. “Who would have thought…”
It is democratising from the perspective of non-programmers: they can now make their own tools.
What you say about big tech is true at the same time, though. I worry about what happens when China takes the lead and no longer feels the need to release open models. The first hints are already showing: advance access to ds4 only for Chinese hardware makers.
After 2 years of using all of these tools (Claude C, Gemini CLI, opencode with all models available) I can tell you it is a huge enabler, but you have to provide these "expert guardrails" yourself by monitoring every single deliverable.
For someone who is able to design an end-to-end system by themselves, these tools offer big time savings, but they come with dangers too.
Yesterday I had a mid-level dev on my team proudly present a web tool he "wrote" in Python (to be run on localhost) that runs kubectl in the background and presents things like versions of images running in various namespaces, etc. It looked very slick; I can already imagine the product managers asking for it to be put on the network.
So what's the problem? For one, no threading whatsoever, no auth, all queries run in a single thread, and on and on. A maintenance nightmare waiting to happen. That is the risk of a person who knows something, but not enough, building tools by themselves.
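To make one of those risks concrete, here is a minimal, hypothetical sketch (not the actual tool) of why shelling out from a request handler with interpolated user input is dangerous. `echo` stands in for `kubectl` so the snippet is runnable anywhere:

```python
import subprocess

def list_images_unsafe(namespace: str) -> str:
    # shell=True plus string interpolation: a namespace like
    # "default; rm -rf /" becomes part of the command line itself.
    return subprocess.run(
        f"echo kubectl get pods -n {namespace}",  # 'echo' stands in for kubectl
        shell=True, capture_output=True, text=True,
    ).stdout

def list_images_safer(namespace: str) -> str:
    # Argument list, no shell: the namespace is passed as one argv
    # entry and cannot inject additional commands.
    return subprocess.run(
        ["echo", "kubectl", "get", "pods", "-n", namespace],
        capture_output=True, text=True,
    ).stdout
```

With a payload like `"default; echo INJECTED"`, the unsafe version executes the injected command (two lines of output), while the safer version just echoes the payload as inert text (one line). None of this even touches the missing auth, which is a separate problem.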
We have yet to invent groundbreaking tech that transcends either human nature or the banal depravity that stems from the profit motive at scale. Prior history of major tech innovations may therefore offer some insight into the expected outcomes of the current hype wave around AI. The notion that technology so cleanly breaks from underlying social paradigms as to be wholly unpredictable is one of the tech industry's most persistently naive and destructive mythologies.
Programmers have enjoyed an occupation with solid stability and growing opportunities. AI challenging this virtually overnight is a tough pill to swallow. Naturally, many subscribe to the hope that it will fail.
How far AI will succeed in replacing programmers remains to be seen. Personally I think many jobs will disappear, especially in the largest domains (web). But I think this will only be a fraction and not a majority. For now, AI is simply most useful when paired with a programmer.
The thing about talking to computers is less the formality and more the specificity. People don't know what they want. To use an LLM effectively, you need to think about what you want with enough clarity to ask for it and check that you're getting it. That LLMs accept your wishes in the form of natural language instead of something with a LALR(1) grammar doesn't magically obviate the need for specificity and clarity in communication.
A manager is not going to handle all the nitty-gritty details that an engineer knows. Fine, say they can ask an LLM to make a web portal.
Does he know about SQL injection? XSS?
Maybe he knows a little about security and asks the LLM to make a secure site with all the protection needed. But how does the manager know it works at all? If you only discover an issue in a critical part of the software after your users' data is stolen, how bad is the fallout going to be?
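To make the injection risk concrete, here is a minimal, hypothetical `sqlite3` sketch (schema and names invented for illustration) of the difference between an interpolated and a parameterized query:

```python
import sqlite3

# Toy schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name: str):
    # String interpolation: the input becomes part of the SQL text,
    # so a payload like ' OR '1'='1 rewrites the query itself.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
```

The unsafe version returns every row for the payload; the parameterized one returns nothing, because no user is literally named `' OR '1'='1`. A manager clicking through the happy path would never see the difference.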
How good a tool is also depends on who's using it. Managers are obviously not engineers, unless they were engineers before becoming managers; but you are saying engineers are not needed. So where is the engineer-turned-manager going to come from? I'm sure we're not growing them on engineering trees.
The closer you get to releasing software, the less useful LLMs become. They tend to go into loops of 'Fixed it!' without having fixed anything.
In my opinion, attempting to hold the hand of the LLM via prompts in English for the 'last mile' to production ready code runs into the fundamental problem of ambiguity of natural languages.
From my experience, those developers that believe LLMs are good enough for production are either building systems that are not critical (e.g. 80% is correct enough), or they do not have the experience to be able to detect how LLM generated code would fail in production beyond the 'happy path'.
I spent the last two weeks at work building a whole system that deploys automated Claude Code agents in response to events. Even before I finished, it was already doing useful work, and now it is automatically handling Jira tickets and making PRs.
Funny part is we've already had this exact thing happen with outsourcing. It sure looked like a bargain until you got to such pesky details as correctness and maintainability.
Right, people misuse this term "democratized" all the time. Because it sounds nice. But it's incorrect.
Democracy is about governance, not access.
A "democratized" LLM would be one whose users collectively made decisions about how it was managed. Or one whose owning company was run democratically.
High P: freely publishes critical analysis without apparent censorship
Editorial: +0.30 · SETL: +0.17
The article exercises freedom of expression by presenting detailed historical analysis and critical commentary on technology industry narratives without self-censorship or promotional bias
FW Ratio: 60%
Observable Facts
The article presents critical analysis of AI industry hype cycles with specific historical examples and dates
The site publishes the article without apparent registration or paywall requirements
The author explicitly questions industry claims about programmer elimination across multiple generations, stating 'promises deserve scrutiny'
Inferences
The public accessibility indicates the author and platform exercise freedom to publish critical content without censorship
The skeptical framing suggests the author intended to inform readers about recurring patterns rather than promote uncritical acceptance of industry narratives
Medium F: frames technology history as supporting employment continuity despite automation
Editorial: +0.15 · SETL: ND
The article documents how multiple generations of automation tools and code-generation promises have failed to reduce programmer employment, showing that market demand for developers continues to grow
FW Ratio: 60%
Observable Facts
The article explicitly states 'no-code has not reduced the demand for traditional developers' and 'the market for software developers continues to grow'
Multiple historical examples show that tools promised to eliminate programmers instead created new categories of programming work
The article documents how increased digital applications have driven continued hiring demand despite 60+ years of automation promises
Inferences
The documented pattern of sustained employment demand suggests automation fears regarding programmer job elimination may be overstated based on historical precedent
The analysis implies that software development work has intrinsic complexity that resists complete automation, supporting employment security
Medium C: covers software development history enabling informed cultural discourse
Editorial: +0.15 · SETL: -0.10
The article contributes to informed public participation in cultural discourse about technology by providing historical context and critical analysis of AI hype cycles spanning six decades
FW Ratio: 60%
Observable Facts
The article provides detailed historical examples spanning 60+ years of software development narratives from COBOL through large language models
The site publishes this content freely without access restrictions or paywalls
The article explicitly encourages critical thinking about technology industry claims, stating that 'understanding this history is essential'
Inferences
The accessible, detailed analysis enables readers to engage more critically with current AI and automation narratives
The public availability supports participation in cultural discourse about technology cycles, hype patterns, and realistic expectations
Low A: advocates for intellectual honesty in professional discourse
Editorial: +0.10 · SETL: ND
The article implicitly advocates for honest analysis and critical thinking about technology claims rather than uncritical acceptance of industry marketing narratives
FW Ratio: 60%
Observable Facts
The article examines hype critically, noting that promises 'deserve scrutiny' and systematically documenting repeated false predictions
The author's site tagline is 'Honest takes on code, AI, and what actually works'
The article concludes with principles-based analysis of fundamental software complexity rather than promotional narratives
Inferences
The emphasis on honest debunking and critical examination supports professional responsibility and truthfulness in discourse
The historical documentation serves to promote evidence-based reasoning over marketing claims and unfounded hype
Medium F: frames complexity barriers as persistent obstacles to knowledge democratization
Editorial: 0.00 · SETL: ND
The article documents how tools promised to democratize programming through simplification have consistently failed to eliminate the need for specialized expertise, showing education barriers persist despite technological advancement
FW Ratio: 60%
Observable Facts
The article states 'COBOL did not eliminate programmers' and discusses how 4GLs and no-code platforms also failed to democratize software creation
The article concludes 'translating human intent into correct, efficient, maintainable, secure software is hard' due to inherent complexity
Multiple examples show that sophisticated users of simplification tools were 'specialized developers who understood both the tools and underlying principles'
Inferences
The historical pattern suggests that knowledge democratization faces fundamental limits rooted in software complexity rather than tool design
The article presents neutral documentation of failed democratization rather than advocacy for education rights or solutions
The phrases 'the pattern repeats,' 'The pattern continued,' and 'the pattern holds' appear approximately 4 times as a central rhetorical device to establish the article's cyclical thesis about technology hype
build 1ad9551+j7zs · deployed 2026-03-02 09:09 UTC · evaluated 2026-03-02 10:41:39 UTC