194 points by speckx 3 days ago | 378 comments on HN
Moderate positive
Contested
Editorial · v3.7 · 2026-03-01 00:52:47
Summary
Information & Critical Thinking Advocates
Terence Eden's blog post advocates for critical evaluation of technology hype cycles, citing historical examples of failed predictions to challenge the claim that 'this time is different' with artificial intelligence. The content champions informed opinion-formation and resistance to marketing manipulation, while the site's accessible design and free distribution support broad access to information. The strongest engagement is with Article 19 (freedom of expression and information) and Article 26 (education and critical thinking).
"All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists. I don't doubt that AI will be a part of the future - but it is obviously just going to be one of many technologies that are in use."
> No enemies had ever taken Ankh-Morpork. Well technically they had, quite often; the city welcomed free-spending barbarian invaders, but somehow the puzzled raiders found, after a few days, that they didn't own their horses any more, and within a couple of months they were just another minority group with its own graffiti and food shops.
By the looks of it, 2026 might be the year where reality and fiction will finally collide with AI and we'll be able to see if all the hype was warranted.
But like all the previous hype, most of the people who were the loudest won't say they were wrong; they'll move on to the next thing, pretending they were never the ones who portrayed AI as the Holy Grail.
I get that everyone has a strong opinion on what's-going-to-happen-with-AI, but I really think nobody knows.
We're in that part of turbulence where we don't know if the floating leaf is going to go left or right.
The people who will have the hardest time with this transition are those who go all in on a specific prediction and then discover they were wrong.
If you want to avoid that, you can try very very hard to just not be wrong, but as I said, I don't think that's possible.
Instead, we need to be flexible and surf the wave as it comes. Maybe AI fades away like VR. Or maybe it reshapes the world like the internet/smartphones. The hardest thing to do right now, when everyone is yelling, is to just wait and see what happens. But maybe that's the right thing to do.
[p.s.: None of this means don't try to influence events. If you've got a frontier model you've been working on, please try to steer us safely.]
Honestly, the remixes this generation suck compared to priors.
"This time will be different," they said about the Metaverse, ignoring the vast tranches of MUCKs, MUDs, MMOs, LSGs, and repeated digital real estate gold rushes of the past half-century. Billions burned on something anyone who played Second Life, Entropia, FFXIV, EQ2, VRChat, or fucking Furcadia could've told you wasn't going to succeed, because it wasn't different, it just had more money behind it this time.
"NFTs are different", as collectors of trading cards, art prints, coins, postage stamps, and an infinite glut of collectibles looked at each other with that knowing, "oh lord, here we go" glance.
"Crypto is different", as those who paid attention to history remembered corporate scrip, gift cards, hedge funds, the S&L crisis, Enron, the MBS crisis, and the multitude of prior currency-related crises and grifts bristled at the impending glut of fraud and abuse by those too risky to engage in traditional commerce.
And thus, here we are again. "This time is different", as those of us who remember the code generators of yore polluting our floppy drives, and the sales grifts that convinced our bosses their program could replace those expensive programmers, roll our eyes at the obvious bullshit on naked display, then vomit from stress as over a trillion dollars is diverted from anything of value into their modern equivalent - with all the same problems as before.
I truly hate how stupidly people with money actually behave.
LLMs have not radically transformed the world yet because the number of people capable of solving problems by typing into a blinking cursor on a blank screen is actually quite small. Take that subset of the population and reduce it to those who can effectively write communicative prose, and it's even smaller still.
It's just an interface problem. The VT100 didn't change the world overnight either.
I’m doing enterprise coding tasks that used to take a month of whole-team coordination, from mockups through development and testing, in 3 days now. It’s all test-driven development, codex 5.3, and a small team of two people who know how to hold it right orchestrating the agents. There’s no reason not to work this way. The sociotechnical engineering aspects of this change are fascinating and rewarding to solve.
>Blockchain... NFTs
>The problem is, the same dudes who were pumped for all of that bollocks now won't stop wanging on about Artificial Intelligence.
I was firmly in the camp that blockchain was not a viable solution to any problem, and that NFTs sound stupid. I think AI is much different than that list. So, there goes your argument?
The hype around AI is admittedly annoying - especially from the Wall St crowd who don't know how to pronounce 'Nvidia' correctly, and who haven't managed to internalize the fact that the chatbots they use hallucinate.
It really is 'different', though, in the same way the Internet was.
It took about 20 years (ie: since The World ISP) for the Internet to work its way into every facet of life. And the dot com bubble popped half-way through that period of time.
AI might 'underwhelm' for another five or ten years. And then it won't. Whether that's good or bad, I don't know.
I’ve never heard of half of these things, and the other half are mostly consumer electronics or specific product names. The closest example here is quantum computing, which is also a serious technology in development. I think for the OP these are all tech buzzwords that he invests in without understanding what they really are. That’s why he thinks all these unrelated things are the same.
To my mind at least, it is different. I lean heavily on AI for both admin and coding tasks. I just filled out a multipage form to determine my alimony payments in Germany. Gemini was an absolute godsend, helping me answer questions, translate to English, draft explanations, and write emails requesting time extensions to the Jugendamt case worker.
This is super scary stuff for an ADHDer like me.
I have an idea for a programming language based on asymmetric multimethods and whitespace-sensitive, Pratt-parsing-powered syntax extensibility. Gemini and Claude are going to be instrumental in getting that done in a reasonable amount of time.
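For readers unfamiliar with the Pratt parsing the comment mentions: the core trick is that each operator carries a "binding power", and a recursive loop only consumes operators that bind tighter than the caller's context. A minimal sketch (all names illustrative; this is not the commenter's actual language, just the textbook technique):

```python
# Minimal Pratt parser sketch: binding powers encode operator precedence.
BINDING_POWER = {"+": 10, "-": 10, "*": 20, "/": 20}

def parse(tokens, min_bp=0):
    """Parse a flat token list like ["1", "+", "2", "*", "3"]
    into a nested tuple AST, mutating `tokens` as it goes."""
    left = tokens.pop(0)  # in this sketch, atoms are just literals
    # Keep consuming operators that bind more tightly than our caller's context.
    while tokens and BINDING_POWER.get(tokens[0], 0) > min_bp:
        op = tokens.pop(0)
        # Recurse with the operator's own binding power, so that
        # higher-precedence operators to the right are grabbed first.
        right = parse(tokens, BINDING_POWER[op])
        left = (op, left, right)
    return left
```

For example, `parse(["1", "+", "2", "*", "3"])` yields `("+", "1", ("*", "2", "3"))`: the `*` binds tighter, so it is folded into the right operand of `+`. Extending the operator table at runtime is what makes the technique attractive for syntax extensibility.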
My daily todos are now being handled by NanoClaw.
These are already real products, it's not mere hype. Simply no comparison to blockchain or NFTs or the other tech mentioned. Is some of the press on AI overly optimistic? Sure.
But especially for someone who suffers from ADHD (and a lot of debilitating trauma and depression), and can't rely on their (transphobic) family for support -- it's literally the only source of help, however imperfect, which doesn't degrade me for having this affliction. It makes things much less scary and overwhelming, and I honestly don't know where I'd be without it.
If you can't distinguish the actual utility and progress of AI from its annoying hype-men, then it's hard to take your dismissal of AI seriously.
Failure to appreciate changes in AI will have left you calling every shot wrong over the past 5 years. While AI models continue to improve at an exponential rate, you'll cling to your facile maxims like "dude it's just predicting the next token it isn't real intelligence".
Actually IT IS different. If they manage to create viable small nuclear reactors or quantum computers, the world will change like it changed with Watt's steam engine.
Why isn't he talking about the Internet, trains, electricity, nuclear bombs, rockets, aviation, or engines? Because they worked, like AI works today.
All of them were bubbles at the time and they changed the world forever. AI is changing the world AND it is a bubble.
AI is here to stay. It will improve and it will have consequences. The fact that a robot can do things with its hands is actually significant, whether you like it or not.
"Today, I'm speaking with Stephen C. Meyer, Director of The Discovery Institute's Center for Science and Culture, and George D. Montañez, Director of the AMISTAD Lab at Harvey Mudd College–both of whom are extremely knowledgeable on the topic of artificial intelligence. During the course of our conversation, they discuss the asymmetry between human intelligence & AI, the inability of AI to ascribe meaning to raw data, and the limitations of large language models. The real question though is: are we screwed? Let's find out."
> and we'll be able to see if all the hype was warranted.
Umm, what? For the past 3 years, every year I've said something along the lines of "even if models stop improving now, we'll be working on this for years, finding new ways to use it and make cool stuff happen". The hype is already warranted. To have used these tools and not be hyped is simply denial at this point.
Their Ninebot escooters are pretty damn good, far better than most random brands.
I spent most of Covid in VRChat and met my current live-in gf, so the metaverse was real for me too.
I also made decent money selling crypto, so that part was real for me too.
And AI coding, for as dumb as even the best models are, still enabled me to create things that I wanted to, but wouldn't have had time or gotten nearly as far without.
I dunno if the author realizes, but all the things they mentioned did materialize in one way or another, just not exactly how the hype described it.
Maybe if they could let go of some of the cynicism, they could find something to be optimistic about. Nothing ever goes exactly as planned, but that doesn't mean nothing is good.
AI is real, but the socio-political environment is far from conducive to some form of productive use of it - as opposed to using it as a war machine. AI isn't going to fail in that role, but very few will be happy about it.
I mean, disillusionment is the least of my worries.
> most of the people that were the loudest won't say they were wrong
I was so expecting to find this wind-up aimed at those peddling the "AI is hype" laziness.
It's laziness because they have few CS fundamentals to base such claims on; the deductions can be made, just not clearly by people who need to study a lot more.
It's like watching an invisible train (visible to those with strong CS) rolling down the tracks at a leisurely pace. Those sitting in their stalled car on the tracks are busy tweeting about the "AI HYPE TRAIN." Until it wrecks their car, the gimmick is free oxygen. It's a lot easier to write articles than it is to build GPUs and write programs.
I work for an old enterprise, so far rather conservative with LLM/AI usage. However, copilot cli adoption in the last 2 weeks is spreading like wildfire. Codex 5.3, a good instructions file, and it works. Features are getting done and delivered in days, proper test coverage is done, proper documentation is in place. Onboarding to it is also very fast.
Many of my industry friends and I were skeptics about all the things the OP mentions, still am. And yet, I am able to push 30-40K lines of nearly perfect code a day now.
It's different just like the steam engine was different, except technology moves 100x faster now than it did then. It's different and the same.
There are all sorts of algorithms in use that were once thought of as AI, but transitioned to being mere algorithms well before they entered public awareness, if they did that at all. Some are still useful and used everywhere, but they have never been thought of as AI by the public. For them, AI is a term that has long been reserved for some far-off, sci-fi future.
LLMs are not artificial general intelligence (i.e. not sci-fi AI). Why haven't they transitioned to being mere algorithms by now? Why is the public being told AI is finally arriving when it's really just another algorithm?
We have some truly slick and shady corporations involved in the bubble right now, and they're marketing LLMs like tobacco. LLMs have been pushed out, at immense cost, to the public in a way that makes them more directly accessible to average people than any past algorithm. Young children can ask an LLM to do their homework for them. Middle managers can ask an LLM to create a (shitty) ad campaign for them. Corporations have gone to tremendous expense to make that widely available and, for the moment, mostly free. They seem to be following the Joe Camel school of marketing. Get them hooked while they're young so they come to you first when they're older! The only difference is that nobody is stepping in to stop the new Joe Camel from handing out free samples to kids.
Then there's the "go big" aspects of the bubble. The major competitors are trying to out-spend each other to dominance, but the sums are so colossally big that their bubble is affecting global GPU, memory, and storage prices. This bubble is going to stress power grids wherever it operates and do considerable environmental harm. The financial games being played behind the bubble are absolutely stupid. The results, so far, are tantalizing for billionaires. LLMs offer the promise of being able to fire all their pesky and annoying human workers. It won't deliver on that, and none of these companies is ever going to make enough to pay their debts. There might be "too big to fail" government bailouts, but there are going to be some big bankruptcies too.
Useful algorithms will come out of all this, a lot of tears too, but not "AI".
There's another point, too. Detractors say LLMs will never advance to whatever threshold they consider meaningful. Fine. We're working on other paradigms, too, though. Just because a lot of people are productizing LLMs doesn't mean the state of the art isn't advancing in parallel and AGI isn't in the cards.
Few seem to understand that both of the above can be true. The parallel you draw to the internet revolution is apt; dot-coms were both a bubble and changed everything.
Yeah that comparison doesn't pass the smell test. Blockchain/crypto were purely financial instruments and for better or worse, a new financial instrument is very different than a new tech innovation; tbh there was a thin veneer of tech when it comes to crypto/blockchain, but the magic was because of the money, not because of the tech.
AI is different because the magic clearly is because of the tech. The fact that we get this emergent behavior out of (what essentially amounts to) polynomial fitting is pretty surprising even for the most skeptical of critics.
Agree, LLMs are just another tool. Treating them as chatbots is a very basic way of using them. The future is intelligent engineers embedding them in traditional systems and having them perform specific roles.
I think the author is saying that a specific crowd, which happened to be very vocal and excited about web3 and NFTs, is also very vocal and excited about AI. In my personal experience they are right, a lot of the hustler types around me who were trying to get everyone to "invest" in digital land are now doomposting about AI.
It's not a very legible situation for people outside of the profession, and a lot of them believe it's just another grift that will blow up in a few years.
I think a good analogy will be the way word processors changed printing. Suddenly anyone with access to a computer had the ability to do professional-level editing and layout. Most of them didn’t have the taste or skills to use the tools to the fullest, but it still opened up a ton of possibilities that weren’t available before, because it was never practical to hire an actual professional to do a poster for a dinky church bake sale. But now, church bake sales can have pretty slick-looking posters (and websites), depending on whether any of the volunteers cares enough to make one.
The stuff LLMs will democratize will be a lot more impactful than nice posters for car wash fundraisers though. So in that sense it will be different, but I don’t think it will crack the market for proficient experts in the field in the same way photoshop didn’t destroy graphic design and CAD didn’t destroy drafting. It may get rid of the market for a lot of the second-tier bootcamp grad talent though, so I wouldn’t be getting into that right now if I could help it.
I'd say AR & VR were hyped to be as big as AI is now, but just haven't fully delivered on the promise yet. 3D printing was similarly hyped for a time. Same with blockchain. Nuclear power in the 50s was hyped to be the future of energy.
The point is to take the hype with a grain of salt and knowledge that not all hyped technologies transformed the world as promised. Maybe AI is like the internet or electricity. But maybe the claims about AGI/ASI and full automation are just hype.
"This time is different" has been correct for every major technological shift in history. Electricity was different. Antibiotics were different. Semiconductors were different.
Gen AI reached 39% adoption in two years (internet took 5, PCs took 12). Enterprise spend went from $1.7B to $37B since 2023. Hyperscalers are spending $650B this year on AI infra and are supply-constrained, not demand-constrained. There is no technology in history with these curves.
The real debate isn't whether AI is transformative. It's whether current investment levels are proportionate to the transformation. That's a much harder and more interesting question than reflexively citing a phrase that pattern-matches to past bubbles.
The only people underwhelmed by AI in February 2026 are people who have formed an identity around being AI skeptics over the last couple years and are struggling to shed it. I haven't met anyone who has seriously used the new models who isn't at least a bit awed and disturbed.
> I was firmly in the camp that blockchain was not a viable solution to any problem, and that NFTs sound stupid. I think AI is much different than that list. So, there goes your argument?
Squares are rectangles. The existence of rectangles that aren't squares doesn't negate that.
You need to re-evaluate your logic here; if you were a Blockchain / NFT booster who doesn't believe AI is different you could argue you've disproved their argument. You have not.
Content champions critical evaluation of information and resistance to manufactured consensus. Author explicitly interrogates tech hype narratives and advocates for independent thinking, demonstrating commitment to informed discourse.
FW Ratio: 67%
Observable Facts
Post directly critiques the repeated use of 'This time is different' as a manipulation tactic in tech promotion.
Author presents counterargument: 'Seems like you say that about every passing fancy - and they all end up being utterly underwhelming.'
Social sharing buttons visible for Bluesky and other platforms.
Inferences
The systematic enumeration of failed predictions serves as a fact-based challenge to credulous acceptance of hype, supporting informed opinion-formation.
Accessible presentation formats and free distribution suggest structural commitment to broad-based information access.
Post resists interpretations that would distort UDHR principles. Author defends against misuse of 'this time is different' framing to justify uncritical acceptance of hype, protecting rights to informed discourse.
FW Ratio: 50%
Observable Facts
Post explicitly rejects distorted claims that AI is uniquely revolutionary, placing it within historical pattern of overstated tech promises.
Inferences
Defense against ideological capture of discourse about technology aligns with protection against misuse of rights concepts.
Post advocates for critical thinking and intellectual development, opposing uncritical acceptance of marketing narratives. Encourages readers to develop capacity for independent reasoning about technological claims.
FW Ratio: 60%
Observable Facts
Author presents evidence-based counter-narrative to tech enthusiasm, demonstrating reasoned argumentation.
Use of quotations from Terry Pratchett and Sir John Templeton provides illustrative examples for learning.
Multiple theme options support different cognitive and visual accessibility needs.
Inferences
Systematic deconstruction of hype cycles models critical analytical thinking for readers.
Accessible format facilitates educational engagement for diverse learners.
Content critiques uncritical enthusiasm for technology hype cycles and advocates for rational skepticism, which aligns with UDHR values of informed decision-making and resistance to manipulation.
FW Ratio: 60%
Observable Facts
The post lists multiple failed or overhyped technologies: 3D TV, Blockchain, NFTs, Quibi, Metaverse, Stadia, among others.
Author quotes Sir John Templeton: "The investor who says, 'This time is different,' when in fact it's virtually a repeat of an earlier situation, has uttered among the four most costly words in the annals of investing."
The post argues the ideology of 'winner takes all' is unsustainable and not supported by reality.
Inferences
The enumeration of failed hypes frames technology promises with skepticism, suggesting critiques of propaganda and false certainty.
The post implicitly advocates for critical thinking and awareness of manipulation tactics used in tech promotion.
Post asserts limitations on unfounded claims: 'the ideology of winner takes all is unsustainable and not supported by reality.' This challenges extreme ideological positioning that threatens collective flourishing.
FW Ratio: 50%
Observable Facts
Final statement: 'The ideology of "winner takes all" is unsustainable and not supported by reality.'
Inferences
Assertion that ideologies detached from reality are unsustainable reflects commitment to collective well-being over unrestrained individualism.
Implicit advocacy for equal dignity through mocking the uncritical acceptance of hype by 'dudes' and appeal to collective human reasoning against manipulation.
FW Ratio: 50%
Observable Facts
Author notes that the same group of people ('nearly always dudes') repeatedly promote failed technologies without learning.
Inferences
Critiquing blind repetition of behavior patterns suggests concern for rational equality and shared dignity in discourse.
Privacy Policy
—
No privacy policy or data handling statements observed on-domain.
Terms of Service
—
No terms of service detected on-domain.
Identity & Mission
Mission
—
No explicit mission statement observed on-domain.
Editorial Code
—
No editorial standards or codes of conduct observed on-domain.
Ownership
—
Ownership not clearly identified on page; appears to be individual blog.
Access & Distribution
Access Model
+0.10
Article 19 Article 26
Content appears freely accessible without paywall or registration, supporting broad access to information and ideas.
Ad/Tracking
—
No advertising or tracking mechanisms observed on-domain.
Accessibility
+0.15
Article 19 Article 26
Theme switcher with dark/light/eInk/xterm/nude modes demonstrates commitment to accessible presentation for diverse users including those with vision impairments or specific accessibility needs.
Site provides free public access to editorial content without paywalls; accessible theme switcher expands ability for diverse readers to consume information; social sharing buttons facilitate expression and dissemination.
Quotation from Sir John Templeton: '16 rules for investment success' - uses authority figure to support skepticism about tech hype.
loaded language
Use of 'bollocks', 'wanging on', and 'utterly underwhelming' to characterize tech hype; emotionally charged language to frame tech enthusiasm negatively.
build 1ad9551+j7zs · deployed 2026-03-02 09:09 UTC · evaluated 2026-03-02 10:41:39 UTC
Support HN HRCB
Each evaluation uses real API credits. HN HRCB runs on donations — no ads, no paywalls.
If you find it useful, please consider helping keep it running.