70 points by harran 3 days ago | 40 comments on HN
I've been building implementation guides for solo founders and small businesses
trying to use AI practically, so I read the PwC CEO Survey closely when it dropped.
The headline number (12% of CEOs generating measurable returns) gets cited a lot, but I think the more revealing finding is the 56% with zero financial impact.
These are companies with enterprise AI budgets, dedicated teams, and access to every tool on the market and the majority are getting nothing back.
PwC calls it "Pilot Purgatory." The pattern: AI gets deployed in isolated, tactical projects that don't connect to revenue (internal tooling, content drafts, meeting summaries), while the 12% they call the "Vanguard" are using AI in the product and customer experience itself (44% of the Vanguard vs 17% of everyone else).
What I found interesting from a solo founder angle: the structural barriers causing large companies to fail at this (bureaucracy, legacy systems, misaligned incentives, multi-department approval processes) don't exist at the one-person scale.
The bottleneck for small operators is different: it's not knowing which workflows are worth building, in what order, and what "system-level" vs "task-level" use actually means in practice.
Curious if others have a take on why the enterprise failure rate is this high despite the investment, and whether the Vanguard pattern (AI into the product, not just the back office) matches what people are seeing in practice.
The average person is not ready for AI yet. Microsoft's Copilot has a low adoption rate. Data centers have big energy bills, a shortage of clients, and, for most of them, no ROI.
The question is whether legacy players can drive strategic growth that changes their trajectory to meet the AI-native disrupters. This is a data point.
> Here's the thing. If you're a solo founder watching this play out from the sidelines, this isn't discouraging. It's the biggest competitive window you've had in years. And most people aren't looking at it that way.
The vast majority of people I'm coming across, both online and here where I live, have absolutely no knowledge or understanding of how to work with AI.
From working with Perplexity/Sonar and GPT5, I've learned that most people do not treat it like an intelligence; they treat it like a search engine with better text output.
This article reminded me of that.
I find it extremely inaccurate to claim that the issue with big companies is structure, because that - as happens far too often - ignores the root cause:
The people in charge, who don't make the necessary smart and radical-seeming decisions.
I know it's nowadays rather unpopular to point at actual, real shortcomings of people, but that's how it is. Someone, at some point, made dumb decisions or failed to make smart decisions.
"Let's put humanity's greatest invention, a functional artificial intelligence, to the task of doing paperwork."
Why aren't they making smart decisions? Well... because they can't!
It's not about structure, it's about the failure to recognize potential and ability. When you're the boss, then you make decisions which make things happen.
They can make dumb decisions, like using AI solely for paperwork, or they can make smart decisions, like causing changes in the company that enable the gigantic potential.
Or, in other words:
Handing a monkey a book doesn't magically make the monkey grasp the power it's holding in its hands.
> Not because you have more resources. Because you have fewer barriers.
No. It's all about decisions, decision-making and the ability to make smart decisions. When you're the person who makes the decisions, then you can take down the barriers, work around them, or at least start trying to figure out how to do so. Everything else is just excuses.
Barriers don't make decisions. People do. The barriers exist in their heads more than anywhere else. When you're incapable of making smart decisions, then the problem is you.
I'm not saying that CEOs (or devs, for that matter) lie. But on AI I don't think we can rely on any self-reported results, positive or negative, based on surveys.
There is just too much incentive to say... no, to BELIEVE... both that AI yields 10x productivity and that AI is useless.
I am swinging wildly between the two too, personally. The more time I spend with AI, the more I am developing this split personality where one part of me says "I hope this thing blows up before I lose my job and my children never have the chance to have an office job again" and the other one says "AI is actually not easy! You have to know how to use it well, develop tools, plan, curate your context... This means I am acquiring useful skills here, trying to port Flappy Bird to COBOL".
And obviously, depending on which side controls my cortex in that moment, I may err on the "AI is useless crap" or the "AI all the things!" side.
I think you’re pointing at something real. Adoption lag matters.
If the end user doesn't change behavior, ROI won’t show up no matter how much infrastructure gets built.
I'd add another layer though: expectations. Many CEOs implicitly treat AI like deterministic software: install it, flip the switch, get linear productivity gains.
But these systems are probabilistic. They're "slippery": output quality varies, edge cases multiply, and oversight is required. That makes ROI non-linear.
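A toy illustration of that non-linearity (every number here is invented for illustration, not from the PwC survey): if each AI-assisted task carries a fixed review cost, and a bad output costs rework time instead of saving time, the expected per-task savings flip sign as output quality drops.

```python
import random

def expected_net_minutes(p_good, saved=30, rework=45, oversight=5,
                         trials=100_000, seed=0):
    """Monte Carlo estimate of net minutes saved per AI-assisted task.

    Every task pays a fixed `oversight` (review) cost; a good output
    saves `saved` minutes, a bad one costs `rework` minutes of cleanup.
    All parameter values are illustrative assumptions, not measurements.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if rng.random() < p_good:
            total += saved - oversight   # good output: net win after review
        else:
            total -= rework + oversight  # bad output: review plus rework
    return total / trials

for p in (0.95, 0.80, 0.60):
    print(f"p_good={p:.2f} -> net {expected_net_minutes(p):+.1f} min/task")
```

With these made-up parameters, savings are solidly positive at 95% good outputs and negative by 60%; the point is only the shape of the curve, not the specific numbers.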
> The bottleneck for small operators is different: it's not knowing which workflows are worth building, in what order, and what "system-level" vs "task-level" use actually means in practice.
Are you saying that from what you see, small operators also fail to get ROI, but for different reasons?
I work in a large enterprise. On one hand, we’re being told we should think of ways to use AI more. On the other hand, to even start (beyond just using Copilot to develop what I’m already working on), I need to have an idea and sell it to some AI board to get their blessing. At that point, I will have a microscope on me, tracking everything, to watch if this wild experiment is a success or failure. No thanks.
If they really want me to try something new, they will give me the space to try things where I am free to fail quietly and privately, pivot, and continue trying things. Asking for ship dates on day one is no way to operate projects with so many unknown unknowns. No one wants to learn and fail with an audience.
The following is my take on what's happening — outside the software-development domain, which is special vis-à-vis LLMs for obvious reasons.
Given worker access to generative LLMs, plus training and motivation to use them, LLMs are effective for certain workflows. Those workflows tend to be personal, one-offs, or summarization in nature: write a bash script for this headache I have every day; tell me what colleague X is trying to say in his 1200-word email, since his writing is garbage and he can't get to the point; "what's the Excel formula syntax for this other thing that I keep forgetting?"; etc.
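For concreteness, a hypothetical instance of that "personal one-off" category (the chore, filenames, and folder layout are all invented; the parent's example was a bash script, sketched here in Python):

```python
"""A throwaway helper of the kind workers ask an LLM to write:
sweep stray *.log files out of a working directory into a dated
archive folder. Paths and naming are illustrative assumptions."""
import shutil
from datetime import date
from pathlib import Path

def archive_logs(workdir=".", archive_root="archive"):
    """Move every *.log in `workdir` into archive/<today>/ and
    return the list of moved filenames."""
    dest = Path(workdir) / archive_root / date.today().isoformat()
    moved = []
    for log in Path(workdir).glob("*.log"):
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(log), str(dest / log.name))
        moved.append(log.name)
    return moved
```

It saves the author a few minutes a day, which is exactly the point: the benefit lands on the individual's coordination overhead, not on anything the firm bills for.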
So the time and mental-energy savings inure to the workers, mostly from coordination tasks that don't directly create core value. And then those savings aren't "reinvested" into value-producing activities whose benefits would inure to the firm, because the workers have no incentive to do so; don't know how to create core value; don't have the skills to create core value; or aren't permitted to do those activities by higher-ups.
Bottom line: LLMs are eating busywork coordination activities — hence no impact on most firms' bottom lines.
Exactly!
Having the budget isn't enough. Legacy players need to adapt processes and incentives to turn AI investment into real strategic advantage, or AI-native disruptors will outpace them.
Piggybacking off what you said we should circle back, lean in and look for synergies, shift the paradigm and do a deep dive on leveraging the low hanging fruit deliverables.
Let's take this offline and put it on the backburner.
It's quite possible you are correct, but since gen-AI is good at generating stuff, it is taking on busywork, which might bring the ROI back to ~zero. In my business I suspect gen-AI has provided a modest (single-digit) productivity boost, but, due to other factors, I can't quantify the revenue impact.
I think an interesting analogy for what many of us are experiencing here is the phenomenon of doom scrolling: deep down we know we should put it down (and go outside), but the immediate experience of it, and the value it feels like it's offering in the moment, keeps you scrolling and scrolling.
Similarly, many have reported a sense of, say, programming productivity, but a more objective reflection later reveals the myriad issues with constantly and subtly ushering in large quantities of lower-quality code and blowing past any caution or rigorous discipline that would come with laying down lines of code "by hand".