I don't agree that the code is cheap.
It doesn't require a pipeline of people to be trained and that is huge, but it's not cheap.
Tokens are expensive.
We don't know what the actual cost is yet.
We have startups that aren't turning a profit buying up all the capacity of the supply chain.
There are so many impacts here that we don't have the data on.
I'm very curious to see how this will affect the job market. All the recent CS grads, all the coding bootcamp graduates: where will they end up? And then there are mid-level and senior engineers who will have to change how they work to oversee the hordes of AI agents that all the hype evangelists are pushing on the industry.
I'm going to shill my own writing here [1], but I think it addresses this post in a different way. Because we can now write code so much faster, everything downstream from that is just not ready for it. Right now we might have to slow down, but in the medium and long term we need to figure out how to build systems in a way that can keep up with this increased influx of code.
> The challenge is to develop new personal and organizational habits that respond to the affordances and opportunities of agentic engineering.
I don't think it's the habits that need to change, it's everything. From how accountability works, to how code needs to be structured, to how languages should work. If we want to keep shipping at this speed, no stone can be left unturned.
Code generation is cheap in the same way talk is cheap.
Every human can string words together, but there's a world of difference between words that raise $100M and words that get you slapped in the face.
The raw material was always cheap. The skill is turning it into something useful. Agentic engineering is just the latest version of that. The new skill is mastering the craft of directing cheap inputs toward valuable outcomes.
I think there's a good parallel with AI images: generating pictures has gotten ridiculously easy, yet producing art that is meaningful or wanted by anyone has gotten only mildly easier.
Despite the explosion of AI art, the amount of meaningful art in the world has increased only by a tiny amount.
It's like the allegory of the retired consultant's $5000 invoice (hitting the thing with a hammer: $5, knowing where to hit it: $4995).
Yeah, coding is cheaper now, but knowing what to code has always been the more expensive piece. I think AI will be able to help there eventually, but it's not as far along on that vector yet.
> Code has always been expensive. Producing a few hundred lines of clean, tested code takes most software developers a full day or more. Many of our engineering habits, at both the macro and micro level, are built around this core constraint.
> ...
> Writing good code remains significantly more expensive
I think this is a bad argument. Code was expensive because you were trying to write good code in the first place.
When you drop your standards, writing generated code is quick, easy and cheap. Unless you're willing to change your standards, getting it back to "good code" still takes an equivalent effort.
There are other ways to frame the argument for agentic coding; this is just a really, really bad argument to kick it off.
The cost of code never lived in the typing — it lived in the intent, the constraints, and the reasoning that shaped it.
LLMs make the typing cheap, but they don’t make the reasoning cheap.
So the economics shift, but the bottleneck doesn’t disappear.
I basically fully agree with this. I am not sure how to handle the ramifications of this in my day to day work yet. But at least one habit I have been forming is sometimes I find that even though the cost of writing code is immensely cheap, reviewing and validating that it works in certain code bases (like the millions of line mono repo I work in at my job) is extremely high. I try to think through, and improve, our testability such that a few hundred line of code change that modifies the DB really can be a couple of hours of work.
Also, I do want to note that these little "Here is how I see the world of SWE given current model capabilities and tooling" posts are MUCH appreciated, given how much you follow the landscape. When a major hype wave is happening and I feel like I am getting drowned on twitter, I tend to wonder "What would Simon say about this?"
Every modern (and not so modern) software development method hinges on one thing: requirements are not known, and even if known, they'll change over time. From this you get the goal of "good" code, which is code that is easy to change.
Do current LLM-based agents generate code which is easy to change? My gut feeling is no at the moment. Until they do, I'd argue code generated by agents is only good for prototypes. Once you can ask your agent to change a feature and be 100% sure it won't break other features, then you won't care how the code looks.
Here's an easy to understand example. I've been playing EvE Online and it has an API with which you can query the game to find information on its items and market (as well as several other unrelated things).
It seems like a prime example for which to use AI to quickly generate the code. You create the base project and give it the data structures and calls, and it quickly spits out a solution. Everything is great so far.
Then you want to implement some market trading, so you need to calculate opportunities from the market orders vs their buy/sell prices vs unit price vs orders per day, etc. You add that to the AI spec and it easily creates a working solution for you. Unfortunately, once you run it, it takes about 24 hours to update, making it nearly worthless.
The code it created was very cheap, but also extremely problematic. It gave no consideration to future usage, so everything from the data layer to the frontend has issues that you're going to be fighting against. Sure, you can refine the prompts to tell it to start modifying code, but soon you're going to be sitting with more dead code than actual useful lines, and it will trip up along the way with so many more issues that you will have to fix.
In the end it turns out that the code wasn't cheap at all and you needed to spend just as much time as you would have with "expensive code". Even worse, the end product is still nearly as terrible as the starting product, so none of that investment gave any appreciable results.
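For concreteness, the per-item math in that kind of trading tool is trivial on its own; the expense was in how the generated code structured data access around it. A minimal sketch of the margin calculation (the fee rates and the function are illustrative assumptions, not taken from the game's actual mechanics or API):

```python
def daily_trade_profit(
    buy_price: float,
    sell_price: float,
    volume_per_day: float,
    broker_fee: float = 0.03,  # assumed flat broker fee on each order
    sales_tax: float = 0.08,   # assumed sales tax on the sale
) -> float:
    """Estimated daily profit from flipping one item.

    Buy at the highest buy order, sell at the lowest sell order,
    minus fees on both sides and tax on the sale.
    """
    cost = buy_price * (1 + broker_fee)
    revenue = sell_price * (1 - broker_fee - sales_tax)
    return (revenue - cost) * volume_per_day


# e.g. buy at 100, sell at 130, moving 50 units a day
profit = daily_trade_profit(100.0, 130.0, 50.0)
```

The hard part is everything around this function: fetching and caching thousands of order books so the whole update doesn't take a day.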
> Code has always been expensive. Producing a few hundred lines of clean, tested code takes most software developers a full day or more. Many of our engineering habits, at both the macro and micro level, are built around this core constraint.
Wasn't writing code always cheap? I see this more as a straw man argument. What is clean code? Tested code? Should each execution path of a function be tested with each possible input?
I think writing tests is important, but you can overdo it. Testing code for every possible platform of course takes a lot of time and money.
Another cost factor for code is organizational overhead: if adding a new feature needs sign-off from each layer of the organization before a user can actually see it, that's of course more costly than the alternative of just pushing to production with all its faults.
There is a big difference between short-term costs and long-term ones. I think LLMs reduce the short-term cost immensely but may increase the long-term costs. It will take some genuinely long-term studies to show the real impact.
I think it’s funny that we’re all measuring lines of code now and smiling.
It was/is expensive because engineers are trying to manage the liability exposure of their employers.
Agents give us a fire hose of tech debt that anyone can point at production.
I don’t think the tool itself is bad. But I do think people need to reconsider claims like this and be more careful about building systems where an unaccountable program can rewrite half your code base poorly and push it to production without any guard rails.
Not sure if “code has always been expensive” is the right framing.
Typing out a few hundred lines of code was never the real bottleneck. What was expensive was everything around it: making it correct, making it maintainable (often underestimated), coordinating across teams and supporting it long term.
You can also overshoot: Testing every possible path, validating across every platform, or routing every change through layers of organizational approval can multiply costs quickly. At some point, process (not code) becomes the dominant expense.
What LLMs clearly reduce is the short-term cost of producing working code. That part is dramatically cheaper.
The long-term effect is less clear. If we generate more code, faster, does that reduce cost or just increase the surface area we need to maintain, test, secure, and reason about later?
Historically, most of software’s cost has lived in maintenance and coordination, not in keystrokes. It will take real longitudinal data to see whether LLMs meaningfully change that, or just shift where the cost shows up.
Ridiculous asks are expensive. Not understanding the limitations of computer systems is expensive.
The main problem is, and always will be, communication. Engineers in general are quick to say "that won't work as you described" because they can see the steps it takes to get there. Sales guys (CEOs) live in a completely different world, and they hear "I won't do that" from technical types. It's the ultimate impedance mismatch and the subject of countless seminars.
AI writing code at least reduces the cost of the inevitable failures, but doesn't solve the root problem.
Successful businesses will continue to be those whose CTO/CEO relationship is a true partnership.
There's a lot of misconception about the intrinsic economic value of 'writing code' in these conversations.
In software, all the economic value is in the information encoded in the code. The instructions on precisely what to do to deliver said value. Typically, painstakingly discovered over months or years of iteration. Which is exactly why people pay for it when you've done it well, because they cannot and will not rediscover all that for themselves.
Writing code, per se, is ultimately nothing more than mapping that information. How well that's done is a separate question from whether the information is good in the first place, but the information being good is always the dominant and deciding factor in whether the software has value.
So obviously there is a lot of value in the mapping - that is writing the code - being done well and, all else being equal, faster. But putting that cart before the horse and saying that speeding this up (to the extent this is even true - a very deep and separate question) has some driving impact on the economics of software I think is really not the right way to look at it.
You don't get better information by being able to map the information more quickly. The quality of the information is entirely independent of the mapping, and if the information is the thing with the economic value, you see that the mapping being faster does not really change the equation much.
A clarifying example from a parallel universe might be the kind of amusing take about consultancy that's been seen a lot - that because generative AI can produce things like slides, consultancies will be disrupted. This is an amusingly naive take precisely because it's so clear that the slides in and of themselves have no value separate from the thing clients are actually paying for: the thinking behind the content in the slides. Having the ability to produce slides faster gets you nothing without the thinking. So it is in software too.
I’d like to add an obvious point that is often overlooked.
LLMs take on a huge portion of the work related to handling context, navigating documentation, and structuring thoughts. Today, it’s incredibly easy to start and develop almost any project. In the past, it was just as easy to get overwhelmed by the idea of needing a two-year course in Python (or any other field) and end up doing nothing.
In that sense, LLMs help people overcome the initial barrier, a strong emotional hurdle, and make it much easier to engage in the process from the very beginning.
One of the most interesting aspects is when LLMs are cheap and small enough so that apps can ship with a builtin one so that it can adjust code for each user based on input/usage patterns.
Indeed: The act of actually typing the code into an editor was never the hard or valuable part of software engineering. The value comes from being able to design applications that work well, with reasonable performance and security properties.
I don’t think we can expect all workers at all companies to just adopt a new way of working. That’s not how competition works.
If agentic AI is a good idea and it increases productivity, we should expect to see some startup blowing everyone out of the water. I think we should be seeing it now if it makes you, say, ten times more productive. A lot of startups have now had a year of agentic AI to help them beat their competitors.
>but medium and long term we need to figure out how to build systems in a way that it can keep up with this increased influx of code.
Why? Why do we need to "write code so much faster and quicker" to the point we saturate systems downstream? I understand that we can, but just because we can doesn't mean we should.
This is the thing I don't really get. I enjoy tinkering with AI and seeing what it comes up with to solve problems. But when I need to write working code that does anything beyond simple CRUD, it's faster for me to write the code than it is to (1) describe the problem in English with sufficient detail and working theory, then (2) check the AI's work, understand what it's written, de-duplicate and dry it out.
I guess if I skipped step 2, it might save time, but it would be completely irresponsible to put it into production, so that's not an option in any world where I maintain code quality and the trust of my clients.
Plus, having AI code mixed into my projects also leaves me with an uneasy sense of being less able to diagnose future bugs. Yes, I still know where everything is, but I don't know it as well as if I'd written it myself. So I find myself going back and re-reviewing AI-written code, re-familiarizing myself with it, in order to be sure I still have a full handle on everything.
To the extent that it may save me time as an engineer, I don't mind using it. But the degree to which the evangelists can peddle it to the management of a company as a replacement for human coders seems highly correlated with whether that company's management understood the value of safe code in the first place. If they didn't, then their infrastructure may have already been garbage, but it will now become increasingly unusable garbage. At some point, I think there will be a backlash when the results in reality can no longer be denied, and engineers who can come in and clean up the mess will be in high demand. But maybe that's just wishful thinking.
I would normally agree, but I think the "code is a liability" quote assumes that humans are reading and modifying the code. If AI tools are also reading and modifying their own code, is that still true?
The top tier of talent is still extremely hard to get, perhaps even more so.
I saw an article recently claiming that every sector is seeing a reduction in IT/devs except for tech and AI companies.
If your company is in a sector where engineering is a cost center and the product is not directly tied to your engineers, or your company is pushing for efficiency, it's an employer's market.
Yeah, it’s odd watching the outsourcing debate play out again. The results are gonna be the same.
Which is a shame, cause I think LLMs have a lot more use for software dev than writing code. And that’s really what’s going to shift the industry - not just the part willing to cut on quality.
I think we’re falling into a trap of overestimating the value of incrementally directing it. The output is all coming from the same brain, so what stops someone from just getting lucky with a prompt that one-shots the whole thing you spent time breaking down and thinking about? The code quality will be the same, and unless you’re directing it to the point where you may as well be coding the old way, the decision-making is the same too.
> The new skill is mastering the craft of directing cheap inputs toward valuable outcomes.
Strongly agree with this. It took me a while to realize that "agentic engineering" wasn't about writing software; it was about being able to very quickly iterate on bespoke tools for solving a very specific problem you have.
However, as soon as you start unblocking yourself from the real problem you want to solve, the agentic engineering part is no longer interesting. It's great to be solving a problem and then realize you could improve it very quickly with a quick request to an agent, but you should largely be focused on solving the problem.
Yet I see so many people talking about running multiple agents and just building something without much effort spent using that thing, as though the agentic code itself is where the value lies. I suspect this is a hangover from decades where software was valuable (we still have plenty of highly valued, unprofitable software companies as a testament to this).
I'm reminded a bit of Alan Watts' famous quote in regards to psychedelics:
> If you get the message, hang up the phone.
If you're really leveraging AI to do something unique and potentially quite disruptive, very quickly the "AI" part should become fairly uninteresting and not the focus of your attention.
We kind of do? Local models (though not state of the art) set a floor on this.
Even if prices are subsidized now (they are) that doesn't mean they will be more expensive later. e.g. if there's some bubble deflation then hardware, electricity, and talent could all get cheaper.
Possibly even more important than knowing where to hit it (what to code), is knowing where not to hit it (what not to code). Hitting the thing in the wrong place can lead to catastrophe. Making a code change you don't need can blow up production or paint your architecture into a corner.
AIs so far seem to prefer addition by addition, not addition by subtraction or addition by saying "are you sure?".
This doesn't mean that "code is cheap" is bad. Rather, it means that soon our primary role will be to guide AIs to produce a high proportion of "code that was cheap", while being able to quickly distinguish, prevent, and reject "cheap code".
Code is cheaper. Simple code is cheap. More complex code may not be cheaper.
The reason you pay attention to details is because complexity compounds and the cheapest cleanup is when you write something, not when it breaks.
This last part is still not fully fleshed out.
For now. Is there any reason to not expect things to improve further?
Regardless, a lot of code is cheap now and building products is fun regardless, but I doubt this will translate into more than very short-term benefits. When you lower the bar you get 10x more stuff, 10x more noise, etc. You lower it more you get 100x and so on.
Dollars to donuts that at some point someone is going to discover that senior engineers spend just as much time reviewing, fixing, and dealing with blowups caused by shitty AI-generated code produced by more junior coders as they did providing various forms of mentoring to said junior coders. Except in the latter case those junior coders become better developers, whereas the AI keeps generating the same shitty results, or worse, code of inconsistent quality.
Or another way of looking at it: just because digging a ditch became cheap and fast with the backhoe doesn't mean you can just dig a bunch of ditches and become rich.
I was careful to say "Good code still has a cost" and "delivering good code remains significantly more expensive than [free]" rather than the more aesthetically pleasing "Good code is expensive."
I chose these words because I don't think good code is nearly as expensive with coding agents as it was without them.
You still have to actively work to get good code, but it takes so much less time when you have a coding agent that can do the fine-grained edits on your behalf.
I firmly believe that agentic engineering should produce better code. If you are moving faster but getting worse results it's worth stopping and examining if there are processes you could fix.
> LLMs make the typing cheap, but they don’t make the reasoning cheap.
LLMs lower the cost of copy/pasting code around, or troubleshooting issues using standard error messages.
Instead of going through Stack Overflow to find how to use a framework to do some specific thing, you prompt a model. You don't even need to know a thing about the language you are using to leverage a feedback loop.
LLMs lower the cost of a multitude of drudge work in developing software, such as having to read the docs to learn how a framework should be used to achieve a goal. You still need to know what you are doing, but you don't need to reinvent the wheel.
For most non-hobby projects, the cost of code was in breaking a working system (whether by a bona fide bug or by a change to some unspecified implicit assumption). That made changes to code incredibly expensive, often much more than the original implementation.
It sounds harsh, but over the lifetime of a project, 10 lines/person/day is often a high estimate of the number of lines produced. It’s not because humans type so slowly; it’s because after a while, it’s all about changing previously written lines in ways that don’t break things.
LLMs are much better at that than humans, if the constraints and tests are reasonably well specified.
Code you can’t just throw away is a liability because you have to keep supporting it / servicing it. Claude Code and friends also change that part of the cost equation:
You might not get gcc/llvm-level optimization from a newly built compiler, but if you had a home-built one that took a $15,000/month engineer to support (for years!), you can now get a new one for $20,000 every 3 months: better than a 50% cost saving, and you can change your requirements each time (which you couldn’t do before).
Code used to be a liability, like a car or an apartment for the average person. Now it’s a liability, like a car or apartment for Bill Gates.
But the amount of pleasing, useful art has gone up 1000x. If I had a blog, I would now have access to art that would be a perfect fit for my words, whereas 5 years ago I would have had to make do with my own (talent-lacking) doodles.
Would some people prefer no art/illustration to AI generated art? Sure. But even more would prefer no art to my doodles.
In my experience, it’s even more effort to get good code with an agent. When writing by hand, I fully understand the rationale for each line I write. With AI, I have to assess every clause and think about why it’s there. Even when code reviewing juniors, there’s a level of trust that they had a reason for including each line (assuming for a moment they’re not using AI too); that’s not at all my experience with Codex.
Last month I did the majority of my work through an agent, and while I did review its work, I’m now finding edge cases and bugs of the kind that I’d never have expected a human to introduce. Obviously it’s on me to better review its output, but the perceived gains of just throwing a quick bug ticket at the ai quickly disappear when you want to have a scalable project.
> I find that even though the cost of writing code is immensely cheap, reviewing and validating that it works in certain code bases (like the millions of line mono repo I work in at my job) is extremely high.
That is my observation as well. Churning code is easy, but making sure the code is not total crap is a completely new challenge and concern.
It's not like code reviews didn't require work prior to LLMs. Far from it. It's just that the code is now generated in a completely different way, and in some cases with barely any oversight from vibecoders who are trying to punch way above their weight. So they generate these massive volumes of changes that fail in obvious and subtle ways, and the flow is relentless.
> Once you can ask your agent to change a feature and be 100% sure they won't break other features then you don't care about how the code looks like.
That bar is unreasonably high.
Right now, if I ask a senior engineer to change a feature in a mature codebase, I only have perhaps 70% certainty they won't break other features. Tests help, but only so far.
All the hype is on how fast it is to produce code. But the actual bottleneck has always been the cost of specifying intent clearly enough that the result is changeable, testable, and correct AND that you build something that brings value.