+0.23 Ask HN: How is AI-assisted coding going for you professionally?
204 points by svara 7 hours ago | 339 comments on HN | Mild positive Moderate agreement (2 models) Community · v3.7 · 2026-03-15 22:10:07
Summary Free Expression & Scientific Knowledge Sharing Advocates
This Hacker News discussion thread advocates for free expression, knowledge sharing, and community participation in professional AI development discourse. The post explicitly invites transparent, evidence-based sharing of concrete experiences with AI tools, rejecting polarized narratives. The content and platform structure strongly support Articles 19 (free expression), 26 (education), and 27 (participation in scientific/cultural life), with moderate positive signals across freedom of thought, assembly, and labor rights.
Rights Tensions 2 pairs
Art 12 Art 19 Request for professional context (team size, stack, experience level) to support learning (Article 19, 26) conflicts with privacy protection (Article 12) when such details may identify individuals or reveal employer information.
Art 12 Art 27 Desire for participation in scientific knowledge commons (Article 27) requires sharing details that may expose individual professional information and intellectual property, creating tension with privacy rights (Article 12).
Article Heatmap
Preamble: +0.19 — Preamble
Article 1: +0.08 — Freedom, Equality, Brotherhood
Article 2: -0.01 — Non-Discrimination
Article 3: ND — Life, Liberty, Security
Article 4: ND — No Slavery
Article 5: ND — No Torture
Article 6: ND — Legal Personhood
Article 7: ND — Equality Before Law
Article 8: ND — Right to Remedy
Article 9: ND — No Arbitrary Detention
Article 10: ND — Fair Hearing
Article 11: ND — Presumption of Innocence
Article 12: -0.32 — Privacy
Article 13: +0.33 — Freedom of Movement
Article 14: +0.12 — Asylum
Article 15: +0.14 — Nationality
Article 16: ND — Marriage & Family
Article 17: ND — Property
Article 18: +0.38 — Freedom of Thought
Article 19: +0.75 — Freedom of Expression
Article 20: +0.28 — Assembly & Association
Article 21: +0.13 — Political Participation
Article 22: +0.23 — Social Security
Article 23: +0.26 — Work & Equal Pay
Article 24: +0.13 — Rest & Leisure
Article 25: +0.26 — Standard of Living
Article 26: +0.37 — Education
Article 27: +0.54 — Cultural Participation
Article 28: +0.12 — Social & International Order
Article 29: +0.22 — Duties to Community
Article 30: +0.23 — No Destruction of Rights
Negative Neutral Positive No Data
Aggregates
E
+0.23
S
+0.21
Weighted Mean +0.26 Unweighted Mean +0.22
Max +0.75 Article 19 Min -0.32 Article 12
Signal 20 No Data 11
Volatility 0.21 (Medium)
Negative 2 Channels E: 0.6 S: 0.4
SETL +0.02 Editorial-dominant
FW Ratio 51% 45 facts · 43 inferences
Agreement Moderate 2 models · spread ±0.129
Evidence 37% coverage
4H 12M 8L 11 ND
Theme Radar
Foundation: 0.09 (3 articles)
Security: 0.00 (0 articles)
Legal: 0.00 (0 articles)
Privacy & Movement: 0.07 (4 articles)
Personal: 0.38 (1 article)
Expression: 0.39 (3 articles)
Economic & Social: 0.22 (4 articles)
Cultural: 0.46 (2 articles)
Order & Duties: 0.19 (3 articles)
HN Discussion 20 top-level · 19 replies
onlyrealcuzzo 2026-03-15 18:29 UTC link
I work at a FAANG.

Professionally, I have had almost no luck with it, outside of summarizing design docs or literally just finding something in the code that a simple search might not find, such as: where is this team's code that does X?

I have yet to successfully prompt it and get a working commit.

Further, I will add that I also don't know any ICs personally who have successfully used it. Though there are endless posts of people talking about how they're now 10x more productive, and everyone needs to do x, y, and z now. I just don't know any of these people.

Non-professionally, it's amazing how well it does on a small greenfield task, and I have seen that 10x improvement in velocity. But, at work, close to 0 so far.

Of the posts I've seen at work, they typically tend to be teams doing something new / greenfield-ish or a refactor. So I'm not surprised by their results.

hdhdhsjsbdh 2026-03-15 18:35 UTC link
It has made my job an awful slog, and my personal projects move faster.

At work, the devs up the chain now do everything with AI – not just coding – then task me with cleaning it up. It is painful and time-consuming, and the code base is a mess. In one case I had to merge a feature from one team into the main code base, but the feature was AI-coded, so it did not obey the API design of the main project. It also included a ton of stuff you don't need in the first pass – error checking, hand-rolled parsing, etc. – that I had to spend over a week unrolling so that I could trim it down and redesign it to work in the main codebase. It was a slog, and it also made me look bad because it took me forever compared to the team who originally churned it out almost instantly. AI tools are not good at this kind of design-deconflicting task, so while it's easy to get the initial concept out the gate almost instantly, you can't just magically fit it into the bigger codebase without facing the technical debt you've generated.

In my personal projects, I get to experience a bit of the fun I think others are having. You can very quickly build out new features, explore new ideas, etc. You have to be thoughtful about the design because the codebase can get messy and hard to build on. Often I design the APIs and then have Claude critique them and implement them.

I think the future is bleak for people in my spot professionally – not junior, but also not leading the team. I think the middle will be hollowed out and replaced with principals who set direction, coordinate, and execute. A privileged few will be hired and developed to become leaders eventually (or strike gold with their own projects), but everyone in between is in trouble.

QuadrupleA 2026-03-15 19:00 UTC link
As a veteran freelance developer - aside from some occasional big wins, I'd say it's been net neutral or even net negative to my productivity. When I review AI-generated code carefully (and if I'm delivering it to clients I feel that's my responsibility) I always find unnecessary complexity, conceptual errors, performance issues, looming maintainability problems, etc. If I were to let it run free, these would just compound.

A couple "win" examples: add in-text links to every term in this paragraph that appears elsewhere on the page, plus corresponding anchors in the relevant page parts. Or, replace any static text on this page with any corresponding dynamic elements from this reference URL.

Lose examples: constant edit-format glitches (not matching the searched text; even the venerable Opus 4.6 constantly screws this up), unnecessary intermediate variables, ridiculously over-cautious exception handling, failing to see opportunities to isolate repeated code into a function or to use an existing function that exactly implements said N lines of code, etc.

simonw 2026-03-15 19:02 UTC link
The majority of code I've written since November 2025 has been created using agents, as opposed to me typing code into a text editor. More than half of that has been done from my iPhone via Claude Code for web (bad name, great software.)

I'm enjoying myself so much. Projects I've been thinking about for years are now a couple of hours of hacking around. I'm readjusting my mental model of what's possible as a single developer. And I'm finally learning Go!

The biggest challenge right now is keeping up with the review workload. For low stakes projects (small single-purpose HTML+JS tools for example) I'm comfortable not reviewing the code, but if it's software I plan to have other people use I'm not willing to take that risk. I have a stack of neat prototypes and maybe-production-quality features that I can't ship yet because I've not done that review work.

I mainly work as an individual or with one other person - I'm not working as part of a larger team.

fastasucan 2026-03-15 20:55 UTC link
It makes my work suck, sadly. Team dynamics also contribute to that, admittedly.

Last year I was working on implementing a pretty big feature in our codebase. It required a lot of focus to get the business logic right, and at the same time you had to be very creative to make it feasible to run without hogging too many resources.

When I was nearly done and working on catching bugs, team members grew tired of waiting and started taking my code from x weeks ago (I have no idea why), feeding it to Claude or whatever, and then coming back with a solution. So instead of finishing my code I had to go through their versions of my code.

Each one of the proposals had one or more business requirements wrong and several huge bugs. Not one was any closer to a solution than mine was.

I would have appreciated any contribution to my code, but the assumption that it would be so easy to just take my code and finish it by asking Claude was rather insulting.

wk_end 2026-03-15 20:58 UTC link
Around a year ago I started a new position at a very large tech company that I won't name, working on a pre-existing web project there. The code base isn't terrible - though not very good either, by-and-large - but it's absolutely massive, often over-engineered, pretty unorthodox, and definitely has some questionable design decisions; even after more than a year of working with it I still feel like a beginner much of the time.

This year I grudgingly bit the bullet and began using AI tools, and to my dismay they've been a pretty big boon for me, in this case. Not just for code generation – they're really good at probing the monolith and answering questions I have about how it works. Before, I'd spend days poring over code before starting work to figure out the right way to build something or where to break in, pinging people over in India or eastern Europe with questions and hoping they'd reply to me overnight. AI's totally replaced that, and it works shockingly well.

When I do fall back on it for code generation, it's mostly just to mitigate the tedium of writing boilerplate. The code it produces tends to be pretty poor - both in terms of style and robustness - and I'll usually need to take at least a couple of passes over it to get it up to snuff. I do find this faster than writing everything out by hand in the end, but not by a lot.

For my personal projects I don't find it adds much, but I do enjoy rubber ducking with ChatGPT.

Izkata 2026-03-15 21:13 UTC link
I don't use it.

I know my mind fairly well, and I know my style of laziness will result in atrophying skills. Better not to risk it.

One of my co-workers already admitted as much to me around six months ago, and that he was trying not to use AI for any code generation anymore, but it was really difficult to stop because it was so easy to reach for. Sounded kind of like a drug addiction to me. And I had the impression he only felt comfortable admitting it to me because I don't make it a secret that I don't use it.

Another co-worker did stop using it to generate code because (if I'm remembering right) he can tell what it generates is messy for long-term maintenance, even if it does work and even though he's new to React. He still uses it often for asking questions.

A third (this one a junior) seemed to get dumber over the past year, opening merge requests that didn't solve the problem. In a couple of these cases my manager mentioned either seeing him use AI while they were pairing (it looked good enough, so the problems just slipped by) or seeing hints in the merge request of how AI names or structures code.

shmel 2026-03-15 22:00 UTC link
I got insanely more productive with Claude Code since Opus 4.5. Perhaps it helps that I work in AI research and keep all my projects in small prototype repos. I imagine that all models are more polished for the AI research workflow because that's what frontier labs do, but yeah, I don't write code anymore. I don't even read most of it; I just ask Claude questions about the implementation, and sometimes ask it to show me the important bits verbatim. Obviously it makes mistakes sometimes, but so do I and everyone I have ever worked with. What scares me is that it makes fewer mistakes overall than I do. Plan mode helps tremendously; I skip it only for small things. Insisting on a strict verification suite is also important (kind of like an autoresearch project).
greenpizza13 2026-03-15 22:00 UTC link
I work at a very prominent AI company. We have access to every tool under the sun. There are various levels of success for all levels — managers, PMs, engineers.

We have cursor with essentially unlimited Opus 4.6 and it’s fundamentally changed my workflow as a senior engineer. I find I spend much more time designing and testing my software and development time is almost entirely prompting and reviewing AI changes.

I’m afraid my coding skills are atrophying – in fact I know they are – but I’m not sure the coding was the part of my job I truly enjoyed. I enjoy thinking higher-level: architecture, connecting components, focusing on the user experience. But I think using these AI tools is a form of golden handcuffs. If I go work at a startup without the money to pay for these models, I think for the first time in my career I would be less likely to successfully code a feature than I was last year.

So professionally there are pros and cons. My design and architecture skills have greatly improved as I am spending more time doing this.

Personally it’s so much fun. I’ve made several side projects I would have never done otherwise. Working with Claude code on greenfield projects is a blast.

piker 2026-03-15 22:01 UTC link
I am working on a sub 100KLOC Rust application and can't productively use the agentic workflows to improve that application.

On the other hand, I have tried them a number of times in greenfield situations with Python and the web stack and experienced the simultaneous joy and existential dread of others. They can really stand new projects up quick.

As a founder, this leaves me with what I describe as the "generation ship" problem. Is it possible that the architecture we have chosen for my project is so far out of the training data that it would be faster to ditch the project and reimplement it from scratch in a Claude-yolo style? So far, I'm convinced not because the code I've seen in somewhat novel circumstances is fairly mid, but it's hard to shake the thought.

I do find chatting with the models incredibly helpful in all contexts. They are also excellent at configuring services.

humbleharbinger 2026-03-15 22:01 UTC link
I'm an engineer at Amazon - we use Kiro (our own harness) with Opus 4.6 underneath.

Most of my gripes are with the harness, CC is way better.

In terms of productivity I'm def 2-4X more productive at work, >10x more productive on my side business. I used to work overtime to deliver my features. Now I work 9-5 and am job hunting on the side while delivering relatively more features.

I think a lot of people are missing that AI is not just good for writing code. It's good for data analysis and all sorts of other tasks like debugging and deploying. I regularly use it to manage deployment loops (e.g., make a code change, then deploy the changes to gamma and verify they work by making a sample request and checking the output in CloudWatch logs). I have built features in 2 weeks that would otherwise take me a month, just because I'd have to learn some nitty-gritty technical details that I'd never use again in my life.

For data analysis I have an internal glue catalog, I can just tell it to query data and write a script that analyzes X for me.

AI and agents particularly have been a huge boon for me. I'm really scared about automation but also it doesn't make sense to me that SWE would be automated first before other careers since SWE itself is necessary to automate others. I think there are some fundamental limitations on LLMs (without understanding the details too much), but whatever level of intelligence we've currently unlocked is fundamentally going to change the world and is already changing how SWE looks.

INTPenis 2026-03-15 22:15 UTC link
I'm always skeptical of new tech. I don't like how AI companies have reserved all memory circuits for X years; that is definitely going to cause problems in society when regular health-care-sector businesses can't scale or repair their infra. And the environmental impact is a discussion that I am not qualified to get into.

All I can say for sure is that it is absolutely useful, it has improved my quality of life without a doubt. I stick to the principle that it's here to improve my work life balance, not increase output for our owners.

And that it has done, so far. I can accomplish things that would have taken me weeks of stressful and hyperfocused work in just hours.

I use it very carefully, and sparingly, as a helpful tool in my toolbox. I do not let it run every command and look into every system, just focused efforts to generate large amounts of boilerplate code that would require me to have a lot of docs open if I were to do it myself.

I definitely don't let it read or write my e-mails, or write any text. Because I always loved writing, and will never stop loving it.

It's here to stay, because I'm not alone in feeling this way about it. So the staunch AI-deniers are just wasting their time. Just like any other tech, it's going to be used against humans, against the already oppressed.

I definitely recognize that the tech has made some people lose their minds. Managers and product owners are now vibe coding thinking they can replace all their developers. But their code base will rot faster than they think.

tryauuum 2026-03-15 22:16 UTC link
I had a couple of nice moments, like claude helping me with rust (which I don't understand) and claude finding a bug in a python library I was using

Also some not-so-nice moments (small rust changes were OK, but with a big one claude fumbled, plus I couldn't really verify that it worked, so I didn't merge the code to master even when it seemingly worked)

I think it really helps to break the ice, so to say. You no longer feel the tension, the pain of an empty page. You ask claude to write something, and improving something is mentally so much easier

Also I mostly use claude as a spell checker / linter for the projects I'm too lazy to install proper tools for that. vim + claude, what else would you need

Luckily my company pays for the subscription; spending personal money on LLMs (especially on US LLMs) would feel strange for some reason. Ideally I want to own an LLM, have it at home, but I am too lazy

ecopoesis 2026-03-15 22:19 UTC link
I'm a manager at a large consumer website. My team and I have built a harness that uses headless Claudes (running Opus) to do ticket work, respond to and fix PR comments, and fix CI test failures. Our only interaction with code is writing specs in Jira tickets (which we primarily do via local Claudes) and adding PR comments to GitHub PRs.

The speed we can move at is astounding. We're going to finish our backlog next quarter. We're conservatively planning on launching 3x as many features next quarter.

Claude is far from perfect: it's made us reassess our coding standards since code is primarily for Claude now, not for humans. So much of what we did was to make code easier for the next dev, and that just doesn't matter anymore.

keithnz 2026-03-15 22:22 UTC link
Pretty good. We have a huge number of projects, some more modern than others. For the older legacy systems, it's been hugely useful. Not perfect, needs a bit more babysitting, but a lot easier to deal with than doing it solo. For the newer things, they can mostly be done solely by AI, so more time is spent speccing / designing the system than coding. But every week we are working out better and better ways of working with AI, so it's an evolving process at the moment
PerryStyle 2026-03-15 22:23 UTC link
I work in HPC and I’ve found it very useful in creating various shell scripts. It really helps if you have linters such as shellcheck.

Other areas of success have been just offloading the typing/prototyping. I know exactly what the code should look like, so I rarely run into issues.
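The shellcheck tip above can be illustrated with the most common class of bug it flags, SC2086 (unquoted expansion that splits on whitespace); the filename here is a made-up example:

```shell
# SC2086 ("Double quote to prevent globbing and word splitting") is
# the classic shellcheck finding: an unquoted variable splits on
# whitespace. Filename below is hypothetical.
cd "$(mktemp -d)"
f="my results.csv"
printf 'a\nb\n' > "$f"

# Unquoted ($f) would pass wc TWO arguments, "my" and "results.csv",
# and shellcheck flags it. Quoted, it works even with the space:
wc -l < "$f"        # line count: 2
```

Running `shellcheck` over a script in CI catches this before the script ever meets a filename with a space in it.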

giancarlostoro 2026-03-15 22:23 UTC link
My current employer is taking a long time to figure out how they think they want people to use it, meanwhile, all my side projects for personal use are going quite strong.
gamerDude 2026-03-15 22:24 UTC link
I find it useful. It has been a big help from a motivation perspective. When digging into bad API docs or getting started on a complex problem, it's easy to have the AI dive in while I describe what I want. The other positive is front-end design. I've always hated CSS and its derivatives, and AI now makes me decent at it.

The negatives are that AI clearly loves to add code, so I do need to coach it into making nice abstractions and keeping it on track.

ramoz 2026-03-15 22:25 UTC link
Right now I'm really enjoying the labs' cli harnesses, Claude Code, and Codex (especially for review). I do a bunch of niche stuff with Pi and OpenCode.

My workday is fairly simple. I spend all day planning and reviewing.

1. For most features, unless it's small things, I will enter plan mode.

2. We will iterate on planning. I built a tool for this, and judging by its organic growth it's a fairly common and popular workflow: https://github.com/backnotprop/plannotator - a very simple tool that captures the plan through a hook (ExitPlanMode) and creates a UI for me to actually read the plan and annotate it, with things like plan diffs so I can see what the agent changed.

3. After plan's approved, we hit eventual review of implementation. I'll use AI reviewers, but I will also manually review using the Sane tool to create annotations and iteration feedback loops with the agents.

4. Do a lot of multitasking with work trees now.

Work trees weren't something I truly understood the value of for a while, until a couple weeks ago, embarrassingly enough: https://backnotprop.com/blog/simplifying-git-worktrees/
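The multitasking workflow above can be sketched in a throwaway repo (repo and branch names are made up); each worktree is a full checkout sharing one object store, so separate tasks can build and test side by side:

```shell
# Minimal sketch of the parallel-worktree workflow, in a throwaway
# repo. Repo and branch names are hypothetical.
set -e
cd "$(mktemp -d)"
git init -q myapp && cd myapp
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One worktree per in-flight task, as sibling directories:
git worktree add ../myapp-feat-login -b feat/login
git worktree add ../myapp-fix-ci -b fix/ci

git worktree list    # main checkout plus the two task checkouts

# When a task's branch lands, remove its checkout:
git worktree remove ../myapp-feat-login
```

One agent can run tests in `../myapp-fix-ci` while you review the login branch in the other directory, with no stash/checkout juggling in between.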

hasahmed 2026-03-15 22:26 UTC link
Hating it TBH. I feel like it took away a lot of what I enjoyed about programming, but it's often so effective, and I'm under so much pressure to be fast, that I can't not use it.
eranation 2026-03-15 18:53 UTC link
Wow, that's such a drastically different experience than mine. May I ask what toolset you're using? Are you limited to your homegrown "AcmeCode", or do you have full access to Claude Code / Cursor with the latest and greatest models, 1M context size, and full repo access?

I see it hitting between 50% and 90% accuracy on both small and large tasks, as in: the PRs it generates range from 50% usable code that a human can tweak to a 90% solution (with the occasional 100% "wow, it actually did it, no comments, let's merge").

I also found it to be a skillset: some engineers seem to find it easier to articulate what they want, and some find it easier to think while writing code.

phyzix5761 2026-03-15 18:59 UTC link
> did not obey the API design of the main project

If they're handing you broken code, call them out on it. Say: this doesn't do what it says it does; do you want me to create a story for redoing all this work?

goalieca 2026-03-15 20:02 UTC link
I can second this. I’ve never had a problem writing short scripts and glue code in stuff I’ve mastered. In places where I actually need help, I’m finding it slows me down.
ramraj07 2026-03-15 20:20 UTC link
If you don't take a stand and refuse to clean up their mess, aren't you part of the problem? No self-respecting proponent of AI-enabled development should suggest that the engineers generating the code aren't still personally responsible for its quality.
Cyphase 2026-03-15 20:21 UTC link
How often do you find issues during review? What kinds of issues?
theshrike79 2026-03-15 20:31 UTC link
Just reply with this to every AI programming task: https://simonwillison.net/2025/Dec/18/code-proven-to-work/

It's just plain unprofessional to YOLO shit with AI and force actual humans to read the code even though the "author" hasn't read it.

Also, API design etc. should be automatically checked by tooling, and CI builds (and thus PR merges) should be blocked until the checks pass.

wombat-man 2026-03-15 20:32 UTC link
Also at FAANG. I think I am using the tools more than my peers based on my conversations. The first few times I tried our AI tooling, it was extremely hit and miss. But right around December the tooling improved a lot, and is a lot more effective. I am able to make prototypes very quickly. They are seldom check-in ready, but I can validate assumptions and ideas. I also had a very positive experience where the LLM pointed out a key flaw in an API I had been designing, and I was able to adjust it before going further into the process.

Once the plan is set, using the agentic coder to create smaller CLs has been the best avenue for me. You don't want to generate code faster than you and your reviewers can comprehend it. It'll feel slow, but check-ins actually move faster.

I will say it's not all magic and success. I have had the AI lead me down some dark corners, assuring me one design would work when actually it is a bit outdated or not quite the right fit for the system we are building for because of reasons. So, I wouldn't really say that it's a 10x multiplier or anything, but I'm definitely getting things done faster than I could on my own. Expertise on the part of the user is still crucial.

One classic issue I used to run into, is doing a small refactor and then having to manually fix a bunch of tests. It is so much simpler to ask the LLM to move X from A to B and fix any test failures. Then I circle back in a few minutes to review what was done and fix any issues.

The other thing is, it has visibility into the wider code base, including some of the infrastructure we depend on. There have been a couple of times in the past quarter when our build was busted by an external team, and I was able to give the LLM the timeframe and a description of the issue and have it pinpoint the exact external failure that caused it. I don't really know how long it would have taken to resolve otherwise, since the issues were missed by their testing. That said, I gotta wonder if those breakages were introduced by LLM use.

My job hasn't been this fun in a long, long time and I am a little uneasy about what these tools are going to mean for my personal job security, but I don't know how we can put the genie back into the bottle at this point.

visarga 2026-03-15 20:52 UTC link
I think you need coding style guide files in each repo, including preferred patterns & code examples. Then you will see less and less of that.
2muchcoffeeman 2026-03-15 21:07 UTC link
There’s a lot more going on there than AI …
tehjoker 2026-03-15 21:34 UTC link
I use AI to discuss and possibly generate ideas and tests, but I make sure I understand everything and type it in myself except for trivial stuff. The main value of an engineer is understanding things. AI can help me understand things better and faster. If I just set up plans for AI and vibe, human capital is neglected and declines. I don't think there's much of a future if you don't know what you're doing, but there is always a future for people with a deep understanding of problems and systems.
tim-tday 2026-03-15 21:51 UTC link
If someone does that, simply say: “no, use the latest code.”
tim-tday 2026-03-15 21:53 UTC link
I’m the same way. But I took a bite and now I’m hooked.

I started using it for things I hate, ended up using it everywhere. I move 5x faster. I follow along most of the time. Twice a week I realize I’ve lost the thread. Once a month it sets me back a week or more.

boredemployee 2026-03-15 21:58 UTC link
I completely understand.

We're in a phase where founders are obsessed with productivity, so everything seems to work just fine and as intended, with only a little slop.

They're racing to be as productive as possible so we can get who knows where.

There are times when I honestly don't even know why we're automating certain tasks anymore.

In the past, we had the option of saying we didn't know something, especially when it was an area we didn't want to know about. Today, we no longer have that option, because knowledge is just a prompt away. So you end up doing front-end work for a backend application you just built, even though your role was supposed to be completely different.

dawnerd 2026-03-15 22:03 UTC link
We’ve had this too, and made a change to our code review guidelines to allow rejection if code is clearly just AI slop. We’ve let around four contractors go so far over it. Like, yeah, they get work done fast, but when it comes to making it production-ready they’re completely incapable. Last time we just merged it anyway to hit a budget; it set everyone back and we’re still cleaning up the mess.
HeavyStorm 2026-03-15 22:07 UTC link
Same here. My take is that the codebase is too large and complex for it to find the right patterns.

It does work sometimes. The smaller the task, the better.

peab 2026-03-15 22:08 UTC link
This seems to be a team problem more than anything? Why are your coworkers taking on your responsibilities? Where's your manager on this?
slurpyb 2026-03-15 22:09 UTC link
It can only result in more work if you freelance, because if you disclose that you used LLMs then you did it faster than usual and presumably at lower quality, so you have to deliver more to retain the same income. Except now you’re paying all the providers for all the models because you start hitting usage limits, and claude sucks on the weekends, and your drive is full of ‘artifacts’, which incurs mental overhead that is exacerbated by your crippling ADHD

And then all of a sudden you’re just arguing with the terminal all day – the specs are written by gpt, delivered in the email written by gpt. Sometimes they don’t even have the time to trim their prompt from the edges of the paste, but the only thing I can think is “I need to make the most of 0.5x off-peak claude rates”

Fuck.

I got lots of pretty TUIs though so thats neat

tim-tday 2026-03-15 22:14 UTC link
This is wild. I’m on the other end.

I’ve probably prompted 10,000 lines of working code in the last two months. I started with terraform, which I know backwards and forwards. It works perfectly 95% of the time, and I know where it will go wrong, so I watch for that. (Working greenfield, in other existing repos, and with other collaborators.)

Moved on to a big data processing project, works great, needed a senior engineer to diagnose one small index problem which he identified in 30s. (But I’d bonked on for a week because in some cases I just don’t know what I don’t know)

Meanwhile a colleague wanted a sample of the data. Vibe coded that. (Extract from zip without decompressing.) He wanted it randomized. One shot. Done. Then he wanted it randomized across 5 categories. Then he wanted 10x the sample size. Data request completed before the conversation was over. I would have worked on that for three hours before, and bonked if I hit the limit of my technical knowledge.

Built a monitoring stack. Configured servers, used it to troubleshoot dozens of problems.

For stuff I can’t do, now I can do. For stuff I could do with difficulty now I can do with ease. For stuff I could do easily now I can do fast and easy.

Your vastly different experience is baffling and alien to me. (So thank you for opening my eyes)

jgilias 2026-03-15 22:25 UTC link
I saw somewhere that you guys had an All Hands where juniors were prohibited from pushing AI-assisted code due to some reliability thing going on? Was that just a hoax?
Editorial Channel
What the content says
+0.50
Article 19 Freedom of Expression
High Advocacy Framing Practice
Editorial
+0.50
SETL
-0.17

Post is explicitly about free expression and information sharing. Solicits detailed sharing of professional experience without censorship or editorial gatekeeping. Encourages participants to speak openly about AI tool effectiveness.

+0.45
Article 27 Cultural Participation
High Advocacy Framing Practice
Editorial
+0.45
SETL
+0.15

Post is centrally about participation in cultural and scientific life of AI-assisted development community. Invites sharing of professional knowledge and collective learning around emerging technology. Assumes right to participate in cultural and scientific advancement.

+0.40
Article 26 Education
High Advocacy Framing Practice
Editorial
+0.40
SETL
+0.14

Post is explicitly about education and skill development through knowledge sharing. Requests detailed context to enable learning by others. Assumes that participants have right to education and professional development.

+0.35
Article 18 Freedom of Thought
High Advocacy Framing Practice
Editorial
+0.35
SETL
-0.14

Post explicitly invites freedom of thought and conscience by requesting honest, experience-based sharing without ideological conformity. Rejects polarized narratives and seeks authentic perspective.

+0.35
Article 23 Work & Equal Pay
Medium Advocacy Framing
Editorial
+0.35
SETL
+0.23

Post is about professional work and working conditions in AI-assisted development. Invites participants to share challenges they encountered and how they solved them, implying concern for fair working conditions and just remuneration.

+0.30
Article 13 Freedom of Movement
Medium Framing Practice
Editorial
+0.30
SETL
-0.13

Post explicitly invites participants from anywhere with professional coding experience and internet access. No geographic restrictions mentioned. Supports freedom of movement and residence through inclusive framing.

+0.30
Article 25 Standard of Living
Medium Advocacy Framing
Editorial
+0.30
SETL
+0.17

Post addresses working conditions and professional development, which relate to right to adequate standard of living. Request for concrete experience sharing implies concern for material reality of work life.

+0.25
Preamble Preamble
Medium Framing
Editorial
+0.25
SETL
+0.16

Post frames AI tools as subjects of grounded empirical inquiry rather than existential threats or hype. Explicitly seeks to move beyond polarized discourse ('we're all cooked' vs 'AI is useless') toward evidence-based understanding. Does not invoke universal human dignity or inherent rights language.

+0.25
Article 20 Assembly & Association
Medium Framing Practice
Editorial
+0.25
SETL
-0.12

Post creates space for peaceful assembly of professionals around shared interest in AI tool evaluation. Does not restrict participation based on political affiliation or ideology.

+0.25
Article 29 Duties to Community
Medium Framing
Editorial
+0.25
SETL
+0.11

Post assumes human dignity and rights of all participants through respectful, non-exploitative framing. Does not reduce participants to data sources or economic value. Invites authentic voice, implying respect for full humanity.

+0.20
Article 15 Nationality
Medium Framing
Editorial
+0.20
SETL
+0.14

Post does not address nationality directly. Neutral professional framing allows participation regardless of national origin.

+0.20
Article 22 Social Security
Medium Framing Practice
Editorial
+0.20
SETL
-0.11

Post facilitates peer learning and professional community building, which supports realization of rights through collective action and knowledge sharing.

+0.20
Article 30 No Destruction of Rights
Medium Framing Practice
Editorial
+0.20
SETL
-0.11

Post does not invoke or restrict interpretation of UDHR. Neutral framing does not encourage rights-negating interpretation.

+0.15
Article 14 Asylum
Low Framing
Editorial
+0.15
SETL
+0.09

Post does not directly invoke asylum or refuge rights. Neutral professional framing does not explicitly welcome or exclude asylum seekers.

+0.15
Article 28 Social & International Order
Low
Editorial
+0.15
SETL
+0.09

Post does not directly invoke social and international order.

+0.10
Article 2 Non-Discrimination
Medium
Editorial
+0.10
SETL
+0.07

Post does not reference discrimination. Neutral stance toward all professional coding experience, regardless of identity. No protection against discriminatory comments from other users.

+0.10
Article 21 Political Participation
Low Practice
Editorial
+0.10
SETL
-0.09

Post does not address political participation or governance.

+0.10
Article 24 Rest & Leisure
Low Practice
Editorial
+0.10
SETL
-0.09

Post does not directly address rest and leisure rights.

+0.05
Article 1 Freedom, Equality, Brotherhood
Low
Editorial
+0.05
SETL
-0.07

Post does not directly address equality or inherent dignity. Neutral framing of professional experience sharing does not discriminate but also does not affirm equal rights.

-0.15
Article 12 Privacy
Medium Practice
Editorial
-0.15
SETL
+0.16

Post requests personal context (team size, experience level, stack) which reveals professional details and potentially identifiable information. Does not explicitly request privacy safeguards.

ND
Article 3 Life, Liberty, Security
Low

Post does not address right to life, liberty, or security of person.

ND
Article 4 No Slavery

Post does not address slavery or servitude.

ND
Article 5 No Torture

Post does not address torture or cruel treatment.

ND
Article 6 Legal Personhood

Post does not address legal personality or rights before the law.

ND
Article 7 Equality Before Law
Low

Post does not address equal protection before the law.

ND
Article 8 Right to Remedy

Post does not address remedies for rights violations.

ND
Article 9 No Arbitrary Detention

Post does not address arbitrary arrest or detention.

ND
Article 10 Fair Hearing
Medium Practice

Post does not directly invoke due process but assumes fair adjudication of competing claims.

ND
Article 11 Presumption of Innocence

Post does not address criminal liability or presumption of innocence.

ND
Article 16 Marriage & Family

Post does not address marriage, family, or protection of the family.

ND
Article 17 Property
Low Practice

Post does not directly address property rights.

Structural Channel
What the site does
+0.55
Article 19 Freedom of Expression
High Advocacy Framing Practice
Structural
+0.55
Context Modifier
+0.22
SETL
-0.17

Forum structure enables unrestricted posting of comments (subject to community guidelines). No pre-publication review or editorial filter. Users can speak directly to large audience.

+0.40
Article 18 Freedom of Thought
High Advocacy Framing Practice
Structural
+0.40
Context Modifier
0.00
SETL
-0.14

Discussion forum structure enables expression of diverse viewpoints. Voting system allows community to rank ideas by merit, not by conformity. Pseudonymity reduces pressure for ideological conformity.

+0.40
Article 27 Cultural Participation
High Advocacy Framing Practice
Structural
+0.40
Context Modifier
+0.12
SETL
+0.15

Platform enables free participation in knowledge commons. Discussion of AI tools and working methods constitutes participation in scientific and technical culture. No barriers to contributing or accessing.

+0.35
Article 13 Freedom of Movement
Medium Framing Practice
Structural
+0.35
Context Modifier
0.00
SETL
-0.13

Hacker News is globally accessible; no geo-blocking observed. Pseudonymity enables participation from any jurisdiction without fear of local surveillance or retaliation.

+0.35
Article 26 Education
High Advocacy Framing Practice
Structural
+0.35
Context Modifier
0.00
SETL
+0.14

Platform enables free knowledge sharing and peer learning. Discussion threads serve as informal educational resources. No paywall restricts access to learning.

+0.30
Article 20 Assembly & Association
Medium Framing Practice
Structural
+0.30
Context Modifier
0.00
SETL
-0.12

Discussion thread enables collective gathering and deliberation. No restrictions on who can join. Moderation against violence or harassment supports right to peaceful assembly.

+0.25
Article 22 Social Security
Medium Framing Practice
Structural
+0.25
Context Modifier
0.00
SETL
-0.11

Discussion forum enables formation of professional community. Persistent thread and search enable ongoing social participation. Free access supports participation regardless of economic status.

+0.25
Article 30 No Destruction of Rights
Medium Framing Practice
Structural
+0.25
Context Modifier
0.00
SETL
-0.11

Platform has no explicit safeguards against rights-negating interpretation. Moderation is reactive to specific harms, not proactive in protecting rights as indivisible.

+0.20
Article 23 Work & Equal Pay
Medium Advocacy Framing
Structural
+0.20
Context Modifier
0.00
SETL
+0.23

Platform enables workers to discuss working conditions publicly but provides no direct enforcement of labor rights or collective bargaining mechanisms.

+0.20
Article 25 Standard of Living
Medium Advocacy Framing
Structural
+0.20
Context Modifier
+0.02
SETL
+0.17

Discussion enables sharing of information about earning potential, working conditions, and professional growth. However, platform provides no direct support for health, food, or housing rights.

+0.20
Article 29 Duties to Community
Medium Framing
Structural
+0.20
Context Modifier
0.00
SETL
+0.11

Community guidelines prohibit harassment and dehumanizing language. However, platform monetizes user data and comments (via analytics and advertising), creating potential tension between respecting dignity and extracting value.

+0.15
Preamble Preamble
Medium Framing
Structural
+0.15
Context Modifier
0.00
SETL
+0.16

Discussion forum structure enables open participation and peer review of claims. However, platform governance is hierarchical (moderators, voting algorithms), not participatory in rights protection.

+0.15
Article 21 Political Participation
Low Practice
Structural
+0.15
Context Modifier
0.00
SETL
-0.09

Hacker News voting system enables users to collectively rank content, resembling lightweight democratic participation. However, final moderation authority rests with site operators.

+0.15
Article 24 Rest & Leisure
Low Practice
Structural
+0.15
Context Modifier
0.00
SETL
-0.09

Forum structure allows asynchronous participation, enabling users to engage on their own time schedule, which may support rest rights.

+0.10
Article 1 Freedom, Equality, Brotherhood
Low
Structural
+0.10
Context Modifier
0.00
SETL
-0.07

Platform allows pseudonymous participation, reducing discrimination barriers. However, participation requires English literacy and internet access, creating de facto exclusion.

+0.10
Article 14 Asylum
Low Framing
Structural
+0.10
Context Modifier
0.00
SETL
+0.09

Platform is accessible from most countries but does not actively facilitate or protect asylum-related discussions.

+0.10
Article 15 Nationality
Medium Framing
Structural
+0.10
Context Modifier
0.00
SETL
+0.14

Hacker News does not appear to enforce nationality restrictions on account creation or participation.

+0.10
Article 28 Social & International Order
Low
Structural
+0.10
Context Modifier
0.00
SETL
+0.09

Hacker News is a single-platform system with terms of service but no binding international governance framework.

+0.05
Article 2 Non-Discrimination
Medium
Structural
+0.05
Context Modifier
-0.08
SETL
+0.07

Platform voting and flagging systems allow community to downrank overtly discriminatory comments, but moderation is reactive and decentralized.

-0.25
Article 12 Privacy
Medium Practice
Structural
-0.25
Context Modifier
-0.11
SETL
+0.16

Hacker News collects user IP addresses, login data, and comment history. Data is publicly visible (unless account deleted). Third-party tracking and analytics present. No end-to-end encryption.

ND
Article 3 Life, Liberty, Security
Low

Platform hosts content in jurisdictions with varying rule of law protections. Hacker News has moderation policies but no independent appeals process.

ND
Article 4 No Slavery

No observable structural signal regarding forced labor or slavery.

ND
Article 5 No Torture

No observable structural signal regarding torture or cruelty.

ND
Article 6 Legal Personhood

No observable structural signal regarding legal recognition.

ND
Article 7 Equality Before Law
Low

Platform moderation applies community guidelines uniformly (in theory). However, algorithmic ranking and voting systems may create de facto unequal treatment of content based on initial visibility.

ND
Article 8 Right to Remedy

Post does not address remedies for rights violations.

ND
Article 9 No Arbitrary Detention

No observable structural signal regarding detention.

ND
Article 10 Fair Hearing
Medium Practice

Discussion format enables peer review and contestation of claims. Multiple perspectives can present evidence. However, no independent judiciary or formal appeals process; moderation is hierarchical.

ND
Article 11 Presumption of Innocence

No observable structural signal regarding criminal law.

ND
Article 16 Marriage & Family

No observable structural signal regarding family rights.

ND
Article 17 Property
Low Practice

Hacker News allows users to own accounts and retain intellectual property in comments. User-generated content remains attributed to author.

Supplementary Signals
How this content communicates, beyond directional lean. Learn more
Epistemic Quality
How well-sourced and evidence-based is this content?
0.71 low claims
Sources
0.7
Evidence
0.7
Uncertainty
0.7
Purpose
0.8
Propaganda Flags
No manipulative rhetoric detected
0 techniques detected
Emotional Tone
Emotional character: positive/negative, intensity, authority
measured
Valence
+0.3
Arousal
0.3
Dominance
0.2
Transparency
Does the content identify its author and disclose interests?
0.30
✗ Author
More signals: context, framing & audience
Solution Orientation
Does this content offer solutions or only describe problems?
0.70 solution oriented
Reader Agency
0.8
Stakeholder Voice
Whose perspectives are represented in this content?
0.35 1 perspective
Speaks: individuals
About: workers, developers
Temporal Framing
Is this content looking backward, at the present, or forward?
present immediate
Geographic Scope
What geographic area does this content cover?
global
Complexity
How accessible is this content to a general audience?
moderate medium jargon domain specific
Longitudinal 219 HN snapshots · 11 evals
Audit Trail 29 entries
2026-03-15 23:40 eval_success PSQ evaluated: g-PSQ=0.600 (3 dims) - -
2026-03-15 23:40 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 23:27 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-15 23:27 model_divergence Cross-model spread 0.26 exceeds threshold (2 models) - -
2026-03-15 23:27 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Discussion on AI-assisted coding, no explicit human rights discussion
2026-03-15 23:27 rater_validation_warn Lite validation warnings for model llama-4-scout-wai: 1W 0R - -
2026-03-15 22:10 eval_success Evaluated: Mild positive (0.26) - -
2026-03-15 22:10 eval Evaluated by claude-haiku-4-5-20251001: +0.26 (Mild positive) 12,727 tokens
2026-03-15 22:10 rater_validation_warn Validation warnings for model claude-haiku-4-5-20251001: 0W 4R - -
2026-03-15 21:26 eval_success PSQ evaluated: g-PSQ=0.600 (3 dims) - -
2026-03-15 21:26 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 21:23 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-15 21:22 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Discussion on AI-assisted coding, no explicit human rights discussion
2026-03-15 21:22 rater_validation_warn Lite validation warnings for model llama-4-scout-wai: 1W 0R - -
2026-03-15 20:46 eval_success PSQ evaluated: g-PSQ=0.600 (3 dims) - -
2026-03-15 20:46 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 20:43 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-15 20:43 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Discussion on AI-assisted coding, no explicit human rights discussion
2026-03-15 20:43 rater_validation_warn Lite validation warnings for model llama-4-scout-wai: 1W 0R - -
2026-03-15 19:31 eval_success PSQ evaluated: g-PSQ=0.600 (3 dims) - -
2026-03-15 19:31 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive) 0.00
2026-03-15 19:30 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-15 19:30 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral) 0.00
reasoning
Discussion on AI-assisted coding, no explicit human rights discussion
2026-03-15 19:30 rater_validation_warn Lite validation warnings for model llama-4-scout-wai: 1W 0R - -
2026-03-15 18:53 eval_success PSQ evaluated: g-PSQ=0.600 (3 dims) - -
2026-03-15 18:53 eval Evaluated by llama-4-scout-wai-psq: +0.60 (Strong positive)
2026-03-15 18:52 eval_success Lite evaluated: Neutral (0.00) - -
2026-03-15 18:52 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral)
reasoning
Discussion on AI-assisted coding, no explicit human rights discussion
2026-03-15 18:52 rater_validation_warn Lite validation warnings for model llama-4-scout-wai: 1W 0R - -