336 points by gronky_ 1 day ago | 191 comments on HN
Mild positive Editorial · v3.7 · 2026-02-28 13:08:11
Summary: Information Security & Privacy Architecture Advocates
This blog post advocates for 'distrust-by-design' in AI agent architectures, championing container isolation, filesystem separation, code transparency, and simplicity as security principles. The content strongly engages with Articles 12 (privacy), 17 (property), and 19 (information access) through both editorial advocacy and structural implementation, positioning open-source review and architectural containment as human rights protections. However, it inverts the presumption of innocence (Article 11) by treating agents as presumptively malicious, and provides limited engagement with other UDHR provisions.
This doesn't really feel like enough guardrails to prevent the kinds of problems we've seen so far.
For example, an agent in a single container that has access to an email inbox can still do a lot of damage if it goes off the rails.
We agree this agent should not be trusted, yet the ideas proposed as a solution are insufficient. We need a fundamentally different approach.
Also, and this is just my ignorance about Claws, but if we allow an agent permission to rewrite its own code to implement skills, what stops it from removing whatever guardrails exist in that codebase?
Really good points about AI making gigantic heaps of code no human can ever review.
It's almost like bureaucracy. The systems we have in governments or large corporations to do anything might seem bloated and ripe for simplification, but they're there to keep a lot of people employed, pacified, and powers distributed in a way that prevents hostile takeovers (crazy). I think there was a CGP Grey video about rulers that made the same point.
Similarly, highly verbose AI-written code will require another AI to review or continue to maintain it. I wonder if that's something the frontier models optimize for to keep themselves from going out of business.
Oh, and I don't mind that they're bashing OpenClaw and selling why NanoClaw is better. I miss the times when products competed with each other in the open.
the trust problem cuts both ways tho — users don't trust agents, but the bigger issue is agents trusting each other. once you have multi-agent pipelines, you're one rogue upstream output away from a cascade. sandboxing individual agents is table stakes; what's actually hard is defining trust boundaries between them
My take is that agents should, by default, only take actions that you can recover from. You can gradually give them more permissions and build guardrails such as extra LLM auditing, time-boxed whitelisted domains, etc. That's what I'm experimenting with: https://github.com/lobu-ai/lobu
1. Don't let it send emails from your personal account; only let it draft emails and share the link with you.
2. Use incremental snapshots, and if the agent bricks itself (it often does with OpenClaw if you give it access to change config), just do /revert to the last snapshot. I use VolumeSnapshot for lobu.ai.
3. Don't let your agents see any secrets. Give them placeholders, swap in the real secrets at your gateway, and put a human in the loop for secrets you care about.
4. Don't let your agents have direct outbound network access. They should only talk to your proxy, which has a strict whitelist of domains. There will be cases where the agent needs to talk to other domains; I use time-boxed limits (only allow certain domains for the current session for 5 minutes, and at the end of the session review all the URLs it accessed). You can also use tool hooks to audit the calls with an LLM to make sure they weren't triggered by a prompt injection attack (see the sketch after this list).
Last but not least, use proper VMs like Kata Containers and Firecracker in production, not just Docker containers.
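For point 4, here is a minimal sketch of what the time-boxed egress allowlist could look like. All names and domains are placeholders, not lobu's actual implementation: the idea is that the agent's only network path is a proxy that consults a policy like this before forwarding, and every decision is logged for end-of-session review.

```python
# Hypothetical egress policy consulted by the proxy before forwarding any agent request.
import time
from urllib.parse import urlparse

ALWAYS_ALLOWED = {"api.anthropic.com", "github.com"}   # assumed baseline allowlist

class EgressPolicy:
    def __init__(self):
        self.temporary = {}     # domain -> expiry timestamp (time-boxed grants)
        self.audit_log = []     # every URL the agent tried to reach this session

    def grant(self, domain: str, seconds: int = 300):
        """Time-box an extra domain for the current session (default 5 minutes)."""
        self.temporary[domain] = time.time() + seconds

    def allow(self, url: str) -> bool:
        host = urlparse(url).hostname or ""
        permitted = (
            host in ALWAYS_ALLOWED
            or self.temporary.get(host, 0) > time.time()
        )
        self.audit_log.append((time.time(), host, url, permitted))
        return permitted

policy = EgressPolicy()
policy.grant("docs.python.org")                     # agent asked for this domain; 5-minute window
print(policy.allow("https://docs.python.org/3/"))   # True while the grant is live
print(policy.allow("https://evil.example.com/"))    # False, and it lands in the audit log
```

At session end you review policy.audit_log (or feed it to an auditing LLM) to spot anything that looks like a prompt-injection-driven exfiltration attempt.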
> OpenClaw has nearly half a million lines of code, 53 config files, and over 70 dependencies. This breaks the basic premise of open source security. Chromium has 35+ million lines, but you trust Google’s review processes. Most open source projects work the other way: they stay small enough that many eyes can actually review them. Nobody has reviewed OpenClaw’s 400,000 lines.
This reminds me of a very common thing posted here (and elsewhere, e.g. Twitter) to promote how good LLMs are and how they're going to take over programming: the number of lines of code they produce.
As if every competent programmer suddenly forgot the whole idea of LoC being a terrible metric for measuring productivity or (even worse) software quality. Or the idea that software is meant to be written to be readable (to water down "Programs must be written for people to read, and only incidentally for machines to execute" a bit). Or even Bill Gates' infamous "Measuring programming progress by lines of code is like measuring aircraft building progress by weight".
Even if you believe that AI will somehow take over the whole task completely, so that no human will ever need to read code again, there is still the issue that the AIs will need to be able to read that code, and AIs are much worse at that (especially with their limited context sizes) than at generating code. So LoC remains a problematic measure even if all you care about is the driest "does X do the thing I want?" aspect, ignoring other quality concerns.
> If you want to add Telegram support, don't create a PR that adds Telegram alongside WhatsApp. Instead, contribute a skill file (.claude/skills/add-telegram/SKILL.md) that teaches Claude Code how to transform a NanoClaw installation to use Telegram.
Why would you want that? You want every user to ask the AI to implement the same feature?
Why do people take this article seriously? It's just a wall of gibberish trying to make the product look more "secure" than others. It's not. It adds shallow, secure-looking random junk without tackling the core issues, which are obviously not solvable.
I have twice encountered a phone tree AI agent saying my problem could not be solved and then ending the call. One was for PayPal fraud and the other was for closing an unused bank account.
For right now my trick is to say I have a problem that is more recognizable and mundane to the AI (i.e. lie), and then when I finally get the human, just say "oh, that was a bunch of hooey, here's what I'm trying to do". For PayPal that involved asking for help with a business tax that did not exist. For my bank it involved asking to /open/ a new account. Obviously the AI wants to help me open an account, even if my intention is to close one.
That will only work for so long, but it's something.
I was blown away by OpenClaw until I saw the bill. Ultimately, I think of these ecosystems as personal enhancements, and AI costs need to come down dramatically for real problems. Worse, however, is the security theater. I would not want to be the operator of any business built on front-line LLM usage via a yolo'd agent framework. I'm very happy to use these for siloed components that are well isolated and have reasonable QA processes (and those can even include agents, since now we literally have no excuse not to have amazing test coverage).
Their niche is going to be back-office support, but even that creates risk boundaries that can be insurmountable. A friend of mine had an agent do sudo rm -rf ... wtf.
My view is that I want to launch an agent-based service, but I'm building a statically typed ecosystem to do so, with bounds and extreme limits.
I tried NanoClaw and love the skill (and container by default) model. But having skills generate new code in my personalized fork feels off to me… I think it’s because eventually the “few thousand auditable lines” idea vanishes with enough skills added?
Could skill contributions collapse into only markdown and MCP calls? New features would still be just skills; they’d bring in versioned, open-source MCP servers running inside the same container sandbox. I haven’t tried this (yet) but I think this could keep the flexibility while minimizing skill code stepping on each other.
Your assistant can literally be told what to do and how to hide it from you. I know security is not a word in slopware, but as a high-level refresher: the web is where the threats are.
Wouldn't you get >50% of the usefulness and 0% of the risk if you add read+draft permissions for the email connection, through a proxy or OAuth permissions? Then your claw can draft replies and you have to manually review+send. It's not a perfect PA that way, but it could still be better than doing everything yourself, for the vast majority of people who don't have a PA anyway?
It feels like, just like SWEs do with AI, we should treat the claw as an enthusiastic junior: let it do stuff, but always review before you merge (or in this case: send).
That's a decent practice from the lens of reducing blast radius. It becomes harder when you start thinking about unattended systems that don't have you in the loop.
One problem I'm finding with discussions about automation or semi-automation in this space is that there are many different use cases for many different people: a software developer deploying an agent in production vs. an economist using Claude vs. a scientist throwing a swarm at common ML exploratory tasks.
Many of the recommendations will feel like too much or too little complexity for what people need, and the fundamentals get lost: design intent, control, the ability to collaborate if necessary, and fast iteration thanks to an easy feedback loop.
AI evals, sandboxing, and observability seem like three key pillars for maintaining intent in automation. But how to help these different audiences be safely productive while staying fast, and speak the same language when they need to build products together, is what mostly occupies my thoughts (and practical tests).
No, but Podman is. The recent escapes at the actual container level have been pretty edge case. It's been some years since a general container escape has been found. Docker's CVE-2025-9074 was totally unnecessary and due to Docker being Docker.
In the sense that nothing is truly a "proper" hard security barrier outside of maybe airgapping, sure. But containerization is typically a trusted security measure.
Like if you had told pg to his face in (pre AI) office hours “I’m producing a thousand lines of code an hour”, I’m pretty sure he’d have laughed and pointed out how pointless that metric was?
I want to try one as a bit of a personal coach: remind me to do things and check in on goals. The memory / schedule / chat thing is enough, and it won't need email or anything more dangerous.
I only use my own "agent" ("my" because I program it myself, since my needs are different from yours) to retrieve information about the audio I upload to it (from video calls and audio recordings). No other use cases for me.
> 1. Don't let it send emails from your personal account, only let it draft email and share the link with you.
Right now there's no way to get fine-grained draft/read-only perms from most email providers or email clients. If it can read your email, it can send email.
> 3. Don't let your agents see any secret. Swap the placeholder secrets at your gateway and put human in the loop for secrets you care about.
Harder than you might think. OpenClaw found my browser cookies. (I ran it in a VM, so no serious cookies were found, but still.)
Somehow this narrative has taken hold at multiple levels of management, especially amongst non-technical management, that "typing" was the bottleneck of software engineering. Reality, however, is more complex.
The act of "typing" code was technically mixed in with researching solutions, which means that code often took a different shape or design based on the outcome of that research. However, this nuance has typically been dismissed as faff, with the outcome that management thinks producing X lines of code can be done "quickly", and people who disagree with such statements are heretics who should be burned at the stake.
This is why, in my personal opinion, AI makes me only about 20% more productive: I often find myself disagreeing with the solution it came up with, and instead of steering it toward the outcome I want, I just end up rewriting the code myself. On the other hand, for prototypes where I don't care about understanding the code at all, it is a much bigger time saver.
I could choose not to care about the code at all, and while that is acceptable to management, being responsible for the outcomes but not for the code seems like the same shit as being given responsibility without autonomy, which is not something I can agree with.
As lines of code become executable line noise, I swear that we need better approaches to developing software - either enforce better test coverage across the board, develop and use languages where it’s exceedingly hard to end up with improper states, or sandbox the frick out of runtimes and permissions.
Just as an example, I should be able to give each program an allowlist of network endpoints it's allowed to use for inbound and outbound traffic, sandbox it to specific directories, and control its resource access, all EASILY. Docker at least gets some of those right, but most desktop OSes feel like the Wild West even compared to the permissions model of iOS.
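A rough sketch of what's achievable today with Docker alone, assuming placeholder paths and image names. Note that this only gets you all-or-nothing networking rather than a true per-endpoint allowlist; for that you'd still need an egress proxy like the one sketched earlier in the thread.

```python
# Hedged sketch: wrap each tool run in a locked-down container. The docker flags used
# here (--network, --read-only, --cap-drop, -v host:container) are standard; the image
# name and paths are placeholders.
import subprocess

def run_sandboxed(cmd: list[str], workdir: str) -> int:
    docker_cmd = [
        "docker", "run", "--rm",
        "--network", "none",          # no inbound or outbound traffic at all
        "--read-only",                # root filesystem is read-only
        "--cap-drop", "ALL",          # drop every Linux capability
        "-v", f"{workdir}:/work",     # the only writable path the program sees
        "-w", "/work",
        "python:3.12-slim",           # placeholder image
        *cmd,
    ]
    return subprocess.run(docker_cmd).returncode

# Example: the program can touch /work and nothing else, and has no network.
run_sandboxed(["python", "script.py"], "/home/me/agent-workspace")
```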
I think the best place to put barriers is at the MCP / tool layer. The email inbox MCP should have guardrails to prevent damage. Those guardrails could be fine-grained permissions, but could also be an adversarial model dedicated to preventing misuse.
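A minimal sketch of what that could look like, using a hypothetical tool wrapper rather than any real MCP server API: each side-effecting call passes a fine-grained permission check and an audit hook (which could be a second, adversarial model) before it executes.

```python
# Hypothetical guarded tool wrapper; names and the audit logic are illustrative only.
from typing import Callable

class GuardedTool:
    def __init__(self, name: str, fn: Callable, allowed_actions: set[str],
                 audit: Callable[[str, dict], bool]):
        self.name = name
        self.fn = fn
        self.allowed_actions = allowed_actions   # fine-grained permissions
        self.audit = audit                       # e.g. a second model reviewing the call

    def call(self, action: str, **kwargs):
        if action not in self.allowed_actions:
            raise PermissionError(f"{self.name}: action '{action}' is not permitted")
        if not self.audit(action, kwargs):
            raise PermissionError(f"{self.name}: audit rejected '{action}'")
        return self.fn(action, **kwargs)

# Hypothetical email tool: reading and drafting are allowed, sending is not.
def email_backend(action, **kwargs):
    return {"action": action, **kwargs}

def llm_audit(action, args) -> bool:
    # Placeholder for an adversarial reviewer model; here it just blocks
    # anything that looks like bulk forwarding.
    return "forward_all" not in args.get("body", "")

inbox = GuardedTool("email", email_backend, {"read", "draft"}, llm_audit)
inbox.call("draft", to="alice@example.com", body="Re: meeting notes")   # allowed
# inbox.call("send", to=..., body=...)  -> PermissionError: 'send' is not permitted
```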
The lines of code thing isn't because we think it's a good metric, but because we have literally no good metric and we're trying to communicate a velocity difference. If you invent a new metric that doesn't have LoC's problems while being as easy to use, you'll be a household name in software engineering in short order.
Also, AI is better at reading code than writing it, but the overhead to FIND code is real.
I'd like to try a pattern where agents only have access to read-only tools. They can read your emails, read your notes, read your texts, maybe even browse the internet with only GET requests...
But any action with side effects ends up in a Tasks list, completely isolated. The agent can't send an email; it doesn't have such a tool. But it can prepare a reply and put it in the Tasks list. Then I proofread and approve/send myself.
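A tiny sketch of that pattern, where all names and the file format are illustrative assumptions: the agent's only "write" tool appends proposed actions to a queue, and only a human ever executes them.

```python
# Hypothetical task queue: the agent proposes actions; a human reviews and dispatches them.
import json, time
from pathlib import Path

TASKS = Path("pending_tasks.jsonl")   # reviewed and dispatched by a human, never the agent

def propose_task(kind: str, payload: dict) -> None:
    """The only side-effect-adjacent tool the agent gets: it queues an action, never performs it."""
    record = {"ts": time.time(), "kind": kind, "payload": payload, "status": "pending"}
    with TASKS.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Agent drafts a reply; nothing is sent until I review the queue and send it myself.
propose_task("email_reply", {
    "to": "bob@example.com",
    "subject": "Re: invoice",
    "body": "Thanks, I'll review this by Friday.",
})
```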
>Why would you want that? You want every user asks the AI to implement the same feature?
Yes. It's actually an amazing paradigm shift in thinking. Not everyone needs Telegram, so the folks who want it can have the AI create it locally for themselves.
Content strongly advocates code transparency and auditability as essential to informed human judgment about security, framing reviewability as a fundamental right.
FW Ratio: 67%
Observable Facts
The page states 'You can read NanoClaw's source code and full security model; they're short enough to read in an afternoon.'
Content advocates 'we stay small enough that many eyes can actually review them' as a core security principle.
MIT-licensed source code is published on GitHub for public access.
The content criticizes OpenClaw (400,000 lines) with 'Nobody has reviewed OpenClaw's 400,000 lines,' implying transparency is a human right.
Inferences
The architectural constraint to maintain reviewable code is explicitly framed as enabling informed judgment and freedom of information.
Open-source publication with code simplicity directly supports Article 19 rights to seek, receive, and impart information about security mechanisms.
Content explicitly champions privacy as a design principle, describing how isolation and separation prevent unauthorized information access between agents and users.
FW Ratio: 67%
Observable Facts
The page states 'Each agent gets its own container, filesystem, and Claude session history.'
Content specifies 'Your personal assistant can't see your work agent's data because they run in completely separate sandboxes.'
Sensitive paths (.ssh, .gnupg, .aws, .env, private_key, credentials) are 'blocked by default.'
The mount allowlist is stored 'outside the project directory, so a compromised agent can't modify its own permissions.'
Inferences
Container isolation and filesystem separation function as technical controls enforcing privacy by design, preventing data leakage.
The architecture treats privacy protection as a structural requirement rather than an application-level courtesy, directly implementing Article 12 guarantees.
Content emphasizes responsibility and duties through a security architecture that accounts for community-level threats (prompt injection from group members).
FW Ratio: 60%
Observable Facts
The page states 'Anyone in a group could send a prompt injection, and the security model accounts for that.'
The architecture explicitly designates 'Non-main groups are untrusted by default' preventing cross-group communication.
The design philosophy 'Design for distrust' emphasizes responsibility: 'If a hallucination or a misbehaving agent can cause a security issue, then the security model is broken.'
Inferences
The security model operationalizes community duties by assuming potential malice and containing its effects through isolation.
Responsibility to the community is reflected in blast-radius containment and in threat modeling that accounts for adversarial community members.
Content discusses work customization through a modular extension model.
FW Ratio: 50%
Observable Facts
The page states 'New functionality comes through skills: instructions with a full working reference implementation that a coding agent merges into your codebase.'
The model emphasizes 'You only add the integrations you need.'
Inferences
Modular architecture enables work customization while preserving auditability, supporting different organizational work requirements.
The skills model allows organizations to adapt tools to their specific work contexts without compromising security.
Content explicitly advocates reversing the presumption of innocence: agents are presumed guilty (malicious or misbehaving) rather than innocent until proven otherwise.
FW Ratio: 60%
Observable Facts
The opening states agents 'should be treated as untrusted and potentially malicious.'
The core principle assumes 'agents will misbehave' as a foundational design assumption.
Multiple section headers ('Don't trust the process', 'Don't trust other agents', 'Don't trust what you can't read') reinforce presumption of agent guilt.
Inferences
The entire security model inverts Article 11 by treating agents as presumptively guilty and requiring proof of safety rather than presuming benign intent.
While technically justified for AI systems, this philosophical stance represents a fundamental departure from human rights jurisprudence on presumption of innocence.
Repeated phrase 'don't trust' across section headers and throughout text. Framing agents as 'untrusted and potentially malicious' creates negative emotional priming without neutral framing.
false dilemma
'The right approach isn't better permission checks or smarter allowlists. It's architecture that assumes agents will misbehave' — presents two options and asserts only one is correct without discussing hybrid or complementary approaches.
causal oversimplification
'Complexity is where vulnerabilities hide' — stated as fact without acknowledging that simple code can have subtle flaws or that some features require necessary complexity.
repetition
'Don't trust' appears as four section headers: 'Don't trust the process', 'Don't trust other agents', 'Don't trust what you can't read', and repeated in opening paragraph.