A technical blog post from Mendral describing their LLM-powered CI debugging system architecture, focused on SQL querying, data compression, and operational patterns. While strong on technical education and public information sharing, the article conspicuously omits any discussion of the privacy implications of large-scale metadata collection (5.31 TiB uncompressed) or the labor considerations of workplace automation; the gaps are ones of silence rather than explicit opposition to rights.
But does it work? I’ve used LLMs for log analysis and they have been prone to hallucinate causes: depending on the logs, the distance between cause and effect can be larger than the context window; when things go badly wrong we’re usually dealing with multiple failures at once; and plenty of benign issues throw scary-sounding errors.
I believe this method works well because it turns a long-context problem (hard for LLMs) into a coding and reasoning problem (much better!). You’re leveraging the last 18 months of coding RL by changing your scaffold.
We have an ongoing effort to parse the logs of our autotests to speed up debugging. It is very hard to do, mainly because there is a metric ton of false positives or plain old noise even in the info logs. Tracing the culprit can also be tricky, since an error in container A can be caused by the actual failure in container B, which may in turn depend on something else entirely, including hardware problems.
Basically, any surefire way to train an LLM to parse logs and detect real issues depends almost entirely on the readability and precision of the logging. And if the logging is good enough, then humans can debug faster and more reliably too :) . Unfortunately, the people reading logs and the people writing them barely intersect in practice, so the issue remains.
SQL is the best exploratory interface for LLMs. But most of the observability data we have today (metrics, logs, traces) is hidden behind layers of semantics and custom syntax that make it hard for an agent to translate an explore or debug intent into the actual query language.
Large-scale data like metrics, logs, and traces is optimised for specific storage and access patterns, and OLAP/SQL systems may not be the optimal way to store or retrieve it. This is one of the reasons I’ve been working on a Text2SQL / Intent2SQL engine for observability data, to let an agent explore the schema, semantics, and syntax of any metrics or logs data. It is open-sourced as the Codd Text2SQL engine - https://github.com/sathish316/codd_query_engine/
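For intuition, a minimal sketch of the general idea (not Codd's actual API): expose the schema and column semantics to the model and ask it for plain SQL matching the debug intent. The table, columns, and ask_llm() helper are placeholders.

```python
# Placeholder schema description handed to the model along with each intent.
SCHEMA = """
Table: http_request_logs
  timestamp   DateTime  -- when the request was handled
  service     String    -- emitting service name
  status_code UInt16    -- HTTP status
  latency_ms  Float64   -- request latency in milliseconds
  message     String    -- raw log line
"""

def ask_llm(prompt: str) -> str:
    # Placeholder: wire this up to whatever model provider you use.
    raise NotImplementedError

def text2sql(intent: str) -> str:
    prompt = (
        "You translate debugging intents into SQL.\n"
        f"Schema:\n{SCHEMA}\n"
        f"Intent: {intent}\n"
        "Return a single SQL query and nothing else."
    )
    return ask_llm(prompt)

# text2sql("error rate per service over the last hour, worst first")
```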
The engine is far from done, currently works for Prometheus, Loki, and Splunk in a few scenarios, and is open to OSS contributions. You can find it in action, used by Claude Code to debug with metrics and logs queries:
That's contrary to my experience. Logs contain a lot of noise and unnecessary information, especially from Java, so it's best to prepare them before feeding them to an LLM. Not to mention the wasted tokens...
Lots of logs contain uninteresting information, so they easily pollute the context. Instead, my approach uses a TF-IDF classifier plus a BERT model on GPU to further classify log lines and reduce the number that then gets fed to an LLM. The total size of the models is 50 MB, and the classifier is written in Rust, which allows it to achieve >1M lines/sec of classification. And it finds interesting cases that simple grepping would miss.
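A much-simplified Python analogue of the idea (the version above is a Rust TF-IDF classifier plus a BERT model; the training lines, labels, and threshold here are made up): train a small classifier on lines labelled interesting vs. noise, then keep only the lines it flags before anything reaches the LLM.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical labelled set: 1 = interesting, 0 = noise.
train_lines = [
    "GC pause completed in 12ms",
    "Connection pool exhausted, rejecting request",
    "Scheduled heartbeat sent",
    "OutOfMemoryError: Java heap space",
]
train_labels = [0, 1, 0, 1]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_lines, train_labels)

def prefilter(lines, threshold=0.5):
    """Return only the lines worth showing to the LLM."""
    scores = clf.predict_proba(lines)[:, 1]
    return [line for line, score in zip(lines, scores) if score >= threshold]
```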
"Logs" is doing some heavy lifting here. There's a very non-trivial step in deciding that a particular subset and schema of log messages deserves to be in its own columnar data table. It's a big optimization decision that adds complexity to your logging stack. For a narrow SaaS product that is probably a no-brainer.
I would like to see this approach compared to a more minimal approach with say, VictoriaLogs where the LLM is taught to use LogsQL, but overall it's a more "out of the box" architecture.
My first take is that you could have 10 TB of logs with just a few unique lines that are actually interesting. So I am not thinking "Wow, what impressive big data you have there" but rather "if you have an accuracy of 1 - 10^-6 you are still overwhelmed with false positives" or "I hope your daddy is paying for your tokens"
Interesting article, but there's no rate of investigation success quoted. The engineering is interesting, but it's hard to know if there was any point without some kind of measure of its usefulness.
The article doesn't mention which LLM was used or the total cost. Because if they used ChatGPT or the like, the token cost alone should be very expensive, right?
Forgive me if this is tangential to the debate, but I am trying to understand Mendral's value proposition. Is it that you save users time in setting up observability for CI? Otherwise could you not simply use gh to fetch the logs, their observability system's API or MCP, and cross check both against the code? Or is there a machine learning system that analyzes these inputs beyond merely retrieving context for the LLM? Good luck!
That post reads like fully LLM-generated. It's basically boasting a list of numbers that are supposed to sound impressive. If there's a coherent story, it's well hidden.
SQL has always been my favorite "loaded gun" API. If you have a control plane of RLS + role-based auth and you've got a data dictionary, it is trivial to get to a data-explorer chat interaction with an LLM doing the heavy lifting.
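A minimal sketch of that control plane, assuming Postgres and a psycopg2-style connection (the table, role, and setting names are made up): the LLM only ever executes its SQL as a read-only role, and row-level security decides which rows that role can see.

```python
# Applied once by a migration: a read-only role plus an RLS policy keyed to a
# per-session setting. All names here are hypothetical.
GUARDRAILS = """
CREATE ROLE llm_explorer NOLOGIN;
GRANT SELECT ON build_logs TO llm_explorer;

ALTER TABLE build_logs ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON build_logs
    FOR SELECT TO llm_explorer
    USING (tenant_id = current_setting('app.tenant_id')::int);
"""

def run_llm_query(conn, tenant_id: int, sql: str):
    """Execute model-generated SQL under the restricted role for one tenant."""
    with conn.cursor() as cur:
        # The session user must be a member of llm_explorer for SET ROLE to work.
        cur.execute("SET ROLE llm_explorer")
        cur.execute("SELECT set_config('app.tenant_id', %s, false)", (str(tenant_id),))
        cur.execute(sql)  # the query can only read what the RLS policy allows
        return cur.fetchall()
```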
I used lots of Prolog rules to analyze the log files of a complicated distributed system with 20 realtime components to find problems and root causes. It worked really well. In 2008 or so.
I can't believe that LLMs are that useful.
Whenever a component changes or adds a log line, you edit one rule. With an LLM you need weeks of new logs and then weeks to retrain. And a big budget for the H100s.
This seems really weird to me. Isn't that just using LLMs in a specific way? Why come up with a new name "RLM" instead of saying "LLM"? Nothing changes about the model.
It can, like all other tasks; it's not magic, and you need to make the agent's job easier by giving it good instructions, tools, and environments. It's exactly the same thing that makes humans' lives easier too.
This post is a case study that shows one way to do this for a specific task. We found the root cause of a long-standing problem with our dev boxes this week using AI. I fed Gemini Deep Research a few logs and our tech stack, and it came back with an explanation of the underlying interactions, debugging commands, and the most likely fix. It was spot on; GDR is one of the best debugging tools for problems where you don't have full understanding.
If you are curious, and perhaps a PSA, the issue was that Docker and Tailscale were competing on IP table updates, and in rare circumstances (one dev, once every few weeks), Docker DNS would get borked. The fix is to ignore Docker managed interfaces in NetworkManager so Tailscale stops trying to do things with them.
Mendral co-founder here. We built this infra so our agent can detect CI issues like flaky tests and fix them. Observing logs is useful for detecting anomalies, but we also use them to confirm a fix after the agent opens a PR (we have long coding sessions that verify a fix and re-run the CI if needed, all in the same agent loop).
Yeah, it sounds very similar to what we went through while building this agent.
We're focused on CI logs for now because we wanted something that works really well for things like flaky tests, but planning to expand the context to infrastructure logs very soon.
LLMs are better now at pulling in context themselves (as opposed to being fed everything you can fit inside the prompt). So you can expose enough query primitives to the LLM that it's able to filter out the noise.
I don't think implementing filtering at log ingestion is the right approach, because you don't know what is noise at that stage. We spent more time thinking about the schema and indexes to make sure complex queries perform at scale.
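Not our actual schema, but a rough sketch of the shape this implies: denormalized metadata on every row, low-cardinality types and codecs so repeated columns compress well, and a sort key chosen so narrowing by repo / pipeline / run stays fast. Column names, codecs, and the example values below are illustrative only.

```python
# Illustrative ClickHouse DDL (not Mendral's real table), kept as a string so
# it can be run through any client.
CI_LOGS_DDL = """
CREATE TABLE ci_logs
(
    ts       DateTime64(3) CODEC(Delta, ZSTD(3)),
    repo     LowCardinality(String),
    pipeline LowCardinality(String),
    job      LowCardinality(String),
    run_id   UInt64,
    step     LowCardinality(String),
    level    LowCardinality(String),
    message  String CODEC(ZSTD(3))
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(ts)
ORDER BY (repo, pipeline, job, run_id, ts)
"""

# A query that narrows by the sort-key prefix, so only the granules for that
# repo/pipeline/run are touched instead of scanning everything.
NARROW_QUERY = """
SELECT ts, step, level, message
FROM ci_logs
WHERE repo = 'acme/api' AND pipeline = 'ci' AND run_id = 123456
  AND level IN ('error', 'fatal')
ORDER BY ts
LIMIT 200
"""
```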
This is an interesting approach. I definitely agree with the problem statement: if the LLM has to filter by error/fatal because of context window constraints, it will miss crucial information.
We took a different approach: we have a main agent (opus 4.6) dispatching "log research" jobs to sub agents (haiku 4.5 which is fast/cheap). The sub agent reads a whole bunch of logs and returns only the relevant parts to the parent agent.
This is exactly how coding agents (e.g. Claude Code) do it as well. Except instead of having sub agents use grep/read/tail, they use plain SQL.
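A bare-bones sketch of that dispatch pattern with the Anthropic Python SDK (the model IDs, prompts, and pre-split task list are placeholders, not our production setup): the expensive model plans and reasons over summaries, while the cheap model chews through the raw log output.

```python
import anthropic

client = anthropic.Anthropic()           # reads ANTHROPIC_API_KEY from the env
PLANNER_MODEL = "opus-class-model-id"    # placeholder model IDs, not real ones
READER_MODEL = "haiku-class-model-id"

def research_logs(task: str, raw_logs: str) -> str:
    """Sub agent: read a pile of raw log output, return only the relevant parts."""
    reply = client.messages.create(
        model=READER_MODEL,
        max_tokens=1024,
        system="Extract only the lines and patterns relevant to the task. Be terse.",
        messages=[{"role": "user", "content": f"Task: {task}\n\nLogs:\n{raw_logs}"}],
    )
    return reply.content[0].text

def investigate(failure_summary: str, tasks_with_logs: dict[str, str]) -> str:
    """Main agent: fan out focused research tasks, then reason over the summaries."""
    findings = {task: research_logs(task, logs) for task, logs in tasks_with_logs.items()}
    reply = client.messages.create(
        model=PLANNER_MODEL,
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": (
                f"CI failure: {failure_summary}\n"
                f"Sub-agent findings: {findings}\n"
                "Propose the most likely root cause and next steps."
            ),
        }],
    )
    return reply.content[0].text
```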
Agreed on SQL being the best exploratory interface for agents. I've been building Logchef[1], an open-source log viewer for ClickHouse, and found the same thing — when you give an LLM the table schema, it writes surprisingly good ClickHouse SQL. I support both a simpler DSL (LogchefQL, compiles to type-aware SQL on the backend) and raw SQL, and honestly raw SQL wins for the agent use case — more flexible, more training data in the corpus.
I took this a few steps further beyond the web UI's AI assistant. There's an MCP server[2] so any AI assistant (Claude Desktop, Cursor, etc.) can discover your log sources, introspect schemas, and query directly. And a Rust CLI[3] with syntax highlighting and `--output jsonl` for piping — which means you can write a skill[4] that teaches the agent to triage incidents by running `logchef query` and `logchef sql` in a structured investigation workflow (count → group → sample → pivot on trace_id).
The interesting bit is this ends up very similar to what OP describes — an agent that iteratively queries logs to narrow down root cause — except it's composable pieces you self-host rather than an integrated product.
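Written out as plain SQL, the count → group → sample → pivot loop from the skill looks roughly like this. Table and column names are hypothetical, and run_sql() stands in for whatever executes the query (the logchef CLI, an MCP tool, or clickhouse-client).

```python
STEPS = {
    # 1. count: how bad is it?
    "count": """
        SELECT count() FROM app_logs
        WHERE ts > now() - INTERVAL 1 HOUR AND level = 'error'
    """,
    # 2. group: which service is producing the errors?
    "group": """
        SELECT service, count() AS errors FROM app_logs
        WHERE ts > now() - INTERVAL 1 HOUR AND level = 'error'
        GROUP BY service ORDER BY errors DESC LIMIT 10
    """,
    # 3. sample: a handful of concrete lines from the worst offender
    #    ('checkout' stands in for whatever step 2 surfaced).
    "sample": """
        SELECT ts, trace_id, message FROM app_logs
        WHERE ts > now() - INTERVAL 1 HOUR AND level = 'error'
          AND service = 'checkout'
        ORDER BY ts DESC LIMIT 20
    """,
    # 4. pivot: pull everything that shares one trace_id, across services.
    "pivot": """
        SELECT ts, service, level, message FROM app_logs
        WHERE trace_id = '{trace_id}'
        ORDER BY ts
    """,
}

def triage(run_sql, trace_id: str):
    """Run the four steps in order; each one narrows what the next looks at."""
    return [run_sql(sql.format(trace_id=trace_id)) for sql in STEPS.values()]
```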
https://github.com/dx-tooling/platform-problem-monitoring-co... could have a useful approach, too: it finds patterns in log lines and gives you a summary in the sense of "these 500 lines are all technically different, but they are all saying the same thing".
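Not that project's actual algorithm, but a hedged sketch of that kind of collapsing: mask the variable parts of each line, then count how many raw lines share each resulting template.

```python
import re
from collections import Counter

# Masks are applied in order: ids and hex first, bare numbers last.
MASKS = [
    (re.compile(r"\b[0-9a-f]{8}-[0-9a-f-]{27}\b"), "<uuid>"),
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<hex>"),
    (re.compile(r"\d+"), "<num>"),
]

def template(line: str) -> str:
    for pattern, placeholder in MASKS:
        line = pattern.sub(placeholder, line)
    return line

def summarize(lines):
    """'These 500 lines are technically different but all say the same thing.'"""
    counts = Counter(template(line) for line in lines)
    return counts.most_common()

# summarize(["worker 12 timed out after 300s", "worker 7 timed out after 301s"])
# -> [("worker <num> timed out after <num>s", 2)]
```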
I think there are too many expectations around what logging is for, and getting everyone on the same page is difficult.
Meanwhile stats have fewer expectations, and moving signal out of the logs into stats is a much much smaller battle to win. It can’t tell you everything, but what it can tell you is easier to make unambiguous.
Over time I got people to stop pulling up Splunk as an automatic reflex and start pulling up Grafana instead for triage.
Yeah this is my experience with logs data. You only actually care about O(10) lines per query, usually related by some correlation ID. Or, instead of searching you're summarizing by counting things. In that case, actually counting is important ;).
In this piece though--and maybe I need to read it again--I was under the impression that the LLM's "interface" to the logs data is queries against clickhouse. So long as the queries return sensibly limited results, and it doesn't go wild with the queries, that could address both concerns?
I agree with your statement and explained in a few other comments how we're doing this.
tldr:
- Something happens that needs investigating
- Main (Opus) agent makes focused plan and spawns sub agents (Haiku)
- They use ClickHouse queries to grab only relevant pieces of logs and return summaries/patterns
This is what you would do manually: you're not going to read through 10 TB of logs when something happens; you make a plan, open a few tabs and start doing narrow, focused searches.
There is a cost associated with each investigation (that the Mendral agent is doing), and we spend time tuning the orchestration between agents. Yes, it's expensive, but we're making money on top of what it costs us. So far we've been able to bring the cost down while increasing the relevance of each root cause analysis.
We're writing another post about that specifically; we'll publish it sometime next week.
Mendral is replacing a human Platform Engineer. It debugs the CI logs, looks at the associated commit, looks at the implementation of the tests, etc. It then proposes fixes and takes care of opening a PR.
We wanted to keep the post engineering-focused, but we have 18 companies in production today (we wrote about PostHog in the blog). At some point we should post some case studies. The metric we track for usefulness is our monthly revenue :)
Content provides extensive technical education accessible to anyone: denormalization rationale, compression algorithms, query optimization patterns, rate-limiting strategies, and durable execution design. Detailed explanations enable readers to learn and apply these principles.
FW Ratio: 50%
Observable Facts
Article contains pedagogical content: explanations of why denormalization works (301:1 compression on repeated columns), detailed performance metrics, and architectural trade-offs.
Technical knowledge is freely available with no educational gatekeeping, paywalls, or subscription requirements.
Inferences
Free sharing of detailed technical patterns and system design principles constitutes contribution to technical education aligned with Article 26.
Clear pedagogical structure (problem → solution → metrics → results) indicates intent to enable reader learning and knowledge development.
Authors openly share technical and intellectual culture (engineering practices, architectural approaches, decade of CI systems experience). Work is attributed by name to Andrea Luzzardi and Mendral.
FW Ratio: 50%
Observable Facts
Article explicitly attributes authorship: 'Andrea Luzzardi·Feb 11, 2026' and credits company 'Mendral'.
Content demonstrates active participation in technical culture by sharing engineering practices and system design insights with the broader community.
Inferences
Public sharing of technical culture and intellectual work with author attribution aligns with Article 27 principles of cultural participation.
Open contribution of engineering knowledge to collective technical culture represents engagement in cultural life.
Content celebrates automation of engineering investigation work ('an always-on AI DevOps engineer that diagnoses CI failures, catches flaky tests, and fixes them') without discussing labor implications, worker displacement, fair wages, or job impact of workplace automation.
FW Ratio: 50%
Observable Facts
Article frames LLM-driven automation as replacing human investigative work: 'automating it' and positioning the agent as replacing manual engineer debugging tasks.
No discussion of impacts on affected workers, retraining, labor standards, or worker compensation appears in the article.
Inferences
Omission of labor considerations in a context where automation displaces human investigative work suggests indifference to Article 23 protections.
Enthusiastic framing of automation as solution without addressing labor rights represents a gap in rights awareness.
Content celebrates massive data collection (5.31 TiB uncompressed, 48 metadata columns per log line, 300M+ daily lines) without discussing privacy governance, consent, or privacy safeguards. The framing emphasizes technical achievement while omitting privacy rights considerations.
FW Ratio: 50%
Observable Facts
Article states: 'Up to 300 million log lines flow through on a busy day' with '48 columns of metadata' per line, all stored long-term.
No mention of privacy policies, data minimization, user consent, or privacy protection mechanisms appears anywhere in the article.
Inferences
Celebratory tone about massive metadata aggregation without privacy discussion suggests indifference to privacy rights in system design.
Conspicuous omission of data governance in a context of persistent, enriched log retention represents a gap relative to Article 12 protections.
Framing investigation as mystery-solving adventure: 'our agent traced a flaky test...following a trail from job metadata to raw log output...the agent follows a trail, query after query, as it narrows in on a root cause.' Appeals to reader's sense of detective work and discovery.
appeal to authority
Authors invoke decade of experience: 'We spent a decade building and scaling CI systems at Docker and Dagger' establishes credibility through tenure at well-known infrastructure companies.