+0.21 Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code (github.com) · S: +0.16
80 points by mksglu 5 days ago | 22 comments on HN | Moderate positive Mixed · v3.7 · 2026-02-26 04:43:55
Summary Education & Knowledge Access Advocates
This GitHub repository provides a public tool addressing context management in AI interactions, functioning as an educational and technical resource supporting knowledge dissemination. The content promotes equitable access to technical knowledge through public repository structure, contributing to Article 19 (freedom of information), Article 20 (freedom of association), Article 26 (right to education), and Article 27 (participation in scientific progress). The primarily positive orientation is tempered by GitHub's platform control affecting absolute intellectual property protections.
Article Heatmap
Preamble: ND — Preamble
Article 1: ND — Freedom, Equality, Brotherhood
Article 2: ND — Non-Discrimination
Article 3: ND — Life, Liberty, Security
Article 4: ND — No Slavery
Article 5: ND — No Torture
Article 6: ND — Legal Personhood
Article 7: ND — Equality Before Law
Article 8: ND — Right to Remedy
Article 9: ND — No Arbitrary Detention
Article 10: ND — Fair Hearing
Article 11: ND — Presumption of Innocence
Article 12: ND — Privacy
Article 13: ND — Freedom of Movement
Article 14: ND — Asylum
Article 15: ND — Nationality
Article 16: ND — Marriage & Family
Article 17: ND — Property
Article 18: ND — Freedom of Thought
Article 19: +0.41 — Freedom of Expression
Article 20: +0.14 — Assembly & Association
Article 21: ND — Political Participation
Article 22: ND — Social Security
Article 23: ND — Work & Equal Pay
Article 24: ND — Rest & Leisure
Article 25: ND — Standard of Living
Article 26: +0.35 — Education
Article 27: +0.39 — Cultural Participation
Article 28: ND — Social & International Order
Article 29: ND — Duties to Community
Article 30: ND — No Destruction of Rights
Aggregates
Editorial Mean +0.21 Structural Mean +0.16
Weighted Mean +0.32 Unweighted Mean +0.32
Max +0.41 Article 19 Min +0.14 Article 20
Signal 4 No Data 27
Volatility 0.11 (Medium)
Negative 0 Channels E: 0.6 S: 0.4
SETL +0.08 Editorial-dominant
FW Ratio 65% 28 facts · 15 inferences
Evidence 23% coverage
12M 19 ND
Theme Radar
Foundation: 0.00 (0 articles) · Security: 0.00 (0 articles) · Legal: 0.00 (0 articles) · Privacy & Movement: 0.00 (0 articles) · Personal: 0.00 (0 articles) · Expression: 0.28 (2 articles) · Economic & Social: 0.00 (0 articles) · Cultural: 0.37 (2 articles) · Order & Duties: 0.00 (0 articles)
HN Discussion 8 top-level · 9 replies
handfuloflight 2026-02-25 06:52 UTC link
One moment: you're speaking about context but talking in kilobytes. Can you confirm the token savings data?

And when you say it only returns summaries, does this mean there are LLM calls happening in the sandbox?

sim04ful 2026-02-25 06:55 UTC link
Looks pretty interesting. How could I use this on other MCP clients, e.g. OpenCode?
vicchenai 2026-02-25 07:20 UTC link
The BM25+FTS5 approach without LLM calls is the right call - deterministic, no added latency, no extra token spend on compression itself.

The tradeoff I want to understand better: how does it handle cases where the relevant signal is in the "low-ranked" 310 KB, but you just haven't formed the query that would surface it yet? The compression is necessarily lossy - is there a raw mode fallback for when the summarized context produces unexpected downstream results?

Also curious about the token count methodology - are you measuring Claude's tokenizer specifically, or a proxy?

rcarmo 2026-02-25 08:12 UTC link
Nice trick. I’m going to see how I can apply it to tool calls in pi.dev as well
robbomacrae 2026-02-25 08:33 UTC link
Really cool. A tangential task that seems to be coming up more and more is masking sensitive data in these calls for security and privacy. Is that something you considered as a feature?
wobblywobbegong 2026-02-25 11:24 UTC link
Interesting approach. I tried the Hacker News example from the docs, but its tools don't seem to trigger reliably. Any suggestions?

> Fetch the Hacker News front page, extract all posts with titles, scores, and domains. Group by domain. Then run /context-mode stats.

* Claude used regular fetch *

> why didn't you use the context-mode fetch?

● Fair point. Two honest reasons:

  1. First request: The context-mode tools weren't loaded yet when I called WebFetch. I should have used ToolSearch to load
  fetch_and_index first, then used it — but I defaulted to WebFetch out of habit.
  2. Second request (the interrupted one): The tools were already loaded at that point. I had no excuse. I should have done:
  fetch_and_index(url: "https://news.ycombinator.com", source: "HN")
  → search(queries: ["post title score domain"], source: "HN")
  → execute(group by domain, print summary)
  Instead I reached for WebFetch again — that's the anti-pattern the plugin exists to prevent.

  The context-mode skill instructions explicitly list this in the decision tree:

  Fetching web documentation or HTML page? → Use fetch_and_index → search

  I ignored it. The WebFetch output (the full post table) went straight into context when it didn't need to. I'll use
  fetch_and_index correctly if you want to retry.
gavinray 2026-02-25 12:01 UTC link
I'm not sure I understand how it coexists with existing installed MCP servers.

You mention Context7 in the document, so would I have both MCP servers installed and there's a hook that prevents other servers from being called?

i3oi3 2026-02-25 13:42 UTC link
Interesting approach. I just finished some work for a similar task in a different domain.

One thing that surprised me: tantivy's BM25 search is faster, more expressive, and more scalable than SQLite. If you're just building a local search (or want to optimize for local FTS), I would strongly recommend looking into tantivy.

If you have the resources, it would be very interesting to throw some models (especially smart-but-context-constrained cheaper ones) at some of the benchmark programming problems and see if this approach shows an effective improvement.

mksglu 2026-02-25 06:55 UTC link
Hey! Thank you for your comment! There are test examples in the README. Could you please try them? Your feedback is valuable.
mksglu 2026-02-25 06:57 UTC link
Hey! Thank you for your comment! You can actually use an MCP on this basis, but I haven't tested it yet. I'll look into it as soon as possible. Your feedback is valuable.
mksglu 2026-02-25 07:08 UTC link
For your second question: No LLM calls. Context Mode uses algorithmic processing — FTS5 indexing with BM25 ranking and Porter stemming. Raw output gets chunked and indexed in a SQLite database inside the sandbox, and only the relevant snippets matching your intent are returned to context. It's purely deterministic text processing, no model inference involved.
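The pipeline described here (FTS5 indexing with Porter stemming, queried with BM25 ranking) can be sketched with Python's built-in sqlite3 module; the table name, columns, and sample rows below are illustrative, not the plugin's actual schema:

```python
import sqlite3

# Illustrative sketch: an in-memory FTS5 table with the Porter stemmer.
# Space-separated MATCH terms are ANDed; bm25() gives a rank (lower = better).
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE VIRTUAL TABLE chunks USING fts5(source, body, tokenize='porter')"
)
db.executemany(
    "INSERT INTO chunks VALUES (?, ?)",
    [
        ("HN", "post titles and scores grouped by domain"),
        ("HN", "raw html of the front page markup"),
        ("docs", "indexing large tool output into sqlite"),
    ],
)
# Porter stemming lets 'title' match 'titles' and 'score' match 'scores'.
rows = db.execute(
    "SELECT source, body FROM chunks "
    "WHERE chunks MATCH 'title score domain' ORDER BY bm25(chunks)"
).fetchall()
```

Only the chunk containing all three (stemmed) query terms comes back; the other two never enter the result set, which is the whole point of returning snippets instead of raw output.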
mksglu 2026-02-25 07:25 UTC link
Great questions.

--

On lossy compression and the "unsurfaced signal" problem:

Nothing is thrown away. The full output is indexed into a persistent SQLite FTS5 store — the 310 KB stays in the knowledge base, only the search results enter context. If the first query misses something, you (or the model) can call search(queries: ["different angle", "another term"]) as many times as needed against the same indexed data. The vocabulary of distinctive terms is returned with every intent-search result specifically to help form better follow-up queries.

The fallback chain: if intent-scoped search returns nothing, it splits the intent into individual words and ranks by match count. If that still misses, batch_execute has a three-tier fallback — source-scoped search → boosted search with section titles → global search across all indexed content.
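The first fallback tier described above (split the intent into individual words, rank chunks by match count) can be sketched as follows; the function name and scoring details are illustrative guesses, not the plugin's implementation:

```python
# Hypothetical sketch of the word-splitting fallback: when the full intent
# phrase matches nothing, rank chunks by how many intent words they contain.
def fallback_rank(intent: str, chunks: list[str]) -> list[str]:
    words = set(intent.lower().split())
    scored = [
        (sum(1 for w in words if w in chunk.lower()), chunk)
        for chunk in chunks
    ]
    # Keep only chunks with at least one matching word, best matches first.
    scored = [(n, c) for n, c in scored if n > 0]
    scored.sort(key=lambda t: -t[0])
    return [c for _, c in scored]

chunks = [
    "error handling in the fetch pipeline",
    "domain grouping of front-page posts",
    "post scores per domain over time",
]
ranked = fallback_rank("score domain", chunks)
```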

There's no explicit "raw mode" toggle, but if you omit the intent parameter, execute returns the full stdout directly (smart-truncated at 60% head / 40% tail if it exceeds the buffer). So the escape hatch is: don't pass intent, get raw output.
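A minimal sketch of that 60% head / 40% tail smart truncation, assuming a character-based limit (the plugin's actual buffer units and marker text may differ):

```python
# Illustrative truncation: keep 60% of the budget from the head of the
# output and 40% from the tail, with a marker in between.
def smart_truncate(text: str, limit: int) -> str:
    if len(text) <= limit:
        return text
    head = int(limit * 0.6)
    tail = limit - head
    return text[:head] + "\n…[truncated]…\n" + text[-tail:]

out = smart_truncate("A" * 500 + "Z" * 500, limit=100)
```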

On token counting:

It's a bytes/4 estimate using Buffer.byteLength() (UTF-8), not an actual tokenizer. Marked as "estimated (~)" in stats output. It's a rough proxy — Claude's tokenizer would give slightly different numbers — but directionally accurate for measuring relative savings. The percentage reduction (e.g., "98%") is measured in bytes, not tokens, comparing raw output size vs. what actually enters the conversation context.
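The estimate described above amounts to UTF-8 byte length divided by four; in Python terms (mirroring Node's `Buffer.byteLength(s) / 4`):

```python
# The bytes/4 heuristic: count UTF-8 bytes, not characters, so multi-byte
# characters weigh more, roughly as a real tokenizer would.
def estimate_tokens(text: str) -> int:
    return len(text.encode("utf-8")) // 4

approx = estimate_tokens("Hello, world")  # 12 bytes → ~3 tokens
```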

mksglu 2026-02-25 08:19 UTC link
That means a lot, thank you! Would love to hear your feedback once you try it — and an upvote would be much appreciated if you find it useful.
mksglu 2026-02-25 08:55 UTC link
Good question.

The SQLite database is ephemeral — stored in the OS temp directory (/tmp/context-mode-{pid}.db) and scoped to the session process. Nothing persists after the session ends. For sensitive data masking specifically: right now the raw data never leaves the sandbox (it stays in the subprocess or the temp SQLite store), and only stdout summaries enter the conversation. But a dedicated redaction layer (regex-based PII stripping before indexing) is an interesting idea worth exploring. Would be a clean addition to the execute pipeline.
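A hypothetical sketch of such a redaction layer, run on raw text before it reaches the index; the patterns here are illustrative only and far from exhaustive:

```python
import re

# Illustrative regex-based PII stripping (emails and US-style phone numbers),
# applied before indexing so redacted text is all that gets stored.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

clean = redact("Contact alice@example.com or 555-867-5309")
```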

mksglu 2026-02-25 16:15 UTC link
On Tantivy: Agree it's the better search engine, but context-mode is session-scoped — DB is a temp file that dies when the process exits. At that scale (50-200 chunks), FTS5 is zero-config, single-file, <1ms startup, and good enough. If we ever add persistent cross-session indexing, Tantivy would be the move.

On benchmarking: This is the experiment I most want to see. The hypothesis: context-mode benefits smaller models disproportionately — a 32K model with clean context could outperform a 200K model drowning in raw tool output. Would love to see SWE-bench results with context-mode on vs. off across model tiers.

mksglu 2026-02-25 16:17 UTC link
That's a known bug in older versions — the WebFetch hook wasn't blocking reliably. Fixed in v0.7.1.

npm install -g context-mode@latest

If you're on the plugin install, re-run:

  /plugin marketplace add mksglu/claude-context-mode
  /plugin install context-mode@claude-context-mode
Then restart Claude Code. Sorry about that.
mksglu 2026-02-25 16:21 UTC link
Context Mode doesn't replace your other MCP servers — it sits alongside them. Your Context7, Playwright, GitHub servers all stay installed and work normally. The hook intercepts output-heavy tool calls (like WebFetch, curl) and redirects them through the sandbox. For example, instead of WebFetch dumping 56KB of raw HTML into context, the hook blocks it and tells the model to use fetch_and_index instead — which fetches the same URL but indexes it in a local SQLite DB, returning only a 3KB summary.

Your other MCP servers still run. Context Mode just gives the model a more context-efficient way to process their results when the output would be large.

Editorial Channel
What the content says
+0.25
Article 19 Freedom of Expression
Medium Advocacy Framing
Editorial
+0.25
SETL
+0.13

Repository title 'Stop losing context to large outputs' frames a technical problem; descriptive content focuses on tool functionality for managing information. Page is a public repository enabling users to share and discuss a software tool, inherently supporting freedom to impart information.

+0.22
Article 27 Cultural Participation
Medium Advocacy Framing
Editorial
+0.22
SETL
+0.12

Repository represents contribution to scientific and technical knowledge about information management and AI interaction patterns. Tool directly addresses sharing of practical knowledge about managing computational outputs.

+0.20
Article 26 Education
Medium Advocacy Framing
Editorial
+0.20
SETL
0.00

Repository content directly addresses a technical skill development problem (context management in AI interactions), supporting education and information access related to emerging technology literacy.

+0.15
Article 20 Assembly & Association
Medium Advocacy
Editorial
+0.15
SETL
+0.07

Repository represents voluntary association of users around a shared technical interest; code structure enables collaborative contribution model.

ND
Preamble Preamble
Medium

No editorializable content observable regarding universal human dignity or solidarity principles.

ND
Article 1 Freedom, Equality, Brotherhood
Medium

No explicit editorial statement regarding equality and dignity observable on page.

ND
Article 2 Non-Discrimination
Medium

No editorial content addressing discrimination or protected characteristics observable.

ND
Article 3 Life, Liberty, Security

No observable content addressing right to life, liberty, or personal security.

ND
Article 4 No Slavery

No observable content addressing slavery or servitude.

ND
Article 5 No Torture

No observable content addressing torture or cruel punishment.

ND
Article 6 Legal Personhood

No observable content addressing legal personhood.

ND
Article 7 Equality Before Law
Medium

No explicit editorial content on equality before law observable.

ND
Article 8 Right to Remedy

No observable content addressing legal remedies for rights violations.

ND
Article 9 No Arbitrary Detention

No observable content addressing arrest or detention.

ND
Article 10 Fair Hearing

No observable content addressing fair trial rights.

ND
Article 11 Presumption of Innocence

No observable content addressing criminal law or presumption of innocence.

ND
Article 12 Privacy
Medium

No explicit editorial content on privacy observable.

ND
Article 13 Freedom of Movement
Medium

No explicit editorial content on freedom of movement observable.

ND
Article 14 Asylum

No observable content addressing asylum or refuge rights.

ND
Article 15 Nationality

No observable content addressing nationality rights.

ND
Article 16 Marriage & Family

No observable content addressing family rights or marriage.

ND
Article 17 Property
Medium

No explicit editorial content on property rights observable.

ND
Article 18 Freedom of Thought

No observable content addressing freedom of thought, conscience, or religion.

ND
Article 21 Political Participation

No observable content addressing political participation or voting.

ND
Article 22 Social Security

No observable content addressing social security or welfare.

ND
Article 23 Work & Equal Pay

No observable content addressing labor rights or employment.

ND
Article 24 Rest & Leisure

No observable content addressing rest or leisure.

ND
Article 25 Standard of Living
Medium

No explicit editorial content on health, food, or standard of living observable.

ND
Article 28 Social & International Order

No observable content addressing social and international order.

ND
Article 29 Duties to Community

No observable content addressing duties or limitations on rights.

ND
Article 30 No Destruction of Rights

No observable content addressing prevention of rights destruction.

Structural Channel
What the site does
+0.20
Article 26 Education
Medium Advocacy Framing
Structural
+0.20
Context Modifier
+0.15
SETL
0.00

Public repository structure enables free access to educational content about tool usage and technical problem-solving; cached DCP notes accessible design features support equitable access to education-related technical content.

+0.18
Article 19 Freedom of Expression
Medium Advocacy Framing
Structural
+0.18
Context Modifier
+0.20
SETL
+0.13

Public repository structure enables free expression and information dissemination without gatekeeping; cached DCP notes GitHub's access model enables open participation without gatekeeping and that community guidelines protect user expression.

+0.15
Article 27 Cultural Participation
Medium Advocacy Framing
Structural
+0.15
Context Modifier
+0.20
SETL
+0.12

Public repository enables participation in technical knowledge community and benefits from scientific advancement; cached DCP notes access model supports knowledge dissemination and community guidelines protect creative expression.

+0.12
Article 20 Assembly & Association
Medium Advocacy
Structural
+0.12
Context Modifier
0.00
SETL
+0.07

GitHub infrastructure provides tools for assembly (collaboration, discussion) and association (contributor networks, community participation) around the repository project.

ND
Preamble Preamble
Medium

GitHub's platform infrastructure enables collaborative knowledge-sharing across borders consistent with universal principles; feature flags and analytics suggest systematic approach to platform governance.

ND
Article 1 Freedom, Equality, Brotherhood
Medium

GitHub's access controls and community guidelines establish baseline equal treatment; cached DCP notes ToS establish equal treatment without discrimination.

ND
Article 2 Non-Discrimination
Medium

GitHub's platform structure prohibits discrimination in access; cached DCP notes ToS establish non-discriminatory treatment framework.

ND
Article 3 Life, Liberty, Security

Repository interface does not address security, safety, or freedom from violence.

ND
Article 4 No Slavery

Repository infrastructure does not engage with slavery or forced labor concerns.

ND
Article 5 No Torture

Repository interface does not address torture or inhuman treatment.

ND
Article 6 Legal Personhood

Repository does not directly engage with legal personality recognition.

ND
Article 7 Equality Before Law
Medium

GitHub's community guidelines and access controls provide baseline equal legal protection within platform context; cached DCP notes community guidelines establish standards for respectful discussion.

ND
Article 8 Right to Remedy

Repository interface does not provide mechanisms for legal remedies.

ND
Article 9 No Arbitrary Detention

Repository does not engage with arrest or detention concerns.

ND
Article 10 Fair Hearing

Repository interface does not provide trial or dispute resolution mechanisms.

ND
Article 11 Presumption of Innocence

Repository does not engage with criminal justice concerns.

ND
Article 12 Privacy
Medium

GitHub provides privacy controls; cached DCP notes GitHub has standard privacy policies protecting user data and discussion content, but also flags analytics tracking creating privacy concerns regarding behavioral data and privacy of thought.

ND
Article 13 Freedom of Movement
Medium

GitHub's global platform structure enables worldwide access and participation without geographic barriers.

ND
Article 14 Asylum

Repository does not engage with asylum or refuge concerns.

ND
Article 15 Nationality

Repository does not engage with nationality or citizenship concerns.

ND
Article 16 Marriage & Family

Repository does not engage with family or marriage concerns.

ND
Article 17 Property
Medium

GitHub retains platform control; cached DCP notes user-generated content ownership is subject to platform terms, creating conditional rather than absolute intellectual property rights.

ND
Article 18 Freedom of Thought

Repository does not explicitly address thought or conscience freedom, though platform may implicitly protect it.

ND
Article 21 Political Participation

Repository does not engage with political participation or governance participation mechanisms.

ND
Article 22 Social Security

Repository does not engage with social protection or welfare concerns.

ND
Article 23 Work & Equal Pay

Repository does not directly engage with labor or employment concerns.

ND
Article 24 Rest & Leisure

Repository does not engage with rest, leisure, or working hours concerns.

ND
Article 25 Standard of Living
Medium

GitHub's accessibility features support equitable access to information and tools; cached DCP notes observable accessibility features including keyboard navigation, ARIA support, and responsive design promote equitable access.

ND
Article 28 Social & International Order

Repository operates within GitHub's order; does not directly establish new order mechanisms.

ND
Article 29 Duties to Community

Repository structure does not explicitly address duty or limitation frameworks.

ND
Article 30 No Destruction of Rights

Repository does not address preservation or protection against destruction of rights.

Supplementary Signals
How this content communicates, beyond directional lean. Learn more
Epistemic Quality
How well-sourced and evidence-based is this content?
0.48 low claims
Sources
0.4
Evidence
0.5
Uncertainty
0.3
Purpose
0.8
Propaganda Flags
No manipulative rhetoric detected
0 techniques detected
Emotional Tone
Emotional character: positive/negative, intensity, authority
measured
Valence
+0.3
Arousal
0.3
Dominance
0.4
Transparency
Does the content identify its author and disclose interests?
0.15
✗ Author
More signals: context, framing & audience
Solution Orientation
Does this content offer solutions or only describe problems?
0.69 solution oriented
Reader Agency
0.7
Stakeholder Voice
Whose perspectives are represented in this content?
0.30 1 perspective
Speaks: individuals
Temporal Framing
Is this content looking backward, at the present, or forward?
present immediate
Geographic Scope
What geographic area does this content cover?
global
Complexity
How accessible is this content to a general audience?
technical high jargon domain specific
Longitudinal · 5 evals
+1 0 −1 HN
Audit Trail 25 entries
2026-02-28 14:19 model_divergence Cross-model spread 0.27 exceeds threshold (4 models) - -
2026-02-28 14:19 eval_success Lite evaluated: Neutral (0.00) - -
2026-02-28 14:19 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral)
reasoning
PR tech content
2026-02-26 22:40 eval_success Light evaluated: Neutral (0.00) - -
2026-02-26 22:40 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral)
2026-02-26 20:07 dlq Dead-lettered after 1 attempts: Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code - -
2026-02-26 20:04 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 20:03 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 20:02 dlq Dead-lettered after 1 attempts: Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code - -
2026-02-26 20:02 eval_failure Evaluation failed: Error: Unknown model in registry: llama-4-scout-wai - -
2026-02-26 20:02 eval_failure Evaluation failed: Error: Unknown model in registry: llama-4-scout-wai - -
2026-02-26 20:02 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 17:26 dlq Dead-lettered after 1 attempts: Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code - -
2026-02-26 17:24 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 17:23 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 08:56 dlq Dead-lettered after 1 attempts: Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code - -
2026-02-26 08:56 dlq Dead-lettered after 1 attempts: Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code - -
2026-02-26 08:55 dlq Dead-lettered after 1 attempts: Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code - -
2026-02-26 08:55 dlq Dead-lettered after 1 attempts: Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code - -
2026-02-26 08:54 rate_limit OpenRouter rate limited (429) model=mistral-small-3.1 - -
2026-02-26 08:53 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 08:53 rate_limit OpenRouter rate limited (429) model=hermes-3-405b - -
2026-02-26 08:43 eval Evaluated by deepseek-v3.2: +0.27 (Mild positive) 9,730 tokens
2026-02-26 04:43 eval Evaluated by claude-haiku-4-5-20251001: +0.18 (Mild positive) 12,530 tokens +0.03
2026-02-26 03:42 eval Evaluated by claude-haiku-4-5-20251001: +0.16 (Mild positive) 13,753 tokens