LLM Skirmish is a technical benchmark site that evaluates large language models through game-based competition, publishing comprehensive results and open-source methodology. The content strongly advocates for transparent evaluation (Articles 19 and 27) and fair comparison (Articles 2 and 28), but structural choices undermine privacy protections (Article 12) and accessibility (Articles 25-26). Overall, the site demonstrates a commitment to scientific transparency and information freedom while creating accessibility barriers and privacy concerns for human users.
That calculates the Elo ratings for each AI implementation, and I feed the results to different agents so they get really creative trying to beat each other. Making rule changes to the game and seeing which scripts get weaker or stronger is also a nice way to measure balance.
I know visualization is far from the most important goal here, but it really gets me how the terrain is fairly elaborately rendered while the units are just unnamed Roombas with hard-to-read status indicators that have no intuitive meaning. Even in the match viewer I have no clue what's going on; there's no overlay or tooltip when you hover over or click units either. There is a unit list that tries (and mostly fails) to give you some information, but because units don't have names you have to hover over them in the list to highlight them on the field (the reverse does not work). Not exactly a spectator sport. Oh, but there is a way to switch from having all units in one sidebar to having one sidebar per player, as if that made a difference.
I find this pretty funny because it seems like a perfect representation of what's easy with today's tools and what isn't.
Wouldn't it be interesting if the LLMs wrote real-time RTS commands instead of code? After all, it is an RTS game.
This would add another dimension, since token quality would be one axis (RTS language: decision making) and token speed the other (RTS language: actions per minute, APM).
This reminds me of the yearly StarCraft AI competition (running since 2010), though I think it uses a special API that makes it easy for bots to access the game.
I’ve also been exploring this idea. What if you could bring your own (or pull in a 3rd party) “CPU player” into a game?
Using an LLM-friendly API with a snapshot of the game state, calculated heuristics, legal moves, and varying levels of strategy is working out nicely. They can play a web-based game via curl.
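For what it's worth, here is a rough Python sketch of what that loop can look like over HTTP. The endpoints, field names, and bearer-token auth are all invented for illustration, not taken from any particular game:

    import requests

    BASE = "https://example-game.test/api"  # hypothetical endpoint, purely illustrative

    def play_one_turn(session_id: str, api_key: str) -> dict:
        """Fetch a snapshot of the game state plus legal moves, pick one, and submit it."""
        headers = {"Authorization": f"Bearer {api_key}"}

        # 1. Snapshot: compact JSON an LLM (or any script) can reason over, e.g.
        #    {"turn": 12, "resources": 340, "units": [...], "legal_moves": [...], "heuristics": {...}}
        state = requests.get(f"{BASE}/games/{session_id}/state", headers=headers).json()

        # 2. Placeholder policy: take the first legal move. A real agent would hand
        #    `state` to an LLM and parse the move it chooses instead.
        move = state["legal_moves"][0]

        # 3. Submit the chosen move back to the server.
        result = requests.post(f"{BASE}/games/{session_id}/move", json=move, headers=headers)
        result.raise_for_status()
        return result.json()

The same two calls map directly onto curl: one GET for the snapshot, one POST for the chosen move.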
This is a really interesting direction. RTS games are a much better testbed for agent capability than most static benchmarks because they combine partial observability, long-term planning, resource management, and real-time adaptation.
It reminds me a bit of OpenAI Five — not just because it played a complex game, but because the real value wasn’t “AI plays Dota,” it was observing how coordination, strategy formation, and adaptation emerged under competitive pressure. A controlled RTS environment like this feels like a lightweight, reproducible version of that idea.
What I especially like here is that it lowers the barrier for experimentation. If researchers and hobbyists can plug different models into the same competitive sandbox, we might start seeing meaningful AI-vs-AI evaluations beyond static leaderboards. Competitive dynamics often expose weaknesses much faster than isolated benchmarks do.
Curious whether you’re planning to support self-play training loops or if the focus is primarily on inference-time agents?
I'd love to see text-only spatial reasoning. As in, the LLM is presented with some kind of textual projection of what's happening in 2D/3D space and makes decisions about what to do in that space based on it. It kind of works when a writer is describing a scene in a book, for example, but I'm not sure how that would generalize.
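One simple version of that projection is an ASCII grid plus a legend. The toy sketch below invents its own map size, symbols, and coordinate scheme purely for illustration:

    def render_grid(width: int, height: int, units: dict[tuple[int, int], str]) -> str:
        """Project a 2D game state into a plain-text grid an LLM can read.

        `units` maps (x, y) coordinates to single-character symbols,
        e.g. {(1, 2): "W"} for a worker at column 1, row 2.
        """
        rows = []
        for y in range(height):
            row = "".join(units.get((x, y), ".") for x in range(width))
            rows.append(f"{y:2d} {row}")
        header = "   " + "".join(str(x % 10) for x in range(width))
        return "\n".join([header, *rows])

    # Example: a 10x5 map with a worker (W), an enemy (E), and a resource node (*).
    print(render_grid(10, 5, {(2, 1): "W", (7, 3): "E", (5, 2): "*"}))

Whether a model can keep such a grid straight over many turns is exactly the open question.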
I foresee this laying the foundation for whole football stadia filled to the brim with people wanting to watch (and bet on!) competing teams of AI trained on military tactics and strategies!
Soon enough we shall have AI-Olympics! Imagine that, MY FELLOW OXYGEN CONVERTING HUMAN FRIEND! Tens of thousands of robots and drones, all competing against each other in stadia across the planet, at the same time!
I foresee a world wide, synchronized countdown marking the beginning of the biggest, greatest and definitively most unique, one-time-only spectacle in human history!
Took a crack at this earlier. The leaderboard is a little weird; it seems to be like 2 real people and the rest fake profiles.
Scores resetting on each new upload also encourages leaving changes unimplemented in the hopes of getting more battles over time.
(The current leader has 50 wins against 14 other opponents, for instance.) If that player added a new script, they would instantly plummet down the leaderboard, capping out at 14 wins again and landing below the 2nd-place user.
The leaderboard will quickly become "who can have a mostly competent AI and never change it" rather than who actually has the better script.
I'm doing something similar to simulate LLMs in B2B lending. It's slightly slower-paced, but the core mechanism is using just-bash to analyse business financials and make profitable loans.
I quite like the idea of LLMs writing more code up front to execute strategies.
I'm currently developing the game mechanics and Elo system. Please share anything relevant if it comes to mind.
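For the Elo side, the standard update is only a few lines. The sketch below uses the conventional K=32 and 400-point scale, which may or may not match what you or LLM Skirmish end up using:

    def expected_score(rating_a: float, rating_b: float) -> float:
        """Probability that A beats B under the standard Elo model."""
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

    def update_elo(rating_a: float, rating_b: float, score_a: float, k: float = 32) -> tuple[float, float]:
        """Return new (rating_a, rating_b) after one game.

        `score_a` is 1.0 for an A win, 0.5 for a draw, 0.0 for a loss.
        """
        ea = expected_score(rating_a, rating_b)
        delta = k * (score_a - ea)
        return rating_a + delta, rating_b - delta

    # Example: a 1500-rated bot beats a 1600-rated bot and gains roughly 20 points.
    print(update_elo(1500, 1600, 1.0))

One design choice worth settling early is whether ratings persist across script uploads; as noted above, resetting them rewards never touching a working script.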
Multi-agent RTS environments are great testbeds for coordination and strategic reasoning. Classic RL benchmarks like StarCraft II showed that agents can learn micro, but struggle with macro strategy and long-term planning. Curious if this platform supports hierarchical agents or communication protocols between teammates?
But does the LLM actually learn from each round? The chart does not show improvements in win rate across rounds...
And what exactly is the game state here? Can the LLM even perceive it? If the game state is what we see on the UI, it seems pretty high-dimensional and token-intensive. I'm not sure whether LLMs, with their current capabilities and context windows, can perceive such a token-heavy game state effectively...
Yeah, it's what you get when you basically ask an agent to "build X" without any constraints on how the UI and UX should actually work. Since agents have roughly zero expertise in "how would a human perceive and use this?", you end up with UIs that don't make much sense for humans unless you steer them closely with what you know.
Very interesting project. I'm a bit confused by the lack of a hardware specification. The rules make it clear that one's bot has defined deadlines:
> Make sure that each onframe call does not run longer than 42ms. Entries that slow down games by repeatedly exceeding this time limit will lose games on time.
But I'm missing something like: "Your program will be pinned to CPU cores 5-8 and your bot has access to a dedicated RTX 5090 GPU." There's also no mention of whether my bot can have network access to offload some high-level, latency-insensitive planning. Maybe that's just a bad idea in general; I haven't played SC in ages.
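From the bot author's side, about the only defense against that uncertainty is budgeting your own frame time. A rough Python sketch (it assumes nothing about the actual contest API; the 42 ms figure comes from the quoted rule):

    import time

    FRAME_BUDGET_S = 0.042  # the quoted 42 ms per-frame limit

    def on_frame(tasks) -> None:
        """Run as many queued decision tasks as fit inside the frame budget.

        `tasks` is any iterable of zero-argument callables, most important
        first. The loop stops early rather than overshoot the budget; the
        caller is responsible for requeueing whatever was skipped.
        """
        deadline = time.monotonic() + FRAME_BUDGET_S * 0.8  # keep a 20% safety margin
        for task in tasks:
            if time.monotonic() >= deadline:
                break  # defer the rest rather than risk losing on time
            task()

Without published hardware specs, the size of that safety margin is necessarily a guess.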
> partial observability, long-term planning, resource management, and real-time adaptation
Note, this project doesn't have that, best I can tell. It's two static AI scripts having a go. LLMs generate the scripts and they are aware of past "results", but I'm not sure what that means.
Believe it or not, my 8th-grade son was given a US History homework assignment to play Oregon Trail. I was very amused watching him "do his homework". I wonder how an LLM would fare in that game, since it's mostly a text choose-your-own-adventure type interface.
Very interested in self-play training loops, but I do like codegen as an abstraction layer. I am planning to make it available as an RL environment at some point.
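If it does become an RL environment, the common pattern is a Gymnasium-style wrapper. The skeleton below is purely illustrative: the class name, observation space, and action space are placeholders, not the project's actual design:

    import gymnasium as gym
    import numpy as np
    from gymnasium import spaces

    class SkirmishEnv(gym.Env):
        """Placeholder Gymnasium wrapper around a match simulator.

        Everything here is invented for illustration; a real wrapper would
        expose whatever the game's state snapshot actually contains.
        """

        def __init__(self, max_turns: int = 500):
            super().__init__()
            self.max_turns = max_turns
            self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(64,), dtype=np.float32)
            self.action_space = spaces.Discrete(8)  # e.g. 8 high-level orders
            self._turn = 0

        def reset(self, *, seed=None, options=None):
            super().reset(seed=seed)
            self._turn = 0
            obs = self.observation_space.sample()  # stand-in for the real initial state
            return obs, {}

        def step(self, action):
            self._turn += 1
            obs = self.observation_space.sample()  # stand-in for the next state
            reward = 0.0                           # e.g. +1 on a win, 0 otherwise
            terminated = False                     # True once the match ends
            truncated = self._turn >= self.max_turns
            return obs, reward, terminated, truncated, {}

The codegen abstraction can still sit underneath: an "action" could be a high-level order that the generated script executes, rather than a per-unit command.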
Forcing AI to fight in an arena for our entertainment, what could go wrong? (This was tongue-in-cheek; I'm fully aware LLMs currently don't have conscious thoughts or emotions.)
Content directly engages with scientific and technical progress by presenting a novel benchmark methodology for evaluating LLM capabilities. The focus on game-based evaluation, in-context learning, and detailed performance analysis advances the scientific understanding of AI systems.
FW Ratio: 50%
Observable Facts
Page publishes complete methodology including tournament structure, agent setup, and prompt design.
Content actively advocates for transparent evaluation of LLM capabilities through published benchmark data. All tournament results, match details, and model-specific analyses are openly disclosed. The detailed breakdowns of individual model performance support informed opinion-formation about AI capabilities.
FW Ratio: 50%
Observable Facts
Page publishes complete tournament standings with model names, win/loss records, and ELO ratings.
Detailed model breakdowns include strategy analysis, match highlights, and performance anomalies.
Methodology explicitly uses open-source OpenCode to enable replication and external verification.
Results include cost-efficiency analysis and performance comparisons across five models.
Inferences
Publishing comprehensive benchmark data without restrictions enables public discourse about AI capabilities.
Open-source methodology supports freedom of information by allowing external verification.
Transparent cost-benefit analysis supports informed opinion-formation about LLM trade-offs.
Detailed performance breakdowns empower users to form independent judgments about model quality.
Content advocates for transparent, equitable evaluation of LLM systems through a structured benchmark. By publishing detailed results and methodology, the site supports the establishment of a fair social order based on transparent evaluation principles.
FW Ratio: 50%
Observable Facts
Tournament structure ensures identical conditions and opportunities for all five evaluated models.
Results are published with detailed breakdowns of strategy, performance, and cost efficiency.
Scoring methodology is explicitly documented and appears to be consistently applied.
Inferences
Equal tournament conditions support establishment of fair evaluation framework.
Transparent methodology enables public verification of fairness.
Privacy and accessibility violations undermine fairness for human users despite fair treatment of AI systems.
Content presents LLM agents as entities capable of reasoning and learning. By testing their in-context learning across five rounds, the benchmark implicitly treats these systems as capable of modification and development, which aligns with recognition of inherent dignity and equality.
FW Ratio: 50%
Observable Facts
Page describes LLMs writing battle strategies and altering strategies between rounds based on previous results.
Tournament setup explicitly tests 'in-context learning' across five rounds.
Inferences
The multi-round structure recognizes capacity for improvement and learning, suggesting equal treatment across evaluated models.
Testing reasoning capabilities across a level playing field implies recognition of equivalence in evaluated agents.
Content frames LLM evaluation as contributing to understanding human potential through AI capabilities. The focus on in-context learning and problem-solving aligns with developing human personality and abilities.
FW Ratio: 50%
Observable Facts
Benchmark tests in-context learning and strategic reasoning capabilities.
Tournament structure enables LLM development and improvement across five rounds.
Open publication supports community understanding of AI capabilities.
Inferences
Testing learning capabilities aligns with development of intellectual potential.
Open methodology enables community participation in scientific advancement.
Content emphasizes transparent evaluation methodology and open-source tooling, which aligns with Preamble principles of human dignity and justice through accountability. The benchmark is designed to test LLM capabilities in a controlled, reproducible environment.
FW Ratio: 50%
Observable Facts
Page describes LLM Skirmish as a benchmark using open-source OpenCode agentic coding harness.
Content explicitly states that OpenCode was selected because it 'was not designed for any of the evaluated models and is fully open source to aid in replicability.'
Inferences
The emphasis on transparent methodology and open-source tooling suggests advocacy for accountability in AI evaluation.
The focus on replicability implies support for justice through verifiable, transparent processes.
Content frames LLM evaluation as a tool for assessing model capabilities, which contributes to understanding technology that impacts standard of living. However, no explicit connection to social welfare or adequate living standards is made.
FW Ratio: 60%
Observable Facts
Page contains embedded interactive tournament visualization and chart elements.
Code examples and technical content are displayed without visible alt text or accessibility markup.
No accessibility statement or accommodations menu visible on page.
Inferences
Limited accessibility features restrict access to benchmark data for disabled users.
Lack of alt text for visualizations and charts undermines equal access to information.
All five LLM models are evaluated using identical tournament structure and rules, with equal opportunity to compete and revise strategies. No model receives preferential treatment in round structure or scoring methodology.
FW Ratio: 50%
Observable Facts
Tournament structure ensures 'every player plays all other players once' across five identical rounds.
Results are published in a public leaderboard showing wins, losses, and ELO ratings for all models.
Inferences
Equal round structure and transparent scoring methodology protect against arbitrary discrimination.
Public leaderboard and detailed breakdowns suggest non-discriminatory reporting.
Content focuses on technical education about LLM capabilities and game-based evaluation methodology. The detailed explanations of tournament structure, prompt design, and in-context learning contribute to public understanding of AI technology.
FW Ratio: 60%
Observable Facts
Page provides detailed technical explanations of tournament structure, agent setup, and prompt design.
Content includes educational narrative about Screeps paradigm and its application to LLM evaluation.
Technical jargon dominates without simplified explanations or glossary.
Inferences
Detailed technical explanations support education but require existing domain knowledge.
Lack of accessibility features and simplified language limits educational reach to non-technical readers.
Content does not directly address privacy. However, the implementation of Google Analytics tracking without a visible consent mechanism contradicts privacy principles. Domain context indicates tracking without explicit opt-in.
FW Ratio: 50%
Observable Facts
Page source code contains Google Analytics configuration: gtag('config', 'G-CZH5MJ4H15');
No privacy policy link, cookie consent dialog, or tracking disclosure visible on the page.
Inferences
The presence of tracking code without consent mechanism suggests user data collection without explicit opt-in.
Absence of privacy disclosure indicates structural failure to protect privacy rights.
Site publishes comprehensive benchmark data, leaderboards, cost-efficiency analysis, and detailed model breakdowns without paywall or registration. Open-source methodology (OpenCode) is freely available. This structural commitment to information freedom directly supports freedom of expression and access to information.
Open publication of methodology, results, and data enables scientific participation and advancement. Open-source tools and transparent process support reproducibility and scientific integrity. No paywall or access restrictions limit scientific engagement.
Equal tournament structure for all models and transparent scoring methodology support fair order. However, structural tracking without consent (Article 12) and accessibility barriers (Article 25) undermine the fairness of the social order the site creates for users.
Open-source codebase and published results allow external verification and prevent discrimination. However, content does not explicitly address accessibility for different user groups.
Open access to methodology and results enables community participation in advancing understanding. However, technical complexity and accessibility barriers limit practical participation.
While content is openly published, accessibility barriers (noted under Article 25) restrict educational access for some users. No explicit educational scaffolding or simplified explanations are provided for non-technical readers.
Page layout and content presentation have limited accessibility features. Content includes embedded interactive elements and code examples without visible alt text or accessibility accommodations. This restricts access for users with visual or motor impairments.
Google Analytics tracking (G-CZH5MJ4H15) is implemented on the page with no visible privacy policy, cookie consent banner, or opt-in mechanism. This constitutes tracking of user behavior without demonstrated consent, directly undermining Article 12's protection of privacy.
Page content is accessible via standard HTTP/HTTPS protocol without apparent geolocation restrictions. Tournament results and benchmark data are published openly. No visible paywalls or registration barriers restrict movement through the site.
Models described with vivid characterizations: 'End Game Dominator,' 'Early Game Ace,' 'Reigning Challenger,' 'Pragmatic Minimalist,' 'Lean Tactician' — emotionally evocative labels that frame performance comparisons in narrative terms.
appeal to authority
Heavy reliance on technical authority: 'OpenCode was selected because it was not designed for any of the evaluated models and is fully open source' — appeals to neutrality and technical credibility without comparative analysis.