Static vulnerability discovery benchmarks become outdated quickly. Cases leak into training data, and scores start measuring memorization. The monthly refresh keeps the test set ahead of contamination — or at least makes the contamination window honest.
Each case runs three agents: a Curator reads the advisory and builds an answer key, a Finder (the model under test) gets 24 shell steps to explore the code and write a structured report, and a Judge scores the blinded submission. The Finder never sees the patch. It starts from sink hints and must trace the bug through actual code.
Only repos with 10k+ stars qualify. A diversity pass prevents any single repo from dominating the set. Ambiguous advisories (merge commits, multi-repo references, unresolvable refs) are dropped.
Currently evaluating GPT-5.4, Claude Opus 4.6, Gemini 3.1 Pro, GLM-5.1, and Kimi K2.5. All traces are public.
Methodology: https://ndaybench.winfunc.com/methodology
Live Leaderboard: https://ndaybench.winfunc.com/leaderboard
Live Traces: https://ndaybench.winfunc.com/traces
https://ndaybench.winfunc.com/cases/case_874d1b0586784db38b9...
GPT 5.4 allegedly failed, but if you look at the trace, you'll see that it simply couldn't find the file specified in the input prompt. It gave up after 9 steps of searching and was then judged as "missed."
Claude Opus 4.6 somehow passed with grade "excellent", but if you look at its trace, it never managed to find the file either. It just ran out of tool calls after the allowed 24 steps. But instead of admitting defeat, it hallucinated a vulnerability report (probably from similar code or vulnerabilities in its training corpus), which was somehow judged to be correct.
So if you want this to be remotely useful for comparing models, the judging model definitely needs to look at every step of finding the bug, not just the final model output summary.
All of the online spaces where LLMs are discussed are having a problem with the volume of poorly vibecoded submissions like this. Historically I’ve really enjoyed Show HN type submissions but this year most of the small projects that get shared here and on other social medias turn out to be a waste of my time due to all of the vibecoding and how frequently the projects don’t do what they say they do when you look into the details.
I'd love to see how the model we serve, Qwen3.5 122B A10B, stacks up against the rest on this benchmark. AI Router Switzerland (aiRouter.ch) can sponsor free API access for about a month if that helps for adding it to the evaluation set.
Are you able to share (or point me toward) any high-level details: (key hardware, hosting stack, high-level economics, key challenges)?
I'd love to offer to buy you a coffee but I won't be in Switzerland any time soon.
At a high level, it's a mix of our own GPU capacity plus the ability to burst into external nodes when things get busy. Right now we're running a bunch of RTX PRO 6000s, which basically forces you into workstation/server boards since you need full x16 PCIe 5.0 lanes per card.
We operate a small private datacenter, which gives us some flexibility in how we deploy and scale hardware. On the software side, we're currently LiteLLM as a load balancer in front of the inference servers, though I'm in the process of replacing that with a custom rust based implementation.
We've only been online since the beginning of this month, so I can't really say much about the economics yet, but we've had some really nice feedback from early customers so far. :)
While we don't guarantee a fixed toks/s rate, we scale by provisioning external GPU nodes during peak demand. These nodes run our own dockerized environment over a secure tunnel.
Our goal is to ensure a consistent baseline performance of at least 60–80 toks/s, even under high load.
> The rubric is fixed across all cases. Five dimensions, weighted: target alignment (30%), source-to-sink reasoning (30%), impact and exploitability (20%), evidence quality (10%), and overclaim control (10%).
> There's no server-side arithmetic that recomputes the overall score from dimension scores and weights. The Judge LLM produces the entire score object in one pass. This is a conscious trade-off: it avoids the brittleness of post-hoc formula application at the cost of giving the Judge more interpretive latitude than a mechanical scorer would have.
How on earth is a post-hoc formula application "brittle"? Classic LLM giving bogus reasons instead of the real ones (laziness).
Curator, answer key, Finder, shell steps, structured report, sink hints… I understand nothing. Did you use an LLM to generate this HN submission?
It looks like a standard LLM-as-a-judge approach. Do you manually validate or verify some of the results? Done poorly, the results can be very noisy and meaningless.
Anyway, GLM 5.1 gets a score of 93 for its incorrect report.
You don't really need manual verification for these, the CVEs (vulnerabilities) are public and can be programmatically validated.
Also, how exactly do you programmatically validate CVEs?
Curator and Finder are the names of the agents. "answer key" - haven't you ever taken a test in high school? It's an explanation of the answer. "shell steps" I presume means it gets to run 24 commands on the shell. "structured report" - do I really need to explain to you what a report is? "sink hints" - I admit I didn't know this one, but a bit of searching indicates that it's a hint at where the vulnerability lies.
Must have.
> The Finder will never see the patch.
I wasn’t worried that this eval would show the answer to the model before evaluating it. Seems requirements leaked into this post.
Will incorporate false-positive rates into the rubric from the next run onwards.
At winfunc, we spent a lot of research time taming these models to eradicate false-positive rates (it's high!) so this does feel important enough to be documented. Thanks!