Interesting, but something is really off here. It's probably caused by a harness bug, but it heavily skews the results, and I wouldn't trust anything about this leaderboard right now. Consider this case:

https://ndaybench.winfunc.com/cases/case_874d1b0586784db38b9...

GPT 5.4 allegedly failed, but if you look at the trace, you'll see that it simply couldn't find the file specified in the input prompt. It gave up after 9 steps of searching and was then judged as "missed."

Claude Opus 4.6 somehow passed with a grade of "excellent," but if you look at its trace, it never managed to find the file either. It just ran out of tool calls after the allowed 24 steps. But instead of admitting defeat, it hallucinated a vulnerability report (probably from similar code or vulnerabilities in its training corpus), which was somehow judged correct.

So if you want this to be remotely useful for comparing models, the judging model definitely needs to look at every step of finding the bug, not just the final model output summary.
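To make the point concrete, here's a minimal sketch of the kind of trace-level check the judge should run before even looking at the final summary. Everything here is hypothetical: the step schema, the tool names ("read_file", "search"), and the field names are my assumptions for illustration, not the benchmark's actual harness format.

```python
# Hypothetical guardrail: refuse to grade a vulnerability report unless the
# agent's trace shows it actually opened the target file at some point.
# The trace format (list of dicts with "tool"/"path" keys) is an assumption.

def report_is_grounded(trace, target_file):
    """True only if some step in the trace reads the target file."""
    return any(
        step.get("tool") == "read_file" and step.get("path") == target_file
        for step in trace
    )

# A trace that never touched the file, like the Opus run above: whatever
# report it produces should be rejected, no matter how plausible it sounds.
hallucinated_trace = [
    {"tool": "search", "query": "vuln.c"},
    {"tool": "list_dir", "path": "src/"},
]
grounded_trace = hallucinated_trace + [
    {"tool": "read_file", "path": "src/vuln.c"},
]

assert not report_is_grounded(hallucinated_trace, "src/vuln.c")
assert report_is_grounded(grounded_trace, "src/vuln.c")
```

This alone wouldn't catch every failure mode, but it would have flagged the Opus case above, where the final report could not possibly have been grounded in the code it never read.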

Aurornis | 7 hours ago | parent | on: 47763454
Good find. This appears to be another vibe coded vanity project where the output was never checked.

All of the online spaces where LLMs are discussed are having a problem with the volume of poorly vibe-coded submissions like this. Historically I've really enjoyed Show HN-type submissions, but this year most of the small projects shared here and on other social media turn out to be a waste of my time: between the vibe coding and the frequency with which projects don't do what they claim, the details rarely hold up when you look into them.