Curator, answer key, Finder, shell steps, structured report, sink hints… I understand nothing. Did you use an LLM to generate this HN submission?
It looks like a standard LLM-as-a-judge approach. Do you manually validate or verify any of the results? Done poorly, that kind of judging produces very noisy, meaningless results.
Anyway, GLM 5.1 gets a score of 93 for its incorrect report.
You don't really need manual verification for these; the CVEs (vulnerabilities) are public and can be validated programmatically.
Also, how exactly do you programmatically validate CVEs?
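One rough way it could work; a minimal sketch, assuming the Finder emits a JSON report naming the CVE ID, file, and function it flags, and that the answer key was built ahead of time from the public advisory and fix commit (every field name below is a guess, not the post's actual schema):

    # Sketch only: the NVD lookup shows the "public" half, grade_report the
    # "programmatic" half. Field names are hypothetical.
    import json
    import urllib.request

    NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def fetch_cve(cve_id: str) -> dict:
        """Pull the public record for a CVE from NVD (rate limits apply)."""
        with urllib.request.urlopen(f"{NVD_API}?cveId={cve_id}") as resp:
            return json.load(resp)

    def grade_report(report_path: str, answer_key: dict) -> bool:
        """Exact-match the Finder's structured report against the answer key,
        with no judge model involved."""
        with open(report_path) as f:
            report = json.load(f)
        expected = answer_key[report["cve_id"]]  # e.g. {"file": ..., "function": ...}
        return (report["file"] == expected["file"]
                and report["function"] == expected["function"])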
Curator and Finder are the names of the agents. "answer key" - haven't you ever taken a test in high school? It's the explanation of the correct answer. "shell steps" - I presume it means the Finder gets to run 24 commands in a shell. "structured report" - do I really need to explain to you what a report is? "sink hints" - I admit I didn't know this one, but a bit of searching indicates it's a hint about where the vulnerability lies.
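To make those last two concrete, here is a hypothetical shape for the structured report and its sink hints; the submission doesn't publish a schema, so every field below is a guess:

    # Hypothetical shapes only, not the author's actual format.
    from dataclasses import dataclass

    @dataclass
    class SinkHint:
        """A nudge toward where the vulnerability lies, without giving it away."""
        file: str    # e.g. "src/parser.c"
        symbol: str  # e.g. "parse_header"

    @dataclass
    class FinderReport:
        """What the Finder hands back after its shell steps."""
        cve_id: str       # which public CVE it believes it found
        vuln_class: str   # e.g. "heap buffer overflow"
        file: str         # file containing the flaw
        function: str     # function containing the flaw
        explanation: str  # the prose that gets compared against the answer key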
Must have.
> The Finder will never see the patch.
I wasn't worried that this eval would show the answer to the model before evaluating it. Seems like the requirements doc leaked into this post.
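For what it's worth, "never sees the patch" presumably just means the working tree handed to the Finder is checked out at the parent of the fix commit, so its shell steps can't simply read the diff; a minimal sketch of that setup (repo URL and commit hash are placeholders):

    # Sketch: rewind the repo to the state just before the fix landed.
    import subprocess

    def checkout_vulnerable_tree(repo_url: str, fix_commit: str, workdir: str) -> None:
        subprocess.run(["git", "clone", repo_url, workdir], check=True)
        # fix_commit^ is the parent commit, i.e. the still-vulnerable tree.
        subprocess.run(["git", "-C", workdir, "checkout", f"{fix_commit}^"], check=True)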