"Pointless and nit-picky-annoying code review comments" seems like it could be mitigated with better prompting?

Leverage the model's innate in-context learning by supplying the code-review AI with an annotated list of "do" and "don't" examples. Define the expected reviewer behavior more precisely, and dial it in over time.
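A minimal sketch of the idea, assuming nothing about any particular review tool's API: assemble the system prompt from annotated "do" and "don't" example comments so the model sees the desired review style in-context. All example comments and the function name are made up for illustration.

```python
# Hypothetical sketch: build a reviewer system prompt from annotated
# "do" / "don't" example comments. The examples below are invented;
# in practice they would be curated from real reviews over time.

def build_reviewer_prompt(do_examples, dont_examples):
    """Return a system prompt embedding good and bad review-comment examples."""
    lines = [
        "You are a code reviewer. Flag correctness, security, and",
        "maintainability issues. Do NOT leave style nitpicks that a",
        "linter would catch.",
        "",
        "Comments like these are useful -- emulate them:",
    ]
    lines += [f"- {ex}" for ex in do_examples]
    lines.append("")
    lines.append("Comments like these are pointless nitpicks -- avoid them:")
    lines += [f"- {ex}" for ex in dont_examples]
    return "\n".join(lines)

prompt = build_reviewer_prompt(
    do_examples=[
        "This lock is released before the write completes; possible race.",
        "len() is recomputed on every iteration; hoist it out of the loop.",
    ],
    dont_examples=[
        "Consider renaming `tmp` to `temporary`.",
        "Add a blank line after this function.",
    ],
)
print(prompt)
```

Because the examples live in data rather than in the prompt text, the do/don't lists can be grown incrementally as the team flags good and bad comments, which is the "dial it in over time" part.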

Additionally, I can't be the only person who has initially dismissed a received code-review comment as a pointless nitpick, only to realize it prevented a serious bug. As a code-review recipient, there is a natural human bias to believe our code is already great and to see feedback as less important than a truly neutral observer would.
lstolcman | 9 hours ago | parent | on: 47765947
Apparently, this is what they are trying to do [0].

In some commercial projects we use Copilot reviews on GitHub and have noticed this low-quality, nit-picky style of review comments as well - but there is no way of getting rid of them, as the reviewer is managed externally by GitHub...

[0]: http://lists.openwrt.org/pipermail/openwrt-devel/2026-April/...