I read about a similar issue today in another context, in a thread about introducing AI code review in OpenWrt [0]. The idea came from the fact that the project has too few maintainers compared to the number of incoming patches.

Automated code review is supposed to catch the most trivial and basic mistakes (which, as the author claims, are often repetitive) and speed up feedback. Ultimately, this should help push patches forward and let maintainers focus on harder problems, like architectural issues, which need deep knowledge that AI can't yet supply.

On the other hand, there are comments opposing the policies of AI companies, complaining about pointless, nit-picky review comments that don't add much, and raising the concern that AI reviews get treated as a checklist for getting things merged, which can be frustrating given the volume of bot comments. The suggested mitigation is to note explicitly that the AI code review is only a suggestion of changes. [1]

In the end, I think accepting AI in a way similar to the rules introduced in Linux (i.e., you can make your life easier, but you still have to understand the code) makes sense, given how limited code review capacity is compared to the volume of incoming contributions - a point also raised in the mailing list thread [2].

[0] http://lists.openwrt.org/pipermail/openwrt-devel/2026-April/...

[1] http://lists.openwrt.org/pipermail/openwrt-devel/2026-April/...

[2] http://lists.openwrt.org/pipermail/openwrt-devel/2026-April/...

twp | 10 hours ago | on: 47765769
AI reviews are flaky - maybe correct 80% of the time - and everyone hates flakiness.

AI code reviews easily double the work in reviewing: you have to both review the original code and the AI code review. The AI code review can be 80% correct, but you never know which 80% is correct and which 20% is garbage, so you have to review all the AI's comments.

Orygin | 9 hours ago | on: 47766014
Maybe, but I'll take an 80% correct review over no review at all. If it cuts out a good chunk of the back and forth between the reviewer and the committer, it's still an overall time saver for the maintainer.
armanckeser | 10 hours ago | on: 47765769
Agreed. A problem I see with how AI reviews have been used is that once one kicks off, the maintainer has to review both the PR and the AI's review, which doesn't really save time. Like you said, it would help to use AI review more intentionally: every PR goes through an AI review that checks baseline requirements, and maintainers only spend time on it after the contributor signals "I addressed everything the AI commented on, either by explaining my disagreement or by making the changes." That could save a lot of quality maintainer time.
ACCount37 | 10 hours ago | on: 47765769
"Pointless and nit-picky-annoying code review comments" seems like it could be mitigated with better prompting?

Leverage innate in-context learning by supplying the code-review AI with an annotated list of dos and don'ts. Define the expected reviewer behavior more precisely, and dial it in over time.
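As a rough sketch of what that could look like, here is a minimal prompt builder that prepends a project-specific do/don't list to the diff before sending it to a model. Everything here - the guideline entries, the function name, the prompt shape - is a hypothetical illustration, not something from the OpenWrt thread or any real tool:

```python
# Hypothetical sketch: steer a code-review model via in-context do/don't
# guidance. The rules below are made-up examples of project taste.

DO = [
    "Flag missing NULL checks after allocations.",
    "Flag off-by-one errors in loop bounds.",
]
DONT = [
    "Don't comment on brace placement; the style checker handles that.",
    "Don't ask for renames of long-established identifiers.",
]

def build_review_prompt(diff: str, do=DO, dont=DONT) -> str:
    """Assemble a review prompt that embeds explicit do/don't guidance."""
    lines = ["You are reviewing a patch. Follow these guidelines strictly.", ""]
    lines += [f"DO: {rule}" for rule in do]
    lines += [f"DON'T: {rule}" for rule in dont]
    lines += ["", "Patch:", diff, "", "Only raise issues covered by the DO list."]
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_review_prompt("+int *p = malloc(4);"))
```

"Dialing it in over time" would then mean editing the DO/DONT lists whenever the bot produces a nitpick the maintainers disagree with, so the feedback loop lives in reviewable project config rather than in the model.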

Additionally, I can't be the only person who has initially dismissed a code review comment as a pointless nitpick, only to realize it prevented a serious bug. As review recipients, we have a natural bias to believe our code is already great and to rate feedback as less important than a truly neutral observer would.
lstolcman | 9 hours ago | on: 47765947
Apparently, this is what they are trying to do [0].

In some commercial projects we use Copilot reviews on GitHub and have noticed this "low quality, nit-picky" style of review comments as well - but there is no way to get rid of them, as the reviewer is managed externally by GitHub...

[0]: http://lists.openwrt.org/pipermail/openwrt-devel/2026-April/...