The way LLMs work, different tokens can activate different parts of the network, so I generally have 2-3 different agents review the change from different perspectives. I give them identities, like Martin Fowler, or Uncle Bob, or whoever seems relevant.
true - but the way LLMs are trained also differs: google's RLVR is different from anthropic's, which is different from openai's. you'll get very good results sending the same 'review this change' prompt (literally) to different models.
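A minimal sketch of the fan-out both comments describe: the same review prompt sent to every (persona, model) pair. All names here are placeholders, not a real SDK; swap in whatever client libraries you actually use to dispatch each request.

```python
# Sketch: build one review request per (persona, model) combination.
# The user prompt is identical everywhere; only the persona (system
# message) and the backend model differ. Model names are hypothetical.

REVIEW_PROMPT = "Review this change:\n{diff}"

PERSONAS = {
    "martin-fowler": "You are Martin Fowler. Focus on design and refactoring.",
    "uncle-bob": "You are Uncle Bob. Focus on clean code and naming.",
}

MODELS = ["model-a", "model-b"]  # placeholder: one per provider

def build_review_requests(diff: str) -> list[dict]:
    """Return one request dict per (model, persona) pair,
    all sharing the exact same 'review this change' user prompt."""
    requests = []
    for model in MODELS:
        for persona, system in PERSONAS.items():
            requests.append({
                "model": model,
                "persona": persona,
                "messages": [
                    {"role": "system", "content": system},
                    {"role": "user", "content": REVIEW_PROMPT.format(diff=diff)},
                ],
            })
    return requests
```

Each request would then be sent to its provider's API and the reviews compared; the point is that the user prompt stays literally identical across models and personas.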