You should start a new session for the code review to make sure the context window is not polluted with the work on implementation itself.
At the end of the day it’s an autocomplete. So if you ask “are you sure?” then “oh, actually” is a statistically likely completion.
> You should start a new session for the code review to make sure the context window is not polluted with the work on implementation itself.
I'm just a sample size of one, but FWIW I didn't find that this noticeably improved my results.
Not having to completely recreate all the LLM context necessary to understand the literal context and the spectrum of possible solutions (which the LLM still "knows" before you clear the session) saves lots of time and tokens.
Interesting, I definitely see better results with a clean session. On a "dirty" session it's more likely to conclude "this is what we implemented, it's good, we could improve it this way", whereas on a clean session it's a lot more likely to find actual issues or things that were overlooked during implementation.