An example I have of this is when I asked Claude to copy some functionality from a front-end application to a back-end application. It got all of the function signatures right but hallucinated the contents of the functions. Part of this functionality included a lookup map for some values. The new version had entirely hallucinated keys and values, though the values sounded correct if you didn't compare them with the original. A human would have literally copied the original lookup map.
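To make the failure mode concrete, here is a sketch of what that looked like. The map contents and names below are invented for illustration, not the actual code involved: the point is that a paraphrased map can look entirely reasonable while sharing nothing with the original.

```typescript
// Hypothetical original lookup map from the front-end (names invented for illustration).
const STATUS_LABELS: Record<string, string> = {
  pend: "Pending review",
  appr: "Approved",
  rej: "Rejected",
};

// The kind of "copy" a model might produce: keys and values that sound
// plausible in isolation but don't match the original at all.
const HALLUCINATED_LABELS: Record<string, string> = {
  pending: "Awaiting approval",
  approved: "Accepted",
  rejected: "Declined",
};

// A literal copy would be byte-for-byte identical; a plausible paraphrase is not.
const identical =
  JSON.stringify(STATUS_LABELS) === JSON.stringify(HALLUCINATED_LABELS);
console.log(identical); // false: the "copied" map shares no keys with the original
```

Nothing in the hallucinated version would fail a casual read-through; only a diff against the original reveals that every key and value is wrong.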
But now with Claude, the mental model of how your code works is no longer in your head; it resides behind a chain of reasoning from Claude Code that you are not privy to. When something breaks, you either spend much longer piecing together what your agent has made, or keep throwing Claude at the problem and hope it doesn't spiral into more subtle bugs.