Hacker News
I'm also skeptical of claims that it's impossible to get an LLM to reproduce some code verbatim. Google had that paper a while back about getting diffusion models to spit out images that were essentially raw training data, and I wouldn't be surprised if the same is possible for LLMs.