It’s always harder to build a mental model of code written by someone else. No matter what, if you trust an LLM on small things, in the long run you’ll trust it for bigger things. And the more code the LLM writes, the harder it is to build this mental construct. In the end it’ll be “it worked in 90% of cases, so we trust it”. And who will debug 300 million lines of code written by a machine that no one has read, accepted purely on trust?