He doesn't address the real question of how an LLM predicting the next token could exceed what humans have done. LLMs mostly interpolate, so if the answer isn't to be found by interpolation, they can't generate something new.