No, it doesn't check out. I think it's becoming abundantly clear that LLMs learn in real time as they speak to you. There's a lot of denial: people claim they don't learn, that their knowledge is fixed at training time, and that's not even remotely true.

LLMs learn dynamically through their context window, and that learning happens at a rate much faster than a human's, sometimes with greater capability than a human and sometimes much worse.
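A toy sketch of what I mean, in Python. This is obviously not a real LLM, and the "src -> dst" format is made up; it just illustrates behavior adapting to whatever is in the context, with nothing trained at call time:

    # Toy "frozen" model: no parameters change when you call it, yet
    # its output adapts to example pairs supplied in the context window.
    def frozen_model(context: str, query: str) -> str:
        for line in context.splitlines():
            if "->" in line:
                src, dst = (s.strip() for s in line.split("->"))
                if src == query:
                    return dst
        return "unknown"

    ctx = "chat -> cat\nchien -> dog"
    print(frozen_model(ctx, "chien"))  # "dog": picked up from the context
    print(frozen_model("", "chien"))   # "unknown": no context, no adaptation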

For a codebase as complex and as closed-source as Google's, the problem an LLM faces is largely the same one a human faces: how much can it fit into its context window?

You're observing this "paradox" because what you call learning here is not learning in the ML sense; it's deriving better conclusions from more data. That's true of many ML methods, but it doesn't mean any actual learning happens.
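A minimal sketch of that distinction, assuming a toy one-parameter model (the setup is mine, not anyone's actual system): learning in the ML sense mutates parameters, while inference only reads them, no matter how much data you feed in.

    # Toy 1-parameter model: "learning" changes w; inference never does.
    w = 0.0

    def train_step(x: float, y: float, lr: float = 0.1) -> None:
        global w
        pred = w * x
        w -= lr * 2 * (pred - y) * x  # gradient of (pred - y)**2 wrt w

    def infer(x: float) -> float:
        return w * x  # conditions on x, reads w, never writes it

    train_step(1.0, 2.0)  # actual learning: w moves from 0.0 to 0.4
    print(infer(3.0))     # 1.2: better output from input, w untouched

However much data you push through infer(), w stays put; that's the sense in which a bigger context window isn't learning.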
It checks out if you take into account that most developers are actually rather mediocre, outside of the places that spend an insane amount of time and money to get good devs (including but not limited to FANG).