Even if one-shot LLM performance has plateaued (which I'm not convinced this data shows, given the omission of recent models that are widely claimed to be better), that misses the point I see in my own work. The improved tooling and agent-based approaches I'm using now make one-shot LLM performance only a small part of the puzzle in terms of how AI tools have shortened the time from idea to decent code. For instance, the planning dialogs I now have with Claude are an important part of what's speeding things up for me. The iterative use of AI to identify, track, and take care of small coding tasks (none of which are particularly challenging in benchmark terms) is also simply more effective. Could this all have been done with the LLM engines of late 2024? Perhaps, but I think the fine-tuning (and conceivably the system prompts) that make current LLMs more effective at agent-centered workflows (including tool use) is a big part of it. One-shot performance on challenging tasks is an interesting, certainly foundational, metric. But I don't think it captures the important advances I've seen over the last year in how LLMs have gotten better in ways that actually matter to me. I rarely have a well-defined programming challenge and an obligation to solve it in a single shot.