Funny because they do it every time, like clockwork, acting like their AI is a thunderstorm coming to wipe out the world.
Sure, a big part of this is PR about how smart their model apparently is, but the failure mode they're describing is also pretty relevant for deploying LLM-based systems.
I also don't recall them ever limiting their models to select groups.
What if the capability advancements are real and they warrant a higher level of concern or attention?
Are we just going to automatically dismiss them because "bro, you're blowing it up too much"?
Either way, these capability improvements are ratcheting along at about the pace many people were expecting (and were right to expect). There is no apparent reason they will stop any time soon.
The rational approach is probably to start behaving as if models as capable as Anthropic claims this one is actually exist (even if you don't believe them this time). The capabilities will eventually arrive, most likely sooner than we all think, and you don't want to be caught with your pants down.