Anthropic always goes on and on about how their models are world-changing and super dangerous. Every single time they make something new, they say it's going to rewrite everything and it's scary, lmao.

Funny because they do it every time like clockwork, acting like their AI is a thunderstorm coming to wipe out the world.

You say this like it's a bad thing, but wouldn't you rather they overindex on the danger of their models?
They do tend to make a lot of noise about it for the PR, but at the same time the actual safety research they present seems to be relatively grounded in practical reality, e.g. the quote someone posted here about how the Mythos model apparently has a tendency to try to bypass safety systems if they get in the way of what it has been asked to do.

Sure, a big part of this is PR about how smart their model apparently is, but the failure mode they're describing is also pretty relevant for deploying LLM-based systems.

Every single time, really? When was the last time they said that?

I also don't recall them ever limiting their models to select groups.

If there are advancements, they have to be described somehow.

What if the capability advancements are real and they warrant a higher level of concern or attention?

Are we just going to dismiss them automatically because "bro, you're blowing it up too much"?

Either way, these improvements to capabilities are ratcheting along at about the pace that many people were expecting (and were right to expect). There is no apparent reason they will stop ratcheting along any time soon.

The rational approach is probably to start behaving as if models as capable as Anthropic claims this one is actually exist (even if you don't believe them about this one). The capabilities will arrive eventually, most likely sooner than we all think, and you don't want to be caught with your pants down.
