Unintentional? This sort of marketing has been both Anthropic's and OpenAI's MO for years...
Agree. I think they're intentionally sitting on the fence between "These models are the most useful" and "These models are the most dangerous".
They want the public and, in turn, regulators to fear the potential of AI so that those regulators will write laws limiting AI development. The laws would be crafted with input from the incumbents to enshrine/protect their moat. I believe they're angling for regulatory capture.
On the other hand, the models have to seem amazingly useful so that they're made out to be worth those risks and the fantastic investment they require.
The new Power Mac® G4 with Velocity Engine®. So powerful, the government classifies it as a supercomputer and a potential weapon.