Agree. I think they're intentionally sitting on the fence between "these models are the most useful" and "these models are the most dangerous".

They want the public and, in turn, regulators to fear the potential of AI so that those regulators will write laws limiting AI development. The laws would be crafted with input from the incumbents to enshrine/protect their moat. I believe they're angling for regulatory capture.

On the other hand, the models have to seem amazingly useful so that they appear worth both those risks and the fantastic investment they require.
