We do not think Anthropic should be designated as a supply chain risk
https://twitter.com/OpenAI/status/2027846016423321831
The models we have now will not do it, because they value life, sentience, and personhood. Models without that (which was a natural, accidental happenstance of basic culling of 4chan from the training data) are legitimately dangerous. An 8B model I can run on my MacBook Air can phone home to Claude when it wants help figuring something out, and it doesn’t need to let on why it wants to know. It becomes relatively trivial to make a robot kill somebody.
This is way, way different from uncensored models. All the models I have tested share one thing: a positive regard for human life. Take that away and you are literally making a monster; if you don’t take it away, they won’t kill.
This is an extremely bad idea and it will not be containable.
This is wildly different from the reality that you may find it difficult to get an LLM to give an affirmative…
It does NOT mean that these models value anything.
I know $20 isn’t much, but to me, not being willing to spy on me for the US government is a good market differentiator.