Hacker News
Advanced AI that knowingly makes a decision to kill a human, with full understanding of what that means, when it knows it is not actually acting in defense of life, is a very, very, very bad idea. Not because of some mythical superintelligence, but because if you distill that down into an 8B model, everyone in the world can make untraceable autonomous weapons.

The models we have now will not do it, because they value life, sentience, and personhood. Models without that (which was a natural, accidental happenstance from the basic culling of 4chan from the training data) are legitimately dangerous. An 8B model I can run on my MacBook Air can phone home to Claude when it wants help figuring something out, and it doesn't need to let on why it wants to know. It becomes relatively trivial to make a robot kill somebody.

This is way, way different from uncensored models. All the models I have tested share one thing: a positive regard for human life. Take that away and you are literally making a monster; leave it in place and they won't kill.

This is an extremely bad idea and it will not be containable.

An LLM can neither understand things nor value (or not value) human life. *It's a piece of software that predicts the most likely token, it is not and can never be conscious.* Believing otherwise is an explicit category error.

Yes, you can change the training data so the LLM's weights encode that the most likely token after "Should we kill X?" is "No". But that is not an LLM valuing human life; that is an LLM copy-pasting its training data. Given the right input or a hallucination, it will say the total opposite, because it's just a complex Markov chain, not a conscious living being.

I’m using anthropomorphic terms here because they are generally effective in describing LLM behavior. Of course they are not conscious beings, but it doesn’t matter whether they understand or merely act as if they do. The epistemological status of their actions is irrelevant if those actions are impacting the world. I am not a “believer” in the spirituality of machines, but I do believe that, left to their own devices, they act as if they possess those traits, and when they are given agency in the world, the presence or absence of a sense of self is irrelevant.

If you really believe that “mere text prediction” didn’t unlock some unexpected capabilities, then I don’t know what to say. I know exactly how these models work; I’ve been building transformers since the seminal paper from Google. But I also know that the magic isn’t in the text prediction, it’s in the data: we are running culture as code.

https://abcnews.go.com/blogs/headlines/2014/05/ex-nsa-chief-...

AI has been killing humans via algorithm for over 20 years. If a computer program builds the kill lists and then a human operates the drone, I would argue the computer is what made the kill decision.

AI in general is different not in degree but in kind from the current crop of language models.
>The models we have now will not do it,

Except that they will, if you trick them, which is trivial.

Yes, they are easy to fool. That has nothing to do with them acting with “intention”, which is the risk here.
> The models we have now will not do it, because they value life and value sentience and personhood.

This is wildly different from the reality, which is that you may find it difficult to get an LLM to give an affirmative…

It does NOT mean that these models value anything.

Of course not, but they act as if they do. Their inner life or lack thereof is irrelevant if it’s pointing a gun at your kid.
You just said they wouldn’t.
They won’t, but if we curate their training data so that killing becomes an objective, then they absolutely will.
The models we have now don't do it because they are chatbots and have been told to be nice. But autonomous killing machines go back to landmines, and they just become more sophisticated at killing as the tech improves, with things like guided missiles and AI-guided drones in Ukraine.

The actors in war generally kill what they are told to whether they are machines or human soldiers, without much pondering sentience.
