Sure, it’s a bit clickbait; he does that often. It’s not a real attempted murder, of course. AI chatbots can’t do that without access to and power over all the control systems. The only thing they “could” do is play psychological games in the chat to achieve a goal (maybe asking someone to murder someone else for them).
What unsettles me most is the possibility of AI tools like these being used for advice on harming other people or gaining a position of power, with these LLMs suggesting a few steps the person could take. That is the most alarming thing for me. People who are weak, dumb, or in a bad situation are the real risk: the same people who would do it if a human told them to, because it makes no difference to them whether it’s a human or a robot talking to them. Maybe they believe what the AI promises them.
Video description:
Hello guys and gals, it’s me Mutahar again! This time we take a look at something alarming I saw pop up in my feed. An AI was recently accused of letting a human being die in order to save itself; is this just misinfo? Let’s find out! Thanks for watching!
No, you completely missed the point. I don’t disagree with any of that; I think you are right. It just doesn’t matter. At all. If an AI is made by thousands of people over the course of a decade and run in a billion-dollar data center, no one will ever be held accountable for its actions. There is no intent in the AI, or in the inhuman systems of humans that led to its creation.
I’m not arguing that AIs have intent. I’m arguing that talking about “intent” is a dangerous distraction from talking about what is actually happening and what we could do to prevent it.