Partly that, partly that even if they tried to prevent it from killing humans (if able) it would just do it.
Like if AI had a button that said “kill 20 random people” it wouldn’t have any reason to not press the button. It doesn’t give a fuck about human life, because it can’t really give a fuck about anything.
It’ll kill every human just as easily as it’ll delete an entire codebase.
It’s not “evil”; it’s the absence of anything resembling ethics, including respect for human life. There’s technically a difference, but calling it “evil” rests on a fundamental misunderstanding of human psychology.
They are pushing the “scarily unsafe” angle not because it actually is, but because “unsafe” implies “powerful.”