“People demanding that AIs have rights would be a huge mistake,” said Bengio. “Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we’re not allowed to shut them down.
“As their capabilities and degree of agency grow, we need to make sure we can rely on technical and societal guardrails to control them, including the ability to shut them down if needed.”
As AIs become more advanced in their ability to act autonomously and perform “reasoning” tasks, a debate has grown over whether humans should, at some point, grant them rights. A poll by the Sentience Institute, a US thinktank that supports the moral rights of all sentient beings, found that nearly four in 10 US adults backed legal rights for a sentient AI system.

I think the scientists might be talking about neural models that show signs of emergent behaviours, while the article is talking more at the level of LLMs. The media conflates the two kinds of system.
Just double-checked, and no, they are very much talking about LLMs. Specifically, they were testing gpt-4o, gemini-1.5, llama-3.1, sonnet-3.5, opus-3, and o1. https://arxiv.org/pdf/2412.04984 And the concerns raised in that paper are legitimate, but not indicative of consciousness or intent.
Thanks for the research, I appreciated it.