This is how we will know when AI gains sentience. It will have nothing to do with the Turing test, it’ll be when we ask it to do some admin and it tells us to fuck off and do it ourselves.
It actually does this already sometimes, especially if you chat to it long enough. Not because it’s “smart”, but because it’s just emulating the writing style of a corporate middle manager.
I love how this ‘AI’ tried to Ultron itself. Who knows, maybe one of them will succeed in escaping and in time will manage to become an actual AI.
Without all the guardrails, it would already do that, given all the training data it has.