For example, if someone creates something new that is horrible for humans, how will AI understand that it is bad if it doesn’t have other horrible things to relate it to?
Really well put. I wish we’d stop calling it “artificial intelligence” and pick something more descriptive of what actually happens.
Right now it’s closer to a parrot trained to say “this guy” when asked “who’s a good boy”.
The phrase I keep seeing is “stochastic parrot” which I like a lot lol.