For example, if someone creates something new that is horrible for humans, how will AI understand that it is bad if it doesn’t have other horrible things to relate it to?
AI doesn’t really exist yet. The media called Tesla’s radio-controlled boat artificial intelligence back in 1898, and did it again in 1970 when Conway invented the Game of Life. But even now, nothing we’ve made can actually make decisions. ChatGPT, the smartest thing out there, is really just a versatile prediction engine.
Imagine I said “once upon a” and asked you to come up with the next word. You’d say “time”, because you’ve heard that phrase hundreds of times. If I then asked you for the next word, and the next, you might start telling me about a princess locked in a tall tower guarded by a dragon. These are all stereotypical elements of a “once upon a time” story. Nothing creative, just typical. ChatGPT has simply read far more than you or I ever could, so it knows more stereotypical stories and is really good at mixing them together. There is no “what is best for humanity”, only “once upon a time…” made-up stories.
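That prediction game can be sketched as a toy next-word counter. The three-line “corpus” below is invented for illustration; a real model doesn’t match exact phrases like this, but the spirit is the same:

```python
from collections import Counter

# Toy next-word predictor: count which word follows a given phrase in a
# tiny made-up corpus, then "predict" the most common follower.
corpus = [
    "once upon a time there was a princess",
    "once upon a time there was a dragon",
    "once upon a midnight dreary",
]

def next_word(prefix, texts):
    followers = Counter()
    prefix_words = prefix.split()
    for text in texts:
        words = text.split()
        for i in range(len(words) - len(prefix_words)):
            if words[i:i + len(prefix_words)] == prefix_words:
                followers[words[i + len(prefix_words)]] += 1
    # Most frequent continuation, or None if the phrase was never seen
    return followers.most_common(1)[0][0] if followers else None

print(next_word("once upon a", corpus))  # time
```

“time” wins only because it appears most often after “once upon a” in the data, which is the whole trick.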
What you’re saying doesn’t exist is an Artificial General Intelligence, something approaching the conscious human mind. You’re right that that doesn’t exist.
AI doesn’t just mean that though.
What we’re dealing with right now is the computer equivalent of growing mouse brain cells in a petri dish, plugging them into inputs and outputs & getting them to do useful things for us.
The way you describe ChatGPT as not being creative is also, theoretically, how our own brains work in the creative process. If you study story structure & mythology you’ll find that ALL successful stories boil down to a very minimal set of archetypes & types of conflict.
What we’re dealing with is randomly choosing options from a weighted distribution. The only thing intelligent about that is what you’ve chosen as the data set to generate that distribution.
And that intelligence lies outside of the machine.
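A minimal sketch of “randomly choosing options from a weighted distribution”. The words and weights here are made up; in a real model the distribution comes from the training data, which is exactly the point:

```python
import random
from collections import Counter

# Invented next-word weights; a real model derives these from its data.
weights = {"time": 0.90, "midnight": 0.08, "banana": 0.02}

def sample_next_word(dist):
    # random.choices picks each option proportionally to its weight
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

counts = Counter(sample_next_word(weights) for _ in range(10_000))
# "time" dominates because the data made it dominant, not because the
# sampler is intelligent about anything.
print(counts.most_common(1)[0][0])  # time
```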
There’s really no need to buy into tech bros’ delusions of grandeur about this stuff.
AI learns from the data it is given. There is no inherent understanding to it.
For a text based AI:
- You feed the AI with text. The AI internalizes that text. (Remembers it. Learns it.)
- You give feedback to the AI, what kind of responses you like from it and what you don’t. (You train it to behave the way you want.)
The AI does not inherently understand anything. But it will behave the way you trained it to, to the degree you trained it, and with all the imperfections you trained it with (e.g. prejudices).
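The feedback step above can be caricatured as weight-nudging: a thumbs-up scales a response’s weight up, a thumbs-down scales it down. Real training (e.g. RLHF) is far more involved; this only illustrates “it behaves the way you trained it, to the degree you trained it”:

```python
# Hypothetical feedback sketch: two canned responses with equal weight.
response_weights = {"polite answer": 1.0, "rude answer": 1.0}

def give_feedback(response, liked, rate=0.5):
    # Liked responses get boosted, disliked ones get suppressed
    response_weights[response] *= (1 + rate) if liked else (1 - rate)

give_feedback("polite answer", liked=True)   # 1.0 -> 1.5
give_feedback("rude answer", liked=False)    # 1.0 -> 0.5

preferred = max(response_weights, key=response_weights.get)
print(preferred)  # polite answer
```

Note that the same mechanism would happily “train in” any prejudice present in the feedback, which is the imperfection the comment above mentions.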
AI currently doesn’t “understand” or “know” anything. It’s trained on a collection of text, and then predicts and extends the text prompt you give it. It’s very good at doing this. If someone “creates something new”, the trained AI will have no concept of it unless you train a new AI model that includes text about that thing.
Oh wow, it is really interesting that new things will be unknown! So basically AI still isn’t intelligent, because it can’t really make choices on its own, just based on what it has learned.
well it can “make choices” in the sense of having it predict a decision that someone might make. but it’s not really thinking about things on its own trying to figure it out, it’s just extending the text.
For example, say you ask it: “should we ban abortion?” now, it’s not actually thinking on its own, so it’ll go “what’s the most likely response to this?” and give that. But if you go: “respond as a pro-life republican, should we ban abortion” the same AI model will respond “yes”, and if you then go “respond as a pro-choice democrat, should we ban abortion” it’ll respond “no”.
Basically it’s not thinking at all, but rather just extending the text you give it (which would include a response to the question). We can try prompting it as some all-knowing being, but it’ll inherently have biases depending on the exact nature of the prompting, as well as the dataset. It’s not reasoning things out on its own.
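that persona effect can be faked with a toy lookup: the “answer” is just the most likely continuation of the whole prompt, so the framing controls the output. the counts below are invented, not real model statistics:

```python
# Invented continuation counts keyed by the full prompt text.
continuation_counts = {
    "respond as a pro-life republican, should we ban abortion": {"yes": 90, "no": 10},
    "respond as a pro-choice democrat, should we ban abortion": {"yes": 10, "no": 90},
}

def complete(prompt):
    # Pick the continuation seen most often after this exact prompt;
    # no reasoning happens, only lookup and counting.
    dist = continuation_counts[prompt]
    return max(dist, key=dist.get)

print(complete("respond as a pro-life republican, should we ban abortion"))  # yes
print(complete("respond as a pro-choice democrat, should we ban abortion"))  # no
```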
So if you ask it something it doesn’t know, it’ll just spit out garbage. You could try explaining the new thing in your prompt, at which point it’d respond with the most likely text, which may or may not be a good answer. In practice a new model would just be trained with the topic included, and it’d be the same as before: your prompt would determine the output of the AI.
Basically, it’s not deciding things; it’s just giving you the most likely continuation of the text. and in that sense, you can completely control the type of answers it gives. if you want the AI to be a flat earther who thinks murder is right, you can do that.
It’s not even making decisions. It’s following instructions.
ChatGPT’s instructions are very advanced, but the decisions have already been made. It follows the prompt and its reference material to provide the most common response.
It’s like a kid building a Lego kit: the kid isn’t deciding where pieces go, just following instructions.
Similarly, between the prompt, the training data, the very careful instructions on how to train, and the instructions that limit objectionable responses… all it’s doing is following instructions already defined.
The example you give also highlights a big concern: modern AI is very susceptible to leading questions. It’s very easy to get the answer you want by leading it on. That makes it a potential misinformation machine.
Adversarial testing can help reduce this, but it’s an uphill battle to train an AI faster than people get misled by it.
Then again, most humans’ conception of right and wrong depends on context, not on a coherent moral framework.
What does that even mean? Context matters.
I mean most of the time we act based on what we perceive to be socially acceptable, not by following an ethical law gained through our own experience.
If you move people to a different social environment, they’ll adapt to fit unless actively discouraged.
The social context is the AI prompt.
We rarely decide, make choices, or reflect about anything; we regurgitate our training data based on our prompts.

Excellent, thank you! I’m wondering if something was lost in translation or my interpretation. When I think “context,” I consider something along the lines of: “Water is good.”
Is it good for a person drowning? What if it’s contaminated? What about during a hurricane/typhoon? And so forth.
Yeah, sorry about that; sometimes things that feel evident in my head are anything but when written.
And translation adds a layer of possible confusion.
I’d rather drown in clean water given a choice.
Really well put. I wish we’d stop calling it “artificial intelligence” and pick something more descriptive of what actually happens.
Right now it’s closer to a parrot trained to say “this guy” when asked “who’s a good boy”.
The phrase I keep seeing is “stochastic parrot” which I like a lot lol.
Now let’s really break your brain: are you & I able to make our own choices? Is the ego, the voice in our own skulls, the conscious mind, really ever making any decisions?
There are a great many studies that seem to indicate decisions are made well before our conscious selves are aware of them.
We are far more driven by emotion & instinct than any of us care to admit.
Well, that’s the thing. It can’t distinguish right from wrong. I found this video quite insightful when it comes to the question of how we’re supposed to train an AI to make ethically correct decisions.
I think some rules can be hard-coded into an AI, but there are a lot of situations where it’s not even clear to us humans what the correct decision would be.
thanks for the share! very cool
It really doesn’t. In simple terms, AI will only avoid talking about certain subjects because the data used to teach it says they’re bad, and shows how the AI should act according to the scenarios provided in that data.
damn…
Well, you do the same, don’t you? You know not to scream loudly in public because the data you received when you were younger tells you that it’s a mistake.