It always feels like some new form of VR tech comes out with fanfare and a promise that it will take over the world, but it never does.
You seem to be focusing on LLMs specifically, which are just one subcategory of AI. Those terms aren’t synonymous.
The main issue here seems to be mostly a failure to meet user expectations rather than the underlying technology failing at what it’s actually designed for. LLM stands for Large Language Model. It generates natural-sounding responses to prompts - and it does this exceptionally well.
If people treat it like AGI - which it’s not - then of course it’ll let them down. That’s like cursing cruise control for driving you into a ditch. It’s actually kind of amazing that an LLM gets any answers right at all; that’s just a side effect of being trained on a ton of correct information, not what it’s designed to do. So it’s like cruise control that also happens to be a somewhat decent driver: people forget what it really is, start relying on it for steering, and then complain that their “autopilot” failed when all they ever had was cruise control.
I don’t follow AI company claims super closely, so I can’t comment much on that. All I know is that plenty of them have said reaching AGI is their end goal, but I haven’t heard anyone actually claim their LLM is generally intelligent.
I know they’re not synonymous. But at some point someone left the marketing monkeys in charge of communication.
My point is that our current “AI” is inadequate at what we’re told is its purpose, and should it ever become adequate (which the current architecture shows no sign of being capable of), we’re in a lot of trouble, because then we’ll have no way to control an intelligence vastly superior to our own.
Our current position on that journey is bad and the stated destination is undesirable, so it would be in our net interest to stop walking.
People treat it like the thing it’s being sold as. The LLM boosters are desperately trying to sell LLMs as coworkers and assistants and problem-solvers.
I don’t personally remember hearing any AI company leader ever claim their LLM is generally intelligent - and even the LLM itself will straight-up tell you it isn’t and shouldn’t be blindly trusted.
I think the main issue is that when a layperson hears “AI,” they instantly picture AGI. We’re just not properly educated on the terminology here.
Not directly. They merely claim it’s a coworker that can complete complex tasks, or an assistant that can do anything you ask.
The public isn’t just failing here; they’re actively being lied to by the people attempting to sell the service.
For example, here’s Sammy saying exactly that: https://www.technologyreview.com/2024/05/01/1091979/sam-altman-says-helpful-agents-are-poised-to-become-ais-killer-function/
And here’s him again, recently, trying to push the “our product is super powerful guys” angle with the same claim: https://www.windowscentral.com/artificial-intelligence/openai-chatgpt/sam-altman-ai-agents-hackers-best-friend
But he’s not actually claiming that they already have this technology, only that they’re working towards it. He even calls ChatGPT dumb there.