It always feels like some form of VR tech comes out with a lot of fanfare and a promise that it will take over the world, but it never does.
I don’t personally remember hearing any AI company leader ever claim their LLM is generally intelligent - and even the LLM itself will straight-up tell you it isn’t and shouldn’t be blindly trusted.
I think the main issue is that when a layperson hears “AI,” they instantly picture AGI. We’re just not properly educated on the terminology here.
Not directly. They merely claim it’s a coworker that can complete complex tasks, or an assistant that can do anything you ask.
The public isn’t just failing here; they’re actively being lied to by the people trying to sell the service.
For example, here’s Sammy saying exactly that: https://www.technologyreview.com/2024/05/01/1091979/sam-altman-says-helpful-agents-are-poised-to-become-ais-killer-function/
And here he is again, more recently, pushing the “our product is super powerful, guys” angle with the same claim: https://www.windowscentral.com/artificial-intelligence/openai-chatgpt/sam-altman-ai-agents-hackers-best-friend
But he isn’t actually claiming they already have this technology, only that they’re working towards it. He even calls ChatGPT dumb there.