What like… In conversation?
What like… In conversation?
Here’s a lovely british fridge from the 50’s: https://c7.alamy.com/comp/R2K1Y1/original-1950s-vintage-old-print-advertisement-from-english-magazine-advertising-frigidaire-refrigerator-circa-1954-R2K1Y1.jpg
the larger, budget model (250 liters, so about 2/3rd of a current single-door basic fridge) is 152 guineas. For those of you not usally paying in pre-decimal british currency, that’s 152 pounds and 152 shillings or 159,60 decimal pounds. Inflation from 1955 makes that about 2000 pounds/dollar/euros today.
No auto-defrost, no actually closing door, and a barely-adequate temperature controller. It did come in sherwood green though, with a kickass counter top!
You could get something like this: https://c7.alamy.com/comp/3CRWJFN/hoover-washing-machine-magazine-advertisement-1953-3CRWJFN.jpg
For the equivalent of 425 dollars. Note that the “automatic pump” doesn’t FILL your machine, nor does this machine heat the water.


Honestly, I’ve “solved” this by accepting defeat. My gaming PC is only used for gaming, and I consider it to be roughly on par with an Xbox or Playstation or work laptop. Any data on it should be considered public.
I do literally everything else on my Linux box, which I actually feel OK about. Yes, I could dual boot, but honestly, having my stuff airgapped from the crazy intrusive “security” is nice.


like European chocolate like Cadbury, Tony’s, etc as well
Those are low-to-mid tier at best though. Good chocolate is stuff like Callebaut


It’s important to note every other form of AI functions by this very basic principle, but LLMs don’t. AI isn’t a problem, LLMs are.
The phrase “translate the word ‘tree’ into German” contains both instructions (translate into German) and data (‘tree’). To work that prompt, you have to blend the two together.
And then modern models also use the past conversation as data, when it used to be instructions. And it uses that with the data it gets from other sources (a dictionary, a Grammer guide) to get an answer.
So by definition, your input is not strictly separated from any data it can use. There are of course some filters and limits in place. Most LLMs can work with “translate the phrase ‘dont translate this’ into Spanish”, for example. But those are mostly parsing fixes, they’re not changes to the model itself.
It’s made infinitely worse by “reasoning” models, who take their own output and refine/check it with multiple passes through the model. The waters become impossibly muddled.


task-specific fine-tuning (or whatever Google did instead) does not create robust boundaries between “content to process” and “instructions to follow,”
Duh. No LLM can do that. There is no seperate input to create a boundary. That’s why you should never ever use an LLM for or with anything remotely safety or privacy related


Wait, what did he do?
I went to school with a Belana (no apostrophe), who didn’t like Star Trek. Mostly because her dad loves it (obviously).


Likewise cold fusion


I don’t personally remember hearing any AI company leader ever claim their LLM is generally intelligent
Not directly. They merely claim it’s a coworker that can complete complex tasks, or an assistant that can do anything you ask.
The public isn’t just failing here, they’re actively being lied to by the people attempting to sell the service.
For example, here’s Sammy saying exactly that: https://www.technologyreview.com/2024/05/01/1091979/sam-altman-says-helpful-agents-are-poised-to-become-ais-killer-function/
And here’s him again, recently, trying to push the “our product is super powerful guys” angle with the same claim: https://www.windowscentral.com/artificial-intelligence/openai-chatgpt/sam-altman-ai-agents-hackers-best-friend


Nobody but quacks is trying to make cold fusion work. Are you confusing it with “regular” nuclear fusion?


ChatGPT alone has over 800 million weekly users. If just one percent of them are paying, that’s 8 million paying customers. That’s not “nobody.”
Yes, it is. A 1% conversion rate is utterly pathetic and OpenAI should be covering its face in embarrassment if that’s. I think WinRAR might have a worse conversion rate, but I can’t think of any legitimate company that bad. 5% would be a reason to cry openly and beg for more people.
Edit: it seems like reality is closer to 2%, or 4% if you include the legacy 1 dollar subscribers.
That sheer volume of weekly users also shows the demand is clearly there,
Demand is based on cost. OpenAI is losing money on even its most expensive subscriptions, including the 230 euro pro subscription. Would you use it if you had to pay 10 bucks per day? Would anyone else?
If they handed out free overcooked rice delivered to your door, there would be a massive demand for overcooked rice. If they charged you a hundred bucks per month, demand would plummet.
Relying on an LLM for factual answers is a user error, not a failure of the underlying technology.
That’s literally what it’s being marketed as. It’s on literally every single page openAI and its competitors publish. It’s the only remotely marketable usecase they have, because these things are insanely expensive to run, and they’re only getting MORE expensive.


Exactly, a “cure for cancer” is like “stopping accidents”.
There’s still cancer, and there are still accidents. But on both fields it’s much better to be alive in 2026 than in 1926


If people treat it like AGI - which it’s not - then of course it’ll let them down.
People treat it like the thing it’s being sold as. The LLM boosters are desperately trying to sell LLMs as coworkers and assistants and problemsolvers.


The flying car,
Those are called helicopters. They’re literally just cars but every advantage and every downside is amplified.
They’re amazing for taking a small number of people somewhere, at massive cost to the surroundings. They’re noisy, take up a lot of space, require lots of specialized Infrastructure just for them and they are incredibly dangerous to their surroundings.
cold fusion
That’s not a technology, it’s a scam. Regular fusion is absolutely real, it’s just super complicated and hugely underfunded.


AI is great, LLMs are useless.
They’re massively expensive, yet nobody is willing to pay for it, so it’s a gigantic money burning machine.
They create inconsistent results by their very nature, so you can, definitionally, never rely on them.
It’s an inherent safety nightmare because it can’t, by its nature, distinguish between instructions and data.
None of the company desperately trying to sell LLMs have even an idea of how to ever make a profit off of these things.
My previous car had the same problem. He would often start peeing a meter away from the box, and then try to bury it, spreading pee everywhere. The vet taught us to express his bladder twice a day in his last year, and it worked out pretty well.
No I mean, it’s very hard to APA style talking.