This is clearly a bubble.
I don’t care if AI is useful, it’s not this useful. And it sure as shit isn’t going to see the returns they expect.
I run an internal multi-user AI app. It plugs into almost everyone’s workflows to make things easier (fetches documents, pulls data, contextualizes stuff). It costs $1 per day per user in token costs.
You need a trillion people using these apps for the current valuations to make sense.
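A back-of-envelope sketch of that math, in Python. Only the $1 per user per day token cost comes from the comment above; the subscription price, valuation, and revenue multiple are made-up assumptions purely for illustration:

```python
# Back-of-envelope: what does $1/user/day in tokens leave over, and how many
# paying users would a big valuation imply? All figures except the token cost
# are hypothetical placeholders, not reported numbers.

token_cost_per_user_per_day = 1.00          # USD, from the comment above
assumed_price_per_user_per_month = 30.00    # hypothetical subscription price
days_per_month = 30

# Gross margin per user after token costs alone (ignoring training, staff, etc.)
monthly_token_cost = token_cost_per_user_per_day * days_per_month
monthly_margin = assumed_price_per_user_per_month - monthly_token_cost

assumed_valuation = 500e9        # hypothetical $500B valuation
assumed_revenue_multiple = 10    # hypothetical SaaS-style revenue multiple

required_annual_revenue = assumed_valuation / assumed_revenue_multiple
required_users = required_annual_revenue / (assumed_price_per_user_per_month * 12)

print(f"Monthly margin per user before fixed costs: ${monthly_margin:.2f}")
print(f"Paying users needed to justify the valuation: {required_users:,.0f}")
```

With these toy numbers the token bill alone eats the entire subscription, before a single fixed cost is paid.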
That explains why they want people to have more babies.
My guess is they are using the Netflix playbook all over again.
Get you hooked on the extreme convenience, much like a drug addict, and then pump up the price or flood every prompt with ads.
That’s my best case.
Worst case is that, alongside the rising adoption, they will start surreptitiously but effectively modifying general knowledge, thought and behaviour in ways the best Marketer would blush about.
There is a big difference between “normal” SaaS and LLM.
In a normal SaaS you get a lot of benefit from being at scale. Going from 10,000 to 1,000,000 users is not that much harder than going from 1,000 to 10,000. Once you have your scaling set up you can just add more servers and/or data centers. But most importantly, the cost per user goes waaay down.
With AI it just doesn’t scale at all: the 500,000th user will most likely cost as much as the 5th. So I don’t think doing a Netflix/Spotify/etc. play is going to work unless they can somehow make it a lot cheaper per user. OpenAI fails to turn a profit even on their most expensive tiers.
Edit: to clarify, obviously you get some small benefits from being at scale. Better negotiations and already having server racks, etc. But a traditional SaaS gets those same benefits as well, and so much more that an LLM doesn’t, because the cost per user doesn’t drop.
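A minimal sketch of the two cost curves being argued about here, with invented numbers; only the shape matters. The fixed cost amortizes in both cases, but the LLM-backed service pays a roughly flat inference cost for every additional user (the ~$30/month echoes the $1/day figure from earlier in the thread):

```python
# Toy cost-per-user comparison: traditional SaaS vs. LLM-backed service.
# All figures are invented for illustration.

def saas_cost_per_user(users: int) -> float:
    """Mostly fixed infrastructure cost spread over more users."""
    fixed_monthly_cost = 50_000.0   # hypothetical servers, ops, licenses
    marginal_cost = 0.05            # hypothetical per-user cost (storage, bandwidth)
    return fixed_monthly_cost / users + marginal_cost

def llm_cost_per_user(users: int) -> float:
    """Inference is paid again for every user, so it barely amortizes."""
    fixed_monthly_cost = 50_000.0
    inference_cost = 30.0           # ~$1/day in tokens, as quoted in the thread
    return fixed_monthly_cost / users + inference_cost

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9} users | SaaS ${saas_cost_per_user(n):8.2f}/user"
          f" | LLM ${llm_cost_per_user(n):8.2f}/user")
```

With these toy numbers the SaaS cost per user falls toward a few cents at scale, while the LLM cost flattens out near its inference floor instead of dropping.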
Is this correct? I was under the impression that the most expensive part of an LLM is the training, and once that’s done the cost of running a prompt is negligible.
I get your point that this last part doesn’t scale well, but the far larger cost of training must get very diluted if they distribute it across a large user base.
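Putting rough, assumed numbers on that question: training is a one-off cost that dilutes as the user base grows, while inference recurs for every user, so at scale the recurring part dominates. The training cost, amortization period, and user counts below are hypothetical; the $1/day inference figure echoes the one quoted earlier:

```python
# Rough split of amortized training cost vs. recurring inference cost per user.
# Training cost, amortization period, and user counts are hypothetical.

training_cost = 1e9                        # hypothetical $1B training run
amortization_years = 2                     # hypothetical useful life of the model
inference_cost_per_user_per_year = 365.0   # ~$1/day in tokens

for users in (1e6, 10e6, 100e6):
    training_per_user = training_cost / (users * amortization_years)
    total = training_per_user + inference_cost_per_user_per_year
    share = inference_cost_per_user_per_year / total
    print(f"{users/1e6:>6.0f}M users | training ${training_per_user:8.2f}/user/yr"
          f" | inference ${inference_cost_per_user_per_year:.0f}/user/yr"
          f" | inference share {share:.0%}")
```

Under these assumptions the training cost per user does shrink fast, but inference quickly becomes the bulk of the bill, which is why the per-user cost never really drops.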
I agree, scaling users isn’t the issue; what is, is the never-ending chase for the mirage that is AGI. They’ll throw every processing cycle they can muster at that fever dream, and that’s the financial black hole.
Yes, but don’t underestimate the power of centralisation.
Six months ago you could set up a server for running a decent local LLM for under $800.
By increasing the demands and pushing the price of hardware up, they are effectively gatekeeping access to LLMs.
I think the plan is that we will need to rely on these companies for compute power and LLM services, and then they can do all sorts of nefarious things.
“Sure I found that document you needed, and with it, I also found this great new game I know you’ll love. Raid: Shadow Legends, It’s a free to pla…”
I cannot wait for companies to spend 300 dollars per user per month for this convenience.
I think the word you’re looking for there is surreptitiously
Thank you kind person, I just fixed it!
And how. Making coders slightly more efficient really just isn’t worth this much. There’s also going to be hell to pay when software created by all of the vibe-coding is found to be full of security holes.
They want to pitch this as unemployment-causing efficiency gains, but this shit clearly can’t do anyone’s job that wasn’t already automatable through regular software.
If it’s this useful, we (and they) are fucked too, because the economy would collapse under falling aggregate demand due to falling wages and layoffs. The “people will find new jobs” line won’t save us from a shift this large without a depression. And all sorts of things happen during depressions.
I’m struggling to find a scenario in which we are not already fucked. I say we “go for broke” and move on.
Pop this thing.
Yep, pop the bubble and watch these companies deflate instantly. Won’t bother me, my fortune isn’t tied up in bullshit because I believed my own propaganda. Maybe some of those psychopaths will solve their instant financial problems the old-fashioned way, from the 100th floor.
Shit! Don’t threaten me with a good time!