Call it what you will, but all signs seem to indicate that generative AI is simply not as profitable as the evangelists want it to be.
The problem is that the only customers are other corporate executives who want this, not regular customers.
No, the problem is that even if people who consider themselves really important want this really, really hard that doesn’t change the reality that the technology doesn’t do what they want it to do.
Not even remotely. LLMs have failed to find any viable product-market fit.
The problem continues to be hallucinations and limited utility. This is compounded by the fact that LLMs are very expensive to run. The latter wouldn’t really be a problem if LLMs were truly capable of replacing a human employee, but they’re not. They’re just too unreliable for any serious enterprise-grade application, and too expensive for any low-stakes application.
For example, as a coding assistant, a lot of people quite like them. But as a replacement for a human coder, they’re a disaster. That means you still have to employ the expensive human, and you also have to pay an exorbitant monthly fee for what amounts to a very cool search engine.
There are tonnes of frivolous applications where they work really well. The AI girlfriend stuff, for example. A chatbot that sexts you is a very sellable product, regardless of how icky it might seem to some people. But no one is going to pay over $200/month for it (as an example, ChatGPT still doesn’t turn a profit even at its $200/month tier).
LLMs are too unreliable to make anything better than toys, but too expensive to sell as toys.
That is not true; we use vision/LLM models to extract and structure data from documents. It is amazing how easy it is compared to OCR plus needing to know where each piece of information is located.
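A minimal sketch of what that kind of pipeline can look like, assuming a vision-capable model that is prompted to return JSON. The prompt, the field names, and the sample reply below are all illustrative assumptions, not any specific vendor’s API:

```python
import json

# Hypothetical extraction prompt for an invoice-style document. The field
# names are made up for illustration.
EXTRACTION_PROMPT = """You are a document-extraction assistant.
Return ONLY a JSON object with these keys:
  vendor (string), invoice_number (string), total (number), date (YYYY-MM-DD).
If a field is not present in the document, use null."""

def parse_extraction(raw_reply: str) -> dict:
    """Parse the model's reply, tolerating a Markdown code fence around the JSON."""
    text = raw_reply.strip()
    if text.startswith("```"):
        # Strip an opening fence like ```json and the closing ``` fence.
        text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    return json.loads(text)

# Example of the kind of reply such a model might produce (invented data):
sample_reply = """```json
{"vendor": "ACME GmbH", "invoice_number": "2024-0113", "total": 540.25, "date": "2024-03-02"}
```"""

fields = parse_extraction(sample_reply)
print(fields["vendor"], fields["total"])
```

The point being: instead of hand-tuned OCR templates per document layout, you describe the fields once and let the model find them, then validate the structured output downstream.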
> They’re just too unreliable for any serious enterprise grade application
I wouldn’t go that far. They are perfectly capable of fucking up as badly as organizations with the “enterprise” warning label have for decades.
I’d probably say it was the one realm where they could exceed their human counterparts.
> For example, as a coding assistant, a lot of people quite like them. But as a replacement for a human coder, they’re a disaster.
New technology is best when it can meaningfully improve the productivity of a group of people so that the group can shrink. The technology doesn’t take any one identifiable job, but now an organization of 10 people, properly organized in a way conscious of that technology’s capabilities and limitations, can do what used to require 12.
A forklift and a bunch of pallets can make a warehouse more efficient, when everyone who works in that warehouse knows how the forklift is best used, even when not everyone is a forklift operator themselves.
Same with a white collar office where there’s less need for people physically scheduling things and taking messages, because everyone knows how to use an electronic calendar and email system for coordinating those things. There might still be need for pooled assistants and secretaries, but maybe not as many in any given office as before.
So when we need an LLM to chip in and reduce the amount of time a group of programmers needs in order to put out a product, the manager of that team, and all the members of that team, need to have a good sense of what that LLM is good at and what it isn’t. Obviously autocomplete was a productivity enhancer long before LLMs came around, and extensions of that general concept may be helpful for the more tedious or repetitive tasks, but any team that uses it will need to use it with full knowledge of its limitations and where it best supplements the humans’ own tasks.
I have no doubt that some things will improve and people will find workflows that leverage the strengths while avoiding the weaknesses. But it remains to be seen whether it’ll be worth the sheer amount of cost spent so far.
That’s true, but at least one of these things needs to happen:

1. the forklift costs billions and consumes tons of energy, but it can lift a whole mountain, which no group of humans can do
2. the forklift helps a team of 10 do the work of 50 and, while still relatively expensive, costs less than the 40 people it’s replacing
3. the forklift becomes an inexpensive commodity that augments human capabilities and creates new possibilities for society as a whole
This is roughly what happened with mainframes to personal computers to mobile devices. LLMs are stuck between 1 and 2: they are not good enough forklifts to lift a mountain, and not cheap enough to replace 40 people and save money. There are some hints that they could at some point move to 3, but the large players that could make it happen are starting to be scared by the amount of investment needed to get there.
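The economics of scenario 2 come down to a simple break-even check; a toy sketch, with all figures invented for illustration rather than taken from real pricing:

```python
# Toy break-even check for scenario 2: the "forklift" only pays off if it
# costs less per year than the workers it replaces. All numbers are made up.

def breaks_even(workers_replaced: int, avg_annual_cost: float,
                tool_annual_cost: float) -> bool:
    """True if the tool is cheaper than the people it replaces."""
    return tool_annual_cost < workers_replaced * avg_annual_cost

# 40 people at $100k/year is a $4M/year bill:
cheap_tool_wins = breaks_even(40, 100_000, 3_000_000)       # $3M tool: saves money
expensive_tool_loses = breaks_even(40, 100_000, 5_000_000)  # $5M tool: doesn't
print(cheap_tool_wins, expensive_tool_loses)
```

The argument above is that current LLM costs (training, inference, integration) land on the wrong side of that inequality for most teams.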
On a related note, a lot of people are being fooled by this hype machine mixing GenAI with good “old” machine learning. You now read about all these “AI wins” like “student discovers new galaxies with AI” or “scientists discover new medicines with AI” that make it sound like these people just asked ChatGPT “how would you go about discovering a new galaxy?” or “could you make up a new drug for me, pretty please?”.
I think back to the late 90’s investment in rolling out a shitload of telecom infrastructure, with a bunch of telecom companies building out lots and lots of fiber. And perhaps more important than the physical fiber, the poles and conduits and other physical infrastructure housing that fiber, so that it could be improved as each generation of tech was released.
Then, in the early 2000’s, that industry crashed. Nobody could make their loan payments on the things they paid billions to build, and it wasn’t profitable to charge people for the use of those assets while paying interest on the money borrowed to build them, especially after the dot com crash where all the internet startups no longer had unlimited budgets to throw at them.
So thousands of telecom companies went into bankruptcy and sold off their assets. Those fiber links and routes still existed, but nobody turned them on. Google quietly acquired a bunch of “dark fiber” in the 2000’s.
When the cloud revolution happened in the late 2000’s and early 2010’s, the telecom infrastructure was ready for it. The companies that built that stuff weren’t still around, but the stuff they built finally became useful. Not at the prices paid for it, but when purchased in a fire sale, those assets could be profitable again.
That might happen with AI. Early movers overinvest and fail, leaving what they’ve developed to be used by whoever survives. Maybe the tech never becomes worth what was paid for it, but once it exists, whoever buys it cheap might be able to profit at that lower price, and it might prove useful at a more modest, realistic scope.
The biggest problem is that it is trying to be everything at the same time instead of focusing on a limited domain. They are shooting for the moon and failing spectacularly, but since their staged presentations impress rich people who don’t understand the need for reliable results, it keeps getting jammed down everyone’s throats.