This is peak-bubble-type news. AI is rapidly becoming more energy efficient. These events will be looked back on like pets.com reaching a valuation in the hundreds of millions and then dying.
okay lmao
Is it becoming useful yet?
Sometimes. As a tool, not as the outsourced human, oracle, or transcendent companion that con artists like Altman are trying to sell.
See how grounded this interview is, from a company whose model was trained on peanuts compared to ChatGPT and takes even less to run:
https://www.chinatalk.media/p/the-zai-playbook

…In 2025, with the launch of Manus and Claude Code, we realized that coding and agentic functions are more useful. They contribute more economically and significantly improve people’s efficiency. We are no longer putting simple chat at the top of our priorities. Instead, we are exploring more on the coding side and the agent side. We observe the trend and do many experiments on it.
They talk about how the next release will be very small and lightweight and more task-focused, and how important gaining efficiency through architecture (not scaling up) has become. They even touch on how their own models are starting to be useful utilities in their workflows, and specifically not miraculous worker replacements.
I am a developer. While AI is being marketed like snake oil, the things it can do are astonishing. One example: it reviews code a lot better than human beings do. It’s not just finding obvious errors; it catches logical errors that no human would have caught.
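To make “logical error” concrete, here is a hypothetical snippet (invented for illustration, not taken from any real review) with the kind of semantic bug that sails past a human skim but that review tools are often good at flagging:

```python
def paginate(items, page, page_size=10):
    """Return one page of results. `page` is 1-indexed."""
    # Subtle bug: using `page * page_size` as the start index treats
    # `page` as 0-indexed, so page 1 silently skips the first
    # `page_size` items and the final page always comes back empty.
    start = page * page_size          # fix: (page - 1) * page_size
    return items[start:start + page_size]

print(paginate(list(range(25)), 1))   # expected 0-9, actually returns 10-19
```

Nothing here is syntactically wrong, and the function returns plausible-looking data, which is exactly why this class of bug slips through a quick human review.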
I see people just forming two groups: those who think AI will solve everything, and those who think AI is useless. Neither of them is right.
No, it does not.
Source: Open-source contributor who’s constantly annoyed by the useless CodeRabbit AI that some open-source projects have chosen to use.
And how many errors is it creating that we don’t know about? You’re using AI to review code, but then someone has to review what the AI did, because it fucks up.
If humans code something, then humans can troubleshoot and fix it. It’s on our level. But how are you going to fix a gigantic complicated tangle of vibe code that AI makes and only AI understands? Make a better AI to fix it? Again and again? Just get the AI to code another system from scratch every month? This shit is not sustainable.
I’m not having the same experience.
Maybe reconsider which model you’re using?
If there were a model that coded perfectly, then there wouldn’t be a plurality of models. There would just be THE model.
It’s like companies having competing math formulas for structural engineering where some work and others don’t. It’s insanity.
Yes. As far as scalability goes, cheaper, more efficient models can be used in applications that require thousands of uses a day.
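A minimal back-of-envelope sketch of that point; the call volume, token count, and both per-token prices below are invented assumptions, not any vendor’s real rates:

```python
# Rough daily cost of a high-volume feature on a small vs. a large model.
# Every number here is an illustrative assumption, not real pricing.
CALLS_PER_DAY = 5_000
TOKENS_PER_CALL = 1_500            # assumed average, prompt + completion

PRICE_PER_MILLION_TOKENS = {       # hypothetical $ per million tokens
    "small_efficient_model": 0.30,
    "large_frontier_model": 15.00,
}

for model, price in PRICE_PER_MILLION_TOKENS.items():
    daily = CALLS_PER_DAY * TOKENS_PER_CALL / 1_000_000 * price
    print(f"{model}: ${daily:.2f}/day (~${daily * 365:,.0f}/year)")
```

At these made-up rates that’s roughly $800 a year versus roughly $41,000 for the same workload; the ratio is just the price ratio, which is why per-token efficiency dominates at volume.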
lol
What’s your knowledge regarding LLMs, if any at all?