From COBOL in the 1960s to AI in the 2020s, every generation promises to eliminate programmers. Explore the recurring cycles of software simplification hype.
Eliminating programmers will be possible when we figure out how to eliminate the engineers who design buildings.
Only a true AGI will be able to do that, and while LLMs feel like a step towards AGI, they are still missing the critical ongoing-learning capability that an AGI would require. The way the current systems are trained simply doesn’t allow for accepting and adopting new information continuously.
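To make the train/inference split concrete, here is a deliberately tiny sketch (a toy bigram predictor, not any real LLM's code) of why a trained model can't absorb new facts at inference time: all of its "knowledge" is written during a one-off training pass, and prediction is read-only.

```python
from collections import defaultdict

def train(corpus):
    """One-off training pass: count bigrams. This is the ONLY step
    that writes to the model's parameters."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def predict_next(model, word):
    """Inference: a read-only lookup. Nothing here updates `model`,
    so information that appears after training never changes it."""
    followers = model.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

model = train(["the sky is blue", "the grass is green"])
print(predict_next(model, "is"))  # frozen at whatever training saw
# A fact arriving after training ("the sky is falling") stays invisible
# until someone runs train() again from scratch.
```

Real models replace the count table with billions of weights, but the shape of the limitation is the same: new information requires another training run, not a lookup.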
These AI “coding agents” aren’t learning or thinking. They’re just natural language statistical search engines, and as such it’s easy to anthropomorphize them. Future generations will laugh at us, kinda like how we laugh at old products that contain cocaine, asbestos, lead, uranium, etc.
I never said they were intelligent. But by definition they are learning, and it is not conceptually different from how we learn.
(citation needed)
“Machine learning” is neither mechanically nor conceptually similar to how humans learn, unless you take a uselessly broad view and define learning as “thing goes in, thing comes out”. By that definition, a simple CRUD app learns too.
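For reference, this is what "machine learning" is mechanically: numerical minimization of an error over a fixed dataset. The sketch below fits a one-parameter model by gradient descent; whether that resembles human learning is exactly the debate above, the code just shows the mechanism.

```python
# Toy example: learn w in y = w * x from data generated with w = 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0  # the single "weight"

for _ in range(200):
    # Gradient of the mean squared error (w*x - y)^2, averaged over the data.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # step downhill

print(round(w, 3))  # converges to 2.0
```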
The way the current systems are trained simply doesn’t allow for accepting and adopting new information continuously.
As further evidence of this: RAG was supposed to enable exactly that kind of continuous updating. Instead, we’ve found that RAG was little more than an overused buzzword with limited applications, and it often results in hallucination anyway.
RAG was never supposed to be about learning over time. It was supposed to provide better context at inference time. It could never scale to handle new learning beyond focused concepts.
The way it was presented with regard to search engines was that it would pull in data more up-to-date than the model’s training cutoff. It does do that, actually, and provides better results too, on average anyway.
But that’s just one domain, and “better” doesn’t mean “good” or “accurate”. In most domains, at least where I work, we’ve found that RAG overcomplicates things for little benefit, unfortunately.
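The distinction made above (context at inference, not learning) is visible in the shape of the RAG pattern itself. Here is a minimal illustrative sketch, with a toy word-overlap retriever standing in for a real embedding index; note that nothing in it ever touches model weights, it only edits the prompt.

```python
def retrieve(query, documents, top_k=2):
    """Toy retriever: rank documents by word overlap with the query.
    Real systems use embeddings and a vector index, but the role is the same."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query, documents):
    """Paste retrieved passages into the prompt at inference time.
    The model itself is unchanged -- RAG supplies fresher *context*."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The 2024 release changed the default port to 8443.",
    "Cats are popular pets.",
    "The default port used to be 8080 before 2024.",
]
print(build_prompt("what is the default port", docs))
```

This is also why it can't scale into open-ended learning: everything "learned" must fit back through the context window on every single query.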
A true AGI also might simply not want to be a programmer or engineer, or might want to work on niche, single-developer projects interesting to them but not of use to wider humanity, like many actual developers do once their $dayjob is over. I can imagine they’ll also be annoyed by slop-machine users creating extra boring work for them to shovel through, and by AI bros getting creepy with them or trying to subordinate them to their own wishes.
They are already learning, but they cannot ideate. That is the distinction.
Take a human and have him study every single repo on GitHub.
Take an AI and train it on every single repo on GitHub.
Which of those two will continue to make novice mistakes like SQL injection and XSS vulnerabilities?
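For readers unfamiliar with the "novice mistake" being referenced, here is the classic SQL-injection bug next to the parameterized query that avoids it (table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_unsafe(name):
    # BUG: user input is spliced into the SQL string, so input like
    # "' OR '1'='1" rewrites the query's meaning.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # FIX: a parameterized query treats the input purely as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # every row leaks: injection succeeded
print(find_user_safe(payload))    # []: the payload matched no actual name
```

The fix has been standard advice for decades, which is the point of the thought experiment: exposure to every repo on GitHub includes exposure to millions of instances of the bug.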