From COBOL in the 1960s to AI in the 2020s, every generation promises to eliminate programmers. Explore the recurring cycles of software simplification hype.
The way current systems are trained simply doesn’t allow them to accept and adopt new information continuously.
As further evidence: RAG was supposed to enable this kind of continuous learning. Instead, we’ve found that RAG was little more than an overused buzzword with limited applications, and it often results in hallucination anyway.
RAG was never supposed to be about learning over time; it was supposed to provide better context at inference. It could never scale to handle new learning beyond focused concepts.
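To make the "better context at inference" point concrete, here is a minimal sketch of the RAG pattern: retrieve relevant documents at query time and prepend them to the prompt, without ever updating model weights. The corpus, the keyword-overlap scoring, and the prompt format are all illustrative assumptions, not any particular library's API.

```python
def score(query, doc):
    """Toy relevance score: fraction of query words that appear in the doc."""
    q_words = {w.strip(".,?!;:") for w in query.lower().split()}
    d_words = {w.strip(".,?!;:") for w in doc.lower().split()}
    return len(q_words & d_words) / max(len(q_words), 1)

def retrieve(query, corpus, k=2):
    """Return the top-k documents most relevant to the query."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    """Augment the query with retrieved context before calling the model.

    Note: nothing here changes the model itself -- the "learning" is
    confined to whatever made it into the prompt for this one request.
    """
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical corpus for illustration.
corpus = [
    "RAG retrieves documents at inference time to ground the model.",
    "COBOL was introduced in 1959 for business data processing.",
    "Fine-tuning updates model weights; retrieval does not.",
]
print(build_prompt("How does RAG ground the model at inference time?", corpus))
```

A real system would swap the keyword overlap for embedding similarity and send the prompt to a model, but the shape is the same: the retrieval step only shapes one request's context, which is why it can't stand in for learning over time.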
In the context of search engines, it was presented as a way to pull data more up-to-date than the model’s training cutoff. It does do that, and on average it provides better results too.
But that’s just one domain, and “better” doesn’t mean “good” or “accurate”. In most domains, at least where I work, we’ve found that RAG overcomplicates things for little benefit.