• TehPers@beehaw.org
    21 hours ago

    The way current systems are trained simply doesn’t allow for continuously accepting and adopting new information.

    As further evidence of this, RAG (retrieval-augmented generation) was supposed to enable it. Instead, we’ve found that RAG was little more than an overused buzzword with limited applications, and it often results in hallucinations anyway.

    • BlameThePeacock@lemmy.ca
      12 hours ago

      RAG was never supposed to be about learning over time. It was supposed to provide better context at inference. It could never scale to handle new learning beyond focused concepts.
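
      To make “better context at inference” concrete, here’s a toy sketch of the RAG pattern: retrieve the most relevant documents at query time and prepend them to the prompt. The corpus, the keyword-overlap scorer (standing in for a real embedding search), and the prompt format are all illustrative assumptions, not any particular library’s API.

      ```python
      # Toy RAG sketch: retrieve relevant snippets at inference time and
      # prepend them to the prompt as extra context. The scoring here is a
      # deliberately crude keyword overlap, standing in for embedding search.

      def score(query: str, doc: str) -> int:
          """Crude relevance score: number of shared lowercase words."""
          return len(set(query.lower().split()) & set(doc.lower().split()))

      def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
          """Return the top-k documents ranked by overlap with the query."""
          ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
          return ranked[:k]

      def build_prompt(query: str, corpus: list[str]) -> str:
          """Augment the user query with retrieved context before generation."""
          context = "\n".join(retrieve(query, corpus))
          return f"Context:\n{context}\n\nQuestion: {query}"

      # Hypothetical corpus, including one irrelevant document.
      corpus = [
          "The 2024 release notes describe the new streaming API.",
          "Cats are popular pets in many countries.",
          "The streaming API supports server-sent events since 2024.",
      ]

      print(build_prompt("How does the streaming API work?", corpus))
      ```

      The model itself never changes; only the prompt does, which is why this helps with stale knowledge at query time but isn’t a mechanism for learning over time.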

      • TehPers@beehaw.org
        11 hours ago

        The way it was presented in the context of search engines, it was supposed to pull in data more up-to-date than the model’s training cutoff. It does do that, and on average it provides better results too.

        But that’s just one domain, and “better” doesn’t mean “good” or “accurate”. In most domains, at least where I work, we’ve found that RAG overcomplicates things for little benefit, unfortunately.