• entropicdrift@lemmy.sdf.org · 8 points · 2 days ago

    This is a bad idea. It’s extremely likely to hallucinate at one point or another no matter how many tools you equip it with, and humans will eventually miss some fully made-up citation or completely misrepresented conclusion.

      • entropicdrift@lemmy.sdf.org · 4 points · 2 days ago

        I’m a professional software engineer and I’ve used RAG. It doesn’t prevent all hallucinations. Nothing can. The “hallucinations” are a fundamental part of the LLM architecture.

      • obsoleteacct@lemmy.zip · 3 points · 2 days ago

        Are the down votes because people genuinely think this is an incorrect answer, or because they dislike anything remotely pro-AI?

          • entropicdrift@lemmy.sdf.org · 5 points · edited · 2 days ago

            I use LLMs daily as a professional software engineer. I didn’t downvote you, and I’m not disengaging my thinking here. RAG doesn’t solve everything, and it’s better not to sacrifice scientific credibility on the altar of convenience.

            It’s always been easier to lie quickly than to dig for the truth. AIs are not consistent, regardless of the additional appendages you give them. They have no internal consistency by their very nature.

            • CatsPajamas@lemmy.dbzer0.com · 1 point · 1 day ago

              What would the failure rate on this be? What would the rate have to be to actually matter? It would literally just pull the abstract and spit out yes/no/undecided, and that answer is right there in the abstract. There’s very little chance of hallucinations meaningful enough, or frequent enough, to change anything.

              Have you never had it organize things or analyze sentiments? I understand if that’s not your use case but this is pretty fundamentally an easy application of AI.
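
              A minimal sketch of what that pipeline might look like, assuming a hypothetical call_llm helper standing in for whatever completion API gets used (the names here are invented for illustration, not anything Papermaps actually ships):

              ```python
              def call_llm(prompt: str) -> str:
                  """Placeholder for a real LLM call; stubbed so the sketch runs."""
                  return "undecided"

              PROMPT = (
                  "Question: {question}\n"
                  "Abstract: {abstract}\n"
                  "Based only on the abstract, answer with exactly one word: "
                  "yes, no, or undecided."
              )

              ALLOWED = {"yes", "no", "undecided"}

              def classify_abstract(question: str, abstract: str) -> str:
                  raw = call_llm(PROMPT.format(question=question, abstract=abstract))
                  answer = raw.strip().lower()
                  # Anything off-label falls back to "undecided" instead of being
                  # trusted blindly, which is where the verification question starts.
                  return answer if answer in ALLOWED else "undecided"

              print(classify_abstract(
                  "Does drug X reduce mortality?",
                  "We conclude that drug X significantly reduced mortality.",
              ))
              ```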

            • porksnort@slrpnk.net · 3 points · 2 days ago

              And this isn’t even really a great application for RAG. Papermaps just goes off of references and citations. Perhaps a sentiment analysis would be marginally useful, but since you need a human to verify all LLM outputs it would be a dubious time savings.

              The system scores review papers very favorably, and the “yes/no/maybe” conclusion is right in the abstract, usually in its last sentence or two. This is not a prime candidate for any LLM; it’s simple database operations on structured data that already exists. There’s no use case here.
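
              As a rough sketch of that “simple database operations” point, assuming the yes/no/maybe conclusion is already stored per paper (table and column names, and the review weighting of 3, are made up for illustration, not Papermaps’ actual schema):

              ```python
              import sqlite3

              conn = sqlite3.connect(":memory:")
              conn.execute(
                  "CREATE TABLE papers (id INTEGER PRIMARY KEY, title TEXT,"
                  " conclusion TEXT, is_review INTEGER)"
              )
              conn.executemany(
                  "INSERT INTO papers (title, conclusion, is_review) VALUES (?, ?, ?)",
                  [
                      ("Trial of drug X", "yes", 0),
                      ("Cohort study of drug X", "no", 0),
                      ("Systematic review of drug X", "maybe", 1),
                  ],
              )

              # Tally the stored conclusions, weighting review papers more heavily
              # (arbitrary weight of 3 here). No LLM involved anywhere.
              rows = conn.execute(
                  "SELECT conclusion, SUM(CASE WHEN is_review = 1 THEN 3 ELSE 1 END)"
                  " FROM papers GROUP BY conclusion"
              ).fetchall()
              print(dict(rows))  # e.g. {'maybe': 3, 'no': 1, 'yes': 1}
              ```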

              • entropicdrift@lemmy.sdf.org · 3 points · edited · 1 day ago

                “Perhaps a sentiment analysis would be marginally useful, but since you need a human to verify all LLM outputs it would be a dubious time savings.”

                Thank you, yes. That’s exactly my point. You’d need a human to verify all of the outputs anyway, and these are literally machines that exclusively produce text that humans find believable, so you’re likely adding to the problem of humans messing stuff up more than you’re speeding anything up. Being wrong fast has always been easy, so it’s no help here.