• bradd@lemmy.world · 14 days ago

    When it’s important, you can have an LLM query a search engine and read/summarize the top n results. It’s actually pretty good; it’ll give direct quotes, citations, etc.
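
    Roughly what that pipeline looks like, as a minimal sketch (not any particular commenter's setup): it assumes the OpenAI Python client for the LLM call, and `search_web` is a hypothetical stand-in for whatever search API you actually use.

    ```python
    from openai import OpenAI

    def search_web(query: str, n: int = 5) -> list[dict]:
        """Placeholder: return the top n results as dicts with "title", "url", "snippet"."""
        raise NotImplementedError("wire this up to whatever search API you have")

    def answer_with_sources(question: str, n: int = 5) -> str:
        # Fetch results and pack them into a numbered context block.
        results = search_web(question, n)
        sources = "\n\n".join(
            f"[{i + 1}] {r['title']} ({r['url']})\n{r['snippet']}"
            for i, r in enumerate(results)
        )
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Answer using only the numbered sources below. "
                            "Quote directly where possible and cite like [1], [2]."},
                {"role": "user",
                 "content": f"Question: {question}\n\nSources:\n{sources}"},
            ],
        )
        return reply.choices[0].message.content
    ```

    Grounding the answer in fetched snippets and asking for numbered citations makes spot-checking faster, but the output still has to be verified against the linked sources.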

    • Nalivai@lemmy.world · 12 days ago

      And some of those citations and quotes will be completely false and randomly generated, but they will sound very believable, so you don’t know truth from random fiction until you check every single one of them. At which point you should ask yourself why you added the unnecessary step of burning a small portion of the rainforest to ask a random word generator for stuff, when you could have skipped it and looked for sources directly, saving that much time and energy.

      • PapstJL4U@lemmy.world · 11 days ago

        I, too, get the feeling that the ROI is not there with LLMs. Being able to include “site:” or “ext:” operators is more efficient.

        I just ran another test: Kaba. Just googling “kaba” gets you a German wiki article explaining that it means KAkao + BAnana.

        ChatGPT: it is the combination of the first syllables of KAkao and BEutel. Beutel is German for bag.

        It just made up the important part. On top of that, ChatGPT says Kaba is a famous product in many countries; I am sure it is not.

      • bradd@lemmy.world · 12 days ago

        I guess it depends on your models and toolchain. I don’t have this issue, but I have definitely seen it in the past with smaller models, no tools, and legal code.

        • Nalivai@lemmy.world · 2 days ago

          You do have this issue; you can’t not have this issue. Your LLM, no matter how big the model is and how much tooling you use, does not have a criterion for truth. The fact that you’ve made this invisible to yourself is worse, so much worse.

          • bradd@lemmy.world · 2 days ago

            If I put text into a box and something useful comes out, I couldn’t give less of a shit whether it has a criterion for truth. LLMs are a tool, like a mannequin: you can put clothes on it without thinking it’s a person, but you don’t seem to understand that.

            I work in IT. I can write a bash script to set up a server, or pivot to an LLM and ask for a Dockerfile that does the same thing, and it gets me very close. Sure, I need to read it over and make changes, but that’s just how it works in the tech world: you take something that someone wrote, read it over, and change it to fit your use case. Sometimes you find that real people make really stupid mistakes; sometimes college-educated people write trash software that is a waste of time to look at and adapt… much like working with an LLM. No matter what you’re doing, buddy, you still have to use your brain.

      • bradd@lemmy.world · 12 days ago

        As a side note, I feel like this take is intellectually lazy. A knife cannot be used or handled like a spoon because it’s not a spoon. That doesn’t mean the knife is bad; in fact, knives are very good, but they do require more attention and care. LLMs are great at cutting through noise to get you closer to what is contextually relevant, but an LLM is not a search engine, so, like with a knife, you have to be keenly aware of the sharp end when you use it.

        • Nalivai@lemmy.world · edited · 2 days ago

          > LLMs are great at cutting through noise

          Even that is not true. It doesn’t have the aforementioned criterion for truth, and you can’t make it have one.
          LLMs are great at generating noise that humans have a hard time distinguishing from human-written text. Nothing else. There are indeed applications for that, but due to human nature, people assume that since the text looks coherent, the information it contains will also be reliable, which is very, very dangerous.

          • bradd@lemmy.world · 2 days ago

            I understand your skepticism, but I think you’re overstating the limitations of LLMs. While it’s true that they can generate convincing-sounding text that may not always be accurate, this doesn’t mean they’re only good at producing noise. In fact, many studies have shown that LLMs can be highly effective at retrieving relevant information and generating text that is contextually relevant, even if not always 100% accurate.

            The key point I was making earlier is that LLMs require a different set of skills and critical thinking to use effectively, just like a knife requires more care and attention than a spoon. This doesn’t mean they’re inherently ‘dangerous’ or only capable of producing noise. Rather, it means that users need to be aware of their strengths and limitations, and use them in conjunction with other tools and critical evaluation techniques to get the most out of them.

            It’s also worth noting that search engines are not immune to returning inaccurate or misleading information either. The difference is that we’ve learned to use search engines critically, evaluating sources and cross-checking information to verify accuracy. We need to develop similar critical thinking skills when using LLMs, rather than simply dismissing them as ‘noise generators’.

            See these: