• SleeplessCityLights@programming.dev · 7 hours ago

    I had to explain to three separate family members what it means for an AI to hallucinate. The look of terror on their faces afterward is proof that people have no idea how “smart” an LLM chatbot really is. They have probably been using one at work for a year, thinking it was accurate.

    • hardcoreufo@lemmy.world · 5 hours ago

      Idk how anyone searches the internet anymore. Search engines all turn up junk, so I ask an AI. Maybe one time out of 20 it turns up what I’m asking for better than a search engine would. The rest of the time it runs me in circles that go nowhere and wastes hours. So then I go back to the search engine and find what I need buried 20 pages deep.

      • MrScottyTay@sh.itjust.works · 4 hours ago

        It’s fucking awful, isn’t it. Some day soon, when I can be arsed, I’ll have to give one of the paid search engines a go.

        I’m currently on Qwant, but I’ve already noticed a degradation in its results since I started using it at the start of the year.

      • ironhydroxide@sh.itjust.works · 4 hours ago

        Agreed. And the search engines returning AI-generated pages masquerading as websites with real information is precisely why I spun up a SearXNG instance. It actually helps a lot.
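
        For anyone curious, a self-hosted instance can also be queried programmatically. A rough sketch in JavaScript, assuming the json output format is enabled under search.formats in the instance’s settings.yml and that it listens on localhost:8080 (both placeholders, adjust to your setup):

        ```javascript
        // Query a self-hosted SearXNG instance over its JSON API (Node 18+ ships fetch).
        // BASE_URL is a placeholder; "json" must be listed under search.formats in settings.yml.
        const BASE_URL = "http://localhost:8080";

        async function searx(query) {
          const res = await fetch(`${BASE_URL}/search?q=${encodeURIComponent(query)}&format=json`);
          if (!res.ok) throw new Error(`SearXNG returned HTTP ${res.status}`);
          const data = await res.json();
          // Each result carries at least a title, a url and a text snippet ("content").
          return data.results.map(r => ({ title: r.title, url: r.url, snippet: r.content }));
        }

        searx("javascript set methods").then(results => console.log(results.slice(0, 5)));
        ```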

    • cub Gucci@lemmy.today · edited · 2 hours ago

      I’m not using LLMs often, but I haven’t had a single clear example of a hallucination in about six months now. I’m inclined to believe this recursive-calls stuff actually works.

      • DireTech@sh.itjust.works · 2 hours ago

        Either you’re using them rarely or just not noticing the issues. I mainly use them for looking up documentation, and recently had Google’s AI screw up how sets work in JavaScript. If it makes mistakes on something that well documented, how is it doing on everything else?
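
        For reference, the documented behaviour is easy to sanity-check in a console. The exact mistake isn’t quoted above, so this is just the well-known basics rather than a reconstruction of that answer:

        ```javascript
        // Documented Set behaviour in JavaScript, runnable in Node or any modern browser.
        const s = new Set([1, 2, 2, 3]);
        console.log(s.size);    // 3: duplicates are dropped on insertion
        console.log(s.has(2));  // true
        s.add(4);
        s.delete(1);
        console.log([...s]);    // [2, 3, 4]: iteration follows insertion order
        // Composition methods (union, intersection, difference, ...) only landed in the
        // language recently, so older runtimes lack them, which is an easy detail to get wrong.
        ```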

          • cub Gucci@lemmy.today · 1 hour ago

          Hallucination isn’t just any mistake, if I understand it correctly. LLMs make mistakes, and that’s the primary reason I don’t use them for my coding job.

          About a year ago, ChatGPT made up a Python library with a made-up API to solve the particular problem I asked about. Maybe the last hallucination I can recall was it claiming that manual is a keyword in PostgreSQL, which it is not.
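
          That kind of claim is also easy to verify against the server itself with pg_get_keywords(). A rough sketch using the pg client for Node, with a placeholder connection string:

          ```javascript
          // Ask PostgreSQL itself whether a word is a keyword, via pg_get_keywords().
          // Uses the "pg" npm package; the connection string below is a placeholder.
          const { Client } = require("pg");

          async function isKeyword(word) {
            const client = new Client({ connectionString: "postgres://localhost/postgres" });
            await client.connect();
            try {
              const { rows } = await client.query(
                "SELECT word, catdesc FROM pg_get_keywords() WHERE word = $1",
                [word.toLowerCase()]
              );
              return rows; // an empty array means the word is not a keyword
            } finally {
              await client.end();
            }
          }

          isKeyword("manual").then(rows =>
            console.log(rows.length ? rows : "not a PostgreSQL keyword"));
          ```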