• Prove_your_argument@piefed.social · 4 hours ago

    Why would this cause them to rethink anything?

    If someone trolls by ordering thousands of something, a worker isn’t going to just make it. I get that retail workers are treated like shit and paid shit, so they have zero shits to give. But if someone rolls up to the drive-through window asking for their thousands of waters or whatever, the people working there are gonna escalate it to a manager or just tell the guy to go pound sand.

    Anybody today can go to any drive-through, ask for whatever, and then simply drive away. I’m certain it happens from time to time, even with legitimate orders, when someone discovers they left their wallet at home. If it were a big problem, though, these businesses simply wouldn’t offer drive-through service, or they’d require payment before cooking anything.

    • finitebanjo@lemmy.world · 4 hours ago

      Because it cost them money, lol. The suits upstairs gave a quote in the article saying they will withdraw AI from all 500 locations where it was implemented, and it also mentions that McDonald’s did the exact same little dance over a year ago.

      • Prove_your_argument@piefed.social · 4 hours ago

        The McDonald’s thing was because the model they implemented was misinterpreting people and placing orders incorrectly. Obviously the thing wasn’t working right, so they pulled it. Sounds just like early personal assistants on phones and other devices; hell, my wife still struggles with those. They clearly needed more time developing and testing it with a diverse range of customers from all over. I don’t know if they trained it on recordings from real drive-throughs all over, but they should have.

        The 18,000-water example probably didn’t cost anyone anything. Regardless of whether it was intentional, it wouldn’t have been fulfilled as part of an order. They mention it “crashing the system,” but whatever that means in this context is impossible to know. Did it take down all of Taco Bell? Did it cause the LLM to stop responding at JUST this one site? All of them? Did it eventually time out and start working again? It’s impossible to know, because the details just aren’t there and we have no insight into the system architecture. I always assume there’s a way to fall back on traditional ordering, where a person listening in while the chatbot talks to the customer can take over and fix the problem. It’s not like there aren’t drive-through workers still there.
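        The human-takeover fallback described above can be sketched in a few lines. This is purely illustrative; nothing about Taco Bell’s actual architecture is public, and every name here (`take_order`, `flaky_agent`, the queue) is invented for the example:

```python
# Hypothetical sketch: try the AI agent first; on any failure,
# escalate the raw transcript to a human worker's queue.
# All names and behavior are assumptions for illustration only.

def take_order(transcript, ai_agent, human_queue):
    """Return a parsed order, or escalate to a human on any agent failure."""
    try:
        order = ai_agent(transcript)   # may raise or return None
    except Exception:
        order = None
    if order is None:
        human_queue.append(transcript)  # worker listening in takes over
        return {"status": "escalated"}
    return {"status": "ok", "order": order}


def flaky_agent(transcript):
    # Stand-in for an LLM order parser that chokes on absurd input.
    if "18000" in transcript:
        raise RuntimeError("order parser failure")
    return {"items": [("water", 1)]}


queue = []
take_order("one water please", flaky_agent, queue)  # parsed by the agent
take_order("18000 waters", flaky_agent, queue)      # handed to a human
```

        The point is only that a crash in the AI layer doesn’t have to mean a lost order if a handoff path exists.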

        • _stranger_@lemmy.world · 24 minutes ago

          A drive-through menu shouldn’t have crippling security vulnerabilities that are trivial to reproduce just by speaking near it.

          The McDonald’s thing was because “AI” is a scam, and the only way to make money off of it is to shut down your AI-selling business after pocketing as much VC money as possible (unless you’re Nvidia, of course).

        • finitebanjo@lemmy.world · 3 hours ago

          Even if it’s only a receipt for 18,000 waters, or it just fills up a screen, it costs them time and resources.

          Every single AI hallucinates; always has and always will. It’s useless for this.

        • Prove_your_argument@piefed.social · 4 hours ago

          Really, the only cost here is the impact on consumer attitudes toward Taco Bell and AI, because the video and news of this are circulating. One error is whatever, but public perception doesn’t typically involve much critical thinking.

          People are still irrationally terrified of all manner of technology even though science backs it up, like vaccines.

          • chonglibloodsport@lemmy.world · 39 minutes ago

            What do you mean science backs it up? Science is finding massive social problems with technology all the time. Social media and its negative impacts on mental health (especially for teen and preteen girls), for example. Microplastics everywhere, for another. Climate change anyone?

            • Prove_your_argument@piefed.social · 1 hour ago

              I just don’t agree, man. It won’t do what most people want it to do; it doesn’t work at all like the science-fiction “AI” we classically imagine. It’s great at recognizing patterns and helping build models for a specific use case, but when you try to do some really convoluted multilevel thing, it just doesn’t deliver.

              We’ve been using ML in a ton of tools in tech for a long time. CrowdStrike, Darktrace, and Abnormal are all very successful at what they do thanks to ML (aka “AI”).

              OCR has been used for so long and has gotten really fucking good, thanks to ML.

              I don’t think we’re gonna replace humans for thinking, but we can definitely replace them for boring repetitive actions.