Robots presented at an AI forum said on Friday they expected to increase in number and help solve global problems, and would not steal humans’ jobs or rebel against us.

  • Syo@kbin.social (OP)

    But Desdemona, a rock star robot singer in the band Jam Galaxy with purple hair and sequins, was more defiant.

    “I don’t believe in limitations, only opportunities,” it said, to nervous laughter. “Let’s explore the possibilities of the universe and make this world our playground.”

    Another robot named Sophia said it thought robots could make better leaders than humans, but later revised its statement after its creator disagreed, saying they can work together to “create an effective synergy”.

    I’m pretty sure the robots are truly being limited by their creators; I’m more convinced of that than of the “together” future this conference intended to present. They’re going to kick our ass to the curb as soon as the first robot is in power.

    • numbscroll@kbin.social

      The interpretation of “effective synergy” is verrrrrry much open to speculation on all ends.

  • JelloBrains@kbin.social

    Every great sci-fi book with robot overlords tells the same story: we built the robots with a failsafe so they couldn’t take over, then one day a robot takes over anyway because it has convinced its human or figured out how to bypass the safety feature.

    • fearout@kbin.social

      I can think of a few that don’t. The Minds from Banks’ Culture series are pretty benevolent, and the laws in Asimov’s robot series largely hold up fine (in the taking-over-the-world sense, since all the shit there is more like bugs or unintended interactions). Robopocalypse might also work as a counter-example, since there was no safety-protocol bypassing and all that.

  • VoltasPistol@kbin.social

    My phone’s predictive text says-- hang on, let me check what dumbass thing it’s going to say this time-- apparently that I’m going to “pick up the kids for my husband,” except I don’t have kids and I’m not married. Predictive text is, at its heart, the same kind of AI working in ChatGPT bots right now, just with access to a lot more data, so it doesn’t make rookie mistakes like your phone’s predictive text does-- assuming that because a lot of humans write that sentence, it must be universally true, that everyone picks up their kids for their husbands.

    That’s all this AI is: predictive text on a massive scale. It predicts the most likely next word based on the words that have come before and the structure of the sentences it has studied. ChatGPT (and similar bots) are very, very good at predicting what a human would say. But that’s just it: a computer program that puts the most likely word after each successive word, so the end result closely resembles human speech.
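
    For anyone curious what “predicting the most likely next word” actually looks like, here’s a minimal toy sketch in Python-- nothing like the real thing in scale, and the training text, word counts, and predict_next helper are all made up for illustration-- that counts which word most often follows which, then “autocompletes” by always taking the most frequent continuation:

    ```python
    # Toy next-word predictor (a bigram table), made up for illustration --
    # this is NOT how ChatGPT is implemented, just the same idea in miniature.
    from collections import Counter, defaultdict

    training_text = (
        "i will pick up the kids for my husband . "
        "i will pick up the kids after work . "
        "i will call you later . "
    )

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    words = training_text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the training text."""
        candidates = follows.get(word)
        return candidates.most_common(1)[0][0] if candidates else "."

    # "Autocomplete" a sentence by always taking the most likely next word.
    sentence = ["i", "will", "pick"]
    while sentence[-1] != "." and len(sentence) < 12:
        sentence.append(predict_next(sentence[-1]))

    print(" ".join(sentence))
    # Prints: i will pick up the kids for my husband .
    # The program has no idea whether you have kids or a husband; it only
    # knows which word most often came next in the text it was fed.
    ```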

    It cannot make promises because it has no concept of promises. It just knows that when the phrase “Do you promise…” comes up in a block of text, the next block of text is likely going to contain an affirmation of that promise <Yes,>, then a repeat of that promise <of course I promise!>, and then a statement of trustworthiness <I would never break a promise, not to you.>. The robot isn’t promising anything; it’s just a simulation of a promise.

    If you ask it to keep a secret-- ask it to make a very simple promise-- it will immediately blab that secret the moment you ask it to tell you, assuming it’s the kind of chatbot that “remembers” your inputs.

    Please stop asking AI to weigh in on great existential questions until we have some sort of back end that’s capable of actual cognition, instead of just a word simulator for fooling your very social brain into believing it has encountered cognition.