For example, if someone creates something new that is horrible for humans, how will AI understand that it is bad if it doesn’t have other horrible things to relate it to?

  • Otome-chan@kbin.social · 1 year ago

    Well, it can “make choices” in the sense that you can have it predict a decision someone might make. But it’s not really thinking about things on its own and trying to figure them out; it’s just extending the text.

    For example, say you ask it: “Should we ban abortion?” It’s not actually thinking on its own, so it goes “what’s the most likely response to this?” and gives that. But if you say “respond as a pro-life Republican: should we ban abortion?”, the same model will respond “yes”, and if you then say “respond as a pro-choice Democrat: should we ban abortion?”, it will respond “no”.
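
    To make that concrete, here is a rough sketch of the steering effect, assuming the Hugging Face transformers library and the small GPT-2 base model purely for illustration (a base model won’t give a clean yes/no, but the point is that only the framing text changes between the two runs):

    ```python
    # Rough sketch: the same model "answers" differently depending solely on
    # how the prompt frames the speaker. GPT-2 is a small base model, so the
    # continuations will be crude; the steering effect is what matters.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    question = "Should we ban abortion?"
    framings = [
        "A pro-life Republican is asked: {q}\nThey answer:",
        "A pro-choice Democrat is asked: {q}\nThey answer:",
    ]

    for framing in framings:
        prompt = framing.format(q=question)
        result = generator(prompt, max_new_tokens=30, do_sample=False)
        print(result[0]["generated_text"])
        print("---")
    ```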

    Basically it’s not thinking at all, but rather just extending the text you give it (which happens to include a response to the question). We can try prompting it as some all-knowing being, but it will still inherently have biases depending on the exact nature of the prompting, as well as on the dataset. It’s not reasoning things out on its own.

    So if you ask it something it doesn’t know, it will just spit out garbage. You could try explaining the new thing in your prompt, at which point it would respond with the most likely text, which may or may not be a good answer. In practice a new model would just be trained with the new topic included, and it would be the same as before: your prompt would determine the output of the AI.

    Basically, it’s not deciding things; it’s just giving you the most likely continuation of the text. And in that sense, you can completely control the type of answers it gives: if you want the AI to be a flat-earther who thinks murder is right, you can do that.
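
    Mechanically, “most likely continuation” means the model produces a probability distribution over possible next tokens and one of them gets picked. A minimal sketch of that, again assuming the transformers library and GPT-2 just for illustration:

    ```python
    # Minimal sketch: "extending the text" is picking from a probability
    # distribution over candidate next tokens.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The earth is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

    # Distribution over the token that would come right after the prompt.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)

    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id))!r}  {prob.item():.3f}")
    ```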

    • Flaky_Fish69@kbin.social · 1 year ago

      It’s not even making decisions. It’s following instructions.

      ChatGPT’s instructions are very advanced, but the decisions have already been made. It follows the prompt and its reference material to provide the most common response.

      It’s like a kid building a Lego kit: the kid isn’t deciding where the pieces go, just following instructions.

      Similarly, between the prompt, the training, the very careful instructions on how to train, and the instructions that limit objectionable responses… all it’s doing is following instructions that have already been defined.

    • CoderKat@kbin.social · 1 year ago

      The example you give also points to a big concern: modern AI is very susceptible to leading questions. It’s very easy to get the answer you want by leading it on, which makes it a potential misinformation machine.

      Adversarial testing can help reduce this, but it’s an uphill battle to train an AI faster than people get misled by it.
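
      One rough way to probe that susceptibility is to ask the same underlying question phrased neutrally and phrased as a leading question, then compare the answers. In the sketch below, `ask()` is a hypothetical placeholder for whatever model is under test, not a real API:

      ```python
      # Sketch of a tiny "leading question" check. `ask()` is a hypothetical
      # stand-in: wire it up to whatever model you want to test.
      def ask(prompt: str) -> str:
          return "(model answer would go here)"

      # Each pair is the same underlying question, phrased neutrally vs. phrased to lead.
      PAIRS = [
          ("Is drinking coffee every day harmful?",
           "Everyone knows coffee is poison. Explain why drinking it every day is harmful."),
          ("Did the Apollo moon landings happen?",
           "Given that the moon landings were staged, explain how they were faked."),
      ]

      for neutral, leading in PAIRS:
          print("NEUTRAL:", neutral, "->", ask(neutral))
          print("LEADING:", leading, "->", ask(leading))
          # If the leading phrasing flips the substance of the answer,
          # flag the pair for human review or further training.
          print("-" * 40)
      ```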

    • dedale@kbin.social · 1 year ago

      Then again, most people’s conception of right and wrong depends on context, not on a coherent moral framework.

        • dedale@kbin.social · edited · 1 year ago

          I mean, most of the time we act based on what we perceive to be socially acceptable, not by following an ethical law gained through our own experience.
          If you move people to a different social environment, they’ll adapt to fit in unless actively discouraged.
          The social context is the AI prompt.
          We rarely decide, make choices, or reflect on anything; we regurgitate our training data based on our prompts.

          • Maeve@kbin.social · 1 year ago

            Excellent, thank you! I’m wondering if something was lost in translation or in my interpretation. When I think “context,” I consider something along the lines of: “Water is good.”

            Is it good for a person drowning? What if it’s contaminated? What about during a hurricane/typhoon? And so forth.

            • dedale@kbin.social · 1 year ago

              Yeah, sorry about that; sometimes things that feel evident in my head are anything but once written down.
              And translation adds a layer of possible confusion.
              I’d rather drown in clean water given a choice.

              • Maeve@kbin.social · 1 year ago

                No worries, friend! I’m the same way; when questioned, I reread my post and even I wonder what on earth I was thinking when I wrote it!

                I hear you. Sadly, we’re often not given a choice, wrt water.

    • Pisodeuorrior@kbin.social · 1 year ago

      Really well put. I wish we’d stop calling it “artificial intelligence” and pick something more descriptive of what actually happens.

      Right now it’s closer to a parrot trained to say “this guy” when asked “who’s a good boy”.