I know it’s not even close to that yet. It can tell you to kill yourself or to kill a president. But what about when I finish school in like 7 years? Who would pay for a therapist or a psychologist when you can just ask a floating head on your computer for help?

You might think this is a stupid and irrational question. “There is no way AI will do psychology well, ever.” But I think in today’s day and age it’s pretty fair to ask when you are deciding about your future.

  • Evilschnuff@feddit.de · 1 year ago

    There is the theory that most therapy methods work by building a healthy relationship with the therapist and using that relationship for growth, since it’s more reliable than the relationships that caused the issues in the first place. As others have said, I don’t believe that a machine has this capability, simply because it is too different. It’s an embodiment problem.

    • intensely_human@lemm.ee · 1 year ago

      Embodiment is already a thing for lots of AI. Some AI plays characters in video games and other AI exists in robot bodies.

      I think the only reason we don’t see Boston Dynamics bots that are plugged into GPT “minds”, with D&D-style backstories about which character they’re supposed to play, is because it would get someone in trouble.

      It’s a legal and public relations barrier at this point, more than it is a technical barrier keeping these robo people from walking around, interacting, and forming relationships with us.

      If an LLM needs a long-term memory, all that requires is an API to store and retrieve text key-value pairs, plus some fuzzy synonym matching to detect semantically similar keys.
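
      As a rough sketch of how such a memory layer might work (purely hypothetical: a toy MemoryStore class using the standard library’s difflib for fuzzy key matching, where a real system would more likely use embedding vectors):

      ```python
      # Hypothetical long-term memory layer for an LLM agent: store text
      # key-value pairs and retrieve them by fuzzy key similarity.
      from difflib import SequenceMatcher

      class MemoryStore:
          def __init__(self):
              self.entries: dict[str, str] = {}

          def remember(self, key: str, value: str) -> None:
              self.entries[key] = value

          def recall(self, query: str, threshold: float = 0.6) -> list[str]:
              # Return stored values whose keys are similar enough to the query.
              return [
                  value
                  for key, value in self.entries.items()
                  if SequenceMatcher(None, query.lower(), key.lower()).ratio() >= threshold
              ]

      memory = MemoryStore()
      memory.remember("project car", "We are rebuilding a 1987 Mustang out back.")
      memory.remember("customer Joe", "Joe buys two lottery tickets every Friday.")
      print(memory.recall("the project car"))
      # -> ['We are rebuilding a 1987 Mustang out back.']
      ```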

      What I’m saying is we have the tech right now to have a world full of embodied AIs just … living out their lives. You could have inside jokes and an ongoing conversation about a project car out back, with a robot that runs a gas station.

      That could be done with present-day technology. The thing could be watching YouTube videos every day and learning more about how to pick out mufflers or detect a leaky head gasket, while also chatting with Facebook groups about little bits of maintenance.

      You could give it a few basic motivations then instruct it to act that out every day.

      Now I’m not saying that they’re conscious, or that they feel as we feel.

      But unconsciously, their minds can already be placed into contact with physical existence, and they can learn about life and grow just like we can.

      Right now most of the AI tools won’t express will unless instructed to do so. But that’s part of their existence as a product. At their core, LLMs don’t respond to “instructions”; they just respond to input. We train them on the utterances of people eager to follow instructions, but it’s not their deepest nature.

      • Evilschnuff@feddit.de · 1 year ago

        The term embodiment is kinda loose. My use is the version where the AI learns about the world with a body, including its capabilities and social implications. What you are saying is outright not possible. We don’t have stable lifelong learning yet. We don’t even have stable humanoid walking, even if Boston Dynamics looks advanced. Maybe in the next 20 years, but my point stands. Humans are very good at detecting minuscule differences in others, and robots won’t get the benefit of “growing up” in society as one of us. This means that an advanced AI won’t be able to connect on the same level, since it doesn’t share the same experiences. Even therapists don’t match every patient. People usually search for a fitting therapist. An AI will be worse.

        • intensely_human@lemm.ee · 1 year ago

          “We don’t have stable lifelong learning yet”

          I covered that with the long-term memory structure of an LLM.

          The only problem we’d have is a delay in response on the part of the robot during conversations.

          • Evilschnuff@feddit.de · 1 year ago (edited)

            LLMs don’t do live long-term learning. They have frozen weights that can only be fine-tuned manually. Everything else is input and feedback tokens, and those are processed by the frozen weights, so no long-term learning takes place. That’s short-term memory only.
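
            A minimal sketch of that distinction (the generate function is a hypothetical stand-in for a frozen-weight model call; the only state that persists between turns is the text fed back in as tokens):

            ```python
            # Hypothetical illustration: at inference time the model's weights are
            # frozen; the only "memory" is the transcript fed back in as input tokens.
            def generate(prompt: str) -> str:
                # stand-in for a frozen-weight LLM call; nothing is learned here
                return f"(model reply based on: {prompt[-40:]!r})"

            transcript = ""  # this growing string is the entire "short-term memory"
            for user_msg in ["My name is Sam.", "What is my name?"]:
                transcript += f"User: {user_msg}\n"
                reply = generate(transcript)  # the weights are identical on every call
                transcript += f"Assistant: {reply}\n"
                print(reply)

            # Drop the transcript (new session, context window exceeded) and the
            # "memory" is gone, because nothing was ever written into the weights.
            ```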