To clarify: I’m not talking about the popular conception of the Turing test as something like the Voight-Kampff test, meant to catch rogue AIs—but Turing’s original test, meant for AI designers to evaluate their own machines. In particular, I’m assuming the designers know their machine well enough to distinguish between a true inability and a feigned one (or to construct the test in a way that motivates the machine to make a genuine attempt).

And examples of human inabilities might be learning a language that violates the patterns of natural human languages, or engaging in reflexive group behavior the way starlings or fish schools do.
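For the first example, the probe could look something like the sketch below (Python, with toy grammars made up purely for illustration): generate data from a word-order pattern natural languages actually use and from a "global-property" pattern they never do, then compare how readily the system under test learns each. The bigram scorer is only a placeholder for whichever learner (human or machine) is actually being evaluated.

```python
import math
import random

# Toy sketch only: the grammars and the bigram scorer are made-up stand-ins,
# not an established benchmark. The point is just the shape of an
# "impossible language" probe: generate data from a pattern natural languages
# use and from one they never do, then compare how easily each is learned.

random.seed(0)
NOUNS = ["dog", "cat", "bird", "fox"]
VERBS = ["sees", "chases", "hears"]

def natural_sentence():
    # Fixed subject-verb-object order, a pattern real languages do use.
    return [random.choice(NOUNS), random.choice(VERBS), random.choice(NOUNS)]

def impossible_sentence():
    # "Impossible" rule: verb placement depends on a global property of the
    # sentence (alphabetical order of the two nouns), the sort of rule
    # natural-language word order doesn't hinge on.
    s, o, v = random.choice(NOUNS), random.choice(NOUNS), random.choice(VERBS)
    return [s, v, o] if s <= o else [v, s, o]

def bigram_counts(corpus):
    counts = {}
    for sent in corpus:
        for a, b in zip(["<s>"] + sent, sent + ["</s>"]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

def avg_log_likelihood(corpus, counts, vocab):
    # Add-one-smoothed bigram score, standing in for whatever learner is
    # actually under test (a human subject, an LLM, etc.).
    totals = {}
    for (a, _), c in counts.items():
        totals[a] = totals.get(a, 0) + c
    ll = n = 0
    for sent in corpus:
        for a, b in zip(["<s>"] + sent, sent + ["</s>"]):
            p = (counts.get((a, b), 0) + 1) / (totals.get(a, 0) + len(vocab))
            ll += math.log(p)
            n += 1
    return ll / n

vocab = set(NOUNS) | set(VERBS) | {"<s>", "</s>"}
for name, gen in [("natural", natural_sentence), ("impossible", impossible_sentence)]:
    train = [gen() for _ in range(2000)]
    test = [gen() for _ in range(500)]
    score = avg_log_likelihood(test, bigram_counts(train), vocab)
    print(f"{name:10} grammar: avg test log-likelihood = {score:.3f}")
```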

  • masterspace@lemmy.ca · 3 days ago

    I see what you’re saying, but I think the problem is that you would need to test an AI while it’s unaware of being tested, or use a novel trick it’s unaware of, to catch it producing non-human output.

    If it’s aware that it’s being tested, then presumably it will try to pass the test and try to limit itself to human cognition to do so.

    That is, it’s possible that an AI’s intelligence includes enough human-like intelligence to mimic a human completely and pass a Turing test, but not enough to know to stay within those boundaries; it’s also possible that it knows enough both to mimic us and to keep to our bounds, in which case the test needs to be done in secret.

    • AbouBenAdhem@lemmy.world (OP) · 3 days ago

      In the original Turing test, the black box isn’t the machine—it’s the human. The test is to see whether a (known) machine is an accurate model of an unknown system.

      While the tester is blind as to which is which, the experimenter knows the construction of the machine and can presumably tell if it’s artificially constraining itself. When I say “the inability to act otherwise”, I’m assuming the experimenter can distinguish a true inability from an induced one (even if the tester can’t).

      • masterspace@lemmy.ca · 3 days ago

        While the tester is blind as to which is which, the experimenter knows the construction of the machine and can presumably tell if it’s artificially constraining itself.

        In the case of intelligences and neural networks, that is not so straightforward. The humans and machines behind the curtain have to be motivated to try to replicate a human, or the test would fail, whether because the human control is being unhelpful or because the machine isn’t bothering to imitate a human.

        • AbouBenAdhem@lemmy.world (OP) · 3 days ago

          The humans and machines that are behind the curtain have to be motivated to try and replicate a human

          In a Turing test, yes. What I’m suggesting is to change the motivation, to see if the machine fails like a human even when motivated not to.