Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • XLE@piefed.socialOP · 2 hours ago

    I think you just described a conventional computer program. It would be easy to build, easy to debug when something went wrong, and easy to read, both the source code and the data that went into it. I’ve seen rudimentary symptom checkers online for ages, and compared to the paper forms in doctors’ offices, a digital one could actually expand into just the sections that are relevant.
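
    Something like this minimal sketch, say (the questions, thresholds, and advice are invented placeholders, not medical guidance): every rule is plain data that anyone can read and audit.

    ```python
    # Minimal sketch of a conventional, auditable symptom checker.
    # Questions and advice are invented placeholders, not medical guidance.
    TREE = {
        "question": "Sudden, severe headache ('worst of your life')?",
        "yes": {"advice": "Seek emergency care immediately."},
        "no": {
            "question": "Fever above 39 C for more than 3 days?",
            "yes": {"advice": "Contact your doctor today."},
            "no": {"advice": "Monitor symptoms; see a doctor if they worsen."},
        },
    }

    def run(node):
        # Walk the tree until we reach a leaf that carries advice.
        while "advice" not in node:
            answer = input(node["question"] + " (y/n) ").strip().lower()
            node = node["yes"] if answer == "y" else node["no"]
        print(node["advice"])

    if __name__ == "__main__":
        run(TREE)
    ```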

    • Buddahriffic@lemmy.world · 1 hour ago

      Yeah, that fits more with the older definition of AI from before NNs took the spotlight, when it meant an ordinary program that behaved intelligently.

      The learning part would be the ability to add new branches or leaf nodes to the tree. The program isn’t learning on its own; it improves based on the experiences of its users.
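
      For illustration, assuming nodes shaped like the checker sketch above ({"question": …, "yes": …, "no": …} with {"advice": …} leaves), grafting a reviewed branch onto an existing leaf might look like this; the point is that a maintainer adds it, the program doesn’t rewrite itself:

      ```python
      # Sketch: "learning" by grafting a reviewed branch onto the tree.
      # Assumes nodes shaped like {"question": ..., "yes": ..., "no": ...}
      # and leaves shaped like {"advice": ...}.
      def add_branch(tree, path, question, yes_advice):
          """Follow a list of 'yes'/'no' answers down to a leaf, then
          replace that leaf with a new question: 'yes' gives the new
          advice, 'no' keeps the old advice."""
          node = tree
          for answer in path:
              node = node[answer]
          old_leaf = dict(node)       # keep the advice being refined
          node.clear()
          node["question"] = question
          node["yes"] = {"advice": yes_advice}
          node["no"] = old_leaf

      # e.g. refine the no/no leaf after user reports exposed a gap:
      # add_branch(TREE, ["no", "no"], "Stiff neck as well?",
      #            "Contact your doctor today.")
      ```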

      It could also be encoded as a series of probability multiplications instead of a tree: it checks on whichever issue currently has the highest probability, using the checks/questions that are cheapest to ask but affect the probabilities the most.
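
      Roughly like this naive-Bayes-style sketch (all the numbers are made up, and the greedy "most posterior movement per unit cost" rule is just one way to pick the next question):

      ```python
      # Probabilistic sketch with invented numbers: multiply a prior by
      # the likelihood of each answer, then greedily ask whichever
      # unasked question moves the posterior most per unit of cost.
      PRIORS = {"migraine": 0.7, "hemorrhage": 0.3}
      LIKELIHOODS = {  # P(symptom present | condition)
          "sudden onset": {"migraine": 0.2, "hemorrhage": 0.9},
          "light sensitivity": {"migraine": 0.8, "hemorrhage": 0.5},
      }
      COST = {"sudden onset": 1.0, "light sensitivity": 1.0}

      def posterior(answers):
          scores = dict(PRIORS)
          for symptom, present in answers.items():
              for cond in scores:
                  p = LIKELIHOODS[symptom][cond]
                  scores[cond] *= p if present else (1 - p)
          total = sum(scores.values())
          return {c: s / total for c, s in scores.items()}

      def next_question(answers):
          base = posterior(answers)
          def shift(symptom):
              yes = posterior({**answers, symptom: True})
              no = posterior({**answers, symptom: False})
              return sum(abs(yes[c] - base[c]) + abs(no[c] - base[c])
                         for c in base)
          unasked = [s for s in LIKELIHOODS if s not in answers]
          return max(unasked, key=lambda s: shift(s) / COST[s]) if unasked else None

      print(next_question({}))                  # -> sudden onset
      print(posterior({"sudden onset": True}))  # hemorrhage now dominates
      ```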

      Which could then be encoded as an NN, because both are just a series of matrix multiplications that an NN can approximate to arbitrary precision, depending on the NN’s parameters. Also, NNs are proven to be able to approximate any continuous function over any number of real-valued inputs, given enough neurons and connections, which means they can exactly represent any discrete function (which is what a decision tree is).
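
      To make the tree-to-matrices direction concrete, here’s a toy construction (my own sketch): one hidden unit per internal split, one output unit per leaf, with step activations making the encoding exact rather than approximate:

      ```python
      import numpy as np

      # Exact encoding of a tiny decision tree as two matrix
      # multiplications with step activations.
      # Tree: if x0 > 0.5: leaf A
      #       elif x1 > 0.5: leaf B
      #       else: leaf C
      step = lambda z: (z > 0).astype(float)

      # Layer 1: one unit per internal split, firing on "yes".
      W1 = np.array([[1.0, 0.0],      # tests x0 > 0.5
                     [0.0, 1.0]])     # tests x1 > 0.5
      b1 = np.array([-0.5, -0.5])

      # Layer 2: one unit per leaf, an AND over its root-to-leaf path.
      # A: split0 yes.  B: split0 no AND split1 yes.  C: both no.
      W2 = np.array([[ 1.0,  0.0],    # A
                     [-1.0,  1.0],    # B
                     [-1.0, -1.0]])   # C
      b2 = np.array([-0.5, -0.5, 0.5])

      def tree_as_net(x):
          h = step(W1 @ x + b1)       # which splits say "yes"
          return step(W2 @ h + b2)    # one-hot over leaves A, B, C

      print(tree_as_net(np.array([0.9, 0.1])))  # [1. 0. 0.] -> leaf A
      print(tree_as_net(np.array([0.1, 0.9])))  # [0. 1. 0.] -> leaf B
      print(tree_as_net(np.array([0.1, 0.1])))  # [0. 0. 1.] -> leaf C
      ```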

      It’s still an open question, but it’s possible the equivalence goes both ways: an NN can represent a decision tree, and a decision tree may be able to approximate any NN. So the actual divide between the two is blurrier than you might expect.

      Which is also why I’ll always be skeptical that NNs on their own can give rise to true artificial intelligence (though part of me wonders whether we ourselves could be represented by a complex enough decision tree or series of matrix multiplications).