• Armand1@lemmy.world · edited · 7 hours ago

    Hmmm…

    As the article correctly states, machine learning (“AI” is a misnomer that has stuck imo) has been used successfully for decades in medicine.

    Machine learning is inherently about spotting patterns and inferring from them. The problem, I think, is two-fold:

1. There are more “AI” products than ever, not all companies build them responsibly, and it’s difficult for regulators to keep up.

The gutting of these regulatory agencies by the current US administration does not help, ofc, but many of them were already severely understaffed.

2. As AI is normalised, some doctors will put too much trust in these systems.

This isn’t helped by the fact that the makers of these products are likely to exaggerate their capabilities. That may be reflected in the products themselves, which may not properly communicate the degree of certainty behind a diagnosis or conclusion (e.g. “30% certainty this lesion is cancerous”).

    • HubertManne@piefed.social · 7 hours ago

      It seems like a lot of AI problems come down to how people treat it. It needs to be treated like a completely naive, inexperienced intern or student or just a helper. Everyone should expect that all of its output has to be carefully looked over, like a teacher checking a student's work.