• gens@programming.dev
    1 month ago

    The current limitations of LLMs are built into how they fundamentally work. We would need something completely new. That is a fact.

    Honestly the thought of med students using them to pass exams scares me.

    Sure, use them to replace CEOs of some unimportant companies like Facebook. But they are not for jobs where other people’s lives are at stake. They inherently hallucinate (like many CEOs). It is built into how they work.

    • ByteJunk@lemmy.world
      1 month ago

      I don’t think the bar will be where you’re setting it.

      Suppose a new cancer drug or something comes out that significantly improves the life expectancy and quality of life of patients. In rare cases, however, it can cause serious liver complications that may be fatal. Should this drug be used, or not?

      It’s not trivial, but there’s a chance that it would in fact be used.

      My point with AI hallucinations is that they’re the same. If at some point it’s proven that using AI leads to better patient outcomes, but it can have side effects, should it be outright discarded?