• MystikIncarnate@lemmy.ca · 6 hours ago

    To be blunt, if you were to train a GPT model on all the current medical information available, it actually might be a good starting point for most doctors to “collaborate” with and formulate theories on more difficult cases.

    However, since GPT and most other LLMs are trained on information generally available on the Internet, they’re not going to come up with anything that could possibly be trusted in any field where bad decisions could mean life or death.

    It’s basically advanced text prediction based on whatever intent you stated in your prompt. So if you could feed a bunch of symptoms into a machine learning model that’s been trained on the sum of all relatively recent medical texts and casework, that would produce some relevant results.
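
    To make that concrete, here’s a minimal sketch of what “advanced text prediction” means, assuming the Hugging Face transformers package (the model name and the prompt are just illustrative):

    ```python
    # Minimal sketch: an LLM just extends a prompt with likely next tokens.
    # "gpt2" is an illustrative small model, not a medical one.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "Patient presents with fever, stiff neck, and photophobia. Likely diagnosis:"
    result = generator(prompt, max_new_tokens=30)
    # A plausible-sounding continuation, not a verified diagnosis:
    print(result[0]["generated_text"])
    ```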

    Since ChatGPT isn’t that, heh, I doubt it would even help someone pass medical school, quite bluntly… apart from the boilerplate stuff and filling in words and sentences that just take time to write out and don’t contribute in any significant manner to the content of their work (e.g., introduction paragraphs, sentence structures for entering information, conclusions, etc.).

    There’s a lot of good, time-saving stuff ML, in its current form, can do; diagnostics, not so much.

  • MuskyMelon@lemmy.world · 13 hours ago

    Yeah people think all doctors were straight A students thru med school. Ya’d never know if the one treating you right now was a C- muthafucka.

    • zululove@lemmy.ml · 8 hours ago

      Med school isn’t easy bro

      That C+ doctor retained more knowledge and has better intuition than the ChatGPT doctor.

    • MystikIncarnate@lemmy.ca · 6 hours ago

      I’ll take a Dr with enough real world experience to have good intuition over a recently graduated straight-A doc any day.

      But this is why doctors have like 8 years of practical, hands-on experience with oversight before they’re allowed to actually practice solo in most places. They spend more time learning hands-on than they do in class.

      Even a straight-C “level” doctor should be more than prepared to handle whatever you throw their way. Even if they don’t know, they probably know how to find out, or who to ask.

    • Triti@lemmy.world · 11 hours ago

      To be fair, just passing med school, even by the slimmest margins, is no easy feat. The idea that a doctor who got D- grades is somehow bad is wrong because they’re still good enough to pass an extremely difficult program.

      Yeah, I’d be more comfortable with an A+ doctor, but a doctor still graduated from med school, you know?

      • MuskyMelon@lemmy.world · 11 hours ago

        If you could pick between two doctors, one A student and one D student, you know you’d pick the A student.

        But what if the A student was from some sketchy, barely accredited medical school and the D student was from Johns Hopkins?

        Who do you pick now?

    • Notyou@sopuli.xyz · 8 hours ago

      Haven’t you heard that one ‘joke’? What do you call the student who passed with the lowest grade in med school? …Doctor.

    • Jhex@lemmy.world · 10 hours ago

      idiocracy would be a step up from current murica

      at least Camacho noticed and hired the smartest guy he found

  • UnderpantsWeevil@lemmy.world · 1 day ago (edited)

    Jokes on Future Doctor because we’re closing down the hospitals, cancelling the research grants, and taking all the sick people to jail for the crime of being unemployable.

  • Deflated0ne@lemmy.world · 1 day ago

    We’re so cooked.

    Take 1 moment to imagine the enshittification of medicine.

    We’re gonna long for the days of sawbones and “you got ghosts in your blood. You should do cocaine about it”

    • Alaik@lemmy.zip · 8 hours ago

      Good news. The MCAT, USMLE, and board exams are all done in a proctored environment with no electronic devices allowed. Hell, you can’t even take a calculator in for the MCAT, so you’d better be cool with doing Arrhenius equations by hand.
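
      For the curious, that’s this one; the two-temperature form is what you’d actually grind through by hand (standard symbols: A the pre-exponential factor, E_a the activation energy, R the gas constant):

      ```latex
      % Arrhenius equation: rate constant k as a function of temperature T
      \[ k = A e^{-E_a/(R T)} \]
      % Two-temperature form, solved for the activation energy:
      \[ E_a = R \, \frac{\ln(k_2/k_1)}{1/T_1 - 1/T_2} \]
      ```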

      As far as doctors go, you might be able to get through your premed degree with ChatGPT, but you’re not going to get a 500+ on your MCAT and you certainly aren’t passing Step 1 of the USMLE.

    • sunzu2@thebrainbin.org · 1 day ago

      Medicine has already been enshittified…

      PE initiated its takeover of provider groups in the early 2010s.

      Consolidation by PE and health-insurance parasites is about complete.

      Nurses and mid-level providers are being pressured. Doctors are next on the chopping block.

      Service quality is down across the board and they haven’t even started squeezing in earnest.

      You would get better service at 2005 McDonalds than at 2025 urgent care 🤡

        • sunzu2@thebrainbin.org · 1 day ago

          Private equity. These are funds administered by fund managers for a fee, but the capital comes from high-net-worth individuals.

          You have to be a “qualified investor” to even access it.

          They are used for risky plays, but the general play is: buy up a market, consolidate it, start extracting, then exit for a fat profit. This generally results in low-quality services, and the business is unsustainable after the extraction.

          You prolly participated in markets they ruined. For example, the play for Sears was about looting the employees’ pension fund. That one was extra nasty.

  • fubarx@lemmy.world · 1 day ago

    “I’m sorry, but you have Fistobulimia. You may want to put your affairs in order.”

    “Oh my god, Doc! That’s terrible. I came here for a runny nose. Are you sure?”

    “Pretty sure. It lists… runny nose, tightness of jaw, numb toes, and a pain in your Scallambra region as typical symptoms.”

    “I don’t have any of those other things and what the heck is a Scallambra?”

    “You don’t have those? Hmm, let me double-check.”

    (types)

    “Good news! It’s likely a type of Fornamic Pellarsy. Says 76.2387% recovery rate by leeching. System’s already sent a referral.”

    • nonfuinoncuro@lemmy.zip · 8 hours ago

      that was a decent south park episode. i will never be able to get the image of martha stewart sitting on a turkey out of my head

  • sunbrrnslapper@lemmy.world · 1 day ago

    I have had absolutely terrible luck with PCPs believing my symptoms or looking at them holistically - even just to get a referral to the right specialist. In this moment, AI has been better at pointing me in the right direction than my previous PCPs. 🤷

    • sbv@sh.itjust.works · 1 day ago

      This is what people miss. If you’ve experienced a chronic condition that doctors don’t know what to do with, then trying alternatives seems pretty attractive.

    • sunzu2@thebrainbin.org · 1 day ago

      Doctors doing this has been a historical issue, but it’s only becoming obvious now that they behave like this, thanks to the internet.

      Many doctors view the patient with contempt.

  • Evono@lemmy.dbzer0.com · 1 day ago

    Doctors right now google or ask ChatGPT what they don’t know (at least the googling part is good, if it’s done rather than barking out false stuff).

    • shawn1122@sh.itjust.works · 22 hours ago

      I’m a doctor and I google all the time. There’s nothing inherently wrong with googling; the question is what source you’re using from there.

      • medgremlin@midwest.social · 15 hours ago

        4th year med student here: I use Google to find studies and StatPearls pages because the built-in NIH search function sucks.

      • Evono@lemmy.dbzer0.com · 13 hours ago (edited)

        That’s exactly what I said: googling is fine, since the doctor can also judge the sources, and it’s preferable to a doctor barking something out without knowledge because he/she doesn’t want to google. Using AI, though, is on another level of bad.

        • BlackRoseAmongThorns@slrpnk.net · 12 hours ago

          Open-book exams are bounded by time, something an AI has plenty of, ridiculously more than a person. To say that gives it an advantage misses the absurdity of the situation: the AI also had much more time to train and “study”, and yet it produces the results it does.

          • Tja@programming.dev · 11 hours ago

            What results? We are assuming it passes med school, so the result must be good.

            • Alaik@lemmy.zip · 8 hours ago

              The OP is rage bait; all 5 big exams on the path to becoming a doctor are proctored, with no electronics allowed. Being caught cheating at any one of those guarantees expulsion (for the USMLE), never being admitted in the first place (MCAT), or never being licensed in a specialty (board exam).

  • ByteJunk@lemmy.world · 14 hours ago (edited)

    Ok but my counter argument is that if they pass their exam with GPT, shouldn’t they be allowed to practice medicine with GPT in hand?

    Preferably using a model that’s been specifically trained to support physicians.

    I’ve seen doctors that are outright hazards to patients, hopefully this would limit the amount of damage from the things they misremember…

    EDIT: ITT a bunch of AI deniers who can’t provide a single valid argument, but that doesn’t matter because they have strong feelings. Be sure to slam the “this doesn’t align with how I want my world to be” button!

    • Alaik@lemmy.zip · 8 hours ago

      Out of curiosity, I put some organic chemistry practice questions into ChatGPT just now; it got 1 out of 5 correct. I’m not an outright hater of AI (I do dislike how it’s being forced into some things and makes the original product worse, and the environmental impact it has), but I’m sure in the future it’ll be able to do some wondrous things.

      As it stands, though, I would rather my doc do a review of the literature than trust ChatGPT alone.

      • ByteJunk@lemmy.world · 4 hours ago

        Out of curiosity, what questions?

        Organic chemistry isn’t a subject for a medical degree, at least not in my neck of the woods (we have biochemistry), so I’m not super familiar with the subject, but I’m curious enough to see what it got wrong.

        • Alaik@lemmy.zip · 3 hours ago (edited)

          In the USA, organic chemistry is required by the vast majority of medical schools, and Organic Chemistry I is a prereq for biochemistry at most colleges.

          Your average applicant is going to have Orgo 1/2 and biochemistry, though.

          As far as the questions go, one was a multistep synthesis (it seemed to screw up Markovnikov vs. anti-Markovnikov addition, if I had to guess where it went wrong).

          It didn’t seem to do well with HNMR or Fischer projections; it also got a mechanism of ring breaking wrong.

          It did get the question right regarding the stability of chair conformations, though.

          I can’t post the exact questions as I’m not home where my old Ochem book is, but those are the gists.

          It seems to struggle with the more visual problems.

          Edit: The AAMC has a list of requirements for med schools; so far I’ve seen… 3 that don’t require Ochem and use biochem in its place? That’s certainly not all of them, but they’re also certainly an extremely small minority. Although again, most universities also have Ochem 1 as a prereq for Biochem. And if your college is an ACS-compliant university, they’ll make you take Ochem 2 for biochemistry also. Which is lame.

    • starman2112@sh.itjust.works · 1 day ago

      I love having a doctor who offloaded so much of their knowledge onto a machine that they can’t operate without a cell phone in hand. It’s a good thing hospitals famously have impenetrable security, and have never had network outages. And fuck patient confidentiality, right? My medical issues are between me, my doctor, and Sam Altman

      • ByteJunk@lemmy.world · 16 hours ago (edited)

        Do you realize your argument is basically the same argument people used to make about calculators? That they were evil and should never be used because they make kids stupid and how will their brains develop and yap yap yap.

        There is a scenario where doctors are AIDED by AI tools and save more lives. You outright reject this based on the edge case that they lose that tool and have to * checks notes * do what they do right now. How does that even make sense?

        Going by that past example, this is how it’ll go: you’ll keep bitching and moaning that it’s useless and evil up until your dying breath, an old generation that will never embrace the new tech, while the world around you moves on and figures out how to best take advantage of it. We’re at the stage where its capabilities are overhyped and oversold, as always happens when something is new, and eventually we’ll figure out how best to use these tools and when/where to avoid them.

        And fuck patient confidentiality, right?

        How is this an AI problem? That’s already fucked - 5 million patients’ data breached here, another 4.5M patients there, the massive Brazil one with 250 million patient records, etc. The list is endless; as health data increasingly goes online, best you come to terms with the fact that it will be unlawfully accessed sooner rather than later, with or without AI.


        EDIT to add: on your point about network outages, do you know what happens right now when there’s a network outage at a hospital? It already stops working - you can’t admit patients, you don’t have access to their history or exams, you basically can’t prescribe anything, can’t process payments. Being unable to access AI tools is the least of the concerns.

        • starman2112@sh.itjust.works · 3 minutes ago (edited)

          The difference between an AI and a calculator is that we have data showing that using LLMs degrades users’ reading comprehension, and no such data for calculators re: ability to do math. Also, calculators don’t rely on the internet. Also, calculators don’t send confidential patient data to a third party. Also, a doctor can work out an equation by hand, and you cannot work out the output of an LLM by hand.

          How is this an AI problem?

          This is a deeply unserious comment. Data gets breached sometimes, so let’s just give all of our data away for free lmao

    • lad@programming.dev · 1 day ago

      That might be okay if what said GPT produces were reliable and reproducible, not to mention providing valid reasoning. It’s just not there, far from it.

      • gens@programming.dev · 1 day ago

        It’s not just far. LLMs inherently make stuff up (aka hallucinate). There is no cure for that.

        There are some (non llm, but neural network) tools that can be somewhat useful, but a real doctor needs to do the job anyway because all of them have various chances to be wrong.

        • Tja@programming.dev · 19 hours ago

          Not only is there a cure, it’s already available: most models right now provide sources for their claims. Of course this requires of the user the gargantuan effort of clicking on a link, so most don’t, and complain instead.

          • medgremlin@midwest.social · 15 hours ago

            This is stupid. Fully reading and analyzing a source for accuracy and relevancy can be extremely time-consuming. That’s why physicians have databases like UpToDate and DynaMed that have expert (i.e. physician and PhD) analyses and summaries of the studies in the relevant articles.

            I’m a 4th year medical student and I have literally never used an LLM. If I don’t know something, I look it up in a reliable resource and a huge part of my education is knowing what I need to look up. An LLM can’t do that for me.

            • ByteJunk@lemmy.world · 14 hours ago

              And why are you assuming that a model designed to be used by physicians would not include the very same analysis from experts that goes into UpToDate or DynaMed? This is something that is absolutely trivial to do; the only thing stopping it is copyright.

              AI can not only look up reliable sources, it will probably be much better and faster at it than you or I or anybody.

              I’m a 4th year medical student and I have literally never used an LLM

              It was clear enough from your post, but thanks for confirming. Perhaps you should give it a try so you can understand its limitations and strengths first-hand, no? Grab one of the several generic LLMs available and ask something like:

              Can you provide me with a small summary of the most up to date guidelines for the management of fibrodysplasia ossificans progressiva? Please be sure to include references, and only consider sources that are credible, reputable and peer reviewed whenever possible.

              Let me know how it did. And note that it’s probably a general-purpose model trained on very generic data, not at all optimized for this usage, but it’s impossible to dismiss the capabilities here…
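
              A minimal sketch of running that experiment via code, assuming the OpenAI Python SDK (the model name and system message are illustrative; any general-purpose chat model would do):

              ```python
              # Hypothetical harness for the prompt above; requires OPENAI_API_KEY.
              from openai import OpenAI

              client = OpenAI()
              response = client.chat.completions.create(
                  model="gpt-4o",  # assumption: any current general-purpose chat model
                  messages=[
                      {"role": "system",
                       "content": "Only cite credible, peer-reviewed sources."},
                      {"role": "user",
                       "content": "Summarize the most up to date guidelines for the "
                                  "management of fibrodysplasia ossificans progressiva, "
                                  "with references."},
                  ],
              )
              # The references it prints still need manual verification.
              print(response.choices[0].message.content)
              ```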

              • medgremlin@midwest.social · 51 minutes ago

                Some of my classmates used chatGPT to summarize reading assignments and it garbled the information so badly that they got things wrong on in-class assessments. Aside from the hallucinations and jumbled garbage output, I refuse to use AI unless there is absolutely no alternative on an ethical basis due to the environmental and societal impacts.

                As far as I’m concerned, the only role for LLMs in medicine is to function as a scribe to reduce the burden of documentation and that’s only because everything the idiot machines vomit up has to be checked before being committed to the medical record anyways. Generative AI is a scourge on society and an absolute menace in medicine.

              • gens@programming.dev · 7 hours ago

                It’s called RAG, and it’s the only “right” way to get any accurate information out of an LLM. And even it is not perfect. Not by far.

                You can use retrieval without an LLM; it’s basically keyword search. You still have to know what you’re asking, so you have to study. Study without an imprecise LLM that can feed you false information that sounds plausible.
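
                To make “basically keyword search” concrete, here’s a toy sketch of the retrieval step (the documents and scoring are made up for illustration; real RAG systems use embeddings, but the failure mode is the same: the LLM only ever sees what this step retrieves):

                ```python
                # Toy RAG retrieval: score documents by keyword overlap with the
                # question, then paste the best match into the prompt.
                documents = [
                    "Appendicitis: right lower quadrant pain, fever, rebound tenderness.",
                    "Migraine: unilateral headache, photophobia, nausea.",
                ]

                def retrieve(question: str) -> str:
                    q_words = set(question.lower().split())
                    # pick the document sharing the most words with the question
                    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

                question = "patient has fever and right lower quadrant pain"
                context = retrieve(question)
                # This augmented prompt is what actually gets sent to the LLM.
                prompt = f"Using only this source:\n{context}\n\nAnswer: {question}"
                print(prompt)
                ```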

                There are other problems with current LLMs that make them problematic. Sure, you will catch on to those problems if you use them, but you still have to know more about the topic than they do.

                They are a fun toy and OK for low-stakes knowledge (e.g. cooking recipes). But as a tool in serious work they are a rubber ducky at best.

                PS: What the guy a couple of comments above said about sources is probably about web search. Even when an LLM reads the sources, it can misinterpret them easily. Like how Apple removed their summaries because they were often just wrong.

                • ByteJunk@lemmy.world · 4 hours ago

                  Let’s not move the goalposts. The OP is about med students using GPT to pass their exams successfully. As another comment put it, it’s not about Karen using GPT to diagnose pops; it’s about trained professionals using an AI tool to assist them.

                  And yet, all we get is a bunch of people spewing vague FUD and spitballing opinions as if they’re proven facts, or as if AI has stopped evolving and the current limitations are never going to be surpassed.

      • Pornacount128@lemmynsfw.com · 12 hours ago

        For what it’s worth, I was recently urged by ChatGPT to go to the hospital after explaining my symptoms, and it turns out I had appendicitis.

      • Tja@programming.dev · 19 hours ago

        “just replace developers with ai”

        You bother going to the doctor because an expert using a tool is different than Karen using the same tool.