• MystikIncarnate@lemmy.ca · 14 hours ago

To be blunt, if you were to train a GPT-style model on all the current medical information available, it actually might be a good starting point for most doctors to “collaborate” with and formulate theories on more difficult cases.

However, since GPT and most other LLMs are trained on information generally available on the Internet, they’re not going to come up with anything that can be trusted in any field where bad decisions could mean life or death.

It’s basically advanced text prediction, steered by whatever intent statements you put in your prompt. If, instead, you fed a set of symptoms into a machine learning model that had been trained on the sum of relatively recent medical texts and case work, that could produce some relevant results.
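To make that idea concrete, here’s a toy sketch in Python with scikit-learn of “feed symptoms into a model trained on medical text” in its simplest classical-ML form. The symptom descriptions and condition labels are invented purely for illustration; nothing here is a real diagnostic tool, just the shape of the approach.

```python
# Toy illustration: a text model trained on symptom descriptions that
# returns ranked candidate conditions. The tiny dataset is made up;
# a real system would need curated medical corpora and clinical validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: free-text symptoms -> condition label.
symptom_texts = [
    "fever cough shortness of breath",
    "fever cough fatigue loss of taste",
    "chest pain radiating to left arm sweating",
    "headache stiff neck sensitivity to light fever",
]
conditions = ["pneumonia", "covid-19", "myocardial infarction", "meningitis"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(symptom_texts, conditions)

# Rank candidate conditions for a new symptom description.
probs = model.predict_proba(["fever and a persistent cough"])[0]
for label, p in sorted(zip(model.classes_, probs), key=lambda x: -x[1]):
    print(f"{label}: {p:.2f}")
```

That’s the difference in a nutshell: a purpose-built model ranks candidates over a constrained, vetted label space, while a general chatbot just continues text.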

Since ChatGPT isn’t that, heh, I doubt it would even help someone pass medical school, quite bluntly. Apart from writing the boilerplate stuff and filling in words and sentences that just take time to write out and don’t contribute in any significant way to the content of their work (e.g., introduction paragraphs, sentence structures for presenting information, conclusions, etc.).

There’s a lot of good, time-saving stuff ML in its current form can do; diagnostics, not so much.