I’m not very familiar with the metrics used to evaluate progress in medical fields, so I’m asking in a general sense.

  • dfyx@lemmy.helios42.de · 2 days ago

    Absolutely, and it has done so for over a decade. Not LLMs, of course; those aren’t suitable for the job. But there are lots of specialized AI models for medical applications.

    My day job is software development for ophthalmology (eye medicine) and people are developing models that can, for example, detect cataracts in an OCT scan long before they become a problem. Grading those by hand is usually pretty hard.
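
    To make the “specialized model” idea concrete, here’s a minimal sketch of what such a grader might look like: a generic image classifier fine-tuned on labelled OCT scans. This is only an illustration of the pattern, not the actual systems I’m describing; the ResNet backbone, the 3-channel 224×224 input, and the “early cataract vs. clear” labels are all assumptions on my part.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # Hypothetical binary grader: "early cataract" vs. "clear lens".
    # Start from a generic ImageNet backbone and swap the classifier head.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

    def training_step(scans: torch.Tensor, labels: torch.Tensor) -> float:
        """One supervised step on a batch of labelled scans of shape (N, 3, 224, 224)."""
        optimizer.zero_grad()
        loss = criterion(backbone(scans), labels)
        loss.backward()
        optimizer.step()
        return loss.item()
    ```

    The real systems are trained and validated far more carefully than this, but the shape is the same: a narrow, supervised model answering one clinical question rather than a general-purpose chatbot.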

    • Stovetop@lemmy.world · 1 day ago

      For what it’s worth, even LLMs are helpful for writing notes. It’s easier to capture each interaction in a patient’s chart for a given visit and generate a summary than to rely on a doctor/nurse/MA to accurately recall everything hours after the fact.

      • ThirdConsul@lemmy.ml · 1 day ago

        So… the medical professional dictates voice notes, which get transcribed (okay, that part is fine), and the transcript is then summarized automatically? I don’t think the summary is a good idea. This isn’t a car factory; the MD should get to know my medical history, not just a summary of it.

        • Stovetop@lemmy.world · 1 day ago

          The history is all there; the chart should contain every granular instance of what was done, in exacting detail. The summary is just one of those obligatory elements of a patient interaction, covering the key details of a visit, because no one has time to review every single data value in the chart when looking through a medical history.

          That aspect isn’t even something new invented to justify AI; it’s just how documentation works today. Each doctor or nurse who works with a patient during a given visit writes a paragraph or two summarizing everything they did and their plans for future care, and maybe imports some of the key data points they consider directly relevant. And while I think it shouldn’t be the case, a lot of this can happen hours or even days after the visit is over.

          A lot of that work could be streamlined, or even made more accurate, by an LLM whose only reference material is the data that already exists in the chart. Not hallucination-prone ChatGPT garbage, but something more airgapped and tailored to that specific purpose.
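
          As a hedged sketch of what “only referencing the chart” might look like, the prompt below is built purely from chart entries, and the model call is a stand-in for whatever local, offline inference is actually used. ChartEntry, generate, and the prompt wording are my own assumptions, not any real vendor’s API.

          ```python
          from dataclasses import dataclass

          @dataclass
          class ChartEntry:
              timestamp: str
              author: str
              note: str

          def build_summary_prompt(entries: list[ChartEntry]) -> str:
              """Build a prompt whose only source material is the chart itself."""
              chart_text = "\n".join(
                  f"[{e.timestamp}] {e.author}: {e.note}" for e in entries
              )
              return (
                  "Summarize the visit below for the patient's chart.\n"
                  "Use ONLY the entries provided; do not add diagnoses, values, "
                  "or history that do not appear in them.\n\n"
                  f"{chart_text}\n\nSummary:"
              )

          def summarize_visit(entries: list[ChartEntry], generate) -> str:
              """`generate` stands in for the local inference call; the draft it
              returns still has to be reviewed and signed off by a clinician."""
              return generate(build_summary_prompt(entries))
          ```

          The prompt constraint alone doesn’t guarantee faithfulness, which is why the clinician review step still matters, but that’s the general shape of a chart-grounded setup.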

          • AnyOldName3@lemmy.world · 1 day ago

            You can’t make an LLM only reference the data it’s summarising. Everything an LLM outputs is a collage of text and patterns from its original training data, and it’s choosing whatever piece of that data seems most likely given the existing text in its context window. If there isn’t a huge corpus of training data, it won’t have a model of English and won’t know how to summarise text. And even restricting the training data to medical notes still means it’s potentially going to hallucinate something from someone else’s medical notes that’s commonly associated with things in the current patient’s notes, or leave out something from the current patient’s notes that’s very rare or totally absent from its training data.
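
            As a toy illustration of that point (my own drastic simplification, nothing like a production model), even the simplest statistical text generator only ever emits tokens it saw during training, weighted by how often they followed the current context:

            ```python
            from collections import Counter, defaultdict
            import random

            def train_bigram(corpus: list[str]) -> dict[str, Counter]:
                """Count which token follows which across the training texts."""
                follows: dict[str, Counter] = defaultdict(Counter)
                for text in corpus:
                    tokens = text.split()
                    for prev, nxt in zip(tokens, tokens[1:]):
                        follows[prev][nxt] += 1
                return follows

            def generate(follows: dict[str, Counter], start: str, length: int = 10) -> str:
                """Emit tokens by sampling whatever most often followed the last one."""
                out = [start]
                for _ in range(length):
                    options = follows.get(out[-1])
                    if not options:
                        break
                    tokens, weights = zip(*options.items())
                    out.append(random.choices(tokens, weights=weights)[0])
                return " ".join(out)

            # Everything generate() produces is stitched together from the training
            # corpus, not from whatever document we later ask it to "summarise".
            ```

            A real LLM is vastly more sophisticated, but the failure mode follows the same logic: material that co-occurred in training can leak into the output, and material the model never saw can get dropped.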

            • Stovetop@lemmy.world · 1 day ago

              Well, I can’t claim to be an expert on the subject, at any rate, but there are plenty of models which are local-only and are required to directly reference the information they interpret. I’d assume a HIPAA-compliant model would need to be more like an airgapped NotebookLM than ChatGPT.

              But I would also assume the risk of hallucinations or misinterpretations is the reason a clinician would still need to review the AI summary and add or correct details before signing off on everything, so there’s probably still some risk. Whether that risk of error is greater or less than that of an overworked resident writing their notes a couple of days after finishing a 12-hour shift is another question, though.

              • cecinestpasunbot@lemmy.ml · 1 day ago

                If you end up integrating LLMs in a way that could impact patient care, that’s actually pretty dangerous, considering their training data includes plenty of fictional and pseudoscientific sources. That said, it might be okay for medical research applications where accuracy isn’t as critical.

                • Stovetop@lemmy.world · 1 day ago

                  For what it’s worth, I don’t mean to say that this is something hospitals and health networks should be doing per se, just that it is something they are doing right now. I’m sure it has benefits for them, as another user further down in this post described; otherwise I don’t think all these doctors would be so eager to use it.

                  I work for a non-profit that connects immigrants and refugees to various services, healthcare among them. I don’t know all of the processes they use for LLM-assisted documentation, but I’d like to think they have some protocols in place to help preserve accuracy. If they don’t, that’s why this is on our radar, as is malpractice in general (which is thankfully rare here, but it does happen).