This transcript has been edited for clarity.
Say the words "artificial intelligence" to a group of doctors and I'm going to bet that half will be excited and the other half will tune you out. That split comes from a lack of understanding of how broad artificial intelligence (AI) is, and from headlines asking whether AI can act in place of a clinician. It's not going to.
Recently, the conversation about AI in healthcare climbed up the Twitter-debate ranks because of two stories: one about an autonomous chest x-ray analyzer recently approved for clinical use in the European Union, and another about how AI is helping hospitals save lives, with a heavy focus on sepsis risk algorithms.
At its core, AI is simply a computer algorithm that mimics human intelligence: it looks at data, finds patterns, and makes decisions. This is already happening for sepsis risk.
Imagine training a machine-learning model to flag patients at risk for sepsis using 32 million data points from 42,000 patient encounters.
That's exactly what researchers at Duke University Hospital did when they created Sepsis Watch.
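To be clear, I haven't seen the Duke team's code, and everything below (the features, the data, the model choice) is invented for illustration. But a bare-bones sketch in scikit-learn can demystify what "training a model" on encounter data actually means:

```python
# Hypothetical sketch of training a sepsis-risk classifier on tabular
# encounter data. The features, labels, and model are invented for
# illustration; this is not the actual Sepsis Watch pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000  # stand-in for the real 42,000 encounters

# Invented features: heart rate, temperature, respiratory rate, WBC count
X = np.column_stack([
    rng.normal(85, 15, n),     # heart rate (beats/min)
    rng.normal(37.0, 0.6, n),  # temperature (degrees C)
    rng.normal(16, 4, n),      # respiratory rate (breaths/min)
    rng.normal(8, 3, n),       # WBC (x10^3 cells/uL)
])
# Invented labels: 1 = the encounter progressed to sepsis
y = (rng.random(n) < 0.08).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score a new encounter: the model's estimated sepsis risk
risk = model.predict_proba(X_test[:1])[0, 1]
print(f"Estimated sepsis risk: {risk:.1%}")
```

In practice there is far more validation than this, but the workflow is the same: historical encounters in, a risk score out.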
The researchers reported that proper use of the sepsis bundle (antibiotics, labs, and so on) increased to 64% in the 15 months after starting Sepsis Watch, compared with just 31% in the 18 months prior.
With Sepsis Watch, patient data are entered automatically into the model every 5 minutes. If a patient meets SIRS criteria, a rapid response team, doctors, and other clinicians then make the call about next steps.
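The model's risk score is the fancy part, but the SIRS screen itself is textbook material. Here's a minimal sketch using the standard adult thresholds (this is illustrative, not Sepsis Watch's actual alerting code):

```python
def meets_sirs(temp_c: float, hr: float, rr: float, wbc: float) -> bool:
    """Standard adult SIRS screen: positive when >= 2 of 4 criteria are met.
    (The alternative criteria, PaCO2 < 32 mmHg and > 10% bands, are
    omitted here for brevity.)"""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,  # temperature (degrees C)
        hr > 90,                         # heart rate (beats/min)
        rr > 20,                         # respiratory rate (breaths/min)
        wbc > 12.0 or wbc < 4.0,         # WBC (x10^3 cells/uL)
    ]
    return sum(criteria) >= 2

# A febrile, tachycardic patient screens positive (2 of 4 criteria)
print(meets_sirs(temp_c=38.6, hr=112, rr=18, wbc=9.5))  # True
```

Two or more positives and the team gets paged; the decision about what to do next stays human.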
Another hospital system, HCA Healthcare, has its own predictive tool, SPOT (Sepsis Prediction and Optimization of Therapy), which the company claims can detect sepsis 6 hours earlier, and more accurately, than clinicians can.
If AI can help us act early and prevent sepsis deaths, that is awesome.
But machine learning still requires manual data entry and human interpretation of the numbers.
Clinician oversight is a necessity for AI technology that involves direct patient care, regardless of what tech investors may say.
Without naming names, I read about a piece of "patient data analytical software" that claimed it could reduce unnecessary lab ordering. Think about that for a second. Would a physician not order a lab just because the AI didn't recommend it? Unlikely. There is always going to be some clinical oversight.
There's research about using machine-learning models to predict which patients are most likely to be readmitted to the hospital, and then providing specialized education on discharge. The algorithm looks for factors such as polypharmacy, chronic conditions, and a history of readmissions.
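As a rough sketch of what's under the hood (the field names here are mine, not from any published model, as is the polypharmacy cutoff of five medications), such an algorithm boils a discharge record down to a handful of numbers:

```python
# Hypothetical feature extraction for a readmission-risk model.
# Field names and thresholds are invented for illustration.
def readmission_features(patient: dict) -> list[float]:
    return [
        float(len(patient["medications"]) >= 5),    # polypharmacy flag
        float(len(patient["chronic_conditions"])),  # chronic disease burden
        float(patient["readmissions_past_year"]),   # prior readmissions
    ]

patient = {
    "medications": ["lisinopril", "metformin", "atorvastatin",
                    "aspirin", "furosemide"],
    "chronic_conditions": ["CHF", "diabetes"],
    "readmissions_past_year": 2,
}
print(readmission_features(patient))  # [1.0, 2.0, 2.0]
```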
Following this rule could prevent some readmissions, but what about the patients who have a high level of anxiety, don't have any follow-up, or have low health literacy? AI isn't picking any of that up. I would hate for such patients to fall through the cracks because of the inherent biases of machine learning.
If models are designed by humans, then they can be designed with limitations and biases.
Speaking of limitations, let's talk about using AI in imaging, which is generating a lot of chatter.
This spring, the ChestLink Autonomous AI Medical Imaging Suite (by developer Oxipit) was approved for use in the European Union. This is AI meets chest x-ray diagnostics. Long story short, the technology reads chest x-rays, and if they're normal, it automatically sends a message to patients.
The press release for ChestLink claims it'll help with the global radiologist shortage, and the company is pursuing FDA approval.
However, this technology is not nearly as popular in the United States. Neither the American College of Radiology nor the Radiological Society of North America believes that AI alone can catch subtle chest x-ray abnormalities without physician oversight. It's going to need a radiologist to double-check its work.
Take a look at EKGs, which have had computer interpretations printed on them for years. If there's any clinical concern, the tracing is read by hand and you talk to a cardiologist.
I've done my fair share of counting those little boxes with an on-call cardiologist to calculate a QT interval.
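For anyone who hasn't had that pleasure, the arithmetic is simple enough to script. At the standard 25 mm/s paper speed, each small box is 0.04 seconds, and Bazett's formula corrects the measured QT for heart rate:

```python
import math

# At the standard 25 mm/s paper speed, each small box on the EKG
# represents 0.04 seconds. Bazett's formula then corrects the QT
# interval for heart rate: QTc = QT / sqrt(RR).
SMALL_BOX_SEC = 0.04

def qtc_bazett(qt_boxes: int, rr_boxes: int) -> float:
    qt = qt_boxes * SMALL_BOX_SEC  # QT interval (seconds)
    rr = rr_boxes * SMALL_BOX_SEC  # R-to-R interval (seconds)
    return qt / math.sqrt(rr)

# Example: QT spans 10 small boxes (0.40 s), RR spans 20 (0.80 s)
print(f"QTc = {qtc_bazett(10, 20) * 1000:.0f} ms")  # ~447 ms
```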
To improve access in resource-limited areas, some companies are using convolutional neural networks (which I barely understand) to improve AI interpretation of EKGs. Time will tell how accurate they are.
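For what it's worth, the "convolution" part is less mysterious than it sounds: the network slides small filters along the waveform and records how strongly each segment matches. In a real convolutional neural network the filters are learned from data; here's a toy version with a hand-picked filter, just to show the mechanics:

```python
import numpy as np

# The core operation inside a convolutional neural network, on a 1-D
# signal: slide a small filter along the waveform and record how
# strongly each segment matches it. Real CNNs learn their filter
# weights; this one is hand-picked purely for illustration.
def conv1d(signal: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    k = len(kernel)
    return np.array([
        np.dot(signal[i:i + k], kernel)
        for i in range(len(signal) - k + 1)
    ])

# Toy "EKG" trace with one sharp spike standing in for an R wave
ekg = np.array([0.0, 0.1, 0.0, 1.0, 0.2, 0.0, 0.1, 0.0])
spike_detector = np.array([-0.5, 1.0, -0.5])  # responds to sharp peaks

print(conv1d(ekg, spike_detector))  # largest value aligns with the spike
```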
In the end, a human must be there to build trust, to know what questions to ask, what data to collect, and then to know how to interpret it and what to do with it. That takes the art of medicine. Sorry, AI machines!
Honestly, AI needs some rebranding and a new public relations team.
There are so many cool applications, from lab automation and specimen organizing, to pathology specimen assessments, drug development, identifying patients for clinical trials, mental health assessments, individualized physical therapy, charting, and more.
I think we'll end up working side by side with AI. What do you think? Do you have any experience working with AI? Do you think it'll improve or hinder your workflow?
Maybe AI can make EMRs less annoying — that would be a win.
If you could automate any part of patient care with machine-learning models, what would it be? We want to hear from you. Comment below.
Alok S. Patel, MD, is a pediatric hospitalist, television producer, media contributor, and digital health enthusiast. He splits his time between New York City and San Francisco, as he is on faculty at Columbia University/Morgan Stanley Children's Hospital and UCSF Benioff Children's Hospital. He hosts The Hospitalist Retort video blog on Medscape.
© 2022 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: Will an Algorithm Save Our Butts in Sepsis? - Medscape - May 12, 2022.