Medical AI Is In The Works: New Study Analyzes The Potential And Pitfalls

With help from artificial intelligence, doctors across medical disciplines may soon be able to consult patients’ entire medical files against a breadth of healthcare data and published medical literature.
“We see a paradigm shift coming in the field of medical AI. Previously, medical AI models could only address very small, narrow pieces of the healthcare puzzle. Now we are entering a new era, where it’s much more about larger pieces of the puzzle in this high-stakes field,” explained Jure Leskovec, a computer science professor at Stanford Engineering.
In a new study published in Nature, researchers from Stanford described generalist medical artificial intelligence (GMAI), a new category of AI models that are flexible, knowledgeable, and reusable across numerous data types and medical applications.
GMAI can interpret various combinations of data drawn from electronic health records, imaging, lab results, medical text, and genomics, an ability that far surpasses other AI models such as ChatGPT.
GMAI could even draw sketches, annotate images, provide explanations, and offer care recommendations.
“A lot of inefficiencies and errors that happen in medicine today occur because of the hyper-specialization of human doctors and the slow and spotty flow of information,” said Michael Moor, the study’s co-first author.
“The potential impact of generalist medical AI models could be profound because they wouldn’t be just an expert in their own narrow area, but would have more abilities across specialties.”
There are currently more than 500 AI models designed for clinical medicine that are approved by the U.S. Food and Drug Administration (FDA). However, most of these models can perform only one or two specific tasks, such as scanning a patient’s chest X-ray for signs of pneumonia.
However, new advances in research on foundation models are promising and could address a much more challenging and diverse range of tasks.

“We expect to see a significant change in the way medical AI will operate. Next, we will have devices that, rather than doing just a single task, can do maybe a thousand tasks, some of which were not even anticipated during model development,” Moor detailed.
In the study, the authors outlined how GMAI could support a variety of applications, from taking notes to helping doctors make bedside decisions.
In radiology, for instance, models could theoretically draft radiology reports that visually highlight abnormalities while also taking a patient’s medical history into account.
Still, certain capabilities are required to ensure GMAI is a trustworthy technology.
First, the model must draw on both personal medical data and historical medical knowledge, and it should refer to those data only when used by authorized persons.
The model must also be able to hold a conversation with a patient or doctor in order to collect new data or recommend different courses of treatment.
The authors’ largest concern, though, is verification: ensuring that GMAI models relay accurate medical information.
Right now, AI chatbots like ChatGPT are drawing both widespread praise and criticism on the internet. Sometimes, these language models are spot-on. Other times, they spit out inaccurate information.
In addition to verification, concerns about privacy also represent a reasonable reservation with the technology.
“This is a huge problem because with models like ChatGPT and GPT-4, the online community has already identified ways to jailbreak the current safeguards in place,” Moor explained.
Finally, social biases embedded in training data also pose a challenge for GMAI models. According to Moor, this problem must be tackled by the owners and developers of the models, who should make sure biases are both identified and addressed before GMAIs are deployed in hospitals.
So, while the technology is extremely promising, various roadblocks still need to be cleared.
“The question is, can we identify current missing pieces, like verification of facts, understanding of biases, and explainability/justification of answers so that we give an agenda for the community on how to make progress to fully realize the profound potential of GMAI?” Leskovec asked.