Practitioners

Doctors take cautious approach to spread of AI in medicine

Consider a fairly common scenario: A stroke patient arrives at a hospital. Their brain is scanned, and hundreds of images are captured. Medical staff begin figuring out exactly what’s wrong and how best to treat the patient.

Now consider a relatively new twist on this process: Artificial intelligence software rapidly analyzes those brain images. If it detects, for example, a suspected large vessel occlusion — the obstruction of a large artery in the brain — the patient’s entire medical team is alerted via smartphone app within minutes. The process of care gets going more quickly than in years past, saving precious minutes when “time is brain cells,” as stroke teams often say. 

This is one example of the impact artificial intelligence is having in the field of health care. Advocates for such technology hope combining cutting-edge software with enormous amounts of data will improve patient care and make processes more efficient.

“AI has the potential to be a game-changer if it’s utilized responsibly, and I have every reason to think it will be utilized responsibly in health care,” said Dr. Robert Alphin, chief medical officer for LewisGale Medical Center in Salem, which uses such AI-powered software to help treat stroke patients.

At the same time, experts caution that AI technologies must be carefully vetted and governed to ensure accuracy, patient privacy and data security. Among the most recent steps to bring AI regulation into the spotlight is an executive order that President Joe Biden issued Monday outlining steps that the White House says will make AI safer and more trustworthy.

“Don’t put the hype of AI above good judgment,” Kay Firth-Butterfield, an AI ethicist and CEO of Good Tech Advisory, told a group of hospital executives and medical professionals last month at the Virginia Hospital and Healthcare Association’s annual conference in Roanoke.

What is AI in health care?

Exact definitions of artificial intelligence, or AI, vary depending on whom you ask. Generally, it means using computers and large amounts of data to make decisions, solve problems and perform other tasks that otherwise typically require human abilities.

In health care, the difference between conventional software and what could be considered AI can be seen when technology moves beyond presenting data, such as a test result, to actually suggesting a diagnosis.

For example, consider a mammogram, an X-ray picture of a breast that a doctor uses to check for cancer. The mammogram itself is not AI — it’s just a test, Firth-Butterfield explained to her audience at the Roanoke conference.

But when software goes a step further and recommends a diagnosis, that’s AI, she said.

“It’s a different technology. You are handing decision-making to the technology you are using,” Firth-Butterfield said.

Alphin said that essentially, artificial intelligence seeks to reproduce aspects of human intelligence — sometimes faster and more accurately.

“It’s clear that the ability of some machines exceeds the human brain capability. It can consume such vast amounts of data at one time that the human brain can’t do,” he said.

AI is typically coupled with machine learning, a related technology in which software uses algorithms and large amounts of data to improve its own results over time.

“They may make decision A at the beginning, but much later, they’ve ingested double the amount of data, they have learned the data that they have ingested and will make a better decision the next time,” Alphin said. “They keep learning and keep getting better.”
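For readers curious what that feedback loop looks like in code, here is a minimal sketch in Python. It uses synthetic data and the open-source scikit-learn library, not any hospital vendor’s actual system: a toy classifier is trained on progressively larger slices of data, and its test accuracy typically climbs as it ingests more.

    # Illustrative sketch only: a toy model that "keeps learning and keeps
    # getting better" as it sees more data. Synthetic data, no relation to
    # any real medical software.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a labeled medical dataset.
    X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Train on progressively larger slices and watch test accuracy climb.
    for n in (100, 1_000, 10_000, len(X_train)):
        model = LogisticRegression(max_iter=1_000).fit(X_train[:n], y_train[:n])
        print(f"trained on {n:>6} examples -> accuracy {model.score(X_test, y_test):.3f}")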

‘Time is brain’

A typical workflow of care for a stroke patient begins when the person arrives at the hospital with symptoms, their brain is scanned, and hundreds of images are captured. Those images are packaged together and sent to a radiologist for diagnosis. 

Doctors then decide on potential treatments such as administering a thrombolytic, a drug that breaks up blood clots in the brain, or performing a thrombectomy, which is surgery to remove a blood clot from the brain.

While all of those steps are important, they also all consume precious time.

“In stroke we say, ‘Time is brain,’” said Dr. Dan Karolyi, chair of radiology at Carilion Clinic. “There are millions of brain cells dying as the seconds pass on for each stroke patient that goes untreated.”

Today, AI can drastically reduce the time it takes to complete this process. Software can quickly analyze the brain scan images to determine whether a large artery in the brain is blocked and suggest a diagnosis.

The software can quickly send updates to a hospital’s stroke team via a smartphone app, allowing team members to begin communicating about the patient’s care sooner. 

“It’s not only rapid diagnosis, but it’s also rapid communication and team activation,” Karolyi said.
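The logic of such an alert pipeline can be sketched in a few lines of Python. Everything below (the function names, the probability threshold, the message format) is invented for illustration and is not Viz.ai’s or any other vendor’s actual software.

    # Hypothetical sketch of the alert flow described above.
    from dataclasses import dataclass

    @dataclass
    class ScanResult:
        patient_id: str
        lvo_probability: float  # model's suspicion of a large vessel occlusion

    ALERT_THRESHOLD = 0.8  # assumed cutoff; real systems are tuned clinically

    def triage(result: ScanResult, notify) -> None:
        """Alert the stroke team to a suspected large vessel occlusion.

        The alert is advisory: a radiologist still reviews the images,
        a point the FDA guidance cited later in this story emphasizes.
        """
        if result.lvo_probability >= ALERT_THRESHOLD:
            notify(f"Suspected LVO, patient {result.patient_id}: "
                   f"probability {result.lvo_probability:.0%}. Please review scans.")

    # Example: route the alert to a (stubbed) team messaging app.
    triage(ScanResult("anon-001", 0.93), notify=print)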

While such technology can speed up the process of caring for a patient, regional health care providers emphasized it’s not intended to replace human doctors.

“This is just an early identification; it’s not definitive,” said Dr. Zach Williams, stroke medical director at LewisGale Medical Center. “It’s more of an alert to, ‘Hey, look at this, this is what we think we’ve identified here.’ It certainly gets it identified much quicker, where at least it grabs the attention to take a look at it.”

ChatGPT comes to health care apps

Many Carilion Clinic patients are familiar with MyChart, an online service and app they can use to schedule appointments, manage health records and ask questions of doctors and nurses.

Roanoke-based Carilion Clinic is working to integrate a chatbot powered by ChatGPT into its MyChart service. Photo by Matt Busse.

In partnership with Epic, Carilion’s electronic medical records provider, and tech giant Microsoft, Carilion is working to integrate a chatbot into MyChart that’s powered by ChatGPT, the AI-powered language tool developed by San Francisco-based OpenAI.

The idea is that when a patient submits a question through the app, the software will process it and draw on the patient’s medical records to write a draft response, which medical staff will review before replying to the patient.
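In rough terms, that human-in-the-loop flow might look like the Python sketch below. The function names and prompt are hypothetical, the language model call is stubbed out, and none of this reflects the proprietary Epic and Microsoft integration itself.

    # Hypothetical sketch: an LLM drafts, but a human approves every reply.
    def draft_reply(question: str, record_summary: str, llm) -> str:
        """Ask the language model for a draft grounded in the patient's chart."""
        prompt = ("You are drafting a reply for clinical staff to review. "
                  "Be accurate and empathetic.\n"
                  f"Patient record summary: {record_summary}\n"
                  f"Patient question: {question}")
        return llm(prompt)

    def handle_query(question, record_summary, llm, staff_review):
        draft = draft_reply(question, record_summary, llm)
        # Nothing goes to the patient until a staff member approves it.
        approved, final_text = staff_review(draft)
        return final_text if approved else None

    # Example with a stubbed model and an auto-approving reviewer:
    fake_llm = lambda prompt: "Draft: Your results look stable; we'll follow up."
    print(handle_query("Are my results okay?", "recent labs normal",
                       fake_llm, staff_review=lambda d: (True, d)))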

ChatGPT is perhaps the best-known example of generative AI — using AI to create content — and of what’s called a large language model: software that crunches enormous amounts of data as it processes natural language inputs, such as a patient asking a question, and produces a response in a form intended to be easy for the general public to read and understand.

Carilion has been collaborating with Epic on testing the software and reviewing the draft responses it provides, with physicians providing feedback on how well the software performs, said Dr. Stephen Morgan, Carilion’s chief medical information officer.

“It’s fascinating, one, how well it works, but two, how much training it takes, because there are some nuances to the responses, and you want to make sure that it’s empathetic,” Morgan said.

Carilion has not yet determined when it will debut the software in its interactions with the general public, Morgan said.

Carilion is also looking into adopting AI-powered software to transcribe conversations between providers and patients, with an eye toward launching such a system early next year.

“The time savings alone, I’m really looking forward to that,” Morgan said. “I’m pretty good at typing and use voice-to-text, but this would take it to another level.”

As AI interest grows, so do warnings

AI usage is growing in a variety of fields, including education, finance, human resources, journalism, the law and medicine. Google queries such as “what is AI” have increased significantly in the past year, according to the search giant’s data, reflecting an increasing public interest.

In the health care industry, AI has a wide variety of potential applications. Besides image analysis and chatbots, AI could be used to transcribe notes, automate back-office functions, improve record-keeping and otherwise help alleviate workforce shortages. It could also help patients with tasks such as making appointments, monitoring glucose levels or taking medicine as prescribed.

As the impact of AI in health care continues to grow, so have concerns about the relatively new technology’s accuracy, privacy safeguards and responsible application.

In an opinion first published in April 2022 and reaffirmed this past July, the Food and Drug Administration reminded health care providers about software used to detect large vessel occlusion in stroke patients, saying that such technology can improve a medical team’s workflow but doesn’t remove the role of a radiologist in reviewing the images.

In August, Sen. Mark Warner, D-Virginia, called on Google to increase transparency and privacy protections around its use of Med-PaLM 2, an AI chatbot currently being tested that Warner said had led to “concerning reports of repeated inaccuracies.”

A recent Stanford University study published in the journal Nature warned that some chatbots responded to researchers’ queries “with a range of misconceptions and falsehoods about Black patients, sometimes including fabricated, race-based equations,” according to the Associated Press.

This week’s executive order says, in part, that the U.S. Department of Health and Human Services will establish a safety program to receive and act upon reports of unsafe or harmful health care practices involving AI.

Biden’s order also mandates new safety testing standards for AI models if those models could pose a risk to national security, asks Congress to pass new legislation protecting Americans’ privacy and outlines several directives to prevent discrimination and promote equity when AI is used in areas including housing, criminal justice and federal benefits programs.

Warner said in a statement he was “impressed by the breadth” of Biden’s executive order but named health care as one area where the president’s new policies “just scratch the surface.”

“While this is a good step forward, we need additional legislative measures, and I will continue to work diligently to ensure that we prioritize security, combat bias and harmful misuse, and responsibly roll out technologies,” Warner said.

All of these recent developments point to a growing effort to get a handle on the rapidly developing field of AI, which experts say requires rules to protect patients, keep personal data safe and ensure the technology is used responsibly.

“There is a great need for appropriate governance of the use of a technology, and artificial intelligence is no different,” Alphin said.

Humans maintain crucial role in care

Health care companies embracing AI find themselves confronting another vital question: When a machine takes on more decision-making responsibility, how does the role of a human medical professional change? 

“It’s never going to replace us,” said Mandi Zemaiduk, stroke supervisor for Centra Health. “It’s just going to assist us to provide better care for our patients.”

Lynchburg-based Centra Health uses software from Viz.ai to help treat stroke patients. Photo by Matt Busse.

Over the past two years or so, AI-powered software has improved the care that Lynchburg-based Centra provides to the more than 700 stroke patients it treats each year across its footprint. But the technology isn’t perfect, Zemaiduk said.

“There are times that our physicians pick up things that the artificial intelligence doesn’t pick up,” she said.

For example, the software can produce false positives or false negatives. Such discrepancies are rare, but when they occur, they’re noted and sent to the software vendor, Viz.ai, Zemaiduk said.
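For readers unfamiliar with those terms, a short worked example with invented numbers shows how false positives and false negatives feed the accuracy measures a hospital might track and report back to a vendor.

    # Toy numbers, invented for illustration.
    true_positive = 95   # software flagged a clot that was really there
    false_negative = 5   # software missed a real clot
    true_negative = 880  # software correctly stayed quiet
    false_positive = 20  # software flagged a clot that wasn't there

    sensitivity = true_positive / (true_positive + false_negative)  # catch rate
    specificity = true_negative / (true_negative + false_positive)  # quiet rate
    print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")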

Williams, of LewisGale, said such software saves time — about 15 minutes on average for the most severe stroke patients — but care decisions are still made by human doctors.

“We’re not using artificial intelligence to change our decision pathway or how we offer any sort of treatment,” Williams said.

Another potential concern in the use of AI: Medical professionals must watch for signs of flawed data or bias. If the data used to train the software is not representative of the patient population it serves, the software’s ability to support accurate, relevant care suffers.

And while AI has the potential to improve the health care that rural populations receive, Firth-Butterfield, in Roanoke, warned that unless it’s used specifically to support human doctors, health systems run the risk of creating a two-tiered system, in which a rural area gets AI-based health care and a more populous area gets “real human care.”

Alphin noted that discrepancies in care already exist between rural and urban areas, and new technologies will provide more services to places that lack specialists and other providers on site.

“I look at something like artificial intelligence as reducing the gap between those,” he said.

Firth-Butterfield urged hospital executives to avoid the temptation to use AI simply to cut costs. 

Rather, it should be used to help medical providers offer better care.

“Run the AI in the background. Give your humans the opportunity to do that job they do so well,” she said.
