Generative AI Has Potential Benefits With Risks Involved in Healthcare Sector 

Generative AI Has Potential Benefits With Risks Involved in Healthcare Sector. Credit | iStock

United States: Generative AI may not be the answer to burnout in health care, new research indicates. 

Previous studies have shown that the growing use of electronic health record (EHR) systems and the weight of administrative responsibilities place a heavy burden on doctors. 

Hence, some have welcomed artificial intelligence as a possible answer to these problems. However, US health systems have recently found that large language models (LLMs) do little to ease clinicians' daily workloads. 

Know more about Artificial Intelligence 

An observational study conducted in 2023 at Brigham and Women’s Hospital in Boston, Massachusetts, examined the use of AI for electronic patient messaging. 

Here, the researchers used an LLM to respond to simulated questions from cancer patients, then compared its output with responses from six board-certified radiation oncologists. 

The medical professionals then edited the AI-generated responses into “clinically acceptable” answers to send to patients. 

Generative AI Has Potential Benefits With Risks Involved in Healthcare Sector. Credit | REUTERS

The study, published in The Lancet Digital Health, found that the LLM drafts posed “a risk of severe harm in 11 of 156 survey responses and death in one survey response.” 

The researchers also wrote, “The majority of harmful responses were due to incorrectly determining or conveying the acuity of the scenario and recommended action,” Fox News reported. 

Furthermore, the study concluded, “These early findings … indicate the need to thoroughly evaluate LLMs in their intended clinical contexts, reflecting the precise task and level of human oversight.” 

Medical billing codes  

Another study, conducted at New York’s Mount Sinai Health System, examined four LLMs to analyze their performance and error patterns when querying medical billing codes. 

The study, which was published in the journal NEJM AI, revealed that the LLMs performed poorly on medical code querying, “often generating codes conveying imprecise or fabricated information.” 

In the conclusion, the study said, “LLMs are not appropriate for use on medical coding tasks without additional research.” 

Researchers also noted, “This has significant implications for billing, clinical decision-making, quality improvement, research, and health policy,” as Fox News reported. 

Patient messages and physicians’ time 

A third study, published in JAMA Network and conducted at the University of California San Diego School of Medicine, analyzed AI-drafted responses to patient messages and the amount of time physicians spent editing them. 

The study found, “Generative AI-drafted replies were associated with significantly increased read time, no change in reply time, significantly increased reply length and [only] some perceived benefits,” as Fox News reported. 

What do the doctors think about AI? 

David Atashroo, chief medical officer of Qventus, an AI-powered surgical management solution in Mountain View, California, responded to the findings: “We see an immense potential for AI to take on lower-risk, yet highly automatable tasks that traditionally fall on the essential yet often overlooked ‘glue roles’ in health care — such as schedulers, medical assistants, case managers and care navigators.” 

“These professionals are crucial in holding together processes that are directly tied to clinical outcomes, yet spend a substantial portion of their time on administrative tasks like parsing faxes, summarizing notes, and securing necessary documentation,” he added. 

He also suggested that generative AI could improve efficiency in the clinical sphere, while cautioning, “When considering the deployment of generative AI, it’s crucial to set realistic expectations about its performance.” 

“The standard cannot always be perfection, as even the humans currently performing these tasks are not infallible,” he said. 

Atashroo also mentioned that “transparency in the development and implementation of AI technologies is essential in building trust among hospital partners and patients.”