As artificial intelligence transforms healthcare, your doctor may soon consult an AI algorithm before deciding on your treatment.

SPOKANE — When doctors decide on a course of treatment, they have plenty of data to inform decision-making. But they don’t always have time to interpret that data.

Dushyant Sahani, chair of radiology at the University of Washington, believes that physicians can process only 5 percent of the data available to them before deciding on a particular treatment.

“Doctors are overwhelmed with managing data. And we want physicians to spend more time with the patient and provide them with the best experience,” he said. “Healthcare is one of the greatest human endeavors, but it is also a data journey. And in the modern world, we have a lot of data, but we need a better way to use that data to make better decisions.”

Sahani is a co-founder of the UW’s Institute of Medical Data Science, which supports healthcare artificial intelligence initiatives. Founded last year in Seattle, the institute hopes to provide research, education and funding to bring AI to hospitals to deliver better patient outcomes.

The technology promises to transform healthcare by synthesizing millions of pieces of data in nanoseconds — informing how a physician treats their patients and how care is prioritized. But as AI becomes increasingly embedded in the healing process, whether an algorithm works as intended can become a life-and-death proposition.

Used properly, an AI algorithm helps medical professionals sift through large amounts of data in a short amount of time. Sahani points to the example of determining whether a lesion on the body is possibly cancerous. Based on the risk profile of the lesion, a physician may decide to wait and see if the lesion progresses, or to examine it with an invasive procedure that carries its own risks.

An AI algorithm can be trained on many images of the same type of lesion and a host of other data. Through this, the AI can determine whether the lesion’s risk is high enough to warrant further testing, informing the doctor’s own observations.
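The article doesn’t name a specific model, but what Sahani describes is standard supervised classification: train on labeled examples, then compare the model’s predicted risk against a threshold that decides whether to recommend further testing. A minimal sketch, assuming scikit-learn and synthetic stand-in features rather than real medical images (all names here are hypothetical):

```python
# Illustrative sketch only: synthetic stand-in data, not a clinical model.
# Assumes scikit-learn. The features stand in for measurements a pipeline
# might extract from lesion images (size, texture, border irregularity).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))  # hypothetical image-derived features
# Toy ground truth: 1 = lesion later confirmed malignant
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Flag anything above a conservative risk threshold for follow-up testing.
RISK_THRESHOLD = 0.3
risk = model.predict_proba(X_test)[:, 1]  # estimated probability of malignancy
flagged = risk >= RISK_THRESHOLD
print(f"{flagged.mean():.0%} of test cases flagged for further testing")
```

The threshold is the clinically meaningful knob in a setup like this: lowering it catches more true cancers at the price of more unnecessary invasive procedures.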

AI can also help prioritize triage and care. Medical providers must often decide which patients need care first or need more care. Sahani said these AI tools can interpret patient data and “select a few patients who might benefit most from early intervention.”

A more unusual use of AI in healthcare is large language models – in the vein of ChatGPT – that can help medical providers with administrative tasks such as writing notes after a doctor’s visit or helping patients schedule appointments.

“AI is collecting and integrating this information for us, which is too difficult for humans to apply manually. We need a lot of staff and other resources to apply the data meaningfully. AI can often speed up accurate diagnosis,” Sahani said. “With AI, we may be able to integrate clinical information with lab, imaging and other information to come up with more personalized diagnoses that will help us make more appropriate decisions for that patient.”

Although Sahani believes AI is “not a panacea,” he and his colleagues at the Institute of Medical Data Science hope the technology will improve the patient experience.

Despite the optimism, many of the problems facing AI in other fields carry far higher stakes in healthcare. An algorithm that doesn’t work properly can lead a doctor to misdiagnose their patient or incorrectly prioritize care.

Does AI create bias in healthcare?

A 2019 study by the University of California, Berkeley found that an algorithm used to care for 200 million patients a year was racially biased.

The AI analyzed in the study was used by hospitals to identify patients with complex health needs who may need specialized care. But the researchers found that the algorithm predicted health care costs rather than disease severity.

Due to existing racial disparities in health care, less money is spent on Black patients than on white patients. As a result, the algorithm concluded that Black patients needed less specialized care than white patients, even though that was not the case.

If corrected, the algorithm would predict that 46.5% of Black patients need this extra help, compared with the 17.7% of Black patients the algorithm originally flagged.

“Less money is spent on Black patients with the same level of need, and the algorithm incorrectly concludes that Black patients are healthier than similarly ill white patients,” the study reads.
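The failure the study describes is a choice of training label rather than a coding bug. A minimal toy sketch with fabricated numbers (not the study’s data or model, and no real demographic variables) shows how fitting the same model to cost instead of health need changes who gets flagged for extra care:

```python
# Toy illustration of the proxy-label problem the study describes.
# All numbers are fabricated; this is not the study's data or model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 100_000
group_b = rng.random(n) < 0.5   # two equally sized patient groups
need = rng.normal(size=n)       # true health need: identical in both groups
# Historically, less is spent on group B at the same level of need.
prior_cost = need - 0.8 * group_b + rng.normal(scale=0.3, size=n)
future_cost = need - 0.8 * group_b + rng.normal(scale=0.3, size=n)
# Features a model might see: a noisy clinical signal plus prior spending.
X = np.column_stack([need + rng.normal(scale=0.2, size=n), prior_cost])

def group_b_share_of_flagged(label):
    """Fit on a training label, flag the top 20% of predictions for extra care."""
    pred = LinearRegression().fit(X, label).predict(X)
    return group_b[pred >= np.quantile(pred, 0.8)].mean()

# Training on cost under-selects group B; training on need does not.
print(f"label = cost: {group_b_share_of_flagged(future_cost):.0%} of flagged are group B")
print(f"label = need: {group_b_share_of_flagged(need):.0%} of flagged are group B")
```

Because the spending gap flows through the cost label into the predictions, ranking patients by predicted cost under-selects group B even though need is distributed identically, mirroring the mechanism the researchers identified.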

At a U.S. Senate hearing on the use of artificial intelligence in health care earlier this month, study author Ziad Obermeyer said his research shows how easily human bias can unintentionally find its way into AI and then be justified by the perceived neutrality of the technology.

“(AI) predicted — correctly — that Black patients would incur lower costs, and thus denied them access to help for their health. The result was racial bias that affected important decisions for hundreds of millions of patients each year,” he said in remarks to the Senate.

Although not analyzed in his study, Obermeyer noted that AI algorithms can create bias along lines beyond race, such as gender, socioeconomic status or disability.

Obermeyer also said that many of these algorithms are still in use after his 2019 study, and that regulators “shouldn’t take the algorithm developers’ word that it’s performing properly.” Despite these criticisms, Obermeyer said AI has the potential to improve health and reduce costs.

Addressing the hearing of the Senate Finance Committee, its chair, Sen. Ron Wyden, D-Ore., said that while AI is making healthcare more efficient, the technology is also “fraught with bias that discriminates against patients based on race, gender, sexual orientation and disability.”

In his efforts to spread AI in healthcare, Sahani hopes the technology can reduce disparities. But he acknowledged that it could also increase bias.

“Obviously, bias is a major concern. I don’t think we’ve fully addressed it. We need to keep an open mind and constantly evaluate our algorithms to see if they’re correct,” he said.

Sahani also noted that it is incredibly important to be upfront with the public and patients about how AI is being used in their healthcare.

What’s in Spokane?

Artificial intelligence tools are already in use at Spokane hospitals, though they may not yet be used in some of the broad ways envisioned by AI’s champions.

Providence, the largest health system in Spokane, uses AI at Sacred Heart Medical Center and other facilities to complete administrative tasks, help medical professionals make diagnoses and support care in “other innovative ways.”

“Providence is always looking for ways to improve the patient experience. Over the past several years, Providence has invested in technological advances, including artificial intelligence, that allow us to deliver high-quality, compassionate care and to safely and responsibly advance new delivery methods,” Providence said in a statement.

Earlier this year, Providence CEO Rod Hochman said AI would be a “major driver of change” for health systems in 2024.

“Having made significant investments in IT infrastructure, digital and cloud technologies in recent years, health systems have set the stage for rapid AI innovation in 2024. Generative AI will drive advances that enable personalized patient care experiences, better patient outcomes and clinical breakthroughs,” he said in January.

Providence also partnered with Microsoft and AI company Nuance to implement an AI tool that helps physicians with data entry, which Providence said will free up more time with patients.

MultiCare, Spokane’s other major health system, has also implemented AI tools in recent years. The technology is built into the organization’s scheduling tool and electronic medical record to add “more patient time.” AI is also used for “inventory management, waste reduction and anomaly detection,” according to a statement from MultiCare chief information officer Bradd Busick.

The hospital system has launched an “ambient listening platform” that uses AI to automate the creation of clinical notes and medical charts. MultiCare facilities also use AI to refill patient prescriptions over the phone.

MultiCare Deaconess Hospital has introduced several autonomous robots that use AI to navigate the hospital, deliver supplies and complete menial tasks. Its four Moxi robots have completed 35,000 item deliveries, traveled 7,000 miles and saved more than 23,000 staff hours, according to a statement from MultiCare.

Both hospital systems said only internal data is used to train their AI programs and that all private data is protected.

“MultiCare’s AI applications are trained on our own generated data. We do not use open-source/off-the-shelf platforms, but we apply strict governance and provisions regarding the types of investments we make in AI,” Busick said in a statement.

Providence signed the “Rome Call for AI Ethics,” a 2020 document that aims to create a framework for the ethical development of AI. Also signed by IBM and Microsoft, the document states that AI should be developed “not for technology, but for the good of humanity and the environment.”

“Providence proactively set up an AI governance structure to ensure alignment of priorities and strategies, protect patient data and privacy, prevent bias and provide access to promising innovations for all, especially the populations served,” the hospital system said in a statement.
