LAS VEGAS — Healthcare organizations need to focus on how they test, implement and monitor artificial intelligence models to ensure safe deployment of the technology and avoid disruptions, experts said during a panel at the HLTH conference Tuesday.
Although AI could be a boon for a sector rife with workforce shortages and heavy administrative workloads, model performance can vary based on setting and patient population, meaning how a model is implemented is an important factor in a successful deployment, panelists said.
The COVID-19 pandemic highlighted one challenge to safely and effectively adopting AI in healthcare, according to Christine Silvers, healthcare executive advisor at Amazon Web Services.
Prior to the pandemic, a healthcare AI model was developed to predict which patients might not show up to their appointments, Silvers said. The model aimed to solve a key issue for health systems: when patients miss their appointments, continuity of care is disrupted and health systems lose revenue.
With a predictive model, providers could intervene ahead of time. Maybe the patient needs a ride? Or perhaps the scheduled time doesn't actually work for the patient.
At first, the model worked great — until the pandemic upended normal healthcare access, she said.
“There’s the concept of model drift. Your environmental factors that are going into helping your model work today can change over time,” Silvers said.
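Silvers did not describe any specific monitoring approach, but the idea of model drift can be made concrete with a simple check on whether a model's inputs or scores have shifted away from their training-era baseline. The sketch below uses the population stability index, a common drift heuristic; the data, threshold and variable names are illustrative assumptions, not anything discussed on the panel.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Rough drift check: compare a model's score distribution in a recent
    window against a baseline window. Larger PSI values suggest drift."""
    # Bin edges come from the baseline data.
    edges = np.percentile(baseline, np.linspace(0, 100, bins + 1))
    # Clip recent scores into the baseline range so every value lands in a bin.
    current = np.clip(current, edges[0], edges[-1])

    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    curr_frac = np.histogram(current, edges)[0] / len(current)

    # A small floor avoids division by zero and log(0) for empty bins.
    base_frac = np.clip(base_frac, 1e-6, None)
    curr_frac = np.clip(curr_frac, 1e-6, None)

    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

# Illustrative data only: pre-pandemic no-show scores vs. scores after access patterns changed.
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5000)
recent_scores = rng.beta(4, 3, size=1000)

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.25:  # a commonly cited rule of thumb for "significant" drift
    print(f"PSI = {psi:.2f}: score distribution has shifted; review or retrain the model")
```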
To mitigate disruptions, healthcare organizations need to test their models on local datasets and continually monitor their performance, said David Rhew, global chief medical officer and vice president of healthcare at Microsoft. They also need to keep an eye on how a model performs within subpopulations, such as those defined by age, race, gender and sexual orientation, and maintain an overall governance strategy.
“The problem that we’re running into is not so much that organizations don’t agree with these principles. They don’t necessarily have the resources to be able to do all that at scale,” he said.
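Rhew's checklist of local testing and subgroup monitoring can be sketched as a routine check run against an organization's own data: score the model on a local validation set, compute a performance metric separately for each subgroup, and flag any group that falls below a floor. The metric, thresholds, group labels and data below are illustrative assumptions, not something the panel specified.

```python
import numpy as np

def recall_by_group(y_true, y_pred, groups, min_recall=0.70, min_count=50):
    """Compute recall separately for each subgroup and flag groups that
    fall below a floor. Thresholds and labels are hypothetical."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        positives = (y_true[mask] == 1)
        if positives.sum() < min_count:
            continue  # too few positive cases to judge this group reliably
        recall = (y_pred[mask][positives] == 1).mean()
        results[g] = recall
        if recall < min_recall:
            print(f"Group {g!r}: recall {recall:.2f} below floor {min_recall}; investigate")
    return results

# Hypothetical local validation set: labels, model predictions, and an age-band column.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=2000)
y_pred = rng.integers(0, 2, size=2000)
age_band = rng.choice(["18-39", "40-64", "65+"], size=2000)

recall_by_group(y_true, y_pred, age_band)
```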
To address the big questions around safety, efficacy and bias, healthcare organizations will need to work together to decide on an implementation strategy they can use for comparison, Rhew said. That way, an organization would have a better idea of whether a model could work well for it by seeing how the AI was deployed at a similar organization.
Some groups have already launched to help set standards for responsible AI deployment in healthcare, including the Coalition for Health AI. This spring, Microsoft also formed a group that aims to operationalize responsible AI guidelines.
But investing in AI tools, deploying them carefully and monitoring their performance will take money — potentially leaving behind lower-resourced or rural health systems.
“How do we enable the FQHCs [federally qualified health centers], the critical access hospitals, to responsibly be part of this AI revolution?” said Brian Anderson, CEO of CHAI. “Literally every other digital health innovation that’s come before it has reinforced the digital divide.”
The problem hasn’t been solved yet, Rhew said. But healthcare organizations could try to adopt AI in a hub-and-spoke model, where the hub hospital has the resources to support other facilities in the group. Technology companies could also help by offering to test and monitor models, he added.
HHS’ Office for Civil Rights is also interested in partnering with the healthcare industry, and organizations should speak up about the questions the sector needs to answer and the conversations it needs to be having, said Director Melanie Fontes Rainer.
“This isn't a one and done. We all have to hold hands and jump in this together,” she said. “Otherwise, we’re going to have a fractured use of this where we pick winners and losers, and we’re going to have potential disparities widen for certain populations that, frankly, already exist.”