Dive Brief:
- The Duke-Margolis Center for Health Policy identified barriers to the development and adoption of safe and effective AI-enabled diagnostic support software (DxSS) in a new report released Wednesday.
- The potential for AI to improve DxSS and reduce diagnostic errors has attracted considerable attention, but the center identified concerns about evidence, risk and ethics that still need to be resolved.
- To address these concerns, Duke-Margolis thinks the field needs to consider developing best practices to mitigate bias and rethink product labeling.
Dive Insight:
The prevalence of diagnostic errors and the impact they can have on patients create a strong driver for improved clinical decision support technologies. According to a report by the National Academies of Sciences, Engineering, and Medicine, diagnostic errors account for around 60% of all medical errors. Perhaps most troubling is that diagnostic errors are implicated in 40,000 to 80,000 deaths a year in U.S. hospitals.
Advocates of AI see the prevalence of diagnostic errors as, at least in part, a data problem that technology can help solve. In theory, AI systems scan patient data and deliver faster, more accurate diagnoses than can be achieved currently, resulting in people getting the right care sooner.
Numerous barriers stand between the sector and the realization of that vision. Duke-Margolis, which pools expertise from the policy community in Washington, D.C., Duke University and Duke Health, put together a report on the topic in collaboration with former and current FDA officials and Alphabet's Verily Life Sciences.
The result is a document identifying three broad areas that the AI community will need to work through if the technology is to go mainstream. These areas cover evidence that supports adoption, effective risk management and measures to ensure the systems are ethically trained and flexible.
Responsibility for addressing these topics will fall on different parts of the AI-enabled DxSS sector, including technology vendors, regulators, healthcare providers and other groups. The upshot is that while DxSS developers can address some barriers to the use of their technologies, such as the need for evidence of effectiveness, they are not in total control of their destinies.
For example, Duke-Margolis identifies a potential need for fresh thinking about product labeling. The current regulatory model for product labeling and other topics including verification and validation is tailored toward devices with fixed features. As AI systems can continue to "learn" and improve after regulatory clearance, it is unclear how they can fit into the current model.
"More clarity is needed to understand when modifications or updates to AI-enabled [software as a medical device] will require submission of a new 510(k) or a supplemental PMA to FDA, and when these quality systems will suffice," Duke-Margolis wrote in the report.
None of the barriers identified in the report is necessarily insurmountable, but collectively they suggest that there is a lot of work to do before AI-enabled DxSS fulfills its potential.