Dive Brief:
- Doctors who rely on recommendations from an artificial intelligence-based system for clinical decision-making may face legal liability if they provide nonstandard care on the advice of AI and the patient is injured, law academics from the University of Michigan and Harvard University wrote in a JAMA viewpoint Friday.
- While there is little medical AI case law on the books, current law protects physicians when they follow the standard of care, which may create incentives to use AI only to confirm decisions rather than to improve patient care, potentially forgoing the benefit of AI that outperforms a human, they said.
- The authors note that AI may eventually be considered part of the standard of care for patients, shifting tort law so that reliance on clinical decision support software may be a defense to liability. The legal system could even reach a point, they warn, where doctors face liability if they do not follow correct, albeit nonstandard, AI recommendations.
Dive Insight:
Legal considerations for doctors using clinical decision support software come amid challenges for regulators attempting to define a framework to oversee the emerging technologies.
In September, FDA published revised draft guidance on clinical decision support software after industry slammed its first attempt; the agency now plans to take a risk-based approach to oversight.
The agency plans to closely regulate software that informs clinical management of serious conditions where a physician cannot independently evaluate the recommendation. Software that informs clinical management for non-serious conditions will, in general, not be regulated as a medical device under the new draft guidance.
Physicians and medical professional societies must learn how to better interpret AI algorithms, the academics argued in JAMA.
"Review by the FDA will provide some quality assurance, but societies will be well placed to provide additional guidelines to evaluate AI products at implementation and to evaluate AI recommendations for individual patients," they said. "The analogy to practice guidelines is strong; much as societies guide the standard of care for specific interventions, they can guide practices for adopting and using medical AI reliably, safely, and effectively."
AI products must be "rigorously vetted" before they are used in a hospital, and doctors should communicate with malpractice insurers to make sure care relying on AI is covered. Doctors must also be aware of the legal landscape as it matures and as legislatures begin to consider changes to the law.
"Although current law around physician liability and medical AI is complex, the problem becomes far more complex with the recognition that physician liability is only one piece of a larger ecosystem of liability," the legal academics wrote. "Hospital systems that purchase and implement medical AI, makers of medical AI, and potentially even payers could all face liability."