Speaking Thursday at an FDA public workshop on the topic, Jeff Shuren, director of the FDA's Center for Devices and Radiological Health, called for better methodologies to identify and improve algorithms prone to mirroring "systemic biases" in the healthcare system and in the data used to train artificial intelligence and machine learning-based devices.
He added that the medical device industry should develop a strategy to enroll racially and ethnically diverse populations in clinical trials.
"It's essential that the data used to train [these] devices represent the intended patient population with regards to age, gender, sex, race and ethnicity," Shuren said.
The virtual workshop comes nine months after the agency released an action plan for establishing a regulatory approach to AI/ML-based Software as a Medical Device (SaMD). Among the five actions laid out in the plan is fostering a patient-centered approach that includes device transparency for users.
Jack Resneck, president-elect of the American Medical Association, said "the less transparency there is about a tool and its development, the less we're going to be able to counsel our patients about the risks and benefits."
Resneck said the physician group wants FDA to focus on patient outcomes and clinical validation backed by published, peer-reviewed data to build trust in AI/ML-based medical devices, while taking steps to guard against bias and avoid exacerbating healthcare disparities in already vulnerable populations.
"The 'just trust us' approach from an entrepreneur or developer isn't going to be enough for us to be able to reassure our patients," Resneck remarked.
The purpose of Thursday's workshop was to gather feedback from stakeholders to identify the types of information that FDA might recommend manufacturers include in the labeling of AI/ML-based medical devices to support transparency.
Resneck said the AMA wants to see labeling with a level of transparency and explainability related to how the devices will be deployed and how much risk they involve.
Pat Baird, senior regulatory specialist at Philips, noted that the general principles of labeling "have been around for decades." After talking recently with other medtechs, he said, device manufacturers "want to understand what additional information is needed from the caregivers and patients."
At the same time, Baird cautioned against "information overload" and inundating patients with too many details. "Labeling needs to be helpful and not overwhelming," Baird added.
FDA contends transparent device development and evaluation require a total product lifecycle approach that enables the agency and manufacturers to evaluate and monitor such devices from premarket review through postmarket real-world performance.
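The agency has not said what form that monitoring must take, but one common ingredient of postmarket real-world performance tracking is a rolling comparison of live performance against the premarket baseline. The metric, window size and alert threshold in this sketch are illustrative assumptions, not regulatory values.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling postmarket check: compare live accuracy against the premarket
    baseline and flag sustained degradation. Window size and threshold are
    illustrative values, not regulatory requirements."""

    def __init__(self, baseline_accuracy, window=500, max_drop=0.05):
        self.baseline = baseline_accuracy
        self.max_drop = max_drop
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, ground_truth):
        self.outcomes.append(int(prediction == ground_truth))

    def degraded(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # too little real-world data to judge yet
        live_accuracy = sum(self.outcomes) / len(self.outcomes)
        return self.baseline - live_accuracy > self.max_drop

# In deployment, each adjudicated case feeds the monitor:
#   monitor = PerformanceMonitor(baseline_accuracy=0.92)
#   monitor.record(prediction, ground_truth)
#   if monitor.degraded(): escalate for review and corrective action
```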
Bakul Patel, director of FDA's Digital Health Center of Excellence, said the agency must have "appropriately tailored oversight" to ensure that the benefits of AI/ML-based devices outweigh the risks to patients, while taking into account usability, trust, equity and accountability.
"The safety and effectiveness of these novel technologies are still unclear to patients and providers," Patel said, adding that FDA is seeking to tailor its regulatory approach for these "emerging technologies that bring new challenges" as well as opportunities.
FDA intends to publish draft guidance in 2021, including a proposal for what should be included in a SaMD Pre-Specifications (SPS) document and an Algorithm Change Protocol (ACP), to support the safety and effectiveness of AI/ML-based devices as their algorithms change over time.
The SPS is meant to describe which aspects of the device the manufacturer intends to change through learning, and the ACP explains how the algorithm will learn and change over time while remaining safe and effective.
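Neither document's format has been finalized, but the division of labor can be illustrated with a hypothetical, machine-readable sketch: the SPS bounds what may change, and the ACP pins down how changes are made and verified. Every field name and value below is invented for illustration and does not come from FDA guidance.

```python
# Hypothetical SPS: the envelope of anticipated modifications -- WHAT may change.
SPS = {
    "performance": {"sensitivity": {"min": 0.90}, "specificity": {"min": 0.85}},
    "inputs": {"may_add_modalities": False, "may_retrain_on_new_sites": True},
    "intended_use": {"may_change": False},  # changes here fall outside the SPS
}

# Hypothetical ACP: the procedure for making those changes safely -- HOW they happen.
ACP = {
    "data_management": {"holdout_set": "locked, never used for training"},
    "retraining": {"trigger": "quarterly, or on detected performance drift"},
    "validation": {"protocol": "re-evaluate on holdout; all SPS minimums must hold"},
    "rollout": {"staged": True, "rollback_on_regression": True},
    "transparency": {"labeling": "summarize each algorithm change for users"},
}
```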
"To achieve transparency, we must first understand what [it] means to those who are using this technology and what factors play in building that trust," Patel concluded.