The Food and Drug Administration’s device center clarified how manufacturers should approach artificial intelligence in a draft guidance issued on Monday.
The document outlines recommendations for design, development and maintenance to ensure AI-enabled devices are safe and effective. In particular, it addresses how device makers should handle transparency and bias, and when postmarket monitoring is needed.
Troy Tazbaz, director of the FDA’s Digital Health Center of Excellence, said the agency has authorized more than 1,000 AI-enabled devices to date.
“As we continue to see exciting developments in this field, it’s important to recognize that there are specific considerations unique to AI-enabled devices,” Tazbaz said in a statement.
The new guidance follows final guidance the FDA issued last month on predetermined change control plans (PCCPs), a framework for making changes to devices after they are on the market. Currently, most AI-enabled devices on the market are “locked,” meaning the model does not change after authorization.
In the latest draft guidance, the FDA encouraged device sponsors to consider using a PCCP, allowing them to alter devices to improve model performance without making a new submission.
The FDA will accept public comments until April 7 and plans to hold a webinar on Feb. 18.
Transparency recommendations
A comprehensive approach to transparency and bias is particularly important for AI-enabled devices because many models can be opaque or hard for users to understand, the FDA wrote in the draft.
Patients have called for more information about when AI is used in their care and how the FDA makes decisions about AI-enabled devices. Most authorized devices must have a submission summary available to the public, and the FDA said developers must include sufficient detail in these documents.
Developers should specify if AI is used in a device, explain how AI is used, and explain the type of model used in the device. They should also provide information about the datasets used to develop and validate the device, including demographic characteristics, and share a description of how the model will be updated and maintained over time, according to the guidance.
The FDA suggested using model cards, or short documents with key information about an AI model, to organize the information.
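To make the idea concrete, here is a minimal sketch of how the disclosure items listed above might be organized as a model card. The field names, device description and values are illustrative assumptions, not an FDA-prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical model card covering the disclosure items named in the draft guidance.
    Field names are illustrative only, not a required format."""
    device_name: str
    uses_ai: bool                    # whether AI is used in the device
    ai_role: str                     # how AI is used in the device
    model_type: str                  # the type of model used
    training_data: dict = field(default_factory=dict)    # source, size, demographics
    validation_data: dict = field(default_factory=dict)  # source, size, demographics
    update_plan: str = ""            # how the model will be updated and maintained

# Example (all values hypothetical)
card = ModelCard(
    device_name="Example chest X-ray triage software",
    uses_ai=True,
    ai_role="Flags studies suspicious for pneumothorax for prioritized review",
    model_type="Deep convolutional neural network",
    training_data={"n": 50_000, "sites": 12, "demographics": "age, sex, race/ethnicity reported"},
    validation_data={"n": 8_000, "sites": 4, "demographics": "matched to intended use population"},
    update_plan="Locked model; future changes handled under a predetermined change control plan",
)
print(card.device_name, "-", card.model_type)
```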
Mitigating bias
One concern with AI-enabled devices is the potential for bias, or incorrect results due to limitations in a model’s training data or erroneous assumptions. For example, a model could be over-trained to recognize features of images associated with a specific scanner or clinical site. If certain demographic groups are underrepresented in a dataset, that can also affect the model’s performance.
“Although bias may be difficult to eliminate completely, FDA recommends that manufacturers, as a starting point, ensure that the validation data sufficiently represents the intended use (target) population of a medical device,” the FDA wrote.
Other approaches to mitigating bias include monitoring devices throughout the product lifecycle and evaluating performance across subgroups of the intended use population.
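As a simple illustration of subgroup evaluation, the sketch below computes the same performance metric separately within each demographic group in a validation set, so a gap hidden by the aggregate number becomes visible. The data, column names and choice of metric are hypothetical assumptions for illustration.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical validation results: one row per case, with the model's score,
# the ground-truth label and a demographic attribute. All values are made up.
results = pd.DataFrame({
    "score": [0.91, 0.12, 0.78, 0.33, 0.88, 0.05, 0.67, 0.42],
    "label": [1, 0, 1, 0, 1, 0, 1, 0],
    "sex":   ["F", "F", "M", "M", "F", "M", "F", "M"],
})

# Evaluate the metric within each subgroup rather than only in aggregate.
for group, subset in results.groupby("sex"):
    auc = roc_auc_score(subset["label"], subset["score"])
    print(f"sex={group}: n={len(subset)}, AUROC={auc:.2f}")
```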
The agency said device developers should submit a description of how data were collected, as well as the size and limitations of each dataset. They should also submit a description of approaches used to improve diversity in enrollment within the study and how they ensured the results can be generalized across patient populations and clinical sites.
Post-market monitoring plans
The FDA recommended that manufacturers have a postmarket performance monitoring plan in place because the performance of AI-enabled devices can change or degrade over time.
Changes in input data, such as patient populations or disease patterns, can affect a model’s performance.
Developers should proactively monitor and address device performance changes, as well as changes to inputs or the context a device is used in that could affect its performance, the FDA said.
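One way such monitoring is sometimes done in practice is to compare the distribution of a model input in recent production data against the premarket reference data and flag a significant shift. The sketch below shows that idea; the feature, thresholds and use of a Kolmogorov-Smirnov test are assumptions for illustration, not requirements from the guidance.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical drift check on one input feature (patient age).
rng = np.random.default_rng(0)
reference_ages = rng.normal(55, 12, size=5_000)  # ages seen during premarket validation
recent_ages = rng.normal(62, 12, size=1_000)     # ages seen in the field

stat, p_value = ks_2samp(reference_ages, recent_ages)
if p_value < 0.01:
    # In a real plan this might trigger deeper review, a corrective action,
    # or a model update handled under a predetermined change control plan.
    print(f"Input drift detected (KS statistic={stat:.3f}); escalate per monitoring plan")
else:
    print("No significant input drift detected")
```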
Performance monitoring plans typically aren’t required for 510(k)-cleared devices, but they may be required as a special control for devices authorized through the de novo pathway or as a condition of premarket approval, according to the guidance. Device sponsors who choose to include a plan in their premarket submissions should describe methods for assessing changes in a model’s performance, as well as a plan for deploying updates or corrective actions to address changing performance in a timely manner.