Dive Brief:
- U.S. Sens. Ron Wyden, D-Ore., and Cory Booker, D-N.J., are pressing CMS, the Federal Trade Commission and major health insurance companies to explain what they are doing to prevent bias in algorithms used to make decisions and target resources that affect patient care.
- In letters sent Tuesday to the agencies and five companies, the senators flagged a study published in October in the journal Science that found racial bias in a widely used algorithm for assessing healthcare needs and said they are seeking information to help determine the extent of the bias problem.
- Wyden and Booker also asked the FTC to commit to investigating the ways algorithms already in use may discriminate against marginalized groups.
Dive Insight:
FDA to date has authorized more than 30 AI algorithms for use in healthcare. The agency is developing a software pre-certification pilot whose high-profile participants include Apple, Johnson & Johnson and Fitbit.
The senators are shining a spotlight on the potential for bias in algorithms used in making care decisions at a time when healthcare organizations are ramping up investment in predictive data analytics and AI. A number of recent surveys show a significant increase in adoption among health systems, many of which are hiring data scientists to help design their strategies for using analytics to improve clinical outcomes and drive revenue growth, among other goals.
Automated decision systems built on technologies such as advanced analytics and artificial intelligence hold the potential to identify the patients most in need of care, but they can also carry human biases built into the massive data sets behind them, Wyden and Booker wrote in their letters. The letters were sent to executives at UnitedHealth Group, Blue Cross Blue Shield, Cigna, Humana and Aetna, in addition to CMS Administrator Seema Verma and FTC Chairman Joseph Simons.
The Science research, which the senators called "deeply troubling," found that racial bias in a widely used algorithm cut the number of black patients identified for extra care by more than half: black patients were assigned the same risk scores as white patients despite being sicker. The algorithm used health costs as a proxy for health needs, even though less money is spent on black patients with the same level of need.
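To make that mechanism concrete, here is a minimal, hypothetical sketch of how a cost-as-proxy rule can under-flag equally sick patients. The patient records, thresholds and dollar figures are invented for illustration and do not reflect the actual algorithm the researchers examined.

```python
# Hypothetical sketch of the cost-as-proxy failure mode described in the
# Science study. All names, thresholds and dollar figures are invented.

from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    chronic_conditions: int  # stand-in for true health need
    annual_cost: float       # historical spending, used as the risk proxy

# Two patients with identical need but unequal historical spending,
# mirroring the study's finding that less money is spent on black
# patients with the same level of need.
patients = [
    Patient("A", chronic_conditions=4, annual_cost=12_000.0),
    Patient("B", chronic_conditions=4, annual_cost=7_000.0),
]

def flag_by_cost(patients, cost_threshold=10_000.0):
    """Cost-as-proxy rule: flag patients for extra care only when past
    spending exceeds a threshold, regardless of underlying need."""
    return [p.patient_id for p in patients if p.annual_cost >= cost_threshold]

def flag_by_need(patients, condition_threshold=3):
    """Alternative rule: flag patients on a direct measure of health need."""
    return [p.patient_id for p in patients
            if p.chronic_conditions >= condition_threshold]

print(flag_by_cost(patients))  # ['A']      -> equally sick patient B is missed
print(flag_by_need(patients))  # ['A', 'B'] -> both patients flagged
```

Because the proxy rule never looks at need directly, any group with systematically lower spending at the same level of sickness is ranked as lower risk, which is the pattern the study documented.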
In another example of what the lawmakers called the "biases, disparities and inequities that plague our healthcare system," Wyden and Booker pointed to a 2016 study that found most medical students and residents incorrectly believed black patients tolerate more pain than white patients. That belief led to less accurate treatment recommendations for black patients.
The lawmakers are not alone in calling for greater scrutiny of potential bias in algorithms. Just last month, several major U.S., Canadian and European radiology societies issued a joint statement urging development of additional guidelines on the ethical use of AI in imaging, saying developers need to be held to the same “do no harm” standard as doctors.
Earlier this year, the Margolis Center for Health Policy at Duke University recommended in a report that the field consider establishing a set of best practices to mitigate bias. In April, a group of more than 30 tech giants, digital health companies, trade associations and healthcare organizations, including AdvaMed, Google and Fitbit, joined forces to work toward developing standards and best practices for AI in medicine and health.
The senators asked the insurance executives to provide details about the algorithms their companies use to improve patient care and what safeguards they have established to prevent bias. They asked CMS and the FTC to outline steps they are taking to address algorithm bias in the healthcare system and whether their current enforcement mechanisms can handle the challenge.