Dive Brief:
- Mayo Clinic has introduced a product to evaluate artificial intelligence models for accuracy and susceptibility to bias.
- The proliferation of AI models has provided healthcare professionals with new, potentially better ways to tailor dosages, interpret images and perform other tasks that affect patients. However, there are concerns about the robustness of the evidence supporting the algorithms.
- Mayo Clinic has created a product that tests algorithms for bias across categories such as age and ethnicity and provides third-party validation of the models.
Dive Insight:
As algorithms start to play a bigger role in healthcare decisions, it is critical to ensure they’re accurate. An AI model that is biased or trained on a limited dataset could pose serious risks to patient health, including incorrect diagnoses.
Mayo Clinic Platform_Validate is an attempt to address the risk posed by inaccurate and biased models. The platform tests AI models against anonymized datasets from more than 10 million patients treated at Mayo Clinic and its partners to measure sensitivity, specificity and bias. Mayo Clinic has designed the product to test whether a model is fit for purpose against data from urban and rural communities in Minnesota, Florida, Arizona and other states.
Through the testing, the platform generates a report that shows how an algorithm performs across patient subgroups based on factors such as age, behavioral health, ethnicity, family history, menopause and obesity. The model-data analysis can take two to three weeks. Developers of AI models can publicize the test scores to provide external validation of their products.
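Mayo Clinic has not published how Platform_Validate computes these figures, but the per-subgroup sensitivity and specificity such a report implies can be illustrated with a minimal sketch. The Python/pandas code below, including the column names and age bands, is hypothetical and is not the product's actual implementation.

```python
import pandas as pd

def subgroup_metrics(df: pd.DataFrame, group_col: str,
                     label_col: str = "label",
                     pred_col: str = "prediction") -> pd.DataFrame:
    """Compute sensitivity and specificity for each subgroup.

    Assumes one row per patient with a binary ground-truth label
    and a binary model prediction (column names are hypothetical).
    """
    rows = []
    for group, g in df.groupby(group_col):
        tp = int(((g[pred_col] == 1) & (g[label_col] == 1)).sum())
        fn = int(((g[pred_col] == 0) & (g[label_col] == 1)).sum())
        tn = int(((g[pred_col] == 0) & (g[label_col] == 0)).sum())
        fp = int(((g[pred_col] == 1) & (g[label_col] == 0)).sum())
        rows.append({
            group_col: group,
            "n": len(g),
            "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
            "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        })
    return pd.DataFrame(rows)

# Toy data standing in for anonymized patient records.
patients = pd.DataFrame({
    "age_band":   ["<40", "<40", "<40", "40+", "40+", "40+"],
    "label":      [1, 0, 1, 1, 0, 0],
    "prediction": [1, 0, 0, 1, 1, 0],
})
print(subgroup_metrics(patients, group_col="age_band"))
```

A gap in sensitivity or specificity between subgroups, as in this toy output, is the kind of disparity a validation report like this is meant to surface.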
The Mayo Clinic product is part of Mayo Clinic Platform_Discover, a platform that provides developers with access to health data. Medical device firm Becton Dickinson and K Health, a startup that lets patients chat with a doctor, both use the platform.