Dive Brief:
- There is “a paucity of robust evidence” to support claims that artificial intelligence can enhance clinical outcomes, according to a systematic review of published studies.
- The review, published in the Journal of Medical Internet Research, found that only 39 of the 11,839 articles on AI it screened described randomized controlled trials (RCTs).
- With small sample sizes and single-center designs limiting the generalizability of the studies, the authors of the paper see a need for more RCTs of AI tools integrated into clinical practice.
Dive Insight:
AI-assisted tools have begun to make a mark on healthcare, with a 2020 study finding 64 FDA-approved devices and algorithms based on artificial intelligence and machine learning. Yet, the systematic review found a lack of evidence to support the technologies.
“Despite the plethora of claims for the benefits of AI in enhancing clinical outcomes, there is a paucity of robust evidence. In this systematic review, we identified only a handful of RCTs comparing AI-assisted tools with standard-of-care management in various medical conditions,” the authors wrote.
Many of the 39 studies had limitations that affect the generalizability of their results. More than half of the studies (56%) took place at a single clinical trial site, and 85% of the trials had small sample sizes. As such, while 77% of the trials reported positive outcomes, there are doubts about whether the results can be extrapolated to the broader healthcare system.
The systematic review also found that many of the studies are at risk of bias. Using the Cochrane risk-of-bias tool for randomized trials, the researchers found that 49% of the studies had a high risk of bias and flagged some concerns about bias in a further 31% of the trials.
In light of the findings, the authors call for AI-assisted tools to “demonstrate unequivocal improvement in clinically relevant outcomes” over standard of care in properly designed RCTs. Such studies, they argue, could support the implementation of AI into daily clinical practice.