Dive Brief:
- A tablet-based app powered by computer vision and machine learning may improve screening for autism, according to a paper in Nature Medicine.
- Researchers at Duke University developed the app and used it to assess 475 toddlers. Participants who screened positive for autism had a 40.6% probability of being diagnosed with the condition.
- The figure compares favorably to the positive predictive value of a questionnaire currently used to screen for autism, which was 14.6% in another study. The researchers’ app performed consistently across sex, race and ethnicity.
Dive Insight:
Physicians currently screen for autism using a parent questionnaire, the Modified Checklist for Autism in Toddlers-Revised with Follow-Up (M-CHAT-R/F). Studies have found the questionnaire is less effective in real-world settings than in research studies, particularly for girls and children of color, and fails to detect most cases of autism. Missed diagnoses delay access to interventions.
Duke researchers developed SenseToKnow to improve the screening process. The app displays short movies and records the child’s behavioral responses using the tablet’s front-facing camera and computer vision to detect signs of autism such as differences in social attention, facial expressions and head movements. Machine learning analyzes the data to screen for autism.
“The AI we’ve built compares each child’s biomarkers to how indicative they are of autism at a population level,” Sam Perochon, a PhD student and co-senior author of the study, said in a statement. “This allows the tool to capture behaviors other screening tests might miss and also report on which biomarkers were of the most interest and most predictive for that particular child.”
To evaluate the technology, the Duke team used it to assess 475 toddlers aged 17 to 36 months during pediatric primary care well-child visits. The population included 49 toddlers who were later diagnosed with autism and 98 children who were later diagnosed with developmental delay.
Among children who screened positive on the app, 40.6% were later diagnosed with autism. Because the study lacked a control arm, the researchers used data from a separate trial to contextualize the result. The other study, which assessed M-CHAT-R/F in 25,000 children, reported a positive predictive value of 14.6%, but differences in the trials mean comparisons of the results may be unreliable.
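Positive predictive value is simply the share of positive screens that turn out to be true cases. A minimal sketch of the calculation, with illustrative counts that are not taken from either study:

```python
def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    """Fraction of positive screens that are true cases (PPV)."""
    return true_positives / (true_positives + false_positives)

# Illustrative only: of 100 positive screens, 41 children
# are later diagnosed with the condition.
ppv = positive_predictive_value(41, 59)
print(round(ppv, 3))  # 0.41
```

A PPV of 40.6% therefore means that roughly four in ten children flagged by the app went on to receive an autism diagnosis, versus about one in seven for the questionnaire figure cited above.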
The ratio of autism to non-autism cases in the Duke study was higher than in the general population, suggesting the sample may be biased toward parents with developmental concerns about their children. The Duke researchers named possible validation bias as another limitation.
Even so, the results suggest there may be a role for screening apps such as SenseToKnow in the detection of autism. Combining the app and M-CHAT-R/F increased the positive predictive value to 63.4%, leading the researchers to predict that digital phenotyping will improve the accuracy of autism screening in real-world settings.
The study was funded by the National Institutes of Health’s Eunice Kennedy Shriver National Institute of Child Health and Human Development, as well as the National Institute of Mental Health and the Simons Foundation.