AI for supporting an autism spectrum disorder diagnosis (www.thelancet.com)

🤖 AI Summary
A broad set of recent studies demonstrates how machine learning applied to video, audio, eye-tracking, skeletal-pose, and biosignal data can support autism spectrum disorder (ASD) detection and symptom quantification. Tasks range from binary ASD/typical-development classification to severity and joint-attention scoring, using paradigms such as the still-face task, simulated-interaction tasks, ADOS videos, VR scenes, and mobile game play.

Feature sets include gaze fixations and scanpaths, facial action units and their dynamics, head movement, vocal features, skeletal keypoints, and physiological signals. Algorithms range from classical classifiers (SVM, Random Forest, XGBoost, logistic regression) to deep architectures (CNN, VGG16+LSTM, CNN-LSTM-attention, LSTM/bi-LSTM, TCNs). Reported validation strategies are mixed but often subject-wise (nested CV, leave-one-out, 10-fold, 80/20 splits); decision thresholds are reported (e.g., 0.5 or 0.7–0.8), with metrics emphasizing sensitivity, specificity, and PPV.

These works are significant because they point to objective, scalable biomarkers for earlier and more consistent ASD screening and stratification, especially around social attention and interaction behaviors. Technical limitations temper enthusiasm, however: many studies rely on small, imbalanced cohorts (n from single digits to a few hundred), heterogeneous feature engineering and preprocessing, inconsistent validation, and limited external testing, which raises concerns about generalizability and bias. The field therefore needs larger, demographically diverse datasets, standardized protocols for stimulus design and feature extraction, and robust external validation, alongside interpretability and ethical safeguards, before clinical deployment.
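To make the evaluation setup concrete, here is a minimal sketch of the subject-wise cross-validation and screening metrics the summary mentions. It uses synthetic data and a simple nearest-centroid rule as a stand-in for the classifiers named in the studies (SVM, Random Forest, etc.); nothing here reproduces any study's actual features or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic cohort: 40 subjects, 10 feature vectors
# (e.g. gaze/pose frames) per subject, with a subject-level label.
n_subjects, per_subject, n_feat = 40, 10, 8
subj_labels = rng.integers(0, 2, n_subjects)
groups = np.repeat(np.arange(n_subjects), per_subject)
y = np.repeat(subj_labels, per_subject)
X = rng.normal(size=(n_subjects * per_subject, n_feat)) + y[:, None]  # class shift

# Subject-wise 5-fold CV: all samples from a subject stay in one fold,
# so no subject appears in both train and test (avoids identity leakage,
# the main reason subject-wise splits matter in these studies).
folds = np.array_split(rng.permutation(n_subjects), 5)
tp = tn = fp = fn = 0
for test_subjects in folds:
    test_mask = np.isin(groups, test_subjects)
    Xtr, ytr = X[~test_mask], y[~test_mask]
    Xte, yte = X[test_mask], y[test_mask]
    # Nearest-centroid classifier: assign each test sample to the class
    # whose training-set mean feature vector is closer.
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    tp += int(np.sum((pred == 1) & (yte == 1)))
    tn += int(np.sum((pred == 0) & (yte == 0)))
    fp += int(np.sum((pred == 1) & (yte == 0)))
    fn += int(np.sum((pred == 0) & (yte == 1)))

sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
ppv = tp / (tp + fp)           # positive predictive value
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} ppv={ppv:.2f}")
```

The same pooling of fold-level confusion counts also applies to the probability thresholds the studies report (e.g., 0.5 or 0.7–0.8): a probabilistic classifier would simply replace the centroid-distance comparison with a cutoff on its predicted probability.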