Intelligent ADHD Detection Using Explainable Multimodal Fusion Model

Abstract:

Deep learning has been shown to diagnose Attention-Deficit/Hyperactivity Disorder (ADHD) accurately, but its lack of explainability has raised concerns about trustworthiness. Fortunately, the development of explainable artificial intelligence (XAI) offers a solution to this problem. In this study, we employed a VR-based GO/NOGO task with distractions, capturing participants' eye-movement, head-movement, and electroencephalography (EEG) data. We used the collected data to train an explainable multimodal fusion model. Besides classifying children as ADHD or normal, the proposed model also generates explanation heatmaps. The heatmaps indicate the importance of specific variables and timestamps in the EEG data, helping us analyze the patterns captured by the model. According to our observations, the model identified specific time intervals related to specific event-related potential (ERP) components. The heatmaps also demonstrate that the impact of distractions varies not only between GO and NOGO events but also between ADHD and normal children.
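The paper does not specify how its heatmaps are computed, but a common XAI approach for this kind of output is input-times-gradient saliency over an EEG window. The toy sketch below (a hypothetical linear scorer, not the authors' fusion model; all sizes and names are illustrative) shows how a per-variable, per-timestamp importance heatmap can be derived:

```python
import numpy as np

# Toy illustration only: gradient-style saliency for a linear scorer
# over one EEG window of shape (channels, timesteps).
rng = np.random.default_rng(0)

n_channels, n_timesteps = 4, 8                        # hypothetical sizes
X = rng.standard_normal((n_channels, n_timesteps))    # one EEG window
W = rng.standard_normal((n_channels, n_timesteps))    # stand-in learned weights

score = float((W * X).sum())        # linear "ADHD vs. control" score
# For a linear model, d(score)/dX = W, so input-times-gradient is W * X.
saliency = np.abs(W * X)
heatmap = saliency / saliency.max() # normalize to [0, 1] for display

# Each heatmap[i, t] reads as the importance of channel i at timestamp t,
# matching the "specific variables and timestamps" described in the abstract.
```

For a deep network, `W` would be replaced by the gradient of the class score with respect to the input, obtained by backpropagation.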

This conference paper cites research related to Nesplora Attention Kids Aula:

  • U. Díaz-Orueta, C. Garcia-López, N. Crespo-Eguílaz, R. Sánchez-Carpintero, G. Climent and J. Narbona, "AULA virtual reality test as an attention measure: Convergent validity with Conners' Continuous Performance Test," Child Neuropsychology, vol. 20, no. 3, pp. 328-342, 2014.


H.-J. Tsai, H.-K. Wu, C.-C. Chen, C.-H. Yeh, T.-Y. Chu and S.-C. Yeh, "Intelligent ADHD Detection Using Explainable Multimodal Fusion Model," 2024 IEEE 4th International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA), Chongqing, China, 2024, pp. 1713-1718, doi: 10.1109/ICIBA62489.2024.10868017.
