Center for Multimodal Neuroimaging

Explainable AI in Neuro-Imaging: Challenges and Future Directions

March 2020
Pamela Douglas

Decoding and encoding models are widely applied in cognitive neuroscience to find statistical associations between experimental conditions and brain response patterns. Depending on the application, these models can be used to read out representational content from functional activity data, determine whether a brain region contains specific information, predict diagnoses, and test theories about information processing in the brain. These multivariate models typically contain fitted linear components. However, even with linear models (the limit of simplicity), interpretation is complex. Voxels that carry little predictive information on their own may be assigned a strong weight if they are useful for noise cancellation, while informative voxels may be assigned a small weight when predictor variables carry overlapping information. In the deep learning setting, determining which inputs contribute to a model's predictions is even more complex. A variety of recent techniques can map relevance weights through the layers of a deep network back onto the input brain images. However, small noise perturbations, common in the MRI scanning environment, can produce large alterations in these relevance maps without altering the model's prediction, and in certain settings explanations can be highly divergent without any change to the model weights. In clinical applications, both false positives and omissions can have severe consequences. Explanatory methods should therefore be reliable and complete before their interpretations can appropriately reflect the level of generalization that the model provides.
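To make the linear-weight pitfall concrete, the following sketch (a hypothetical two-voxel simulation, not material from the talk) shows how a voxel that is uninformative on its own can receive a large decoding weight purely because it helps cancel noise shared with an informative voxel:

```python
# Minimal simulation: linear decoding weights are hard to interpret because a
# voxel carrying no signal by itself can get a large weight for noise cancellation.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.choice([-1.0, 1.0], size=n)      # experimental condition (+/- 1)
noise = rng.normal(size=n)               # noise shared by both voxels

voxel_signal = y + noise                 # informative voxel, contaminated by shared noise
voxel_noise = noise                      # uninformative voxel: pure shared noise

X = np.column_stack([voxel_signal, voxel_noise])

# Ordinary least-squares decoder: w = argmin ||Xw - y||^2
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("decoding weights:", w)            # approximately [ 1, -1 ]: the noise voxel gets a
                                         # large negative weight so the shared noise cancels

# Univariate check: the noise voxel alone carries no information about the condition
print("corr(voxel_noise, y):", np.corrcoef(voxel_noise, y)[0, 1])
```

The same simulation run with correlated informative voxels would show the converse effect: genuinely informative predictors receiving small weights because they carry overlapping information.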
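The fragility of relevance maps can be sketched in a similar spirit. The snippet below (a hypothetical setup assuming PyTorch is available; the model and data are random stand-ins, not from the talk) compares a simple gradient-based relevance map before and after a small input perturbation, alongside a check that the prediction itself is unchanged:

```python
# Compare a gradient-based relevance map for an input and a slightly perturbed copy.
# With trained deep networks on real scans, the reported instability is that the map
# can change substantially while the prediction stays the same; this toy model only
# illustrates the comparison itself.
import torch

torch.manual_seed(0)

# Stand-in "network": a small MLP over a flattened 8x8 patch of voxel values.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)
model.eval()

def relevance_map(x):
    """Simplest relevance map: gradient of the winning class score w.r.t. the input."""
    x = x.clone().requires_grad_(True)
    scores = model(x)
    scores[0, scores.argmax()].backward()
    return x.grad.detach().squeeze()

x = torch.randn(1, 64)                    # original input
x_noisy = x + 0.01 * torch.randn(1, 64)   # small perturbation, e.g. scanner noise

same_prediction = model(x).argmax() == model(x_noisy).argmax()
r1, r2 = relevance_map(x), relevance_map(x_noisy)
relative_change = torch.norm(r1 - r2) / torch.norm(r1)

print("prediction unchanged:", bool(same_prediction))
print("relative change in relevance map:", float(relative_change))
```

The same comparison applies to layer-wise relevance propagation or other attribution methods; the point is that an explanation can shift under perturbations that leave the model's output, and even its weights, untouched.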