Center for Multimodal Neuroimaging

The Machine Learning in Brain Imaging Series

The Machine Learning in Brain Imaging Series is a talk series sponsored by the NIMH that takes place every month on the NIH main campus in Bethesda, MD. Invited speakers work at the intersection of neuroscience and machine learning. They come to the NIH for one or two days to present their research at the Talk Series and to meet with NIH researchers with shared interests. Talks include discussions of new machine learning methods as well as applications of machine learning to neuroscience research. Previous speakers in the series include: Dr. Tulay Adali (University of Maryland), Dr. Joshua Vogelstein (Johns Hopkins University), Dr. Christopher Honey (Johns Hopkins University), Dr. Vernon Lawhern (Army Research Lab), Dr. Jonas Richiardi (Lausanne University Hospital), Dr. Gaël Varoquaux (INRIA), and Dr. Yoshua Bengio (University of Montreal).

All announcements are distributed via the MachineLearning-BrainImaging NIH e-mail list. If you want to join the list, or want to provide suggestions for future speakers, please contact Javier Gonzalez-Castillo or Francisco Pereira.

Recent Talks
Events in May 2020

Using EEG to Decode Semantics During an Artificial Language Learning Task

Speaker: Alona Fyshe

As we learn a new language, we begin to map real-world concepts onto new orthographic representations until words in the new language conjure as rich a semantic representation as words in our native language. Using electroencephalography, we show that it is possible to detect a newly formed semantic mapping as it is learned. We show that the localization of the neural representations is highly distributed, and that the onset of the semantic representation is delayed compared to words in a native language. Our work shows that language learning can be monitored using EEG, suggesting new avenues for both language and learning research.
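A minimal sketch of the kind of time-resolved decoding analysis alluded to above, in which a classifier is trained separately at each time point so that the earliest above-chance accuracy approximates the onset of the semantic representation. The data shapes, labels, and logistic-regression classifier are illustrative assumptions, not details of the speaker's method.

```python
# Sketch only: time-resolved decoding of hypothetical EEG epochs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 100     # assumed EEG epoch dimensions
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)                 # e.g., two semantic categories

# Decode the category separately at each time point; the earliest time at
# which accuracy rises above chance approximates the representation's onset.
accuracy_over_time = []
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    accuracy_over_time.append(acc)

print("peak decoding accuracy:", max(accuracy_over_time))
```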

Events in March 2020

Explainable AI in Neuro-Imaging: Challenges and Future Directions

Speaker: Pamela Douglas

Decoding and encoding models are widely applied in cognitive neuroscience to find statistical associations between experimental context and brain response patterns. Depending on the nature of their application, these models can be used to read out representational content from functional activity data, determine if a brain region contains specific information, predict diagnoses, and test theories about brain information processing. These multivariate models typically contain fitted linear components. However, even with linear models - the limit of simplicity - interpretation is complex. Voxels that carry little predictive information alone may be assigned a strong weight if used for noise cancellation purposes, and informative voxels may be assigned a small weight when predictor variables carry overlapping information. In the deep learning setting, determining which inputs contribute to model predictions is even more complex. A variety of recent techniques are now available to map relevance weights through the layers of a deep learning network onto input brain images. However, small noise perturbations, common in the MRI scanning environment, can produce large alterations in the relevance maps without altering the model prediction. In certain settings, explanations can be highly divergent without even altering the model weights. In clinical applications, both false positives and omissions can have severe consequences. Explanatory methods should be reliable and complete before interpretation can appropriately reflect the level of generalization that the model provides.
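A minimal sketch of the suppressor-variable effect described above: a voxel that contains only noise, and no task information on its own, can still receive a large linear decoding weight because it helps cancel noise shared with an informative voxel. The synthetic data and ordinary least-squares decoder are assumptions for illustration, not the speaker's analysis.

```python
# Sketch only: why decoding weights are hard to interpret.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
signal = rng.standard_normal(n)          # quantity to be decoded
noise = rng.standard_normal(n)           # shared nuisance component

voxel_informative = signal + noise       # mixes signal with noise
voxel_suppressor = noise                 # noise only, no signal

X = np.column_stack([voxel_informative, voxel_suppressor])
decoder = LinearRegression().fit(X, signal)

# Weights are roughly [1, -1]: the suppressor voxel gets a large (negative)
# weight even though it is uncorrelated with the decoded signal.
print("decoding weights:", decoder.coef_)
print("corr(suppressor, signal):",
      np.corrcoef(voxel_suppressor, signal)[0, 1])   # approximately 0
```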

Events in February 2020

Machine Learning in Neuroimaging: Applications to Clinical Neuroscience and Neurooncology

Speaker: C. Davatzikos

Machine learning has deeply penetrated the neuroimaging field in the past 15 years, by providing a means to construct imaging signatures of normal and pathologic brain states on an individual person basis. In this talk, I will discuss examples from our laboratory’s work on imaging signatures of brain aging and early stages of neurodegenerative diseases, brain development and neuropsychiatric disorders, as well as brain cancer precision diagnostics and estimation of molecular characteristics. I will discuss some challenges, such as disease heterogeneity and integration and harmonization of large datasets in multi-institutional consortia. I will present some of our work in these directions.

Events in December 2019

How computational models of words and sentences can reveal meaning in the brain

Speaker: R. Raizada

Linguistic meaning is full of structure, and in order to understand language the human brain must somehow represent that structure. My colleagues and I have explored how the brain achieves this, using computational models of the meanings of words, and also of words combined into phrases and sentences. In particular, we have been tackling the following questions: How are the brain's representations of meaning structured, and how do they relate to models of meaning from computer science and cognitive psychology? How do neural representations of individual words relate to the representations of multiple words combined into phrases and sentences? Can people's behavioural judgments capture aspects of meaning that are missed by models derived from word co-occurrences in large bodies of text, and does this enable better neural decoding? I will address these questions and outline some of the many unsolved problems that remain.
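A minimal sketch of an encoding-model analysis in the spirit of the questions above: word vectors are combined into a phrase representation (here by simple averaging, one of the simplest compositional models) and a ridge regression maps phrase vectors to voxel responses. All data, dimensions, and the averaging composition are assumptions for illustration, not the speaker's pipeline.

```python
# Sketch only: composed word vectors predicting synthetic brain responses.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_phrases, vec_dim, n_voxels = 240, 300, 1000    # assumed sizes

# Pretend each phrase is two words; average their vectors as a simple
# compositional model (richer composition functions are possible).
word1 = rng.standard_normal((n_phrases, vec_dim))
word2 = rng.standard_normal((n_phrases, vec_dim))
phrase_vectors = (word1 + word2) / 2.0

# Synthetic voxel responses that depend linearly on the phrase vectors.
true_map = rng.standard_normal((vec_dim, n_voxels)) * 0.1
brain = phrase_vectors @ true_map + rng.standard_normal((n_phrases, n_voxels))

X_tr, X_te, Y_tr, Y_te = train_test_split(
    phrase_vectors, brain, test_size=0.25, random_state=0)
encoder = Ridge(alpha=10.0).fit(X_tr, Y_tr)
print("held-out R^2:", encoder.score(X_te, Y_te))
```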

Events in November 2019

Passive Detection of Perceived Stress Using Location-driven Sensing Technologies at Scale

Speaker: Rajesh Balan

Stress and depression are common afflictions in all walks of life. When left unmanaged, stress can inhibit productivity or cause depression. Depression can also occur independently of stress. There has been a sharp rise in mobile health initiatives to monitor stress and depression. However, these initiatives usually require users to install dedicated apps or multiple sensors, making such solutions hard to scale. Moreover, they emphasize sensing individual factors and overlook social interactions, which play a significant role in influencing stress and depression. In this talk, I will describe StressMon, a stress and depression detection system that leverages single-attribute location data passively sensed from the WiFi infrastructure. Using the location data, it extracts a detailed set of movement and physical group interaction pattern features, without requiring explicit user actions or software installation on mobile phones. These features are used in two different machine learning models to detect stress and depression. To validate StressMon, we conducted three longitudinal studies at a university with different groups of students, totaling up to 108 participants. In these experiments, StressMon detected severely stressed students with a 96% true positive rate (TPR), an 80% true negative rate (TNR), and a 0.97 area under the ROC curve (AUC), using a six-day prediction window. StressMon was also able to detect depression with a 91% TPR, a 66% TNR, and a 0.88 AUC, using a 15-day window.
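A minimal sketch of how the evaluation metrics quoted above (TPR, TNR, AUC) are computed for a binary stress classifier. The synthetic features, labels, and random-forest model are assumptions for illustration only, not the StressMon system.

```python
# Sketch only: reporting TPR, TNR, and AUC for a binary classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_features = 400, 20                    # hypothetical feature set
X = rng.standard_normal((n_users, n_features))   # e.g., movement / group features
y = rng.integers(0, 2, n_users)                  # 1 = severely stressed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
tpr = tp / (tp + fn)                             # true positive rate
tnr = tn / (tn + fp)                             # true negative rate
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"TPR={tpr:.2f}  TNR={tnr:.2f}  AUC={auc:.2f}")
```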