Center for Multimodal Neuroimaging

The Machine Learning in Brain Imaging Series

The Machine Learning in Brain Imaging Series is a talk series sponsored by the NIMH that takes place every month on the NIH main campus in Bethesda, MD. Invited speakers work at the intersection of neuroscience and machine learning. They visit the NIH for one or two days to present their research in the series and to meet with NIH researchers who share their interests. Talks cover both new machine learning methods and applications of machine learning to neuroscience research. Previous speakers in the series include: Dr. Tulay Adali (University of Maryland), Dr. Joshua Vogelstein (Johns Hopkins University), Dr. Christopher Honey (Johns Hopkins University), Dr. Vernon Lawhern (Army Research Lab), Dr. Jonas Richiardi (Lausanne University Hospital), Dr. Gaël Varoquaux (INRIA) and Dr. Yoshua Bengio (University of Montreal).

All announcements are distributed via the MachineLearning-BrainImaging NIH e-mail list. To join the list or to suggest future speakers, please contact Javier Gonzalez-Castillo or Francisco Pereira.

Recent Talks
Events in October 2019

Decoding Cognitive Functions

[Photo: Russ Poldrack]

The goal of cognitive neuroscience is to understand how cognitive functions map onto brain systems, but the field has largely focused on mapping or decoding task features rather than the cognitive functions that underlie them. I will first discuss the challenge of characterizing cognitive functions, in the context of the Cognitive Atlas ontology. I will then turn to a set of studies that have used ontological frameworks to perform ontology-driven decoding. I will conclude by discussing the need to move from folk-psychological to computational cognitive ontologies.

Events in September 2019

Can we relate machine learning models to brain signals?

[Photo: Jessica Schrouff]

Machine learning models are increasingly being used to address cognitive and clinical neuroscience questions. However, various limitations can prevent us from inferring direct relationships between a trained model and the underlying brain signals. In this talk, we will discuss these limitations, with a focus on linear model weights and confounding factors, and explore promising avenues of research.
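One of the limitations mentioned above has a simple concrete illustration (a minimal sketch with simulated data and scikit-learn, not taken from the talk): the weights of a linear decoder cannot be read directly as a map of where the signal is, because a feature that merely cancels a confound can receive a large weight. Transforming the weights into activation patterns, in the spirit of Haufe et al. (2014), makes the difference visible.

    # Minimal sketch (simulated data, not from the talk): a channel that carries
    # only a confound gets a large decoder weight even though it holds no signal.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 1000
    signal = rng.standard_normal(n)        # latent variable being decoded
    confound = rng.standard_normal(n)      # shared noise source
    x1 = signal + confound                 # channel with signal + confound
    x2 = confound                          # channel with confound only
    X = np.column_stack([x1, x2])

    decoder = LinearRegression().fit(X, signal)
    print("decoder weights:", decoder.coef_)        # roughly [+1, -1]

    # Haufe-style transformation: pattern = Cov(X) @ w. The pattern shows that
    # only x1 actually covaries with the decoded signal.
    pattern = np.cov(X, rowvar=False) @ decoder.coef_
    print("activation pattern:", pattern)           # roughly [1, 0]

Here the decoder assigns x2 a large weight purely to subtract the confound, which is exactly the interpretability pitfall discussed above.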

Events in July 2019

Mapping Representations of Language Semantics in Human Cortex

[Photo: Alex Huth]

How does the human brain process and represent the meaning of language? We investigate this question by building computational models of language processing and then using those models to predict functional magnetic resonance imaging (fMRI) responses to natural language stimuli. The technique we use, voxel-wise encoding models, provides a sensitive method for probing richly detailed cortical representations. This method also allows us to take advantage of natural stimuli, which elicit stronger, more reliable, and more varied brain responses than tightly controlled experimental stimuli. In this talk I will discuss how we have used these methods to study how the human brain represents the meaning of language, and how those representations are linked to visual representations. The results suggest that the language and visual systems in the human brain might form a single, contiguous map over which meaning is represented.
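The sketch below illustrates the generic voxel-wise encoding-model recipe described above: ridge regression from stimulus features to each voxel's response, scored by prediction accuracy on held-out data. It uses random placeholder arrays and scikit-learn; the array sizes, ridge penalty, and scoring choice are assumptions for illustration, not the speaker's actual pipeline.

    # Minimal sketch of a voxel-wise encoding model (generic recipe, placeholder
    # data): ridge regression maps stimulus features (e.g. word embeddings of a
    # narrated story) to each voxel's BOLD time course, and performance is the
    # per-voxel correlation between predicted and measured held-out responses.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_train, n_test, n_features, n_voxels = 300, 100, 50, 200

    X_train = rng.standard_normal((n_train, n_features))    # stimulus features
    X_test = rng.standard_normal((n_test, n_features))
    true_w = rng.standard_normal((n_features, n_voxels))     # synthetic ground truth
    Y_train = X_train @ true_w + rng.standard_normal((n_train, n_voxels))
    Y_test = X_test @ true_w + rng.standard_normal((n_test, n_voxels))

    # One multi-output ridge fit; equivalent to fitting each voxel separately.
    encoder = Ridge(alpha=10.0).fit(X_train, Y_train)
    Y_pred = encoder.predict(X_test)

    def voxelwise_corr(a, b):
        # Pearson correlation computed independently for each voxel (column).
        a = (a - a.mean(0)) / a.std(0)
        b = (b - b.mean(0)) / b.std(0)
        return (a * b).mean(0)

    print("median held-out correlation:", np.median(voxelwise_corr(Y_pred, Y_test)))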

Events in January 2018

AI and Deep Learning

Deep learning arose around 2006 as a renewal of neural network research that allowed such models to have more layers. It is now leading a renewal of AI both inside and outside academia, with billions of dollars being invested and expected economic fallout in the trillions by 2030. Theoretical investigations have shown that functions obtained as deep compositions of simpler functions (which includes both deep and recurrent nets) can express highly varying functions (with many ups and downs and many distinguishable input regions) much more efficiently, i.e., with fewer parameters, than shallower models. Empirical work in a variety of applications has demonstrated that, when well trained, such deep architectures can be highly successful, breaking through the previous state of the art in many areas, including speech recognition, object recognition, game playing, language modeling, machine translation and transfer learning. In terms of social impact, the most interesting applications are probably in the medical domain, starting with medical image analysis but expanding to many other areas. Finally, I will summarize some of the recent work aimed at bridging the remaining gap between deep learning and neuroscience, including approaches that implement functional equivalents of backpropagation in a more biologically plausible way, as well as ongoing work connecting language, cognition, reinforcement learning and the learning of abstract representations.
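The claim that deep compositions can express highly varying functions with few parameters has a simple worked illustration (a sketch of a standard depth-separation example, not material from the talk): composing a one-bump "tent" function with itself k times yields a function with 2^k linear pieces while adding only a constant number of parameters per layer, whereas a single-layer piecewise-linear model would need on the order of 2^k units to match it.

    # Minimal sketch (standard depth-separation example, not from the talk):
    # each composition of the tent function doubles the number of ups and downs,
    # so depth buys exponentially many linear pieces for linearly many parameters.
    import numpy as np

    def tent(x):
        # A tiny piecewise-linear "layer": rises to 1 at x = 0.5, falls back to 0.
        return 2 * np.minimum(x, 1 - x)

    x = np.linspace(0, 1, 1601)
    y = x
    depth = 4
    for _ in range(depth):
        y = tent(y)          # compose the same simple function repeatedly

    # After 4 compositions the function has 2**4 = 16 linear pieces and
    # 2**3 = 8 local maxima on [0, 1].
    n_peaks = int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))
    print("local maxima:", n_peaks)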

Events in December 2017

What can deep neural networks tell us about human brain and behavior?

Deep neural networks can now achieve human-like levels of performance on tasks such as visual categorization, and they are increasingly being viewed as viable computational models of brain function. In this talk I will present recent work from my lab comparing deep neural networks with behavioral and neuroimaging experiments (fMRI and MEG) investigating object and scene perception. While deep neural networks show a correspondence with both neuroimaging and behavioral data, our results reveal a complex relationship between the three domains. Given these findings, a key question is how we can move beyond establishing mere correspondences between models and brain data towards generating truly novel insight into the sensory representations underlying adaptive behavior.
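One widely used way of establishing such correspondences (sketched below with placeholder data, as a generic method rather than this lab's actual analysis) is representational similarity analysis: build a representational dissimilarity matrix (RDM) for a network layer and for a brain region over the same stimuli, then correlate the two.

    # Minimal sketch of representational similarity analysis (generic method,
    # placeholder data): correlate the stimulus-by-stimulus dissimilarity
    # structure of a network layer with that of measured brain patterns.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_stimuli = 40

    dnn_layer = rng.standard_normal((n_stimuli, 512))        # e.g. unit activations
    brain_patterns = rng.standard_normal((n_stimuli, 200))   # e.g. fMRI ROI voxels

    # RDM = pairwise (1 - correlation) distances between stimulus patterns.
    dnn_rdm = pdist(dnn_layer, metric="correlation")
    brain_rdm = pdist(brain_patterns, metric="correlation")

    # Rank correlation between the two RDMs quantifies representational agreement.
    rho, p = spearmanr(dnn_rdm, brain_rdm)
    print(f"model-brain RDM correlation: rho={rho:.3f} (p={p:.3f})")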