Center for Multimodal Neuroimaging

Recent Talks

Events in May 2020

Using EEG to Decode Semantics During an Artificial Language Learning Task

Speaker: Alona Fyshe

As we learn a new language, we begin to map real-world concepts onto new orthographic representations, until words in the new language conjure as rich a semantic representation as words in our native language. Using electroencephalography (EEG), we show that it is possible to detect a newly formed semantic mapping as it is learned. We show that the neural representations are highly distributed, and that the onset of the semantic representation is delayed compared to words in a native language. Our work shows that language learning can be monitored using EEG, suggesting new avenues for both language and learning research.
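
As a rough illustration of this kind of analysis, the sketch below decodes a binary semantic category from simulated EEG epochs separately at each time point, which is how a delayed onset of decodability can be detected. The data, dimensions, and classifier are hypothetical stand-ins, not the speaker's pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_trials, n_channels, n_times = 200, 64, 100   # hypothetical EEG epochs
    X = rng.standard_normal((n_trials, n_channels, n_times))
    y = rng.integers(0, 2, n_trials)               # semantic category per trial

    # Decode the category from channel amplitudes at each time point separately;
    # the first time point with reliably above-chance accuracy estimates the
    # onset of the semantic representation.
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    accuracy = np.array([cross_val_score(clf, X[:, :, t], y, cv=5).mean()
                         for t in range(n_times)])
    print("peak decoding accuracy:", accuracy.max())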

Events in March 2020

Explainable AI in Neuro-Imaging: Challenges and Future Directions

Speaker: Pamela Douglas

Decoding and encoding models are widely applied in cognitive neuroscience to find statistical associations between experimental context and brain response patterns. Depending on the nature of their application, these models can be used to read out representational content from functional activity data, determine whether a brain region contains specific information, predict diagnoses, and test theories about brain information processing. These multivariate models typically contain fitted linear components. However, even with linear models, the limit of simplicity, interpretation is complex. Voxels that carry little predictive information alone may be assigned a strong weight if used for noise cancellation, and informative voxels may be assigned a small weight when predictor variables carry overlapping information. In the deep learning setting, determining which inputs contribute to model predictions is even more complex. A variety of recent techniques are now available for mapping relevance weights through the layers of a deep learning network onto input brain images. However, small noise perturbations, common in the MRI scanning environment, can produce large alterations in the relevance maps without altering the model prediction. In certain settings, explanations can diverge widely without any change to the model weights. In clinical applications, both false positives and omissions can have severe consequences. Explanatory methods should be reliable and complete before their interpretations can appropriately reflect the level of generalization that the model provides.
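
The point about noise-cancelling voxels has a well-known remedy in the linear case, not necessarily the one the speaker uses: transforming decoder weights into encoding ("activation") patterns (Haufe et al., 2014). In the toy example below, built on made-up data, one voxel receives a large decoder weight purely for noise cancellation, while its transformed pattern value is near zero, matching its true information content.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 2000
    signal = rng.standard_normal(n)              # quantity being decoded
    noise = rng.standard_normal(n)
    X = np.column_stack([signal + noise,         # voxel 1: signal plus shared noise
                         noise])                 # voxel 2: shared noise only

    w = LinearRegression().fit(X, signal).coef_
    # w is roughly [1, -1]: voxel 2 gets a large weight purely to cancel noise,
    # even though it carries no signal on its own.

    s = X @ w                                    # latent decoder output
    pattern = np.cov(X, rowvar=False) @ w / np.var(s)
    # pattern is roughly proportional to [1, 0]: the encoding pattern correctly
    # assigns the noise-only voxel a near-zero value.
    print(w, pattern)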

Events in February 2020

Machine Learning in Neuroimaging: Applications to Clinical Neuroscience and Neurooncology

Speaker: C. Davatzikos

Machine learning has deeply penetrated the neuroimaging field over the past 15 years by providing a means to construct imaging signatures of normal and pathologic brain states on an individual-person basis. In this talk, I will discuss examples from our laboratory's work on imaging signatures of brain aging and early stages of neurodegenerative diseases, brain development and neuropsychiatric disorders, and brain cancer precision diagnostics and estimation of molecular characteristics. I will discuss some challenges, such as disease heterogeneity and the integration and harmonization of large datasets in multi-institutional consortia, and present some of our work in these directions.
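
As a minimal sketch of what an individual-level imaging signature can look like in practice, the toy example below cross-validates a linear classifier on hypothetical regional imaging features; every name, dimension, and label here is invented for illustration and is not the laboratory's actual method.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_subjects, n_rois = 300, 120                # hypothetical regional volumes
    X = rng.standard_normal((n_subjects, n_rois))
    y = rng.integers(0, 2, n_subjects)           # e.g., patient vs. control

    # The "signature" is the weight map of a linear classifier; once fitted,
    # it yields a score for each individual rather than a group-level contrast.
    model = make_pipeline(StandardScaler(), LinearSVC(max_iter=10000))
    print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())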

Events in December 2019

How computational models of words and sentences can reveal meaning in the brain

Speaker: R. Raizada

Linguistic meaning is full of structure, and in order to understand language the human brain must somehow represent that structure. My colleagues and I have explored how the brain achieves this, using computational models of the meanings of words, and also of words combined into phrases and sentences. In particular, we have been tackling the following questions: How are the brain's representations of meaning structured, and how do they relate to models of meaning from computer science and cognitive psychology? How do neural representations of individual words relate to the representations of multiple words combined into phrases and sentences? Can people's behavioural judgments capture aspects of meaning that are missed by models derived from word co-occurrences in large bodies of text, and does this enable better neural decoding? I will address these questions, and outline some of the many unsolved problems that remain.
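
One common way to connect co-occurrence-derived word vectors to brain data is similarity-based decoding: learn a linear map from brain responses into the semantic space, then identify a held-out word as the nearest neighbour among candidate word vectors. The sketch below uses synthetic data and generic ridge regression, as an illustration of the general approach rather than the speakers' analysis.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_words, n_dims, n_voxels = 60, 50, 500
    semantic = rng.standard_normal((n_words, n_dims))   # word vectors, e.g.,
                                                        # from co-occurrence counts
    true_map = rng.standard_normal((n_dims, n_voxels))
    brain = semantic @ true_map + rng.standard_normal((n_words, n_voxels))

    # Fit the brain-to-semantics map on all words but one, then decode the
    # held-out word by cosine similarity in semantic space.
    test = 0
    train = np.arange(n_words) != test
    model = Ridge(alpha=10.0).fit(brain[train], semantic[train])
    pred = model.predict(brain[[test]]).ravel()
    sims = semantic @ pred / (np.linalg.norm(semantic, axis=1) * np.linalg.norm(pred))
    print("decoded word index:", sims.argmax())         # should equal `test`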

Events in November 2019

Passive Detection of Perceived Stress Using Location-driven Sensing Technologies at Scale

Speaker: Rajesh Balan

Stress and depression are common afflictions in all walks of life. When left unmanaged, stress can inhibit productivity or cause depression, and depression can also occur independently of stress. There has been a sharp rise in mobile health initiatives to monitor stress and depression. However, these initiatives usually require users to install dedicated apps or multiple sensors, making such solutions hard to scale. Moreover, they emphasize sensing individual factors and overlook social interactions, which play a significant role in influencing stress and depression. In this talk I will describe StressMon, a stress and depression detection system that leverages single-attribute location data passively sensed from the WiFi infrastructure. Using the location data, it extracts a detailed set of movement and physical group-interaction pattern features without requiring explicit user actions or software installation on mobile phones. These features are used in two different machine learning models to detect stress and depression. To validate StressMon, we conducted three different longitudinal studies at a university with different groups of students, totaling 108 participants. In these experiments, StressMon detected severely stressed students with a 96% true positive rate (TPR), an 80% true negative rate (TNR), and a 0.97 area under the ROC curve (AUC), using a six-day prediction window; it detected depression with a 91% TPR, a 66% TNR, and a 0.88 AUC, using a 15-day window.
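
For readers unfamiliar with the reported metrics, the sketch below shows how TPR, TNR, and AUC are computed for a generic classifier on hypothetical window-aggregated features; it is not StressMon's actual model, features, or data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import confusion_matrix, roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Hypothetical per-student features aggregated over a prediction window,
    # e.g., movement and group-interaction statistics from location traces.
    X = rng.standard_normal((500, 12))
    y = rng.integers(0, 2, 500)                  # 1 = severely stressed

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    print("TPR:", tp / (tp + fn))                # true positive rate (sensitivity)
    print("TNR:", tn / (tn + fp))                # true negative rate (specificity)
    print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))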

Events in October 2019

Decoding Cognitive Functions

Speaker: Russ Poldrack

The goal of cognitive neuroscience is to understand how cognitive functions map onto brain systems, but the field has largely focused on mapping or decoding task features rather than the cognitive functions that underlie them. I will first discuss the challenge of characterizing cognitive functions, in the context of the Cognitive Atlas ontology. I will then turn to a set of studies that have used ontological frameworks to perform ontology-driven decoding. I will conclude by discussing the need to move from folk-psychological to computational cognitive ontologies.
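
One simple way to operationalize ontology-driven decoding is as multi-label classification: each activation map is annotated with ontology terms, and a classifier predicts those terms for new maps. The sketch below is a toy version of that idea with synthetic data and invented term names, not the speaker's method.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multioutput import MultiOutputClassifier

    rng = np.random.default_rng(0)
    n_maps, n_voxels = 400, 1000
    X = rng.standard_normal((n_maps, n_voxels))  # activation maps, one per row
    # Multi-label annotations with hypothetical ontology terms, e.g.,
    # "working memory", "response inhibition", "reward processing".
    Y = rng.integers(0, 2, (n_maps, 3))

    clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
    print(clf.predict(X[:5]))                    # predicted term assignments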

Events in September 2019

Can we relate machine learning models to brain signals?

Speaker: Jessica Schroof

Machine learning models are increasingly being used to study cognitive neuroscience questions and to investigate clinical neuroscience questions. However, various limitations can prevent us from inferring direct relationships between the fitted model and the underlying brain signals. In this talk, we will discuss these limitations, with a focus on linear model weights and confounding factors, and explore promising avenues of research.
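
A minimal illustration of the confounding problem, on made-up data: below, a classifier appears to decode the label from the features only because both are driven by a shared confound, and regressing the confound out of the features removes most of the apparent signal. (Adjusting on the full dataset before cross-validation, as done here for brevity, itself leaks information; it is a simplification, not a recommended pipeline.)

    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n, p = 400, 30
    confound = rng.standard_normal(n)            # e.g., head motion or age
    y = (confound + rng.standard_normal(n) > 0).astype(int)
    X = np.outer(confound, rng.standard_normal(p)) + rng.standard_normal((n, p))

    # Decoding looks successful only because X and y share the confound.
    clf = LogisticRegression(max_iter=1000)
    print("raw accuracy:", cross_val_score(clf, X, y, cv=5).mean())

    # A common adjustment: regress the confound out of every feature.
    C = confound[:, None]
    X_adj = X - LinearRegression().fit(C, X).predict(C)
    print("adjusted accuracy:", cross_val_score(clf, X_adj, y, cv=5).mean())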

Open Science

Speaker: Travis R.

There is a rising chorus of calls for science to become more reliable, replicable, and reproducible. Many of these demands focus on increasing statistical power by collecting larger sample sizes. However, science is not a homogeneous pursuit, and larger sample sizes may not make sense for many types of research problems. In this talk, I describe why improved statistical power solves many research problems, and what other tools are available to you when a larger sample size does not make sense.
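
For context on the power-oriented prescriptions the talk responds to, a standard sample-size calculation looks like the sketch below; the effect size, alpha, and power are conventional textbook defaults, not values from the talk.

    from statsmodels.stats.power import TTestIndPower

    # Sample size per group for an independent two-sample t-test to detect a
    # medium effect (Cohen's d = 0.5) with 80% power at alpha = 0.05.
    n_per_group = TTestIndPower().solve_power(effect_size=0.5, power=0.8, alpha=0.05)
    print(round(n_per_group))    # about 64 per group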

Events in July 2019

Mapping Representations of Language Semantics in Human Cortex

Speaker: Alex Huth

How does the human brain process and represent the meaning of language? We investigate this question by building computational models of language processing and then using those models to predict functional magnetic resonance imaging (fMRI) responses to natural language stimuli. The technique we use, voxel-wise encoding models, provides a sensitive method for probing richly detailed cortical representations. This method also allows us to take advantage of natural stimuli, which elicit stronger, more reliable, and more varied brain responses than tightly controlled experimental stimuli. In this talk I will discuss how we have used these methods to study how the human brain represents the meaning of language, and how those representations are linked to visual representations. The results suggest that the language and visual systems in the human brain might form a single, contiguous map over which meaning is represented.
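
In its simplest form, a voxel-wise encoding model is a regularized linear regression from stimulus features to each voxel's response, scored by prediction accuracy on held-out data. The sketch below uses synthetic data and generic ridge regression as a schematic of that approach; the feature description and all dimensions are hypothetical.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_train, n_test, n_feats, n_voxels = 300, 50, 100, 2000
    # Hypothetical stimulus features, e.g., semantic descriptors of the words
    # heard at each fMRI time point.
    F_tr = rng.standard_normal((n_train, n_feats))
    F_te = rng.standard_normal((n_test, n_feats))
    W = rng.standard_normal((n_feats, n_voxels))
    Y_tr = F_tr @ W + 5 * rng.standard_normal((n_train, n_voxels))
    Y_te = F_te @ W + 5 * rng.standard_normal((n_test, n_voxels))

    # Fit one regularized linear model per voxel (here jointly), then score
    # each voxel by the correlation between predicted and measured held-out
    # responses.
    pred = Ridge(alpha=100.0).fit(F_tr, Y_tr).predict(F_te)
    r = np.array([np.corrcoef(pred[:, v], Y_te[:, v])[0, 1]
                  for v in range(n_voxels)])
    print("median held-out correlation:", np.median(r))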

Events in June 2019