Center for Multimodal Neuroimaging

Recent Talks

Events in June 2022

NIMH Workshop on Combined PET-MRI

PET and fMRI provide two separate and partially overlapping methods for neuroimaging. While PET relies on metabolic processing as measured by the decay of radioactive tracers, fMRI measures changes in the blood-oxygen-level-dependent (BOLD) signal. Each method provides quantitative metrics of brain function and dysfunction useful in the diagnosis and understanding of clinical brain disorders and normal development. The combination of methods adds value: the whole is greater than the sum of the parts.

Specific Aims

  • Provide the NIMH community with a comprehensive overview of the latest developments, opportunities, and future directions in using combined PET-MRI for neuroimaging
  • Invigorate discussion and use of PET-MRI neuroimaging within the NIH/NIMH communities, including its potential implications for clinical diagnoses and treatments
  • Serve as a catalyst for collaborations between NIMH researchers and extramural experts using PET-MRI to study clinical problems and advance basic research knowledge

The workshop will be divided into two sections: (1) individual lectures showcasing the use of combined PET-MRI neuroimaging, with a particular focus on either theoretical insights or methodological advances; and (2) an open panel discussion on the future of PET-MRI neuroimaging, identifying the necessary steps in the evolution of data collection, analysis, and interpretation for the benefit of neuroscience research.

Events in August 2021

NIMH Workshop on Naturalistic Stimuli and Individual Differences

The Center for Multimodal Neuroimaging is thrilled to announce the upcoming workshop on Naturalistic Stimuli and Individual Differences with a range of extramural and intramural speakers presenting on their cutting-edge work. The workshop seeks to:

  1. bring together top scientists using naturalistic stimuli and/or the study of individual differences in neuroimaging;
  2. showcase theoretical, methodological, and analytical advances in these areas of research; and
  3. serve as a catalyst for collaborations between presenters, biomedical NIH researchers, and experts from around the world.

    Speakers: Chris Baldassano, Janice Chen, Elizabeth DuPre, Emily S. Finn, Javier Gonzalez-Castillo, Uri Hasson, Jeremy Manning, Carolyn Parkinson, Elizabeth Redcay, Monica Rosenberg, Tamara Vanderwal, Gang Chen, Peter Molfese


Chris Baldassano - https://videocast.nih.gov/watch=42538&start=133
Elizabeth DuPre - https://videocast.nih.gov/watch=42538&start=2831
Janice Chen - https://videocast.nih.gov/watch=42538&start=5927
Gang Chen - https://videocast.nih.gov/watch=42538&start=7450
Emily S Finn - https://videocast.nih.gov/watch=42538&start=10164
Javier Gonzalez-Castillo - https://videocast.nih.gov/watch=42538&start=13214
Jeremy Manning - https://videocast.nih.gov/watch=42538&start=16766
Carolyn Parkinson - https://videocast.nih.gov/watch=42538&start=18368
Elizabeth Redcay - https://videocast.nih.gov/watch=42538&start=20638
Monica Rosenberg - https://videocast.nih.gov/watch=42538&start=23549
Tamara Vanderwal - https://videocast.nih.gov/watch=42538&start=26564
Uri Hasson - https://videocast.nih.gov/watch=42538&start=29491

For more information go to https://cmn.nimh.nih.gov/cmnworkshop2021

Events in May 2020

Using EEG to Decode Semantics During an Artificial Language Learning Task

Speaker: Alona Fyshe

As we learn a new language, we begin to map real-world concepts onto new orthographic representations until words in the new language conjure as rich a semantic representation as those in our native language. Using electroencephalography (EEG), we show that it is possible to detect a newly formed semantic mapping as it is learned. We show that the localization of the neural representations is highly distributed, and that the onset of the semantic representation is delayed compared to words in a native language. Our work shows that language learning can be monitored using EEG, suggesting new avenues for both language and learning research.
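
The time-resolved decoding approach described here can be sketched as follows. This is a toy illustration with synthetic data and a simple leave-one-out nearest-centroid classifier, not the authors' actual pipeline (their data, classifier, and preprocessing are not specified above); the trial counts, channel counts, and injected "semantic" effect are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 40 EEG trials x 20 time points x 8 channels, two
# semantic classes, with class information appearing only from time
# point 10 onward to mimic a delayed onset of the representation.
n_trials, n_times, n_chans, onset = 40, 20, 8, 10
labels = rng.integers(0, 2, n_trials)
eeg = rng.normal(0, 1, (n_trials, n_times, n_chans))
effect = rng.normal(size=n_chans)
effect *= 3.0 / np.linalg.norm(effect)   # fixed-size class topography
eeg[labels == 1, onset:] += effect

def window_accuracy(X, y, t):
    """Leave-one-out nearest-centroid decoding at a single time point."""
    correct = 0
    for i in range(len(y)):
        train = np.ones(len(y), bool)
        train[i] = False
        c0 = X[train & (y == 0), t].mean(axis=0)
        c1 = X[train & (y == 1), t].mean(axis=0)
        closer_to_1 = np.linalg.norm(X[i, t] - c1) < np.linalg.norm(X[i, t] - c0)
        correct += closer_to_1 == (y[i] == 1)
    return correct / len(y)

acc = [window_accuracy(eeg, labels, t) for t in range(n_times)]
# Estimated onset: first time point whose accuracy clearly exceeds chance.
onset_est = next(t for t, a in enumerate(acc) if a > 0.75)
```

Running the classifier independently at each time point is what lets the onset of above-chance decoding serve as an estimate of when the semantic representation emerges.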

Events in March 2020

Explainable AI in Neuro-Imaging: Challenges and Future Directions

Speaker: Pamela Douglas

Decoding and encoding models are widely applied in cognitive neuroscience to find statistical associations between experimental context and brain response patterns. Depending on the nature of their application, these models can be used to read out representational content from functional activity data, determine if a brain region contains specific information, predict diagnoses, and test theories about brain information processing. These multivariate models typically contain fitted linear components. However, even with linear models - the limit of simplicity - interpretation is complex. Voxels that carry little predictive information alone may be assigned a strong weight if used for noise cancellation purposes, and informative voxels may be assigned a small weight when predictor variables carry overlapping information. In the deep learning setting, determining which inputs contribute to model predictions is even more complex. A variety of recent techniques are now available to map relevance weights through the layers of a deep learning network onto input brain images. However, small noise perturbations, common in the MRI scanning environment, can produce large alterations in the relevance maps without altering the model prediction. In certain settings, explanations can be highly divergent without even altering the model weights. In clinical applications, both false positives and omissions can have severe consequences. Explanatory methods should be reliable and complete before interpretation can appropriately reflect the level of generalization that the model provides.
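
The weight-interpretation pitfall described above can be made concrete with a toy example (synthetic data, not from the talk): a "voxel" containing only noise receives a large decoding weight because it cancels noise in a signal-carrying voxel, while the corresponding activation pattern (the covariance of each voxel with the condition, as in the Haufe transformation) correctly singles out the informative voxel.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
s = rng.choice([-1.0, 1.0], n)        # experimental condition
noise = rng.normal(0, 1, n)
X = np.column_stack([s + noise,       # voxel 1: signal + shared noise
                     noise])          # voxel 2: noise only

w, *_ = np.linalg.lstsq(X, s, rcond=None)   # decoding weights
pattern = X.T @ s / n                        # activation pattern (cov with s)

# The weights are roughly [1, -1]: the noise-only voxel gets a large
# (negative) weight purely for noise cancellation. The pattern is roughly
# [1, 0]: only voxel 1 actually covaries with the condition.
```

This is why a strong weight on a voxel must not be read as evidence that the voxel carries condition information.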

Events in February 2020

Machine Learning in Neuroimaging: Applications to Clinical Neuroscience and Neurooncology

Speaker: C. Davatzikos

Machine learning has deeply penetrated the neuroimaging field in the past 15 years, by providing a means to construct imaging signatures of normal and pathologic brain states on an individual person basis. In this talk, I will discuss examples from our laboratory’s work on imaging signatures of brain aging and early stages of neurodegenerative diseases, brain development and neuropsychiatric disorders, as well as brain cancer precision diagnostics and estimation of molecular characteristics. I will discuss some challenges, such as disease heterogeneity and integration and harmonization of large datasets in multi-institutional consortia. I will present some of our work in these directions.

Events in December 2019

How computational models of words and sentences can reveal meaning in the brain

Speaker: R. Raizada

Linguistic meaning is full of structure, and in order to understand language the human brain must somehow represent that structure. My colleagues and I have explored how the brain achieves this, using computational models of the meanings of words, and also of words combined into phrases and sentences. In particular, we have been tackling the following questions: How are the brain's representations of meaning structured, and how do they relate to models of meaning from Computer Science and Cognitive Psychology? How do neural representations of individual words relate to the representations of multiple words that are combined together into phrases and sentences? Can people's behavioural judgments capture aspects of meaning that are missed by models derived from computing word co-occurrences in large bodies of text, and does this enable better neural decoding? I will address these questions, and outline some of the many unsolved problems that remain.
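
As a concrete toy instance of the co-occurrence models mentioned above (not the speaker's actual models), one can build word vectors by counting neighbors within a small window over a tiny illustrative corpus, then compare words by cosine similarity:

```python
import numpy as np

# Tiny hypothetical corpus; real models use large bodies of text.
corpus = ("the cat chased the mouse . the dog chased the cat . "
          "the mouse ate the cheese . the dog ate the bone .").split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Count co-occurrences within a +/-2-word window.
window = 2
counts = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            counts[idx[w], idx[corpus[j]]] += 1

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Words used in similar contexts ("cat"/"dog" both chase and are preceded
# by "the") end up closer than contextually unrelated pairs.
sim_cat_dog = cosine(counts[idx["cat"]], counts[idx["dog"]])
sim_cat_cheese = cosine(counts[idx["cat"]], counts[idx["cheese"]])
```

Vectors of this kind (usually reweighted and dimensionality-reduced) are what get compared against neural response patterns in decoding analyses.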

Events in November 2019

Passive Detection of Perceived Stress Using Location-driven Sensing Technologies at Scale

Speaker: Rajesh Balan

Stress and depression are common afflictions in all walks of life. When left unmanaged, stress can inhibit productivity or cause depression; depression can also occur independently of stress. There has been a sharp rise in mobile health initiatives to monitor stress and depression. However, these initiatives usually require users to install dedicated apps or multiple sensors, making such solutions hard to scale. Moreover, they emphasize sensing individual factors and overlook social interactions, which play a significant role in influencing stress and depression. In this talk, I will describe StressMon, a stress and depression detection system that leverages single-attribute location data passively sensed from the WiFi infrastructure. Using the location data, it extracts a detailed set of movement and physical group-interaction pattern features without requiring explicit user actions or software installation on mobile phones. These features are used in two different machine learning models to detect stress and depression. To validate StressMon, we conducted three longitudinal studies at a university with different groups of students, totaling up to 108 participants. In these experiments, StressMon detected severely stressed students with a 96% true positive rate (TPR), an 80% true negative rate (TNR), and a 0.97 area under the ROC curve (AUC), using a six-day prediction window. StressMon also detected depression with a 91% TPR, 66% TNR, and 0.88 AUC, using a 15-day window.
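
The evaluation metrics quoted above (TPR, TNR, AUC) can be illustrated on a handful of hypothetical classifier scores; this sketch is not StressMon's code or data.

```python
import numpy as np

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])  # 1 = stressed (hypothetical)
scores = np.array([.9, .8, .75, .4, .6, .3, .2, .15, .1, .05])
y_pred = (scores >= 0.5).astype(int)                # threshold at 0.5

tpr = np.mean(y_pred[y_true == 1])        # true positive rate (sensitivity)
tnr = np.mean(1 - y_pred[y_true == 0])    # true negative rate (specificity)

# AUC via the rank (Mann-Whitney) formulation: the probability that a
# randomly chosen positive case scores higher than a randomly chosen
# negative case, independent of any single threshold.
pos, neg = scores[y_true == 1], scores[y_true == 0]
auc = np.mean([p > q for p in pos for q in neg])
```

TPR and TNR depend on the chosen decision threshold, whereas AUC summarizes ranking performance across all thresholds, which is why systems like the one described report all three.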

Events in October 2019

Decoding Cognitive Functions

Speaker: Russ Poldrack

The goal of cognitive neuroscience is to understand how cognitive functions map onto brain systems, but the field has largely focused on mapping or decoding task features rather than the cognitive functions that underlie them. I will first discuss the challenge of characterizing cognitive functions in the context of the Cognitive Atlas ontology. I will then turn to a set of studies that have used ontological frameworks to perform ontology-driven decoding. I will conclude by discussing the need to move from folk-psychological to computational cognitive ontologies.

Events in September 2019

Can we relate machine learning models to brain signals?

Speaker: Jessica Schrouff

Machine learning models are increasingly being used to study cognitive and clinical neuroscience questions. However, various limitations can prevent us from inferring direct relationships between the obtained model and the underlying brain signals. In this talk, we will discuss these limitations, with a focus on linear model weights and confounding factors, and explore promising avenues of research.
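
One common way to handle the confounding factors mentioned above is to regress the confound out of each feature before fitting the model. The sketch below uses synthetic data and is only one possible strategy; the speaker's actual methods are not specified here.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
confound = rng.normal(size=n)                  # e.g., head motion or age
signal = rng.normal(size=n)
X = np.column_stack([signal + 0.5 * confound,  # feature contaminated by confound
                     0.8 * confound + rng.normal(size=n)])

# Residualize: project each feature onto [intercept, confound] and subtract.
C = np.column_stack([np.ones(n), confound])
beta = np.linalg.lstsq(C, X, rcond=None)[0]
X_clean = X - C @ beta

# After residualization, features are (near-)uncorrelated with the confound,
# so a model fit on X_clean cannot trivially exploit it.
r_before = np.corrcoef(X[:, 0], confound)[0, 1]
r_after = np.corrcoef(X_clean[:, 0], confound)[0, 1]
```

A caveat in the spirit of the talk: residualization removes linear dependence on the measured confound only; unmeasured or nonlinearly related confounds can still drive the model.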

Open Science

Speaker: Travis R.

There is a rising chorus of calls for science to become more reliable, replicable, and reproducible. Many of these demands focus on increasing statistical power by collecting larger sample sizes. However, science is not a homogeneous pursuit, and larger sample sizes may not make sense for many types of research problems. In this talk, I describe why improved statistical power solves many research problems, and what other tools are available when a larger sample size does not make sense.
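
The relationship between effect size, sample size, and statistical power underlying this argument can be made concrete with a standard normal-approximation power calculation for a two-sided two-sample test (a textbook formula, not material from the talk):

```python
from math import sqrt, erf

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for effect size d
    (Cohen's d), using the normal approximation to the t distribution."""
    z_crit = 1.959963984540054           # Phi^{-1}(1 - 0.05/2)
    ncp = d * sqrt(n_per_group / 2)      # noncentrality of the test statistic
    return phi(ncp - z_crit) + phi(-ncp - z_crit)

# A small effect (d = 0.2) needs roughly 400 subjects per group to reach
# 80% power, while a large effect (d = 0.8) needs only a few dozen; this
# is why "just collect more data" is cheap advice for some research
# problems and prohibitive for others.
```

When a design makes large samples infeasible, power can instead be bought by increasing the effect size d (stronger manipulations, within-subject designs, more reliable measures), which is one of the alternative tools this talk points to.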