NIMH Workshop on Combined PET-MRI
PET and fMRI provide two distinct and partially overlapping methods for neuroimaging. PET tracks the uptake of radioactive tracers, reflecting metabolic processing, while fMRI measures changes in the blood oxygenation level dependent (BOLD) signal. Each method provides quantitative metrics of brain function and dysfunction that are useful in the diagnosis and understanding of clinical brain disorders and of normal development. Combining the methods adds value: the whole is greater than the sum of the parts.
Specific Aims
The workshop aims to:
1) provide the NIMH community with a comprehensive overview of the latest developments, opportunities, and future directions in combined PET-MRI neuroimaging;
2) invigorate discussion and use of PET-MRI neuroimaging within the NIH/NIMH communities, including its potential implications for clinical diagnosis and treatment; and
3) serve as a catalyst for collaborations between NIMH researchers and extramural experts who use PET-MRI to study clinical problems and advance basic research.
The workshop will be divided into two sections: (1) individual lectures showcasing the use of combined PET-MRI neuroimaging, with a particular focus on either theoretical insights or methodological advances; and (2) an open panel on the future of PET-MRI neuroimaging, identifying the necessary steps in data collection, analysis, and interpretation for the future good of neuroscience research.
NIMH Workshop on Naturalistic Stimuli and Individual Differences
The Center for Multimodal Neuroimaging is thrilled to announce the upcoming workshop on Naturalistic Stimuli and Individual Differences with a range of extramural and intramural speakers presenting on their cutting-edge work. The workshop seeks to:
bring together top scientists using naturalistic stimuli and/or the study of individual differences in neuroimaging; showcase theoretical, methodological, and analytical advances in these areas of research; and serve as a catalyst for collaborations between presenters, biomedical NIH researchers, and experts from around the world.
Speakers: Chris Baldassano, Janice Chen, Elizabeth DuPre, Emily S. Finn, Javier Gonzalez-Castillo, Uri Hasson, Jeremy Manning, Carolyn Parkinson, Elizabeth Redcay, Monica Rosenberg, Tamara Vanderwal, Gang Chen, Peter Molfese
Chris Baldassano - https://videocast.nih.gov/watch=42538&start=133
Elizabeth DuPre - https://videocast.nih.gov/watch=42538&start=2831
Janice Chen - https://videocast.nih.gov/watch=42538&start=5927
Gang Chen - https://videocast.nih.gov/watch=42538&start=7450
Emily S Finn - https://videocast.nih.gov/watch=42538&start=10164
Javier Gonzalez-Castillo - https://videocast.nih.gov/watch=42538&start=13214
Jeremy Manning - https://videocast.nih.gov/watch=42538&start=16766
Carolyn Parkinson - https://videocast.nih.gov/watch=42538&start=18368
Elizabeth Redcay - https://videocast.nih.gov/watch=42538&start=20638
Monica Rosenberg - https://videocast.nih.gov/watch=42538&start=23549
Tamara Vanderwal - https://videocast.nih.gov/watch=42538&start=26564
Uri Hasson - https://videocast.nih.gov/watch=42538&start=29491
For more information go to https://cmn.nimh.nih.gov/cmnworkshop2021
Using EEG to Decode Semantics During an Artificial Language Learning Task
As we learn a new language, we begin to map real-world concepts onto new orthographic representations, until words in the new language conjure semantic representations as rich as those of our native language. Using electroencephalography (EEG), we show that it is possible to detect a newly formed semantic mapping as it is learned. We show that the localization of the neural representations is highly distributed, and that the onset of the semantic representation is delayed compared to words in a native language. Our work shows that language learning can be monitored using EEG, suggesting new avenues for both language and learning research.
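The detection approach described above can be sketched as time-resolved decoding: train a classifier at each time point and estimate when the semantic category first becomes decodable. The following is a minimal illustration with simulated data and hypothetical dimensions, not the study's actual pipeline:

```python
# Minimal sketch of time-resolved decoding: train a classifier at each time
# point to estimate when the semantic category becomes decodable from EEG.
# All dimensions and effect sizes are simulated, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 120
X = rng.standard_normal((n_trials, n_channels, n_times))   # trials x channels x time
y = rng.integers(0, 2, n_trials)                           # semantic category labels
X[y == 1, :, 60:] += 0.3                                   # toy late-onset effect

accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

# First time sample at which decoding clearly exceeds chance (toy threshold).
onset = int(np.argmax(accuracy > 0.55))
print(f"semantic category decodable from time sample {onset}")
```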
Explainable AI in Neuro-Imaging: Challenges and Future Directions
Decoding and encoding models are widely applied in cognitive neuroscience to find statistical associations between experimental context and brain response patterns. Depending on the nature of their application, these models can be used to read out representational content from functional activity data, determine if a brain region contains specific information, predict diagnoses, and test theories about brain information processing. These multivariate models typically contain fitted linear components. However, even with linear models, the simplest possible case, interpretation is complex. Voxels that carry little predictive information on their own may be assigned a strong weight if they serve noise-cancellation purposes, and informative voxels may be assigned a small weight when predictor variables carry overlapping information. In the deep learning setting, determining which inputs contribute to model predictions is even more complex. A variety of recent techniques are now available to map relevance weights through the layers of a deep learning network onto input brain images. However, small noise perturbations, common in the MRI scanning environment, can produce large alterations in the relevance maps without altering the model prediction. In certain settings, explanations can diverge widely without any change to the model weights. In clinical applications, both false positives and omissions can have severe consequences. Explanatory methods should be reliable and complete before interpretation can appropriately reflect the level of generalization that the model provides.
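The noise-cancellation problem described above is easy to demonstrate. The following toy simulation (not from the talk) shows a "suppressor" voxel that carries no predictive information on its own receiving a large weight because it cancels shared noise:

```python
# Toy simulation (not from the talk) of the suppressor effect: a voxel with
# no predictive information on its own receives a large weight because it
# cancels noise shared with an informative voxel.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
signal = rng.standard_normal(n)            # quantity being decoded
noise = rng.standard_normal(n)             # noise shared across voxels
v1 = signal + noise                        # informative voxel, contaminated
v2 = noise                                 # carries no signal at all
X = np.column_stack([v1, v2])

w, *_ = np.linalg.lstsq(X, signal, rcond=None)
print(np.round(w, 2))                      # approx [ 1. -1.]: v2 gets a large weight
print(np.round(np.corrcoef(v2, signal)[0, 1], 2))   # approx 0: v2 alone predicts nothing
```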
Machine Learning in Neuroimaging: Applications to Clinical Neuroscience and Neurooncology
Machine learning has deeply penetrated the neuroimaging field in the past 15 years by providing a means to construct imaging signatures of normal and pathologic brain states on an individual basis. In this talk, I will discuss examples from our laboratory’s work on imaging signatures of brain aging and early stages of neurodegenerative diseases, brain development and neuropsychiatric disorders, and brain cancer precision diagnostics and estimation of molecular characteristics. I will discuss some challenges, such as disease heterogeneity and the integration and harmonization of large datasets in multi-institutional consortia, and present some of our work in these directions.
How computational models of words and sentences can reveal meaning in the brain
Linguistic meaning is full of structure, and in order to understand language the human brain must somehow represent that structure. My colleagues and I have explored how the brain achieves this, using computational models of the meanings of words, and of words combined into phrases and sentences. In particular, we have been tackling the following questions: How are the brain's representations of meaning structured, and how do they relate to models of meaning from computer science and cognitive psychology? How do neural representations of individual words relate to the representations of multiple words combined into phrases and sentences? Can people's behavioural judgments capture aspects of meaning that are missed by models derived from word co-occurrences in large bodies of text, and does this enable better neural decoding? I will address these questions, and outline some of the many unsolved problems that remain.
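As a concrete anchor for the co-occurrence models mentioned above, here is a minimal sketch of building word vectors from co-occurrence counts with positive pointwise mutual information (PPMI) weighting; the models discussed in the talk are trained on large corpora, and this toy corpus is illustrative only:

```python
# Minimal sketch of co-occurrence word vectors with PPMI weighting. The models
# discussed in the talk are trained on large corpora; this toy corpus and all
# names are illustrative only.
from collections import Counter
import numpy as np

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
window = 2

# Count word/context co-occurrences within the window.
pairs = Counter()
for sent in corpus:
    toks = sent.split()
    for i, w in enumerate(toks):
        for j in range(max(0, i - window), min(len(toks), i + window + 1)):
            if i != j:
                pairs[(w, toks[j])] += 1

vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: k for k, w in enumerate(vocab)}
M = np.zeros((len(vocab), len(vocab)))
for (a, b), c in pairs.items():
    M[idx[a], idx[b]] = c

# Positive pointwise mutual information turns raw counts into meaning vectors.
total = M.sum()
p_w = M.sum(axis=1, keepdims=True) / total
p_c = M.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((M / total) / (p_w * p_c))
ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)
print(ppmi[idx["cat"]])    # the vector representation of "cat"
```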
Passive Detection of Perceived Stress Using Location-driven Sensing Technologies at Scale
Stress and depression are common afflictions in all walks of life. When left unmanaged, stress can inhibit productivity or lead to depression, and depression can also occur independently of stress. There has been a sharp rise in mobile health initiatives to monitor stress and depression. However, these initiatives usually require users to install dedicated apps or multiple sensors, making such solutions hard to scale. Moreover, they emphasize sensing individual factors and overlook social interactions, which play a significant role in influencing stress and depression. In this talk I will describe StressMon, a stress and depression detection system that leverages single-attribute location data passively sensed from the WiFi infrastructure. Using the location data, it extracts a detailed set of movement and physical group-interaction features, without requiring explicit user actions or software installation on mobile phones. These features feed two different machine learning models to detect stress and depression. To validate StressMon, we conducted three longitudinal studies at a university with different groups of students, totaling up to 108 participants. In these experiments, StressMon detected severely stressed students with a 96% true positive rate (TPR), an 80% true negative rate (TNR), and a 0.97 area under the ROC curve (AUC) score, using a six-day prediction window. StressMon also detected depression at 91% TPR, 66% TNR, and 0.88 AUC, using a 15-day window.
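For readers unfamiliar with the reported metrics, the sketch below shows how TPR, TNR, and AUC are computed from a classifier's outputs with scikit-learn; the labels and scores are simulated, not StressMon's data or models:

```python
# How the reported metrics are computed from predictions, sketched with
# scikit-learn. Labels and scores here are simulated, not StressMon's data.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 100)                # 1 = severely stressed
scores = y_true * 0.6 + rng.random(100) * 0.5   # toy classifier scores
y_pred = (scores > 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tpr = tp / (tp + fn)                            # true positive rate (sensitivity)
tnr = tn / (tn + fp)                            # true negative rate (specificity)
auc = roc_auc_score(y_true, scores)             # area under the ROC curve
print(f"TPR={tpr:.2f} TNR={tnr:.2f} AUC={auc:.2f}")
```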
Decoding Cognitive Functions
Open Science
There is a rising chorus of calls for science to become more reliable, replicable, and reproducible. Many of these demands focus on increasing statistical power by collecting larger sample sizes. However, science is not a homogeneous pursuit, and larger sample sizes may not make sense for many types of research problems. In this talk, I describe why improved statistical power solves many research problems, and what other tools are available when a larger sample size does not make sense.
Can we relate machine learning models to brain signals?
Machine learning models are increasingly being used to study cognitive and clinical neuroscience questions. However, various limitations can prevent us from inferring direct relationships between the obtained model and the underlying brain signals. In this talk, we will discuss these limitations, with a focus on linear model weights and confounding factors, and explore promising avenues of research.
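One corrective for the linear case is converting backward (decoding) weights into forward activation patterns, following Haufe et al. (2014, NeuroImage); patterns are interpretable where raw weights are not. A minimal sketch with toy data, complementing the suppressor example above (all names and sizes are illustrative):

```python
# Sketch of one corrective for linear model weights: converting backward
# (decoding) weights into forward activation patterns (Haufe et al., 2014).
# Toy data; channel names and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
s = rng.standard_normal(n)                 # latent signal to decode
d = rng.standard_normal(n)                 # shared noise
X = np.column_stack([s + d, d])            # ch0: signal + noise; ch1: noise only

w, *_ = np.linalg.lstsq(X, s, rcond=None)  # decoding weights, approx [1, -1]
s_hat = X @ w
A = np.cov(X, rowvar=False) @ w / s_hat.var()   # forward activation pattern

print(np.round(w, 2))   # noise-only channel gets a large (negative) weight
print(np.round(A, 2))   # pattern approx [1, 0]: only ch0 actually carries signal
```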
Mapping Representations of Language Semantics in Human Cortex
How does the human brain process and represent the meaning of language? We investigate this question by building computational models of language processing and then using those models to predict functional magnetic resonance imaging (fMRI) responses to natural language stimuli. The technique we use, voxel-wise encoding models, provides a sensitive method for probing richly detailed cortical representations. This method also allows us to take advantage of natural stimuli, which elicit stronger, more reliable, and more varied brain responses than tightly controlled experimental stimuli. In this talk I will discuss how we have used these methods to study how the human brain represents the meaning of language, and how those representations are linked to visual representations. The results suggest that the language and visual systems in the human brain might form a single, contiguous map over which meaning is represented.
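In its simplest form, a voxel-wise encoding model of the kind described reduces to regularized regression from stimulus features to each voxel's time series, scored by held-out prediction accuracy. The sketch below uses simulated data and illustrative dimensions, not the actual experimental setup:

```python
# Minimal sketch of a voxel-wise encoding model: ridge regression from stimulus
# features to each voxel's time series, scored by held-out prediction accuracy.
# All dimensions and data are simulated, not from the actual experiments.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_trs, n_features, n_voxels = 600, 50, 1000
F = rng.standard_normal((n_trs, n_features))          # e.g., semantic stimulus features
W_true = rng.standard_normal((n_features, n_voxels))  # hidden feature-to-voxel mapping
Y = F @ W_true + 5.0 * rng.standard_normal((n_trs, n_voxels))   # voxel responses

train, test = slice(0, 500), slice(500, 600)
model = Ridge(alpha=10.0).fit(F[train], Y[train])     # one model per voxel, jointly fit
Y_pred = model.predict(F[test])

# Per-voxel correlation on held-out data identifies well-modeled voxels.
r = np.array([np.corrcoef(Y[test][:, v], Y_pred[:, v])[0, 1] for v in range(n_voxels)])
print(f"median held-out prediction correlation: {np.median(r):.2f}")
```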
Introduction to Simultaneous EEG-fMRI at NIMH
The Center for Multimodal Neuroimaging (CMN; https://cmn.nimh.nih.gov/) hosted a presentation on “Introduction to Simultaneous EEG-fMRI” on June 3rd at 10AM in Building 40, Room 1201/1203.
The presentation focused on the basics of EEG-fMRI and the resources available at NIMH, covering: 1) equipment setup; 2) data acquisition; and 3) data analysis.
Benefits of Multi-Echo fMRI
Members of the Section on Functional Imaging Methods (SFIM) and the Functional MRI Facility (FMRIF) presented on the “Advantages of Multi-echo fMRI”. The presentation is geared toward individuals new to multi-echo fMRI and covers: 1) a conceptual introduction to multi-echo fMRI and its advantages; 2) denoising fMRI data using post-processing of multi-echo fMRI in AFNI; and 3) how to add multi-echo fMRI to your paradigms on GE and Siemens scanners.
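For orientation, one core step in multi-echo processing is the T2*-weighted "optimal combination" of echoes (Posse et al., 1999), implemented in common pipelines such as AFNI's. A minimal numpy sketch follows; the echo times, T2* estimate, and data are all hypothetical:

```python
# Minimal sketch of the T2*-weighted "optimal combination" of echoes
# (Posse et al., 1999), a core step in multi-echo pipelines such as AFNI's.
# Echo times, the T2* estimate, and the data are all hypothetical.
import numpy as np

rng = np.random.default_rng(5)
tes = np.array([14.0, 30.0, 46.0])     # echo times in ms (hypothetical)
t2star = 35.0                          # assumed T2* estimate for one voxel, ms
n_trs = 200

# Simulated mono-exponential decay for one voxel: S(TE) = S0 * exp(-TE / T2*)
s0 = 1000.0 + 10.0 * rng.standard_normal(n_trs)
data = s0[:, None] * np.exp(-tes[None, :] / t2star)   # shape: (time, echo)

# Optimal-combination weights: w_e proportional to TE_e * exp(-TE_e / T2*)
w = tes * np.exp(-tes / t2star)
w /= w.sum()
combined = data @ w                    # BOLD-weighted combined time series
print(np.round(w, 3))
```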