How computational models of words and sentences can reveal meaning in the brain
Linguistic meaning is richly structured, and to understand language the human brain must somehow represent that structure. My colleagues and I have explored how the brain achieves this, using computational models of the meanings of words, and of words combined into phrases and sentences. In particular, we have been tackling the following questions: How are the brain's representations of meaning structured, and how do they relate to models of meaning from computer science and cognitive psychology? How do neural representations of individual words relate to representations of multiple words combined into phrases and sentences? Can people's behavioural judgements capture aspects of meaning that are missed by models derived from word co-occurrences in large bodies of text, and does this enable better neural decoding? I will address these questions and outline some of the many unsolved problems that remain.
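For readers unfamiliar with the co-occurrence models referred to above, the following is a minimal illustrative sketch only, not the specific models used in this work: a count-based distributional representation built from a toy corpus, where the corpus, window size, and function names are all assumptions made for the example.

```python
from collections import Counter, defaultdict

def cooccurrence_vectors(sentences, window=2):
    """Count how often each word appears within `window` positions of every
    other word, giving a simple distributional representation per word."""
    counts = defaultdict(Counter)
    for sentence in sentences:
        tokens = sentence.lower().split()
        for i, word in enumerate(tokens):
            # Neighbouring words within the window, excluding the word itself
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[word][tokens[j]] += 1
    return counts

# Toy corpus for illustration; real models are estimated from very large text collections
corpus = [
    "the dog chased the cat",
    "the cat chased the mouse",
    "the dog barked at the mailman",
]
vectors = cooccurrence_vectors(corpus)
print(vectors["dog"])  # Counter of words that co-occur with "dog" within the window
```

In such models, two words are treated as similar in meaning when their co-occurrence vectors are similar; the question raised above is whether human behavioural judgements capture aspects of meaning that these corpus-derived vectors miss.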