Abstract: Extracting meaning from spoken and written language is a uniquely human ability. A recent publication from our laboratory showed that semantic information from spoken language is represented in a broad network of semantically selective areas distributed across the human cerebral cortex. However, it is unclear which of the representations revealed in that study are specific to the modality of speech, and which are amodal. Here we studied how the semantic content of narratives received through two different modalities, listening and reading, is represented in the human brain. We used functional magnetic resonance imaging (fMRI) to record brain activity in two separate experiments while participants listened to and read several hours of the same narrative stories. We then built voxel-wise encoding models to characterize selectivity for semantic content across the cerebral cortex. We found that, in a variety of regions across temporal, parietal and prefrontal cortices, voxel-wise models estimated from one modality (e.g. listening) accurately predicted responses in the other modality (e.g. reading). In fact, these cross-modal predictions failed only in sensory regions such as early auditory and visual cortices. We then used principal component analysis on the estimated model weights to recover semantic selectivity for each voxel. This revealed four important components: the first PC distinguishes between categories related to humans and social interactions (e.g. social and emotional categories) and perceptual and quantitative descriptions (e.g. tactile and numeric categories); the second PC distinguishes between perceptual (e.g. tactile and visual categories) and non-perceptual descriptions (e.g. mental and professional categories); the third PC distinguishes between quantitative descriptions (e.g. numeric categories) and qualitative descriptions (e.g. abstract and emotional categories); and the fourth PC distinguishes between non-perceptual descriptions (e.g. 
temporal categories) and categories related to humans and social interactions (e.g. communal categories). We found strong correlations between the cortical maps of semantic content produced from listening and reading within these components (PC1: r=0.81±0.07; PC2: r=0.80±0.06; PC3: r=0.78±0.04; PC4: r=0.75±0.07, for n=7 participants). These results suggest that semantic representations of language outside of early sensory areas are not tied to the specific modality through which the semantic information is received.
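The analysis pipeline described in the abstract — fit voxel-wise encoding models in one modality, test predictions in the other, then run PCA on the weight matrix — can be sketched as a minimal NumPy simulation. Everything here is an illustrative assumption: the dimensions, the simulated feature and response matrices, and the closed-form ridge solver stand in for the study's actual semantic features, fMRI data, and fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: time points, semantic features, voxels.
n_time, n_feat, n_vox = 200, 10, 50

# Simulated stimulus features for the two modalities (in the study these
# would be semantic features extracted from the narrative stories).
X_listen = rng.standard_normal((n_time, n_feat))
X_read = rng.standard_normal((n_time, n_feat))

# A shared ground-truth weight matrix, so simulated responses carry the
# same (amodal) semantic selectivity in both modalities.
W_true = rng.standard_normal((n_feat, n_vox))
Y_listen = X_listen @ W_true + 0.1 * rng.standard_normal((n_time, n_vox))
Y_read = X_read @ W_true + 0.1 * rng.standard_normal((n_time, n_vox))

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: one weight vector per voxel."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ Y)

# Fit encoding models on listening data, predict reading responses.
W = fit_ridge(X_listen, Y_listen)
Y_pred = X_read @ W

# Cross-modal prediction accuracy: per-voxel correlation between
# predicted and observed responses in the held-out modality.
r = np.array([np.corrcoef(Y_pred[:, v], Y_read[:, v])[0, 1]
              for v in range(n_vox)])
print("mean cross-modal r:", round(float(r.mean()), 3))

# PCA on the estimated weights (via SVD of the centered weight matrix)
# recovers shared semantic dimensions; each voxel's projection onto the
# first four PCs gives its position in the semantic space.
Wc = W - W.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(Wc, full_matrices=False)
pc_scores = Vt[:4]  # shape: (4 components, n_vox voxels)
print("PC score matrix shape:", pc_scores.shape)
```

Because the simulated weights are shared across modalities, the model fit on "listening" predicts the "reading" responses well; in the real data, this is the pattern the study reports everywhere outside early sensory cortex.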
SfN’s annual meeting is the world’s largest neuroscience conference for scientists and physicians devoted to understanding the brain and nervous system. SfN invites neuroscientists to collaborate and network with peers, learn from experts, explore the newest neuroscience tools and technologies, and discover great career opportunities.
Fatma Deniz (née Imamoglu) is a joint postdoctoral researcher at the Gallant Lab in UC Berkeley’s Helen Wills Neuroscience Institute and the International Computer Science Institute. She is interested in how sensory information is encoded in the brain and uses machine learning approaches to fit computational models to large-scale brain data acquired using functional magnetic resonance imaging (fMRI). Fatma works at the intersection between computer science, linguistics, music, and neuroscience. Her current focus is on the cross-modal representation of language in the human brain. In addition, she works on improving internet security applications using knowledge gained from cognitive neuroscience (MooneyAuth Project). She is an enthusiastic teacher for Berkeley's Data 8 connector course Data Science for Cognitive Neuroscience (Fall 2016 and Spring 2017) and an instructor in Software Carpentry, where she teaches scientific computing. As an advocate of reproducible research practices, she is the co-editor of the book “The Practice of Reproducible Research”. As a data science fellow, she is interested in teaching and reproducible research and sees herself as a connector between diverse domains. She is a passionate coder, runner, baker, and cello player.
Jack L. Gallant received his PhD from Yale University and did post-doctoral work at the California Institute of Technology and Washington University Medical School. His research program focuses on computational modeling of the human brain. These models accurately describe how the brain encodes information during complex naturalistic tasks, and they show how information about the external and internal worlds is mapped systematically across the surface of the cerebral cortex. These models can also be used to decode information in the brain in order to reconstruct mental experiences. A brain-decoding algorithm developed in the Gallant lab was one of Time Magazine's Inventions of the Year in 2011, and Prof. Gallant appears frequently on radio and television. Further information about ongoing work in the Gallant lab, links to talks and papers, and links to an online interactive brain viewer can be found at the lab webpage (http://gallantlab.org).