Abstract #5298

Modulation of Expectation on Sound-to-Meaning Mapping during Speech Processing: An fMRI Study

Bingjiang Lyu1,2,3, Jianqiao Ge1,2,3, Zhendong Niu4, Li Hai Tan5, Tianyi Qian6, and Jia-Hong Gao1,2,3

1Center for MRI Research, Peking University, Beijing, People's Republic of China, 2McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China, 3Beijing City Key Lab for Medical Physics and Engineering, Peking University, Beijing, People's Republic of China, 4School of Computer Science and Technology, Beijing Institute of Technology, Beijing, People's Republic of China, 5Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, People's Republic of China, 6MR Collaborations NE Asia, Siemens Healthcare, Beijing, People's Republic of China

Spoken language comprehension relies on both the identification of individual words and the expectations arising from contextual information. A distributed fronto-temporal network is known to facilitate the mapping of speech sounds onto their corresponding meanings. However, how prior expectations influence this efficient mapping at the neuroanatomical level, especially for individual words, remains unclear. Using functional magnetic resonance imaging, we addressed this question within the framework of the dual-stream model by investigating both the neural substrates involved and the functional and effective connectivity between them. Our results reveal how this ubiquitous sound-to-meaning mapping in daily communication is achieved in a predictive manner.
