What areas of the brain are involved in the linguistic processes underlying speaking and listening, and how much do these areas differ? Neuroscientists from the Donders Institute for Brain, Cognition and Behaviour at Radboud University Nijmegen are the first to have successfully investigated this question using functional magnetic resonance imaging (fMRI). In what may come as a surprise to many scientists, the researchers have established that there is a large degree of overlap between the areas involved.
The results are published in the journal Psychological Science.
Within the scientific community there is much discussion about whether the brain functions involved in speech production are also involved in speech comprehension. In mirror neuron research in particular (a hot topic for the past 15 years), the overlap between the brain areas involved in speaking and listening has been viewed as the overlap between performed action and observed action, says neuroscientist Laura Menenti, who is currently working at the University of Glasgow. However, speaking and listening are more than just action and observation; they also involve linguistic processing. Menenti and her colleagues focused mainly on this last aspect: which areas of the brain are involved in semantic (producing and comprehending meaning), lexical (forming and recognising words) and syntactic (using and recognising grammar) processes?
Talking in the fMRI
One unique aspect of this research is that it is the first study to have investigated the production of sentences in detail using fMRI. Speech comprehension had already been widely studied in this manner. For speech production, however, the problem until now was that the measurements contained too much noise, caused by subjects moving their mouth, facial muscles and head, and by the variable quantity of air in their mouths.
This noise cannot be prevented, but the Donders Institute has developed a method that yields a stronger signal relative to the noise. Menenti said: "In a nutshell, whereas we usually make one image with the fMRI every two seconds, we now make five images every two seconds and take their average for further processing."
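The averaging step Menenti describes exploits a standard statistical fact: averaging n independent noisy measurements of the same signal reduces the noise standard deviation by roughly a factor of the square root of n. A minimal sketch of that principle (the signal values, noise level and variable names below are hypothetical illustrations, not the actual Donders acquisition parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a fixed "true" activation level plus independent
# measurement noise on each image. Averaging the five images acquired in
# one two-second window shrinks the noise by about sqrt(5).
true_signal = 100.0   # arbitrary activation level
noise_sd = 10.0       # arbitrary per-image noise level
n_images = 5          # images per two-second window, as in the quote
n_windows = 2000      # number of simulated measurement windows

# One image per window (the usual approach) ...
single = true_signal + rng.normal(0.0, noise_sd, n_windows)
# ... versus the mean of five images per window
averaged = (true_signal
            + rng.normal(0.0, noise_sd, (n_windows, n_images))).mean(axis=1)

print(round(float(single.std()), 2))    # noise sd, close to 10
print(round(float(averaged.std()), 2))  # close to 10 / sqrt(5) ~= 4.5
```

The simulation is only meant to show why five averaged images give a cleaner signal than one: the underlying activation is unchanged, while the residual noise drops by about 55 percent.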
Striking result, especially for the scientists
The results reveal a considerable overlap between the brain areas (a shared 'neuronal infrastructure') involved in the linguistic processes underlying speech production and comprehension. Menenti explained: "Within linguistics and brain science this is a striking result. Based on studies with aphasia patients, one might just as easily have expected that speech production and comprehension would show some neuronal overlap but would otherwise each cover their own areas." Even more striking was the fact that Menenti and her colleagues found no evidence that the motor system in the brain, which is involved in action and movement, makes a crucial contribution to speech perception. "From the perspective of mirror neuron research that is also an unexpected result."
- L. Menenti, S. M. E. Gierhan, K. Segaert, P. Hagoort. Shared Language: Overlap and Segregation of the Neuronal Infrastructure for Speaking and Listening Revealed by Functional MRI. Psychological Science, 2011; DOI: 10.1177/0956797611418347