Revealing how the brain recognizes speech sounds

Date: January 30, 2014
Source: University of California - San Francisco
Summary: Researchers are reporting a detailed account of how speech sounds are identified by the human brain. The finding, they said, may add to our understanding of language disorders, including dyslexia.

[Photo] Edward F. Chang, MD, whose work may add to our understanding of reading disorders, in which printed words are imperfectly mapped onto speech sounds. Credit: Cindy Chew

UC San Francisco researchers are reporting a detailed account of how speech sounds are identified by the human brain, offering unprecedented insight into the basis of human language. The finding, they said, may add to our understanding of language disorders, including dyslexia.

Scientists have long known where in the brain speech sounds are interpreted, but little has been discovered about how this process works.

Now, in Science Express (January 30, 2014), the fast-tracked online version of the journal Science, the UCSF team reports that the brain does not respond to the individual sound segments known as phonemes -- such as the b sound in "boy" -- but is instead exquisitely tuned to detect simpler elements, known to linguists as "features."

This organization may give listeners an important advantage in interpreting speech, the researchers said, since the articulation of phonemes varies considerably across speakers, and even in individual speakers over time.

The work may add to our understanding of reading disorders, in which printed words are imperfectly mapped onto speech sounds. But because speech and language are a defining human behavior, the findings are significant in their own right, said UCSF neurosurgeon and neuroscientist Edward F. Chang, MD, senior author of the new study.

"This is a very intriguing glimpse into speech processing," said Chang, associate professor of neurological surgery and physiology. "The brain regions where speech is processed in the brain had been identified, but no one has really known how that processing happens."

Although we usually find it effortless to understand other people when they speak, parsing the speech stream is an impressive perceptual feat. Speech is a highly complex and variable acoustic signal, and our ability to instantaneously break that signal down into individual phonemes and then build those segments back up into words, sentences and meaning is a remarkable capability.

Because of this complexity, previous studies have analyzed brain responses to just a few natural or synthesized speech sounds, but the new research employed natural spoken sentences containing the complete inventory of phonemes in the English language.

To capture the very rapid brain changes involved in processing speech, the UCSF scientists gathered their data from neural recording devices that were placed directly on the surface of the brains of six patients as part of their epilepsy surgery.

The patients listened to a collection of 500 unique English sentences spoken by 400 different people while the researchers recorded from a brain area called the superior temporal gyrus (STG; also known as Wernicke's area), which previous research has shown to be involved in speech perception. The utterances contained multiple instances of every English speech sound.

Many researchers have presumed that brain cells in the STG would respond to phonemes. But the researchers found instead that regions of the STG are tuned to respond to even more elemental acoustic features that reference the particular way that speech sounds are generated from the vocal tract. "These regions are spread out over the STG," said first author Nima Mesgarani, PhD, now an assistant professor of electrical engineering at Columbia University, who did the research as a postdoctoral fellow in Chang's laboratory. "As a result, when we hear someone talk, different areas in the brain 'light up' as we hear the stream of different speech elements."

"Features," as linguists use the term, are distinctive acoustic signatures created when speakers move the lips, tongue or vocal cords. For example, consonants such as p, t, k, b and d require speakers to use the lips or tongue to obstruct air flowing from the lungs. When this occlusion is released, there is a brief burst of air, which has led linguists to categorize these sounds as "plosives." Others, such as s, z and v, are grouped together as "fricatives," because they only partially obstruct the airway, creating friction in the vocal tract.

The articulation of each plosive creates an acoustic pattern common to the entire class of these consonants, as does the turbulence created by fricatives. The Chang group found that particular regions of the STG are precisely tuned to robustly respond to these broad, shared features rather than to individual phonemes like b or z.

Chang said the arrangement the team discovered in the STG is reminiscent of feature detectors in the visual system for edges and shapes, which allow us to recognize objects, like bottles, no matter which perspective we view them from. Given the variability of speech across speakers and situations, it makes sense, said co-author Keith Johnson, PhD, professor of linguistics at the University of California, Berkeley, for the brain to employ this sort of feature-based algorithm to reliably identify phonemes.

"It's the conjunctions of responses in combination that give you the higher idea of a phoneme as a complete object," Chang said. "By studying all of the speech sounds in English, we found that the brain has a systematic organization for basic sound feature units, kind of like elements in the periodic table."


Story Source:

The above story is based on materials provided by University of California - San Francisco. The original article was written by Peter Farley. Note: Materials may be edited for content and length.


Journal Reference:

  1. N. Mesgarani, C. Cheung, K. Johnson, E. F. Chang. Phonetic Feature Encoding in Human Superior Temporal Gyrus. Science, 2014; DOI: 10.1126/science.1245994
