Dec. 18, 2001. Researchers in California have created a new, publicly available database of acoustic measurements of human subjects. The database may help engineers build personalized sound systems for computers that could rival, or even exceed, the experience of listening to a high-end home theater system.
Richard Duda and V. Ralph Algazi of the University of California, Davis said the database could have a wide range of applications, including teleconferencing, mobile computing and home entertainment. The National Science Foundation (NSF) funded their work.
"One day," said Algazi, "computer users could operate a small, 'wearable' computer using voice commands, with spatial sound replacing a visual display." He added that the database could aid in the development of "immersion" systems that could allow scientists to interact with their data in a computer-generated, three-dimensional space incorporating both images and sound.
People use a number of complex sound cues to experience their surroundings. But reproducing these cues accurately is a difficult technical problem. The cues that stem from the complex interaction between sound waves and the human body are particularly important but difficult to reproduce.
Listeners experience sound in three dimensions: left/right, up/down, and near/far (azimuth, elevation and range). Typical two-speaker systems can control only the left/right aspect. Even state-of-the-art "three-dimensional" sound systems generally can only locate sounds on a circle around a listener, and not in all three dimensions.
Among the challenges to creating true three-dimensional sound fields is that each person's spatial sound cues are strongly influenced by individual physical factors such as the shape and position of the ears. These factors -- which are captured by so-called Head-Related Transfer Functions (HRTFs) -- vary greatly from person to person. Mass-producing digital systems that accurately reproduce three-dimensional sound fields therefore requires information about each individual listener's HRTF. The new database provides the information that engineers need for their designs.
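The relationship between such a measurement and an HRTF can be illustrated with a short sketch. In the time domain the measurement is an impulse response; its Fourier transform is the transfer function, whose peaks and notches encode the direction-dependent filtering by the ear and body. The code below is a minimal illustration with an invented two-tap impulse response (a direct path plus one toy "pinna reflection"), not the database's actual measurement format:

```python
import numpy as np

# Toy head-related impulse response (HRIR): a direct path plus one
# delayed, inverted reflection.  All values here are invented for
# illustration; a real HRIR would come from measured data such as
# the database described in this article.
fs = 44100                       # assumed sample rate, Hz
hrir = np.zeros(128)
hrir[0] = 1.0                    # direct sound
hrir[8] = -0.5                   # toy reflection off the outer ear

# The HRTF is the frequency response of the HRIR: its Fourier transform.
hrtf = np.fft.rfft(hrir)
freqs = np.fft.rfftfreq(len(hrir), d=1.0 / fs)
magnitude_db = 20 * np.log10(np.abs(hrtf))
```

Even this two-tap toy produces a comb of spectral peaks and notches (here, dips of about -6 dB at multiples of fs/8). Because the geometry of each person's ears sets the delays and strengths of such reflections, the resulting notch pattern, and hence the HRTF, differs from listener to listener.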
Said Duda, "I believe that this customization of systems to individual characteristics represents an important and achievable goal for computer technology. Our current NSF-supported work with colleagues at the University of Maryland and Duke University is taking the next step toward this goal by using computer vision techniques and high-performance computing to obtain personalized HRTFs."
To develop the database, Duda and Algazi meticulously measured 45 different people to see exactly how the sizes and shapes of their ears and bodies influenced the sounds that reached their ears. The acoustic measurements were stored in a database, together with measurements of the size and shape of each listener's ears, head and torso.
By knowing how a click pattern gets changed on the way to a listener's ears, an engineer can modify any sound presented over headphones to make it seem to come from a particular location in space. Because people have individual sizes and shapes, the modifications must be individually tailored, much as eyeglasses must be individually fit. Lacking such data, engineers have previously had to base their designs on an "average" set of values, with results for listeners akin to wearing poorly fitted eyeglasses. The new database will give engineers the information to adjust their designs to account for individual differences.
The information is freely available for research or commercial use on a compact disc or can be downloaded from the Internet.
For more information, see http://interface.cipic.ucdavis.edu/
Editors: The entire HRTF database can be downloaded from: http://interface.cipic.ucdavis.edu/CIL_html/CIL_HRTF_database.htm
The above story is reprinted from materials provided by National Science Foundation.