February 1, 2007 -- Human factors psychologists have teamed up with computer scientists to develop technology that can do the job of a seeing-eye dog: helping the blind get around safely without getting lost. The wearable system tracks the wearer's position using GPS and emits sounds to alert them to obstacles such as fire hydrants or park benches.
Nearly 10 million Americans are either blind or visually impaired. Mobility training teaches them to use canes and Seeing Eye dogs, but just exploring a new street or a different area of town can be daunting.
Now, a new research tool could help these people broaden their adventures. A simple series of beeps may one day do the job of a Seeing Eye dog.
"SWAN, or the System for Wearable Audio Navigation, is a system that we have developed here at Georgia Tech to help people -- often blind people, but any person who can't see -- get from point A to point B and know what's around them as they go," says Bruce Walker, an assistant professor at Georgia Tech's School of Psychology and College of Computing, in Atlanta.
Developed by Walker, a human factors psychologist and computing expert, and his computer science colleague Frank Dellaert, SWAN uses a global positioning system (GPS) to locate the person. An iCube detects the direction they're facing. Both feed into a wearable computer, where software merges the information and generates a series of sounds.
"Imagine there's a ring around your head, about a meter away. Using headphones, we can make a sound seem to come from any point on that circle," Walker says.
A series of beeps leads the user along the path to the destination. One beep points out a park bench; a different beep combined with a ring might signal an information booth. Fast beeps warn of a fire hydrant in the path.
Dellaert says, "What would make me happy is for this to be truly in the hands of blind and visually impaired people, and for them to come to me and say, 'This changed my life.'"
Since GPS works only outdoors, Walker and Dellaert are exploring another approach, using cameras, to make SWAN work indoors.
SWAN is still about five years away from hitting the market. One day it may also help firefighters in smoky buildings and soldiers in dark terrain. The professors are currently developing a universal set of audio cues. For example, a knocking noise might represent passing an office door or a series of chords could signal a skyscraper.
BACKGROUND: Georgia Tech researchers are developing a wearable computing system called the System for Wearable Audio Navigation (SWAN) to help the blind, firefighters, soldiers, and others who must find their way in unknown territory, particularly when vision is obstructed or impaired. The system provides auditory cues to guide the user from place to place. The researchers are now adapting the system for indoor use, and for use on PDAs and cell phones.
HOW IT WORKS: The SWAN system combines research developments in artificial intelligence, robot tracking and audio interfaces. It consists of a small laptop worn in a backpack, a proprietary tracking chip, Global Positioning System (GPS) sensors, a digital compass, a head tracker, four cameras, and a light sensor. It also includes special "bone phone" headphones that send auditory signals as vibrations through the skull without plugging the user's ears. (The blind in particular rely heavily on their hearing.) The sensors and tracking chip send data to the SWAN applications on the laptop, which compute the user's location and the direction he is facing. The software then maps the travel route and sends 3D audio cues to the bone phones to guide the traveler along a path to the destination.
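The data flow described above, from sensor readings to a directional cue, can be sketched in simplified Python. This is purely illustrative: all the names are assumptions, SWAN's actual software is not described in the report, and a real system would use proper geodesic math rather than the flat-map approximation here.

```python
import math
from dataclasses import dataclass


@dataclass
class Pose:
    """The user's estimated position and facing direction."""
    lat: float
    lon: float
    heading_deg: float  # compass direction the user is facing


def fuse_sensors(gps_fix, compass_deg):
    """Combine a GPS fix with a compass reading into one pose estimate."""
    lat, lon = gps_fix
    return Pose(lat, lon, compass_deg)


def bearing_to(pose, waypoint):
    """Direction to the next waypoint, relative to where the user faces.

    Uses a flat-map approximation: fine for a sketch, not for real
    navigation over long distances.
    """
    d_lat = waypoint[0] - pose.lat
    d_lon = waypoint[1] - pose.lon
    absolute = math.degrees(math.atan2(d_lon, d_lat)) % 360
    return (absolute - pose.heading_deg) % 360


# User near the Georgia Tech campus, facing east (90 degrees),
# with the next waypoint due east of them.
pose = fuse_sensors((33.7756, -84.3963), compass_deg=90.0)
cue_direction = bearing_to(pose, (33.7756, -84.3900))
# cue_direction is 0: the guidance beep should sound straight ahead.
```

A cue direction of 0 degrees means the beep is rendered straight ahead of the listener; a nonzero value tells the audio layer to place the beep off to one side.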
ABOUT 3D AUDIO: The 3D audio for navigation is unique to SWAN; other navigation systems use speech cues. 3D cues sound like they are coming from 1 meter away from the user's body, in whichever direction he needs to travel. This is a common sound effect, created by taking advantage of humans' natural ability to detect inter-aural time differences. The 3D sound application schedules sounds to reach one ear slightly faster than the other, and the human brain uses the timing difference to figure out where the sound originated.
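The timing trick described above can be illustrated with a short sketch. This is a hypothetical example, not SWAN's code: it builds a stereo sine beep whose left channel is delayed by a chosen inter-aural time difference, so the tone seems to arrive from the listener's right.

```python
import math

SAMPLE_RATE = 44_100  # audio samples per second


def stereo_beep(freq_hz, duration_s, itd_s):
    """Return (left, right) sample lists for a sine-wave beep.

    A positive itd_s delays the left channel by that many seconds,
    so the brain localizes the sound toward the listener's right.
    """
    n = int(SAMPLE_RATE * duration_s)
    delay = int(SAMPLE_RATE * itd_s)  # delay expressed in whole samples
    tone = [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]
    left = [0.0] * delay + tone    # delayed copy for the far ear
    right = tone + [0.0] * delay   # padded so both channels match in length
    return left, right


# A 50 ms beep at 880 Hz, offset by half a millisecond: right of center.
left, right = stereo_beep(880.0, 0.05, 0.0005)
```

Played through headphones, the identical waveforms with only this sub-millisecond offset are enough for the brain to place the beep to one side, which is the effect the article describes.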
HEAR, HEAR: Our ears detect sound as vibrations in the air. Those sound waves cause the eardrum to vibrate, sending waves through a fluid inside the cochlea. This in turn causes tiny hairs -- each tuned to a different pitch -- to vibrate as well, stimulating nerves that send electrical signals to the brain for processing. Having two ears makes it possible to determine the direction a sound is coming from. Time lag and differences in volume provide useful clues. For instance, sound coming from one side will reach the far ear a fraction of a millisecond later than the near ear, and the brain can detect this time lag. A difference in volume between the two ears depends on the frequency of the sound. It is easier for us to tell the direction of high-frequency sounds than low-frequency sounds, because the higher frequencies are more easily blocked by the head, and therefore do not easily reach the far ear.
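The size of that time lag follows from simple geometry: sound travels at roughly 343 m/s in air, and for a source directly to one side it must cross roughly an extra 0.2 m of path to reach the far ear. Both figures are round-number assumptions for a back-of-the-envelope estimate:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature (assumed)
EXTRA_PATH = 0.2         # m, rough extra distance to the far ear (assumed)

lag_s = EXTRA_PATH / SPEED_OF_SOUND
print(f"inter-aural lag ~ {lag_s * 1000:.2f} ms")  # well under a millisecond
```

The result, around six-tenths of a millisecond, is the largest lag the geometry allows; sounds closer to straight ahead produce smaller lags, which the brain still resolves.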
The Human Factors and Ergonomics Society contributed to the information contained in the TV portion of this report.
Editor's Note: This article is not intended to provide medical advice, diagnosis or treatment.