Extracting audio from visual information: Algorithm recovers speech from vibrations of a potato-chip bag filmed through soundproof glass

Date: August 4, 2014
Source: Massachusetts Institute of Technology
Summary: Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analyzing minute vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a potato-chip bag photographed from 15 feet away through soundproof glass.

A new algorithm recovers speech from the vibrations of a potato-chip bag filmed through soundproof glass.
Credit: Christine Daniloff/MIT

Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analyzing minute vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a potato-chip bag photographed from 15 feet away through soundproof glass.

In other experiments, they extracted useful audio signals from videos of aluminum foil, the surface of a glass of water, and even the leaves of a potted plant. The researchers will present their findings in a paper at this year's Siggraph, the premier computer graphics conference.

"When sound hits an object, it causes the object to vibrate," says Abe Davis, a graduate student in electrical engineering and computer science at MIT and first author on the new paper. "The motion of this vibration creates a very subtle visual signal that's usually invisible to the naked eye. People didn't realize that this information was there."

Joining Davis on the Siggraph paper are Frédo Durand and Bill Freeman, both MIT professors of computer science and engineering; Neal Wadhwa, a graduate student in Freeman's group; Michael Rubinstein of Microsoft Research, who did his PhD with Freeman; and Gautham Mysore of Adobe Research.

Reconstructing audio from video requires that the frequency of the video samples -- the number of frames of video captured per second -- be higher than the frequency of the audio signal. In some of their experiments, the researchers used a high-speed camera that captured 2,000 to 6,000 frames per second. That's much faster than the 60 frames per second possible with some smartphones, but well below the frame rates of the best commercial high-speed cameras, which can top 100,000 frames per second.
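
For the technically inclined, the constraint described here is the Nyquist sampling criterion: a camera running at a given frame rate can only capture vibration frequencies below half that rate. A back-of-the-envelope sketch (not from the paper):

```python
# Nyquist criterion: video at `fps` frames per second can only capture
# vibration frequencies below fps / 2.

def max_recoverable_frequency(fps: float) -> float:
    """Highest vibration frequency (Hz) recoverable from video at `fps`."""
    return fps / 2.0

for fps in (60, 2_000, 6_000, 100_000):
    print(f"{fps:>7} fps -> frequencies up to ~{max_recoverable_frequency(fps):,.0f} Hz")
```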

Commodity hardware

In other experiments, however, they used an ordinary digital camera. Because of a quirk in the design of most cameras' sensors, the researchers were able to infer information about high-frequency vibrations even from video recorded at a standard 60 frames per second. While this audio reconstruction wasn't as faithful as it was with the high-speed camera, it may still be good enough to identify the gender of a speaker in a room; the number of speakers; and even, given accurate enough information about the acoustic properties of speakers' voices, their identities.

The researchers' technique has obvious applications in law enforcement and forensics, but Davis is more enthusiastic about the possibility of what he describes as a "new kind of imaging."

"We're recovering sounds from objects," he says. "That gives us a lot of information about the sound that's going on around the object, but it also gives us a lot of information about the object itself, because different objects are going to respond to sound in different ways." In ongoing work, the researchers have begun trying to determine material and structural properties of objects from their visible response to short bursts of sound.

In the experiments reported in the Siggraph paper, the researchers also measured the mechanical properties of the objects they were filming and determined that the motions they were measuring were about a tenth of a micrometer. That corresponds to five thousandths of a pixel in a close-up image, but from the change of a single pixel's color value over time, it's possible to infer motions smaller than a pixel.

Suppose, for instance, that an image has a clear boundary between two regions: Everything on one side of the boundary is blue; everything on the other is red. But at the boundary itself, the camera's sensor receives both red and blue light, so it averages them out to produce purple. If, over successive frames of video, the blue region encroaches into the red region -- even less than the width of a pixel -- the purple will grow slightly bluer. That color shift contains information about the degree of encroachment.
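
A toy illustration of that boundary example, assuming an idealized pixel whose color is the exact area-weighted blend of the two sides (the function names are illustrative, not from the paper):

```python
import numpy as np

# A single "purple" pixel straddles a red/blue edge; its color is the
# area-weighted average of the two sides. Inverting that average recovers
# a sub-pixel estimate of where the edge sits inside the pixel.

RED, BLUE = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])

def boundary_pixel(edge_pos: float) -> np.ndarray:
    """Color of a pixel whose left fraction `edge_pos` is blue, rest red."""
    return edge_pos * BLUE + (1.0 - edge_pos) * RED

def estimate_edge_pos(color: np.ndarray) -> float:
    """Recover the sub-pixel edge position from the blended color."""
    return float(color[2])  # in this idealized model, the blue channel
                            # is exactly the blue fraction of the pixel

for true_pos in (0.500, 0.505, 0.512):  # motions far below one pixel
    observed = boundary_pixel(true_pos)
    print(f"true {true_pos:.3f}  recovered {estimate_edge_pos(observed):.3f}")
```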

Putting it together

Some boundaries in an image are fuzzier than a single pixel in width, however. So the researchers borrowed a technique from earlier work on algorithms that amplify minuscule variations in video, making visible previously undetectable motions: the breathing of an infant in the neonatal ward of a hospital, or the pulse in a subject's wrist.

That technique passes successive frames of video through a battery of image filters, which are used to measure fluctuations, such as the changing color values at boundaries, at several different orientations -- say, horizontal, vertical, and diagonal -- and several different scales.
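
The earlier work uses a complex steerable pyramid as its filter bank; as a hedged stand-in, the sketch below uses simple complex Gabor filters, whose response phase at an edge shifts as the edge moves by sub-pixel amounts. All names and parameters here are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor(theta: float, wavelength: float, size: int = 9) -> np.ndarray:
    """Complex Gabor filter tuned to orientation `theta` and `wavelength`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * (size / 4.0) ** 2))
    return envelope * np.exp(2j * np.pi * xr / wavelength)

def local_phase(frame: np.ndarray, theta: float, wavelength: float) -> np.ndarray:
    """Per-pixel phase of the filter response; its change over successive
    frames tracks sub-pixel motion at this orientation and scale."""
    return np.angle(convolve2d(frame, gabor(theta, wavelength), mode="same"))

# e.g. horizontal, diagonal, and vertical orientations at two scales:
orientations = [0.0, np.pi / 4, np.pi / 2]
scales = [4.0, 8.0]
bank = [(theta, wl) for theta in orientations for wl in scales]
```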

The researchers developed an algorithm that combines the output of the filters to infer the motions of an object as a whole when it's struck by sound waves. Different edges of the object may be moving in different directions, so the algorithm first aligns all the measurements so that they won't cancel each other out. And it gives greater weight to measurements made at very distinct edges -- clear boundaries between different color values.
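
A sketch of that combination step, under stated assumptions: we already have one motion signal per filter location (e.g., the phase changes from the previous snippet) plus a per-location edge strength (filter amplitude). Signals are sign-aligned against the strongest location before a weighted average. This is a simplification for illustration, not a reimplementation of the paper's method.

```python
import numpy as np

def combine_signals(signals: np.ndarray, edge_strength: np.ndarray) -> np.ndarray:
    """signals: (locations, frames) motion signals; edge_strength: (locations,).
    Returns one combined 1-D signal (up to a global sign flip)."""
    reference = signals[np.argmax(edge_strength)]   # most reliable edge
    signs = np.sign(signals @ reference)            # flip anti-correlated rows
    signs[signs == 0] = 1.0                         # so motions reinforce
    weights = edge_strength / edge_strength.sum()   # strong edges count more
    return (signs * weights) @ signals

# Tiny demo: 50 locations observe the same motion with random sign flips.
rng = np.random.default_rng(0)
motion = np.sin(np.linspace(0.0, 20.0, 300))
signals = np.outer(rng.choice([-1.0, 1.0], 50), motion)
signals += 0.1 * rng.standard_normal(signals.shape)
combined = combine_signals(signals, rng.uniform(0.5, 1.0, 50))
```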

The researchers also produced a variation on the algorithm for analyzing conventional video. The sensor of a digital camera consists of an array of photodetectors -- millions of them, even in commodity devices. As it turns out, it's less expensive to design the sensor hardware so that it reads off the measurements of one row of photodetectors at a time. Ordinarily, that's not a problem, but with fast-moving objects, it can lead to odd visual artifacts. An object -- say, the rotor of a helicopter -- may actually move detectably between the reading of one row and the reading of the next.

For Davis and his colleagues, this bug is a feature. Slight distortions of the edges of objects in conventional video, though invisible to the naked eye, contain information about the objects' high-frequency vibration. And that information is enough to yield a murky but potentially useful audio signal.
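
As a rough, hedged calculation (ignoring the non-uniform timing of row readout and any blanking interval between frames), the rolling shutter effectively multiplies the motion-sampling rate by the number of sensor rows that see the object, which is why useful high-frequency information survives in 60-frames-per-second video:

```python
def effective_sample_rate(fps: float, rows_on_object: int) -> float:
    """Approximate motion samples per second when the filmed object spans
    `rows_on_object` sensor rows, each read out at a slightly different time."""
    return fps * rows_on_object

print(effective_sample_rate(60, 1080))  # ~64,800 samples/s, ideal full-frame case
print(effective_sample_rate(60, 100))   # object covering only 100 rows
```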

"This is new and refreshing. It's the kind of stuff that no other group would do right now," says Alexei Efros, an associate professor of electrical engineering and computer science at the University of California at Berkeley. "We're scientists, and sometimes we watch these movies, like James Bond, and we think, 'This is Hollywood theatrics. It's not possible to do that. This is ridiculous.' And suddenly, there you have it. This is totally out of some Hollywood thriller. You know that the killer has admitted his guilt because there's surveillance footage of his potato chip bag vibrating."

Efros agrees that the characterization of material properties could be a fruitful application of the technology. But, he adds, "I'm sure there will be applications that nobody will expect. I think the hallmark of good science is when you do something just because it's cool and then somebody turns around and uses it for something you never imagined. It's really nice to have this type of creative stuff."

Video: https://www.youtube.com/watch?v=FKXOucXB4a8


Story Source:

The above story is based on materials provided by Massachusetts Institute of Technology. The original article was written by Larry Hardesty. Note: Materials may be edited for content and length.


