
New method helps computer vision systems decipher outdoor scenes

Date:
September 10, 2010
Source:
Carnegie Mellon University
Summary:
Computer vision systems can struggle to make sense of a single image, but a new method enables computers to gain a deeper understanding of an image by reasoning about the physical constraints of the scene.

Computer vision systems can struggle to make sense of a single image, but a new method devised by computer scientists at Carnegie Mellon University enables computers to gain a deeper understanding of an image by reasoning about the physical constraints of the scene. Here, the computer uses virtual blocks to build a three-dimensional approximation of the image at left that makes sense based on volume and mass.
Credit: Carnegie Mellon University

Computer vision systems can struggle to make sense of a single image, but a new method devised by computer scientists at Carnegie Mellon University enables computers to gain a deeper understanding of an image by reasoning about the physical constraints of the scene.

In much the same way that a child might use a set of toy building blocks to assemble something that looks like a building depicted on the cover of the toy set, the computer would analyze an outdoor scene by using virtual blocks to build a three-dimensional approximation of the image that makes sense based on volume and mass.

"When people look at a photo, they understand that the scene is geometrically constrained," said Abhinav Gupta, a post-doctoral fellow in CMU's Robotics Institute. "We know that buildings aren't infinitely thin, that most towers do not lean, and that heavy objects require support. It might not be possible to know the three-dimensional size and shape of all the objects in the photo, but we can narrow the possibilities. In the same way, if a computer can replicate an image, block by block, it can better understand the scene."

This novel approach to automated scene analysis could eventually be used to understand not only the objects in a scene, but the spaces in between them and what might lie behind areas obscured by objects in the foreground, said Alexei A. Efros, associate professor of robotics and computer science at CMU. That level of detail would be important, for instance, if a robot needed to plan a route where it might walk, he noted.

Gupta presented the research, which he conducted with Efros and Robotics Professor Martial Hebert, at the European Conference on Computer Vision, Sept. 5-11 in Crete, Greece.

Understanding outdoor scenes remains one of the great challenges of artificial intelligence. One approach has been to identify features of a scene, such as buildings, roads and cars, but this provides no understanding of the geometry of the scene, such as the location of walkable surfaces. Another approach, which Hebert and Efros pioneered with former student Derek Hoiem, now of the University of Illinois, Urbana-Champaign, has been to map the planar surfaces of an image to create a rough 3-D depiction, similar to a pop-up book. But that approach can lead to depictions that are highly unlikely and sometimes physically impossible.

In the new method devised by Gupta, Efros and Hebert, the image is first broken into various segments corresponding to objects in the image. Once the ground and sky are identified, other segments are assigned potential geometric shapes. The shapes also are categorized as light or heavy, depending on appearance; a surface that appears to be a brick wall, for instance, would be classified as heavy.
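The pipeline described above — segment the image, identify ground and sky, then assign candidate shapes and a weight class to the remaining regions — can be sketched roughly as follows. All of the names here (`Segment`, the shape catalogue, the texture-based heaviness heuristic) are illustrative assumptions, not the authors' actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical catalogue of block shapes; the paper's actual catalogue differs.
CUBOID, VERTICAL_SLAB, GROUND_PLANE, SKY_PLANE = range(4)

# Appearance cues taken as evidence that a surface is "heavy" (an assumption,
# motivated by the brick-wall example in the article).
HEAVY_TEXTURES = {"brick", "stone", "concrete"}

@dataclass
class Segment:
    label: str                # e.g. "sky", "ground", "building"
    texture: str = ""         # dominant appearance descriptor
    shapes: list = field(default_factory=list)  # candidate block shapes
    heavy: bool = False       # light vs. heavy classification

def classify(segments):
    """Assign candidate geometric shapes and a weight class to each segment."""
    for seg in segments:
        if seg.label == "sky":
            seg.shapes = [SKY_PLANE]
        elif seg.label == "ground":
            seg.shapes = [GROUND_PLANE]
        else:
            # Other regions keep several geometric hypotheses open; later
            # physical reasoning narrows them down.
            seg.shapes = [CUBOID, VERTICAL_SLAB]
            seg.heavy = seg.texture in HEAVY_TEXTURES
    return segments
```

A brick-textured "building" segment, for instance, would come back with both block hypotheses and `heavy=True`, while sky and ground are pinned to a single plane each.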

The computer then attempts to reconstruct the image using the virtual blocks. If a heavy block appears unsupported, the computer must either substitute an appropriately shaped block or assume that part of the block is hidden in the original image.
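The stability test described above — flag heavy blocks that have nothing beneath them, then repair the reconstruction — might look like this minimal sketch. The `support` relation and the fallback to an occlusion assumption are my own simplifications, not the paper's formulation.

```python
def unsupported(blocks, support):
    """Return names of blocks that violate the support constraint.

    blocks:  dict name -> {"heavy": bool, "on_ground": bool}
    support: dict name -> set of names of blocks supporting it
    """
    return [name for name, props in blocks.items()
            if props["heavy"]
            and not props["on_ground"]
            and not support.get(name)]

def repair(blocks, support):
    """Resolve each violation by assuming the supporting structure is
    occluded by foreground objects (the simplest of the two repairs the
    article mentions; the other would swap in a different block shape)."""
    for name in unsupported(blocks, support):
        support[name] = {"<occluded support>"}
    return support
```

After `repair`, every heavy block either rests on the ground, has a visible supporter, or is assumed to have one hidden from view, so the reconstruction is physically plausible.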

Gupta said that because this qualitative volumetric approach to scene understanding is so new, no established datasets or evaluation methodologies exist for it. In estimating the layout of surfaces other than sky and ground, he said, the method is better than 70 percent accurate, and its performance is almost as good when its segmentation is compared to ground truth. Overall, Gupta assesses the analysis as very good for 30 to 40 percent of images and adequate for another 20 to 30 percent.
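The surface-layout figure quoted above suggests a per-pixel accuracy measured against hand-labeled ground truth, excluding sky and ground. That metric (my reading of the reported evaluation, not a definition from the paper) reduces to a few lines:

```python
def surface_accuracy(pred, truth):
    """Fraction of predicted surface labels that match ground truth,
    ignoring sky and ground pixels, per the reported evaluation."""
    pairs = [(p, t) for p, t in zip(pred, truth)
             if t not in ("sky", "ground")]
    if not pairs:
        return 0.0
    return sum(p == t for p, t in pairs) / len(pairs)
```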


Story Source:

The above story is based on materials provided by Carnegie Mellon University. Note: Materials may be edited for content and length.


Cite This Page:

Carnegie Mellon University. "New method helps computer vision systems decipher outdoor scenes." ScienceDaily. ScienceDaily, 10 September 2010. <www.sciencedaily.com/releases/2010/09/100909114108.htm>.
