
Illumination-Aware Imaging

Date:
October 19, 2009
Source:
Optical Society of America
Summary:
Conventional imaging systems incorporate a light source for illuminating an object and a separate sensing device for recording the light rays the object scatters. With lenses and software, the recorded information can be turned into a proper image. Human vision makes this look effortless: two eyes (and a powerful brain that processes visual information) give human observers a sense of depth. But how does a video camera attached to a robot "see" in three dimensions?
FULL STORY


Carnegie Mellon scientist Srinivasa Narasimhan believes that efficiently producing 3-D images for computer vision can best be addressed by thinking of a light source and sensor device as being equivalent. That is, they are dual parts of a single vision process.

For example, when a light illuminates a complicated subject, such as a densely branching tree, many views of the object must be captured. This requires moving the camera, which makes it hard to find corresponding locations across the different views.

In Narasimhan's approach, the camera and light source constitute a single system. Because the light source can be moved without changing the corresponding points in the images, complex reconstruction problems that were previously intractable can now be solved. Another approach is to interpose a pixelated mask at the light or the camera to selectively remove certain light rays from the imaging process. With the proper software, the resulting series of images can render detailed 3-D vision information more efficiently, especially when the object itself is moving.
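The press release gives no algorithmic detail, but the camera/light duality it describes is commonly modeled with a light-transport matrix: each camera image is the matrix applied to an illumination pattern, and by reciprocity, transposing the matrix swaps the roles of light and camera. The sketch below is a hypothetical toy illustration of that idea, including a pixelated mask that removes selected rays; the matrix here is random stand-in data, not a measured scene.

```python
import numpy as np

# Toy model of the camera/light duality described above (illustrative only).
# A light-transport matrix T maps light-source pixels to camera pixels:
#     camera_image = T @ light_pattern
# Reciprocity lets us swap the roles of light and camera by transposing T,
# giving the scene as "seen" from the light source's viewpoint.

rng = np.random.default_rng(0)
n_cam, n_light = 6, 4              # toy resolutions
T = rng.random((n_cam, n_light))   # stand-in for a measured transport matrix

flood = np.ones(n_light)           # fully-on light source
primal = T @ flood                 # ordinary camera image

# A pixelated mask at the light selectively removes certain rays,
# as in the masking approach mentioned in the article.
mask = np.array([1.0, 0.0, 1.0, 0.0])
masked = T @ (flood * mask)        # image with selected rays blocked

dual = T.T @ np.ones(n_cam)        # dual image: light and camera swapped

print(primal.shape, dual.shape)    # (6,) (4,)
```

Note that the mask can only remove light, never add it, so every pixel of the masked image is bounded by the flood-lit image; this is the property that lets mask sequences isolate individual ray contributions.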

Narasimhan calls the process, interchangeably, illumination-aware imaging or imaging-aware illumination. He predicts it will prove valuable for producing better robotic vision and for rendering 3-D shapes in computer graphics.

Reference: Paper CtuD5, "Illuminating Cameras," is presented at 5:15 p.m. Tuesday, Oct. 13.

The latest technology in optics and lasers will be on display at the Optical Society's (OSA) Annual Meeting, Frontiers in Optics (FiO), which takes place Oct. 11-15 at the Fairmont San Jose Hotel and the Sainte Claire Hotel in San Jose, Calif.


Story Source:

The above story is based on materials provided by Optical Society of America. Note: Materials may be edited for content and length.


Cite This Page:

Optical Society of America. "Illumination-Aware Imaging." ScienceDaily. ScienceDaily, 19 October 2009. <www.sciencedaily.com/releases/2009/10/091015191043.htm>.
