Researchers develop AI that can understand light in photographs
- Date:
- February 20, 2024
- Source:
- Simon Fraser University
- Summary:
- Despite significant progress in developing AI systems that can understand the physical world like humans do, researchers have struggled with modelling a certain aspect of our visual system: the perception of light.
"Determining the influence of light in a given photograph is a bit like trying to separate the ingredients out of an already baked cake." explains Chris Careaga, a PhD student in the Computational Photography Lab at SFU. The task requires undoing the complicated interactions between light and surfaces in a scene. This problem is referred to as intrinsic decomposition, and has been studied for nearly half a century.
In a new paper published in the journal ACM Transactions on Graphics, researchers in the Computational Photography Lab develop an AI approach to intrinsic decomposition that works on a wide range of images. Their method automatically separates an image into two layers: one with only lighting effects and one with the true colours of objects in the scene. "The main innovation behind our work is to create a system of neural networks that are individually tasked with easier problems. They work together to understand the illumination in a photograph," Careaga adds.
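The paper itself relies on a cascade of trained neural networks, but the underlying intrinsic-image model those networks target is simple to state: each pixel is approximately the product of the surface's true colour (albedo) and the light falling on it (shading). The sketch below is only a toy illustration of that model; the function names and example values are assumptions for demonstration, not code from the paper.

```python
import numpy as np

def recombine(albedo, shading):
    """Reconstruct an image from its intrinsic layers.

    Under the classic intrinsic-image model, each pixel is the product of
    the surface's true colour (albedo) and the light falling on it
    (shading): image = albedo * shading.
    """
    return np.clip(albedo * shading, 0.0, 1.0)

def recover_albedo(image, shading, eps=1e-6):
    """Invert the model: divide an estimated shading layer out of the
    photograph to obtain the illumination-free colours of the scene."""
    return np.clip(image / (shading + eps), 0.0, 1.0)

# Toy example: a flat red surface lit by a left-to-right brightness gradient.
h, w = 4, 4
albedo = np.zeros((h, w, 3))
albedo[..., 0] = 0.8                                    # the surface's true colour
shading = np.linspace(0.2, 1.0, w).reshape(1, w, 1).repeat(h, axis=0)
image = recombine(albedo, shading)                      # what the camera records
print(np.allclose(recover_albedo(image, shading), albedo))  # True
```

The hard part, and the paper's contribution, is estimating the shading layer from a single photograph in the first place; once both layers are available, lighting and colour can be edited independently and recombined as above.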
Although intrinsic decomposition has been studied for decades, SFU's new invention is the first in the field to accomplish this task for any HD image that a person might take with their camera. "By editing the lighting and colours separately, a whole range of applications that are reserved for CGI and VFX become possible for regular image editing," says Dr. Yağız Aksoy, who leads the Computational Photography Lab at SFU. "This physical understanding of light makes it an invaluable and accessible tool for content creators, photo editors, and post-production artists, as well as for new technologies such as augmented reality and spatial computing."
The group has since extended their intrinsic decomposition approach, applying it to the problem of image compositing: "When you insert an object or person from one image into another, it's usually obvious that it's edited since the lighting and colours don't match," explains Careaga. "Using our intrinsic decomposition technique, we can alter the lighting of the inserted object to make it appear more realistic in the new scene." In addition to publishing a paper on this, presented at SIGGRAPH Asia last December, the group has also developed a computer interface that allows users to interactively edit the lighting of these "composited" images. S. Mahdi H. Miangoleh, a PhD student in Aksoy's lab, also contributed to this work.
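The group's compositing system is learning-based and interactive; purely as a rough illustration of the idea, the hypothetical helper below rescales an inserted object's shading layer toward the illumination level of the region it is pasted into, then recombines it with the albedo, using the same albedo-times-shading model as the earlier sketch. The single global-scale heuristic and all names here are assumptions for the sketch, not the authors' method.

```python
import numpy as np

def harmonize_composite(albedo, obj_shading, scene_shading, mask):
    """Illustrative relighting of a composited object.

    albedo:        (h, w, 3) albedo of the composite (object already pasted in)
    obj_shading:   (h, w, 1) shading layer estimated for the inserted object
    scene_shading: (h, w, 1) shading layer estimated for the destination photo
    mask:          (h, w)    1 inside the inserted object, 0 elsewhere
    """
    obj = mask.astype(bool)
    # How bright the object's own lighting is versus the scene's lighting
    # in the region the object now occupies.
    scale = scene_shading[obj].mean() / max(obj_shading[obj].mean(), 1e-6)
    # Keep the scene's shading outside the object; inside it, use the
    # object's shading rescaled to the scene's illumination level.
    shading = np.where(mask[..., None].astype(bool),
                       obj_shading * scale,
                       scene_shading)
    return np.clip(albedo * shading, 0.0, 1.0)
```

Because the lighting lives in its own layer, this kind of adjustment does not disturb the object's colours, which is what makes the composite look like it was photographed in the new scene.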
Aksoy and his team plan to extend their methods to video for use in film post-production, and to further develop the AI's capabilities for interactive illumination editing. They emphasize a creativity-driven approach to AI in film production, aiming to empower independent and low-budget productions. To better understand the challenges in these production settings, the group has developed a computational photography studio at the Simon Fraser University campus where they conduct research in an active production environment. They also produce videos explaining their work.
Story Source:
Materials provided by Simon Fraser University. Note: Content may be edited for style and length.
Journal Reference:
- Chris Careaga, Yağız Aksoy. Intrinsic Image Decomposition via Ordinal Shading. ACM Transactions on Graphics, 2023; 43 (1): 1 DOI: 10.1145/3630750