Google's new system captures character lighting for virtually any environment

Image: Google's new volumetric capture system captures full-body reflectance of 3D human performances and blends them seamlessly into the real world through AR or into digital scenes in films, games, and more. Credit: SIGGRAPH Asia

Even novice photographers and videographers shooting on handheld devices pay attention to their subject's lighting. Lighting is critical in filmmaking, gaming, and virtual and augmented reality; it can make or break the quality of a scene and of the actors and performers in it. Yet replicating realistic character lighting remains a difficult challenge in computer graphics and computer vision.

While significant progress has been made on volumetric capture systems that focus on 3D geometric reconstruction with high-resolution textures, such as methods for achieving realistic shapes and textures of the human face, much less work has been done to recover the photometric properties needed to relight characters. Results from such systems lack fine detail, and the subject's shading is baked into the texture.

Computer scientists at Google are revolutionizing this area of volumetric capture technology with a novel, comprehensive system that is able, for the first time, to capture full-body reflectance of 3D human performances and seamlessly blend them into the real world through AR or into digital scenes in films, games, and more. Google will present the new system, called The Relightables, at ACM SIGGRAPH Asia, held Nov. 17 to 20 in Brisbane, Australia. SIGGRAPH Asia, now in its 12th year, attracts the most respected technical and creative people from around the world in computer graphics, animation, interactivity, gaming, and emerging technologies.

There have been major advances in what the industry calls 3D capture systems. Through these sophisticated systems, viewers have experienced digital characters come to life on the big screen in blockbusters such as Avatar and the Avengers series.

Indeed, volumetric capture technology has reached a high level of quality, but many of these reconstructions still lack true photorealism. In particular, despite using high-end studio setups with green screens, these systems still struggle to capture high-frequency details of humans, and they recover only a single, fixed illumination condition. That makes them unsuitable for photorealistic rendering of actors or performers in arbitrary scenes under different lighting conditions.

Google's Relightables system makes it possible to customize lighting on characters in real time or relight them in any given scene or environment.

They demonstrate this on subjects recorded inside a custom geodesic sphere outfitted with 331 custom color LED lights (a setup also known as a Light Stage), an array of high-resolution cameras, and a set of custom high-resolution depth sensors. The Relightables system captures about 65 GB of raw data per second from nearly 100 cameras, and its computational framework makes it possible to process data at that scale.
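To get a feel for those numbers, here is a back-of-envelope sketch in Python. Only the 65 GB/s and roughly 100-camera figures come from the article; the frame rate and sensor bit depth below are illustrative assumptions:

```python
# Back-of-envelope data-rate arithmetic for the capture rig described above.
# Known from the article: ~65 GB/s of raw data across ~100 cameras.
# The frame rate and bit depth are assumptions for illustration only.

TOTAL_RATE_GB_S = 65    # raw capture rate reported for the whole rig
NUM_CAMERAS = 100       # "nearly 100 cameras"
FPS = 60                # assumed frames per second
BITS_PER_PIXEL = 12     # assumed raw sensor bit depth

per_camera_gb_s = TOTAL_RATE_GB_S / NUM_CAMERAS
bytes_per_frame = per_camera_gb_s * 1e9 / FPS
megapixels = bytes_per_frame / (BITS_PER_PIXEL / 8) / 1e6

print(f"Per camera: {per_camera_gb_s:.2f} GB/s")
print(f"Per frame:  {bytes_per_frame / 1e6:.1f} MB")
print(f"Implied resolution: ~{megapixels:.1f} MP at {FPS} fps, {BITS_PER_PIXEL}-bit raw")
```

Under these assumed settings, each camera would stream roughly 10 MB per frame, which illustrates why a dedicated computational framework is needed to process the data.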

Their system captures reflectance information on a person; the way lighting interacts with skin is a major factor in how realistic digital humans appear. Previous attempts either used flat lighting or required computer-generated characters. Not only does the system capture a person's reflectance, it records while the person moves freely within the capture volume. As a result, the captured performance can be relit in arbitrary environments.
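As a rough intuition for why captured reflectance enables relighting, the toy sketch below re-shades the same surface under two different lights using a simple Lambertian model. This is a minimal illustration under assumed inputs, not the Relightables' actual (far richer) photometric pipeline:

```python
# Toy illustration: with per-pixel albedo and surface normals (instead of
# shading baked into the texture), a surface can be re-shaded under any
# new light. A minimal Lambertian sketch, not Google's actual model.
import numpy as np

def relight_lambertian(albedo, normals, light_dir, light_color):
    """albedo: (H, W, 3), normals: (H, W, 3) unit vectors,
    light_dir: (3,) unit vector, light_color: (3,) RGB intensity."""
    # Cosine falloff; clamp back-facing points to zero.
    n_dot_l = np.clip(np.einsum("hwc,c->hw", normals, light_dir), 0.0, None)
    return albedo * n_dot_l[..., None] * light_color

# Usage: the same captured assets rendered under two different lights.
H, W = 4, 4
albedo = np.full((H, W, 3), 0.8)
normals = np.zeros((H, W, 3))
normals[..., 2] = 1.0  # flat patch facing the +z axis
warm = relight_lambertian(albedo, normals,
                          np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.9, 0.7]))
cool = relight_lambertian(albedo, normals,
                          np.array([0.6, 0.0, 0.8]), np.array([0.7, 0.8, 1.0]))
print(warm[0, 0], cool[0, 0])
```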

Historically, cameras record people from a single viewpoint under a single lighting condition. The new system, the researchers note, lets users record someone and then view them from any viewpoint under any lighting condition, removing the need for a green screen in special-effects work and allowing far more flexible lighting.
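One way to see why the green screen becomes unnecessary: a 3D reconstruction lets the renderer produce its own per-pixel coverage (alpha) mask, so compositing into a new background needs no chroma keying. A minimal, hypothetical sketch of the standard "over" operator:

```python
# Hypothetical sketch: compositing a rendered character over a new
# background using a renderer-produced alpha mask (no chroma key).
import numpy as np

def composite_over(render_rgba, background_rgb):
    """render_rgba: (H, W, 4) character render with alpha,
    background_rgb: (H, W, 3) arbitrary new environment."""
    rgb, alpha = render_rgba[..., :3], render_rgba[..., 3:4]
    return rgb * alpha + background_rgb * (1.0 - alpha)

# Usage: a 2x2 toy image, one opaque row and one transparent row.
render = np.zeros((2, 2, 4))
render[0, :, :] = [0.9, 0.7, 0.6, 1.0]  # opaque character pixels
background = np.full((2, 2, 3), 0.2)    # dark new environment
print(composite_over(render, background))
```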

The interactions of space, light, and shadow between a performer and their environment play a critical role in creating a sense of presence. Beyond simply "cutting and pasting" a 3D video capture, the system makes it possible to record someone and then seamlessly place them into new environments, whether in their own space for AR experiences or in the world of a VR, film, or game experience.

At SIGGRAPH Asia, The Relightables team will present the components of their system, from capture to processing to display, with video demos of each stage. They will walk attendees through the ins and outs of building The Relightables, describing the major challenges they tackled and showcasing applications and renderings.

More information: The researchers' paper can be accessed at dl.acm.org/citation.cfm?id=3356571.

Provided by Association for Computing Machinery
