
Scientists from the Skoltech ADASE (Advanced Data Analytics in Science and Engineering) lab have found a way to enhance depth map resolution, which should make virtual reality and computer graphics more realistic. They presented their research results at the prestigious International Conference on Computer Vision 2019 in Korea.

When taking a photo, we capture information about the objects around us, with the different pixels in the image containing the colors of the respective parts of the scene. Depth maps are images that capture spatial information instead: their pixels contain the distances from the camera to the respective points in space. Applications such as computer graphics and augmented or virtual reality use depth maps to reconstruct a 3-D object's shape and, for instance, display it on a computer screen.
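To make the idea concrete, a depth map can be turned back into 3-D points when the camera's parameters are known. The sketch below only illustrates this back-projection step under a simple pinhole-camera assumption; the function name and parameters are ours, not taken from the study.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map into a 3-D point cloud.

    Assumes a pinhole camera: depth[v, u] is the distance along the
    optical axis to the point seen at pixel (u, v), and fx, fy, cx, cy
    are the (hypothetical) camera intrinsics.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx  # horizontal offset from the optical axis
    y = (v - cy) * depth / fy  # vertical offset from the optical axis
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Example: a 4x4 depth map of a flat wall two metres from the camera
points = depth_to_points(np.full((4, 4), 2.0), fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```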

One of the issues with depth cameras is that their resolution, that is, the spatial frequency of distance measurements, is insufficient for restoring the high-quality shape of an object, making virtual reconstructions look far from realistic.

The researchers therefore faced the challenge of finding a way to obtain high-resolution depth maps from low-resolution ones.
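For illustration, the simplest thing one can do with a low-resolution depth map is to naively upsample it, which adds pixels but no new geometric detail; this is the kind of trivial baseline that learned super-resolution methods aim to beat. A minimal sketch (not the researchers' method):

```python
import numpy as np

def upsample_nearest(depth_lr, factor):
    """Naive nearest-neighbour upsampling of a low-resolution depth map.
    It produces more pixels but no new geometric detail."""
    return np.repeat(np.repeat(depth_lr, factor, axis=0), factor, axis=1)

depth_lr = np.random.rand(60, 80)          # stand-in low-resolution depth map
depth_hr = upsample_nearest(depth_lr, 4)   # 240 x 320, but still "blocky"
```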

Scientists from the Skoltech ADASE lab have proposed assessing the reconstruction quality with a novel method closely tied to human perception. Training an artificial neural network with this quality assessment technique produces a depth map super-resolution method that substantially outperforms existing methods in the visual quality of the result.

"When dealing with super-resolved depth maps, one should assess the quality of the result to first compare the performance of different methods, and, secondly, to use it as feedback for further improvements. The easiest way is to compare the result to some reference. The overwhelming majority of works on depth map super-resolution use for this purpose mean the difference between super-resolved and reference depth values. By no means does this method reflect the visual quality of the 3-D reconstruction obtained from the super-resolved depth map," explains the first author of the study, Oleg Voynov.

"We propose an altogether different method, which leverages the human perception of the difference between visualizations of the 3-D reconstructions obtained from super-resolved and reference depth maps. The graphics you obtain with this method looks highly realistic. We hope that our method will find extensive use," says one of the developers, Alexey Artemov.

More information: Oleg Voynov et al., "Perceptual Deep Depth Super-Resolution," The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 5653-5663. openaccess.thecvf.com/content_ … ICCV_2019_paper.html