Science fiction meets reality as researchers develop techniques to overcome obstructed views

Two-edge-resolved NLOS imaging scenario and hidden scene representation. a Depiction of the imaging scenario and the proposed projected-elevation spherical coordinate system. With the origin at the upper-left corner of the door frame, a hidden scene point is identified by its range ρ, azimuth θ, and projected-elevation ψ. b The projected-elevation ψ in the proposed coordinate system: it is the projection of the conventional elevation angle φ of spherical coordinates onto the xz-plane, so that tan(ψ) = tan(φ)·sec(θ). (For clarity, the z-axis is flipped from (a) to point upward.) c Elemental surface representation resulting from 10 equal divisions of the azimuth and projected-elevation axes at fixed range ρ. The red dot marks an example surface element centered at (ρ, θ, ψ) = (1, 11π/40, 13π/40), with angular extents of π/20 along both azimuth and projected-elevation. d, e Changes in the observed measurement as a hidden point source (red dot) moves from its position in (d) to a new position in (e) at fixed range and projected-elevation, with only its azimuthal angle changing. Light from a hidden scene point is occluded by the doorway edges, creating an illuminated region of trapezoidal shape on the ceiling. The slanted edge of the illuminated trapezoid in (d) is steeper than in (e) because the azimuthal angle of the point source increases from (d) to (e); the heights of the illuminated trapezoids are otherwise equal because the projected-elevation angle is unchanged. Credit: Nature Communications (2024). DOI: 10.1038/s41467-024-45397-7
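The caption's relation between projected-elevation ψ, conventional elevation φ, and azimuth θ, tan(ψ) = tan(φ)·sec(θ), can be checked numerically. A minimal sketch (function and variable names are my own, not from the paper):

```python
import math

def projected_elevation(phi, theta):
    """Projected elevation psi satisfying tan(psi) = tan(phi) * sec(theta)."""
    return math.atan(math.tan(phi) / math.cos(theta))

# At zero azimuth, sec(theta) = 1, so psi reduces to the ordinary elevation phi.
phi = math.pi / 6
assert abs(projected_elevation(phi, 0.0) - phi) < 1e-12

# For nonzero azimuth, sec(theta) > 1, so psi exceeds phi at fixed elevation.
psi = projected_elevation(phi, math.pi / 4)
print(psi > phi)  # True
```

This matches the caption's geometry: projecting the elevation angle onto the xz-plane stretches it by the secant of the azimuth.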

After a recent car crash, John Murray-Bruce wished he could have seen the other car coming. The crash reaffirmed the USF assistant professor of computer science and engineering's mission to create a technology that could do just that: See around obstacles and ultimately expand one's line of vision.

Using a single photograph, Murray-Bruce and his doctoral student, Robinson Czajkowski, created an algorithm that computes highly accurate, full-color, three-dimensional reconstructions of areas behind obstacles. The concept could not only help prevent car crashes but also aid law enforcement in hostage situations, search-and-rescue operations, and strategic military efforts.

"We're turning ordinary surfaces into mirrors to reveal regions, objects, and rooms that are outside our line of vision," Murray-Bruce said. "We live in a 3D world, so obtaining a more complete 3D picture of a scenario can be critical in a number of situations and applications."

As published in Nature Communications, Czajkowski and Murray-Bruce's research is the first of its kind to successfully reconstruct a hidden scene in 3D using an ordinary digital camera. The algorithm works by extracting information from the faint shadows cast on nearby surfaces, captured in the photograph, to create a high-quality reconstruction of the scene. Though the details are technical, the approach could have broad applications.

"These shadows are all around us," Czajkowski said. "The fact we can't see them with our naked eye doesn't mean they're not there."
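A common way to frame this kind of shadow-based reconstruction, shown here only as an illustrative sketch and not the authors' actual pipeline, is a linear inverse problem: the photograph y is modeled as a light-transport matrix A applied to the unknown hidden scene x, and x is estimated by regularized least squares. The toy transport matrix, noise level, and all names below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n_pix observed pixels, n_vox unknown hidden-scene elements.
n_pix, n_vox = 200, 50

# A: assumed light-transport matrix mapping hidden-scene intensities to
# pixel measurements (in the paper, this role is played by the occlusion
# pattern the doorway edges imprint on the observed surface).
A = rng.random((n_pix, n_vox))

# Ground-truth sparse hidden scene and a slightly noisy measurement.
x_true = np.zeros(n_vox)
x_true[[5, 20, 33]] = [1.0, 0.5, 2.0]
y = A @ x_true + 0.001 * rng.standard_normal(n_pix)

# Tikhonov-regularized least squares: minimize ||Ax - y||^2 + lam*||x||^2.
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_vox), A.T @ y)

print(np.allclose(x_hat, x_true, atol=0.05))  # True: recovery up to noise
```

The point of the sketch is that faint, structured shading carries enough information to invert for the hidden scene; the paper's contribution is making that inversion work in 3D, in color, from a single ordinary photograph.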

The idea of seeing around obstacles has been a topic of science-fiction movies and books for decades. Murray-Bruce says this research takes significant strides in bringing that concept to life.

Prior to this work, researchers had used ordinary cameras only to create rough 2D reconstructions of small spaces. The most successful demonstrations of 3D imaging of hidden scenes all required specialized, expensive equipment.

"Our work achieves a similar result using far less," Czajkowski said. "You don't need to spend a million dollars on equipment for this anymore."

Czajkowski and Murray-Bruce expect it will be 10 to 20 years before the technology is robust enough to be adopted by law enforcement and car manufacturers. For now, they plan to continue improving the technology's speed and accuracy to expand its future applications, including giving self-driving cars better safety and situational awareness.

"In just over a decade since the idea of seeing around corners emerged, there has been remarkable progress, and there is accelerating interest and research activity in the area," Murray-Bruce said. "This increased activity, along with access to better, more sensitive cameras and faster computing power, form the basis for my optimism on how soon this technology will become practical for a wide range of scenarios."

While the technology is still in its early stages, it is available for other researchers to test and reproduce in their own space.

More information: Robinson Czajkowski et al, Two-edge-resolved three-dimensional non-line-of-sight imaging with an ordinary camera, Nature Communications (2024). DOI: 10.1038/s41467-024-45397-7

Citation: Science fiction meets reality as researchers develop techniques to overcome obstructed views (2024, February 20) retrieved 12 April 2024 from
