Research news on Computational 3D vision

Computational 3D vision concerns algorithms and sensor systems that infer three-dimensional structure, motion, and semantics from visual and related signals. Methods span monocular and multi-view 3D reconstruction, depth estimation, inverse rendering, and 4D scene capture, often integrating LiDAR, radar, infrared, and event-based (neuromorphic) sensors. Deep learning architectures and data-driven simulation play central roles in segmentation, pose estimation, anomaly detection, and novel view synthesis, enabling robust perception, mapping, and editing of complex environments for robotics, autonomous systems, and immersive displays.
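As a concrete illustration of the multi-view reconstruction mentioned above, the sketch below triangulates a single 3D point from two calibrated views using the standard linear (DLT) method. The camera matrices and the point are synthetic values chosen for the example, not drawn from any of the studies listed here.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D image points (u, v) observed in each view.
    Returns the 3D point in Euclidean coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) - P[0] @ X = 0, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of A with the
    # smallest singular value, then dehomogenize.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a Euclidean 3D point with camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic setup (assumed): identity intrinsics, second camera
# translated one unit along the x-axis.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])

x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))  # the noiseless case recovers the point exactly
```

With noisy observations the same SVD solution becomes a least-squares estimate, which is why practical pipelines typically refine it with a nonlinear reprojection-error minimization.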

Engineering

Excuse me, is that solar panel pointing in the right direction?

On a bright morning, graduate student Jeremy Klotz and professor Shree Nayar walked through upper Manhattan with a tall tripod and a camera that takes 360-degree images. Their route took them to bike docking stations, which ...

Security

AI system detects manipulated video frames with 95% accuracy

With the rapid spread of digital content, doctored videos pose growing risks across media, security, and legal domains. A new study published in The Journal of Engineering Research introduces an automated approach to detect ...
