Page 16: Research news on Computational 3D vision

Computational 3D vision concerns algorithms and sensor systems that infer three-dimensional structure, motion, and semantics from visual and related signals. Methods span monocular and multi-view 3D reconstruction, depth estimation, inverse rendering, and 4D scene capture, often integrating LiDAR, radar, infrared, and event or neuromorphic sensors. Deep learning architectures and data-driven simulation play central roles in segmentation, pose estimation, anomaly detection, and novel view synthesis, enabling robust perception, mapping, and editing of complex environments for robotics, autonomous systems, and immersive displays.

Computer Sciences

3D streaming gets leaner by seeing only what matters

A new approach to streaming technology may significantly improve how users experience virtual reality and augmented reality environments, according to a study from NYU Tandon School of Engineering.
