Page 8: Research news on Computational 3D vision

Computational 3D vision concerns algorithms and sensor systems that infer three-dimensional structure, motion, and semantics from visual and related signals. Methods span monocular and multi-view 3D reconstruction, depth estimation, inverse rendering, and 4D scene capture, often integrating LiDAR, radar, infrared, and event or neuromorphic sensors. Deep learning architectures and data-driven simulation play central roles in segmentation, pose estimation, anomaly detection, and novel view synthesis, enabling robust perception, mapping, and editing of complex environments for robotics, autonomous systems, and immersive displays.

Computer Sciences

Making simulations more accurate than ever with deep learning

Predictions of future events such as the weather or satellite trajectories are computed in tiny time steps, so each step must be both efficient and as accurate as possible, lest errors pile up. A Kobe University team has ...
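The teaser does not describe the Kobe University method itself, but the error-accumulation point it makes can be illustrated with a toy time-stepping simulation. The sketch below (an assumption for illustration, not the team's approach) integrates the simple model dy/dt = y with forward Euler steps and compares the accumulated error at t = 1 for a coarse versus a fine step size:

```python
import math

def euler_simulate(dt: float, t_end: float = 1.0) -> float:
    """Integrate dy/dt = y from y(0) = 1 with forward Euler steps of size dt."""
    y = 1.0
    steps = round(t_end / dt)
    for _ in range(steps):
        y += dt * y  # one tiny time step; its truncation error compounds
    return y

exact = math.e  # true value y(1) = e for this toy model
err_coarse = abs(euler_simulate(0.1) - exact)   # fewer, larger steps
err_fine = abs(euler_simulate(0.01) - exact)    # more, smaller steps
print(err_coarse, err_fine)
```

Halving or shrinking the step reduces the error per step, but only at the cost of many more steps, which is why per-step accuracy and efficiency both matter in long simulations.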

Page 8 of 20