Page 4: Research news on Computational 3D vision

Computational 3D vision concerns algorithms and sensor systems that infer three-dimensional structure, motion, and semantics from visual and related signals. Methods span monocular and multi-view 3D reconstruction, depth estimation, inverse rendering, and 4D scene capture, often integrating LiDAR, radar, infrared, and event or neuromorphic sensors. Deep learning architectures and data-driven simulation play central roles in segmentation, pose estimation, anomaly detection, and novel view synthesis, enabling robust perception, mapping, and editing of complex environments for robotics, autonomous systems, and immersive displays.

Computer Sciences

Creating realistic 3D scenes from everyday online photos

A new approach is making it easier to visualize lifelike 3D environments from everyday photos already shared online, opening new possibilities in industries such as gaming, virtual tourism and cultural preservation.

Computer Sciences

Making simulations more accurate than ever with deep learning

Future events such as the weather or satellite trajectories are computed in tiny time steps, so the computation must be both efficient and as accurate as possible at each step lest errors pile up. A Kobe University team has ...
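The error pile-up the teaser describes is the classic behavior of step-wise numerical integration: a first-order method's per-step error compounds over many steps, so halving the step size roughly halves the total error. A minimal sketch using the explicit Euler method on dy/dt = y (exact solution e^t) illustrates this; it is only an assumed textbook illustration, not the Kobe University team's method.

```python
import math

def euler(f, y0, t_end, n_steps):
    """Integrate dy/dt = f(t, y) from t=0 to t_end with explicit Euler."""
    t, y = 0.0, y0
    h = t_end / n_steps
    for _ in range(n_steps):
        y += h * f(t, y)  # one tiny time step; each step adds a small error
        t += h
    return y

# dy/dt = y with y(0) = 1 has the exact solution y(1) = e.
exact = math.e
for n in (10, 100, 1000):
    approx = euler(lambda t, y: y, 1.0, 1.0, n)
    print(f"steps={n:5d}  error={abs(approx - exact):.5f}")
```

Running this shows the accumulated error shrinking about tenfold for each tenfold increase in step count, which is why long-horizon simulations like weather or orbit prediction need both small steps and a highly accurate per-step rule.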

Page 4 of 17