Mapping the old with the new: Handheld 3D scanning of Oxford's Sheldonian Theater

Credit: University of Oxford

A collaborative project involving the Oxford Robotics Institute, the Sheldonian Theater, and multinational construction equipment company Hilti is advancing the next generation of digital mapping technologies.

Accurate, 3D maps of buildings are valuable assets for many different purposes, from preserving historic sites to mapping disaster zones. A key challenge limiting their wider use, however, is that producing them is highly labor-intensive and requires expensive technical equipment. Researchers from Oxford Robotics Institute (ORI), part of the Department of Engineering Science, have addressed this by developing lightweight, handheld techniques capable of mapping entire buildings in a matter of hours.

To produce these maps, the team had to overcome a central challenge in robotics and autonomous technologies: simultaneous localization and mapping (SLAM). Associate Professor Maurice Fallon, who leads the Dynamic Robot Systems Group, explained, "SLAM is the computational problem of equipping robots with situational awareness so that they can construct a map of an unknown environment while simultaneously keeping track of their location within it. To map a building quickly, this requires robust algorithms that can combine the sensor and positional data to reconstruct the surrounding environment."
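The core idea of SLAM can be illustrated in one dimension: a robot's odometry drifts over time, but repeatedly observing a landmark lets it correct its own position estimate while also refining its map of where that landmark is. The sketch below is a deliberately simplified toy, not the ORI system; all variable names and noise values are illustrative assumptions.

```python
# Toy 1D illustration of the SLAM idea: a robot estimates its own
# position and the position of one landmark (its "map") at the same time.
# Gains and noise levels here are illustrative assumptions only.
import random

random.seed(0)

true_pose = 0.0
landmark = 10.0          # true landmark position (unknown to the robot)

est_pose = 0.0           # robot's belief about its own position
est_landmark = None      # robot's belief about the landmark (the map)

for step in range(20):
    # Move forward 0.5 m; odometry is noisy, so the pose estimate drifts.
    true_pose += 0.5
    est_pose += 0.5 + random.gauss(0, 0.05)

    # Range sensor: noisy distance to the landmark.
    z = (landmark - true_pose) + random.gauss(0, 0.02)

    if est_landmark is None:
        # Mapping: initialise the landmark from the first observation.
        est_landmark = est_pose + z
    else:
        # Localisation: the mapped landmark corrects the drifting pose...
        innovation = (est_landmark - est_pose) - z
        est_pose += 0.5 * innovation
        # ...and each observation also refines the map slightly.
        est_landmark -= 0.1 * innovation

print(f"pose error:     {abs(est_pose - true_pose):.3f} m")
print(f"landmark error: {abs(est_landmark - landmark):.3f} m")
```

Without the correction step, the pose error would grow without bound as odometry noise accumulates; coupling localization and mapping keeps both errors near the sensor noise level, which is the essence of the SLAM problem Fallon describes.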

The ORI team have developed a new handheld device that fuses different types of sensors so that their SLAM algorithms can work in many environments. This increases the robustness and accuracy of the resulting 3D model. The device combines five cameras, an inertial measurement unit (IMU), and a laser scanner (LIDAR) into a single unit.

To test the capabilities of their novel SLAM algorithms, the team partnered with the operations team for the Sheldonian Theater: a six-floor, Grade I listed building that is the official ceremonial hall of the University of Oxford. First, the team produced a high-resolution map using a standard survey-grade 3D laser scanner, mounted on a tripod and weighing ten kilograms.

Credit: University of Oxford

"The disadvantages of the conventional mapping method became immediately apparent," said project lead Lintong Zhang, a DPhil student in the Dynamic Robot Systems group. "Completing the full survey required placing the tripod and scanner in hundreds of different positions, and it was very challenging to maneuver it up the narrow staircases. The entire process took four days, during which the theater had to be closed to the public." In contrast, mapping the Sheldonian with the handheld device took about 30 minutes: the time required to walk around the building.

Having compiled datasets from both devices, the team partnered with Hilti Corporation to give researchers from around the world the opportunity to work with the data. The information captured by the handheld device became the basis of the Hilti SLAM Challenge 2022: an annual competition to encourage the development of improved SLAM algorithms by providing datasets from challenging real-world environments.

"SLAM algorithms are typically developed in confined laboratory environments that aren't representative of real world conditions," said Lintong. "With its unique architecture, dark corners, long corridors, and dynamic objects, the Sheldonian dataset was perfectly designed to break existing SLAM algorithms and challenge researchers."

The competition attracted over 40 submissions from industry and academic groups worldwide. After submitting their algorithms based on the dataset from the handheld device, each team received a detailed accuracy report benchmarked against the map produced using the survey-grade 3D scanner. The best-performing teams, CSIRO (a research lab from Australia) and V&R Vision and Robotics (a company based in Germany), achieved results that were within 1 cm of the survey-grade model throughout. This is a key threshold for the algorithms to be applied within the construction and surveying sectors.

The team foresees that rapid 3D scanning could have widespread uses within building preservation, and could also help improve accessibility by enabling those with limited mobility to tour sites of interest virtually. "We are also investigating how we can integrate this technology with our work on autonomous robots and drones," added Associate Professor Fallon. "This could enable rapid mapping of zones that are difficult or unsafe for humans to access, such as disaster zones and decommissioned nuclear power plants."

Professor Fallon's team has also taken the first steps to commercialize the technology through a spin-out called NavLive.

The chair of the Sheldonian Curators, Professor Georgina Paul, said, "We are delighted that the Sheldonian Theater, one of the University's most prominent historic buildings, was the site for testing this up-to-the-minute 3D mapping technology.

"In thinking about increasing the accessibility of the Sheldonian to a more diverse array of visitors and audiences, having an interactive 3D model of the Theater helps the Curators and the teams who support them understand the structural detail of this fascinating 17th-century building. But the digital model is itself an attraction which we hope will draw those sitting at their computer screens to explore the Sheldonian."

The study is published in IEEE Robotics and Automation Letters.

More information: Lintong Zhang et al, Hilti-Oxford Dataset: A Millimeter-Accurate Benchmark for Simultaneous Localization and Mapping, IEEE Robotics and Automation Letters (2022). DOI: 10.1109/LRA.2022.3226077

Citation: Mapping the old with the new: Handheld 3D scanning of Oxford's Sheldonian Theater (2022, December 13) retrieved 1 June 2023 from
