The missing piece to faster, cheaper and more accurate 3D mapping

The authors Davide A. Cucci, Aurélien Brun and Jan Skaloud. © Alain Herzog/EPFL

Engineers at EPFL and the University of Geneva believe they hold the key to automated drone mapping. By combining artificial intelligence with a new algorithm, their method promises to considerably reduce the time and resources needed to accurately scan complex landscapes.

Three-dimensional (3D) mapping is a highly useful tool for tasks such as monitoring construction sites, tracking the effects of climate change on ecosystems, and verifying the safety of roads and bridges. However, the technology currently used to automate the mapping process is limited, making it a long and costly endeavor.

"Switzerland is currently mapping its entire landscape using airborne laser scanners - the first time since 2000. But the process will take four to five years since the scanners have to fly at an altitude below one kilometer if they are to collect data with sufficient detail and accuracy," says Jan Skaloud, a senior scientist at the Geodetic Engineering Laboratory (Topo) within EPFL’s School of Architecture, Civil and Environmental Engineering (ENAC). "With our method, surveyors can send laser scanners as high as five kilometers and still maintain accuracy. Our lasers are more sensitive and can beam light over a much wider area, making the process five times faster."

The method is described in a paper published in the ISPRS Journal of Photogrammetry and Remote Sensing. Its lead author, Aurélien Brun, is a recent Master’s graduate from EPFL and winner of an award from the Western Switzerland Association of Surveyor Engineers (IGSO); he co-authored the study with Jan Skaloud and Davide Cucci, a senior research associate at the Research Center for Statistics of the University of Geneva’s Geneva School of Economics and Management who works regularly with Topo.

Missing the point

LiDAR laser scanners beam millions of pulses of light on surfaces to create high-resolution digital twins - computer-based replicas of objects or landscapes - that can be used in architecture, road systems and manufacturing, for example. Lasers are particularly effective at collecting spatial data since they don’t depend on ambient light, can collect accurate data at large distances and can essentially "see through" vegetation. But lasers’ accuracy is often lost when they’re mounted on drones or other moving vehicles, especially in areas with numerous obstacles like dense cities, underground infrastructure sites, and places where GPS signals are interrupted. This results in gaps and misalignments in the data points used to generate 3D maps (also known as laser-point clouds), and can lead to double vision of scanned objects. These errors must be corrected manually before a map can be used.
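To make the problem concrete, here is a minimal illustrative sketch (not the authors' code) of how a return from a moving laser scanner is placed in the world using the platform's estimated pose. The function name and numbers are hypothetical; they only show how a small trajectory error makes the same surface appear twice in the point cloud.

```python
# Illustrative sketch, not the published method: direct georeferencing of a
# kinematic LiDAR return. Every world-frame point inherits any error in the
# platform's estimated pose, which is what produces the gaps and "double
# vision" described above.
import numpy as np

def georeference(point_sensor, R_world_sensor, platform_position):
    """Map one LiDAR return from the scanner frame to the world frame.

    point_sensor      : (3,) return measured in the scanner frame
    R_world_sensor    : (3, 3) scanner orientation at the pulse time
    platform_position : (3,) platform position at the pulse time
    """
    return R_world_sensor @ point_sensor + platform_position

# The same surface point seen in two passes: if the second pose estimate is
# off by a few centimetres (e.g. after a GPS outage), the two world-frame
# points no longer coincide and the object shows up twice in the cloud.
p_sensor_pass1 = np.array([12.0, 0.5, -30.0])
p_sensor_pass2 = np.array([-11.8, 0.4, -30.1])

pose1 = (np.eye(3), np.array([100.0, 200.0, 80.0]))
pose2_true = (np.eye(3), np.array([123.8, 200.1, 80.1]))
pose2_drifted = (np.eye(3), pose2_true[1] + np.array([0.15, -0.10, 0.05]))

p1 = georeference(p_sensor_pass1, *pose1)
p2 = georeference(p_sensor_pass2, *pose2_drifted)
print("apparent offset between the two observations:", p2 - p1)
```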

"For now, there’s no way to generate perfectly aligned 3D maps without a manual data-correction step," says Cucci. "A lot of semi-automatic methods are being explored to overcome this problem, but ours has the advantage of resolving the issue directly at the scanner level, where measurements are taken, eliminating the need to subsequently make corrections. It’s also fully software-driven, meaning it can be implemented quickly and seamlessly by end users."

On the road to automation

The Topo method leverages recent advances in artificial intelligence to detect when a given object has been scanned several times from different angles. The method involves selecting these correspondences and inserting them into what’s called a Dynamic Network, in order to correct gaps and misalignments in the laser-point cloud, as sketched below.
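The paper formulates this rigorously in Dynamic Networks, which re-estimate the full sensor trajectory. The following is only a simplified sketch, under the assumption of a single constant offset between two overlapping passes, of how point-to-point correspondences become least-squares residuals that pull mis-registered data back into alignment; all names and values are hypothetical.

```python
# Simplified sketch, not the published implementation: point-to-point
# correspondences between two overlapping scans are turned into residuals
# that a least-squares adjustment minimises. Here only a constant 3-D shift
# of the second pass is estimated; the real Dynamic Network adjusts the
# whole trajectory together with these constraints.
import numpy as np

def estimate_offset(points_pass1, points_pass2):
    """Least-squares 3-D shift aligning matched points of pass 2 onto pass 1
    (one residual per correspondence)."""
    residuals = points_pass1 - points_pass2   # what perfect alignment would zero out
    return residuals.mean(axis=0)             # closed-form LS solution for a pure shift

# Hypothetical correspondences from the matching step: the same physical
# points observed on two flight lines, with the second pass mis-registered.
rng = np.random.default_rng(0)
true_points = rng.uniform(0, 50, size=(100, 3))
offset = np.array([0.20, -0.15, 0.08])        # simulated trajectory error
pass1 = true_points + rng.normal(scale=0.02, size=true_points.shape)
pass2 = true_points - offset + rng.normal(scale=0.02, size=true_points.shape)

correction = estimate_offset(pass1, pass2)
aligned_pass2 = pass2 + correction            # pulled back onto pass 1
print("estimated correction:", correction)    # close to the simulated offset
```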

"We’re bringing more automation to 3D mapping technology, which will go a long way towards improving its efficiency and productivity and allow for a much wider range of applications," says Skaloud.

References

Aurélien Brun, Davide A. Cucci and Jan Skaloud, "LiDAR Point-to-point Correspondences for Rigorous Registration of Kinematic Scanning in Dynamic Networks," ISPRS Journal of Photogrammetry and Remote Sensing, 19 May 2022.