Mapping the 3D volume of an environment during online operation is necessary for tasks like autonomous navigation or mobile manipulation. However, the sensor measurements used to build such maps are affected by noise and pose estimation errors.
Volumetric mapping techniques are used in many fields where image segmentation or other types of image processing are required. Image credit: Toorumer via Wikimedia, CC BY-SA 4.0
A recent study published on arXiv.org proposes a system for robust volumetric mapping that produces occupancy probabilities from sparse, unoriented point clouds, e.g., from a LiDAR or RGB-D sensor.
Given new measurements, the map is updated directly in latent space instead of updating only the occupancy probabilities. The proposed 3D mapping approach remains accurate under noisy measurements and preserves the overall geometry of scenes better than classical methods.
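For contrast, the classical approach mentioned here fuses each new measurement directly on a per-voxel occupancy probability, typically in log-odds form so that updates become simple additions. The sketch below shows this standard Bayesian occupancy update (the measurement probability 0.7 is an arbitrary example value, not from the paper):

```python
import math

def update_log_odds(l_prev, p_hit):
    """Classical Bayesian occupancy update in log-odds form:
    L(n | z_1..t) = L(n | z_1..t-1) + log(p / (1 - p))."""
    return l_prev + math.log(p_hit / (1.0 - p_hit))

def occupancy_prob(l):
    """Convert log-odds back to an occupancy probability (sigmoid)."""
    return 1.0 / (1.0 + math.exp(-l))

# A voxel starts at the uninformed prior p = 0.5 (log-odds 0).
l = 0.0
for _ in range(3):
    # Three consecutive "hit" measurements, each with p(occupied | z) = 0.7.
    l = update_log_odds(l, 0.7)

p = occupancy_prob(l)  # confidence grows with repeated consistent hits
```

Updating only this scalar per voxel is cheap, but it discards the geometric detail of each measurement, which is one motivation for fusing in a richer latent space instead.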
The demonstrated accuracy helps to ensure safety during robot operation. The system generalizes to a large variety of configurations and sensors and can operate in real-time, even in CPU-only configurations.
We present a novel 3D mapping method leveraging the recent progress in neural implicit representation for 3D reconstruction. Most existing state-of-the-art neural implicit representation methods are limited to object-level reconstructions and can not incrementally perform updates given new data. In this work, we propose a fusion strategy and training pipeline to incrementally build and update neural implicit representations that enable the reconstruction of large scenes from sequential partial observations. By representing an arbitrarily sized scene as a grid of latent codes and performing updates directly in latent space, we show that incrementally built occupancy maps can be obtained in real-time even on a CPU. Compared to traditional approaches such as Truncated Signed Distance Fields (TSDFs), our map representation is significantly more robust in yielding a better scene completeness given noisy inputs. We demonstrate the performance of our approach in thorough experimental validation on real-world datasets with varying degrees of added pose noise.
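The abstract describes representing a scene as a grid of latent codes and fusing new observations directly in that latent space. The toy sketch below illustrates this structure only: a sparse voxel grid whose per-voxel latent codes are fused by a running average. In the actual system both the encoder and the fusion strategy are learned networks; the `encode` stand-in and the averaging rule here are illustrative assumptions, not the paper's method.

```python
import numpy as np

class LatentGridMap:
    """Toy latent-code grid map: each occupied voxel stores one latent
    vector that is incrementally fused as new point clouds arrive."""

    def __init__(self, latent_dim=32, voxel_size=1.0):
        self.latent_dim = latent_dim
        self.voxel_size = voxel_size
        self.codes = {}   # voxel index -> fused latent code
        self.counts = {}  # voxel index -> number of fused observations

    def voxel_index(self, point):
        return tuple(np.floor(np.asarray(point) / self.voxel_size).astype(int))

    def encode(self, points):
        # Stand-in for a learned point-cloud encoder: a deterministic
        # pseudo-random code derived from the input points.
        seed = abs(hash(tuple(map(tuple, points)))) % (2**32)
        rng = np.random.default_rng(seed)
        return rng.standard_normal(self.latent_dim)

    def update(self, points):
        """Fuse a new partial observation directly in latent space."""
        buckets = {}
        for p in points:
            buckets.setdefault(self.voxel_index(p), []).append(p)
        for idx, pts in buckets.items():
            z_new = self.encode(pts)
            n = self.counts.get(idx, 0)
            z_old = self.codes.get(idx, np.zeros(self.latent_dim))
            # Running mean as a simple placeholder for learned fusion.
            self.codes[idx] = (z_old * n + z_new) / (n + 1)
            self.counts[idx] = n + 1
```

Because updates touch only latent vectors in the affected voxels, the map can grow to arbitrarily sized scenes, and occupancy is decoded from the codes only when queried, which is consistent with the real-time, CPU-only operation the abstract claims.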
Research paper: Lionar, S., Schmid, L., Cadena, C., Siegwart, R., and Cramariuc, A., “NeuralBlox: Real-Time Neural Representation Fusion for Robust Volumetric Mapping”, 2021. Link: https://arxiv.org/abs/2110.09415