Multi-Robot Collaborative Perception with Graph Neural Networks

Multi-robot perception based on artificial neural networks spans tasks such as multi-robot detection, tracking, and simultaneous localization and mapping.

A recent paper on arXiv.org looks into the multi-robot perception problem using Graph Neural Networks (GNNs). The researchers propose a generalizable GNN-based perception framework for multi-robot systems, designed to improve each robot's perception accuracy at inference time.

Industrial robots. Image credit: Auledas via Wikimedia, CC-BY-SA-4.0

The approach embeds the spatial relationship between neighboring nodes into the messages exchanged across the graph, and the message weights are adjusted according to the correlation between the node features of different robots. The researchers applied the method to two tasks: collaborative monocular depth estimation and semantic segmentation.
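The idea of weighting neighbor messages by feature correlation while embedding relative spatial information can be sketched as a single message-passing step. The sketch below is a simplified NumPy illustration, not the paper's actual architecture: the spatial-embedding scheme, the cosine-similarity weighting, and the residual fusion are all illustrative assumptions.

```python
import numpy as np

def collaborative_gnn_step(features, poses, adjacency):
    """One simplified collaborative message-passing step (illustrative only).

    features:  (N, D) array, one feature vector per robot
    poses:     (N, 3) array, robot positions (toy stand-in for camera pose)
    adjacency: (N, N) boolean communication graph
    """
    n, d = features.shape
    updated = features.copy()
    for i in range(n):
        messages, weights = [], []
        for j in np.flatnonzero(adjacency[i]):
            # Embed the spatial relation (relative pose) into the message;
            # tiling the relative pose to width D is a toy embedding.
            rel = poses[j] - poses[i]
            msg = features[j] + 0.1 * np.resize(rel, d)
            # Weight the message by the correlation (cosine similarity)
            # between the two robots' node features.
            denom = np.linalg.norm(features[i]) * np.linalg.norm(features[j])
            weights.append(features[i] @ features[j] / (denom + 1e-8))
            messages.append(msg)
        if messages:
            w = np.exp(np.array(weights))   # softmax over neighbor weights
            w /= w.sum()
            agg = sum(wk * mk for wk, mk in zip(w, messages))
            updated[i] = features[i] + agg  # residual fusion with own features
    return updated
```

Robots with no neighbors simply keep their own features, which reflects the framework's goal of degrading gracefully when communication or sensors fail.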

The approach remains effective even in challenging cases, such as camera sensors affected by heavy image noise or occlusions. According to the authors, it is the first time a GNN has been employed to solve a multi-view perception task using real robot image data.

From the paper's abstract:

Multi-robot systems such as swarms of aerial robots are naturally suited to offer additional flexibility, resilience, and robustness in several tasks compared to a single robot by enabling cooperation among the agents. To enhance the autonomous robot decision-making process and situational awareness, multi-robot systems have to coordinate their perception capabilities to collect, share, and fuse environment information among the agents in an efficient and meaningful way such to accurately obtain context-appropriate information or gain resilience to sensor noise or failures. In this paper, we propose a general-purpose Graph Neural Network (GNN) with the main goal to increase, in multi-robot perception tasks, single robots’ inference perception accuracy as well as resilience to sensor failures and disturbances. We show that the proposed framework can address multi-view visual perception problems such as monocular depth estimation and semantic segmentation. Several experiments both using photo-realistic and real data gathered from multiple aerial robots’ viewpoints show the effectiveness of the proposed approach in challenging inference conditions including images corrupted by heavy noise and camera occlusions or failures.

Research paper: Zhou, Y., Xiao, J., Zhou, Y., and Loianno, G., “Multi-Robot Collaborative Perception with Graph Neural Networks”, 2021. Link: https://arxiv.org/abs/2201.01760
