Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots

The perception of self, that is, the ability of a robot to detect its own body and distinguish it from the background, is beneficial both for self-centered actions and for interaction with other agents. Full spatial information about the hand is needed to perform difficult tasks such as object grasping; simple representations such as 2D hand keypoints are not enough.

Robonaut. Image credit: NASA via Pixabay

A recent paper therefore proposes hand segmentation for visual self-recognition: all the pixels belonging to the real robot hand are segmented in RGB images from the robot's cameras.

The method uses convolutional neural networks trained exclusively on simulated data, which sidesteps the lack of pre-existing training datasets. To fit the model to the specific domain, the pre-trained weights and the hyperparameters are fine-tuned. The proposed solution achieves an intersection-over-union (IoU) accuracy better than the state of the art.
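For reference, the intersection-over-union metric compares the set of pixels predicted as "hand" with the ground-truth mask. The following minimal sketch (an illustration in NumPy, not code from the paper; the function name mask_iou is ours) shows how it is typically computed for a single image:

```python
import numpy as np

def mask_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """IoU between two binary segmentation masks of the same shape.

    Illustrative helper, not taken from the paper:
    IoU = |pred AND gt| / |pred OR gt|.
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Convention: two empty masks count as a perfect match.
    return 1.0 if union == 0 else float(intersection) / float(union)
```

An IoU of 1.0 means the predicted hand mask overlaps the ground truth exactly, while lower values indicate missed or spurious pixels.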

The ability to distinguish between the self and the background is of paramount importance for robotic tasks. Hands in particular, as the end effectors of a robotic system that most often come into contact with other elements of the environment, must be perceived and tracked with precision to execute the intended tasks with dexterity and without colliding with obstacles. They are fundamental for several applications, from Human-Robot Interaction tasks to object manipulation. Modern humanoid robots are characterized by a high number of degrees of freedom, which makes their forward kinematics models very sensitive to uncertainty. Thus, resorting to vision sensing can be the only way to endow these robots with a good perception of the self, enabling them to localize their body parts with precision.

In this paper, we propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view. CNNs are known to require a huge amount of data to be trained. To overcome the challenge of labeling real-world images, we propose the use of simulated datasets exploiting domain randomization techniques. We fine-tuned the Mask-RCNN network for the specific task of segmenting the hand of the humanoid robot Vizzy. We focus our attention on developing a methodology that requires low amounts of data to achieve reasonable performance, while giving detailed insight into how to properly generate variability in the training dataset. Moreover, we analyze the fine-tuning process within the complex model of Mask-RCNN, to understand which weights should be transferred to the new task of segmenting robot hands.

Our final model was trained solely on synthetic images and achieves an average IoU of 82% on synthetic validation data and 56.3% on real test data. These results were achieved with only 1000 training images and 3 hours of training time using a single GPU.
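To make the fine-tuning step more concrete, here is a minimal sketch of how a COCO-pretrained Mask R-CNN can be adapted to a single "hand" class. It uses torchvision's Mask R-CNN implementation; the frozen backbone, the function name build_hand_segmenter, and all other details are illustrative assumptions rather than the authors' actual setup.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor


def build_hand_segmenter(num_classes: int = 2, freeze_backbone: bool = True):
    """Illustrative sketch (not the paper's code): adapt a COCO-pretrained
    Mask R-CNN so its heads predict a single foreground class ("hand")."""
    # Start from COCO-pretrained weights, as in typical transfer-learning setups.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box classification head: background + hand.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask prediction head for the same number of classes.
    in_channels_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels_mask, 256, num_classes)

    # One possible weight-transfer choice: keep the pretrained backbone frozen
    # and fine-tune only the detection/segmentation heads.
    if freeze_backbone:
        for param in model.backbone.parameters():
            param.requires_grad = False
    return model


if __name__ == "__main__":
    model = build_hand_segmenter()
    model.eval()
    # A single dummy RGB image (3 x H x W, values in [0, 1]) stands in for a
    # simulated egocentric camera frame.
    dummy_image = [torch.rand(3, 480, 640)]
    with torch.no_grad():
        predictions = model(dummy_image)
    print(predictions[0]["masks"].shape)  # per-instance soft masks, 1 x H x W each
```

Whether to freeze the backbone or fine-tune it as well is precisely the kind of weight-transfer decision the paper investigates; the sketch above simply shows one common option.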

Research paper: Almeida, A., Vicente, P., and Bernardino, A., “Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots”, 2021. Link: https://arxiv.org/abs/2102.04750
