Current vision-based manipulation technologies are slow, expensive, and do not generalize well to unseen objects.
A recent paper on arXiv.org suggests taking a cue from human development to find more effective approaches for this task: infants learn to perceive the world passively before they reach for objects actively. Similarly, the researchers propose learning to detect objects before learning vision-based manipulation.
Image: pixnio.com, CC0 Public Domain
The authors show that transferring the entire vision model, including both the backbone features and the visual predictions from the head, leads to the best results, and that a variety of vision tasks can help in learning grasping and suction. Their experiments confirm that the suggested approach improves both training speed and final performance when learning manipulation in a new environment.
Does having visual priors (e.g. the ability to detect objects) facilitate learning to perform vision-based manipulation (e.g. picking up objects)? We study this problem under the framework of transfer learning, where the model is first trained on a passive vision task, and adapted to perform an active manipulation task. We find that pre-training on vision tasks significantly improves generalization and sample efficiency for learning to manipulate objects. However, realizing these gains requires careful selection of which parts of the model to transfer. Our key insight is that outputs of standard vision models highly correlate with affordance maps commonly used in manipulation. Therefore, we explore directly transferring model parameters from vision networks to affordance prediction networks, and show that this can result in successful zero-shot adaptation, where a robot can pick up certain objects with zero robotic experience. With just a small amount of robotic experience, we can further fine-tune the affordance model to achieve better results. With just 10 minutes of suction experience or 1 hour of grasping experience, our method achieves ~80% success rate at picking up novel objects.
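The core mechanism described above, copying the parameters of a pre-trained vision network into an identically structured affordance-prediction network, then optionally fine-tuning on a little robot experience, can be sketched in PyTorch. This is an illustrative toy, not the paper's implementation: the tiny architecture, layer sizes, and the choice to freeze the backbone during fine-tuning are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the paper's networks: a "passive vision"
# model (e.g. trained for a detection/segmentation task) and an
# affordance-prediction model used for grasping or suction.
def make_model(out_channels: int) -> nn.Sequential:
    backbone = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    )
    head = nn.Conv2d(16, out_channels, 1)  # per-pixel prediction map
    return nn.Sequential(backbone, head)

vision_model = make_model(out_channels=1)      # pretend it was pre-trained
affordance_model = make_model(out_channels=1)  # same architecture

# Transfer the *entire* model (backbone + head), which the article
# reports works best; for zero-shot pickup one would stop here.
affordance_model.load_state_dict(vision_model.state_dict())

# Fine-tune on a small amount of robot experience: here we freeze the
# backbone and update only the head (one illustrative choice).
for p in affordance_model[0].parameters():
    p.requires_grad = False

x = torch.randn(1, 3, 64, 64)         # dummy RGB image
affordance_map = affordance_model(x)  # per-pixel affordance scores
print(affordance_map.shape)           # torch.Size([1, 1, 64, 64])
```

In practice the affordance map would be thresholded or arg-maxed to pick a pixel (and hence a 3D location) at which to attempt a grasp or suction.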
Research paper: Yen-Chen, L., Zeng, A., Song, S., Isola, P., and Lin, T.-Y., "Learning to See before Learning to Act: Visual Pre-training for Manipulation", 2021. Link: https://arxiv.org/abs/2107.00646
Link to the project page: https://yenchenlin.me/vision2action/