Effectively automating robotic manipulation of transparent objects would enable many practical tasks. A recent study on arXiv.org proposes Dex-NeRF, a new method based on Neural Radiance Fields that senses the geometry of transparent objects and allows robots to interact with them.
Transparent objects. Image credit: Piqsels, CC0 Public Domain
The method uses a Neural Radiance Field (NeRF) as part of a pipeline. NeRF learns the density of every point in space, which determines how much the view-dependent color of that point contributes to rays passing through it. Because the color is view-dependent, the NeRF can represent the geometry of transparent objects, whose appearance changes with viewing direction.
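To make the rendering step concrete, here is a minimal numpy sketch of how a NeRF composites per-sample densities and colors along a single ray. The function name and array shapes are illustrative, not taken from the paper's code:

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite densities and colors sampled along one camera ray.

    sigmas: (N,) volume density at each of N samples along the ray
    colors: (N, 3) view-dependent RGB color at each sample
    deltas: (N,) distance between adjacent samples
    """
    # Opacity of each ray segment from its density and length.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: fraction of light surviving to each sample.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    # Weight = how much each sample contributes to the final pixel.
    weights = alphas * trans
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights
```

A sample with very high density early on the ray receives almost all of the weight, so the composited color is dominated by that point; points with near-zero density (e.g., empty space) contribute nothing.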
The geometry is recovered by adding lights to create specular reflections and by thresholding the learned density to find transparent points that are visible from some viewing directions. The resulting depth map is then passed to a grasp planner. Experimental results show that NeRF-based grasp planning achieves grasp success rates of 90% or better on real transparent objects.
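The thresholding step can be sketched as follows: rather than computing a density-weighted expected depth, which transparent surfaces would drag behind the true surface, the depth along a ray is taken at the first sample whose density exceeds a threshold. The function name and the threshold value `m` below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def depth_from_densities(sigmas, t_vals, m=15.0):
    """Transparency-aware depth along one ray.

    sigmas: (N,) learned densities at samples along the ray
    t_vals: (N,) depth of each sample along the ray
    m:      density threshold (illustrative value, not from the paper)

    Returns the depth of the first sample whose density exceeds m,
    or infinity if the ray never meets sufficiently dense geometry.
    """
    hits = np.nonzero(sigmas > m)[0]
    if hits.size == 0:
        return np.inf
    return t_vals[hits[0]]
```

Because specular highlights from the added lights raise the learned density at points on the transparent surface, thresholding picks up the surface itself instead of whatever lies behind it.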
The ability to grasp and manipulate transparent objects is a major challenge for robots. Existing depth cameras have difficulty detecting, localizing, and inferring the geometry of such objects. We propose using neural radiance fields (NeRF) to detect, localize, and infer the geometry of transparent objects with sufficient accuracy to find and grasp them securely. We leverage NeRF’s view-independent learned density, place lights to increase specular reflections, and perform a transparency-aware depth-rendering that we feed into the Dex-Net grasp planner. We show how additional lights create specular reflections that improve the quality of the depth map, and test a setup for a robot workcell equipped with an array of cameras to perform transparent object manipulation. We also create synthetic and real datasets of transparent objects in real-world settings, including singulated objects, cluttered tables, and the top rack of a dishwasher. In each setting we show that NeRF and Dex-Net are able to reliably compute robust grasps on transparent objects, achieving 90% and 100% grasp success rates in physical experiments on an ABB YuMi, on objects where baseline methods fail.
Research paper: Ichnowski, J., Avigal, Y., Kerr, J., and Goldberg, K., "Dex-NeRF: Using a Neural Radiance Field to Grasp Transparent Objects", 2021. Link: https://arxiv.org/abs/2110.14217