Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching


Disclaimer: The code links provided for this paper are external. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its authors.

Please contact us if you encounter a broken link.

Authors Isabella Morona, Maria Bauza, Nikhil Chavan Dafle, Weber Liu, Francois R. Hogan, Ferran Alet, Rachel Holladay, Melody Liu, Nima Fazeli, Ian Taylor, Thomas Funkhouser, Prem Qu Nair, Alberto Rodriguez, Druck Green, Andy Zeng, Shuran Song, Orion Taylor, Daolin Ma, Eudald Romo, Kuan-Ting Yu, Elliott Donlon
Journal/Conference Name International Journal of Robotics Research
Paper Category
Paper Abstract This paper presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses a category-agnostic affordance prediction algorithm to select and execute among four different grasping primitive behaviors. It then recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT-Princeton Team system that took 1st place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http//
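The abstract describes two core steps: selecting the highest-scoring grasping primitive from dense affordance predictions, and recognizing the picked object by matching an observed image to product images. The released code is in Lua/Torch; the sketch below is an illustrative Python approximation only. All names, array shapes, and the cosine-similarity nearest-neighbor matching are assumptions for exposition, not the authors' API.

```python
import numpy as np

def select_grasp_primitive(affordance_maps):
    """Pick the primitive and pixel location with the highest predicted
    affordance score. `affordance_maps` is a hypothetical array of shape
    (4, H, W): one dense score map per grasping primitive behavior."""
    primitive, y, x = np.unravel_index(np.argmax(affordance_maps),
                                       affordance_maps.shape)
    return primitive, (y, x)

def recognize(observed_embedding, product_embeddings):
    """Match an observed-image feature embedding to the nearest
    product-image embedding by cosine similarity; the index of the best
    match identifies the object (works for novel objects because product
    images need no task-specific training data)."""
    obs = observed_embedding / np.linalg.norm(observed_embedding)
    prods = product_embeddings / np.linalg.norm(product_embeddings,
                                                axis=1, keepdims=True)
    return int(np.argmax(prods @ obs))
```

In this sketch, recognition reduces to nearest-neighbor lookup in a shared embedding space, which is why adding a new object only requires adding its product-image embedding, with no retraining.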
Date of publication 2017
Code Programming Language Lua

Copyright Researcher 2022