Perceiving, Learning, and Exploiting Object Affordances for Autonomous Pile Manipulation
Autonomous manipulation in unstructured environments presents roboticists with three fundamental challenges: object segmentation, action selection, and motion generation. These challenges become more pronounced when unknown man-made or natural objects are cluttered together in a pile. We present an end-to-end approach to the problem of manipulating unknown objects in a pile, with the objective of removing all objects from the pile and placing them into a bin. Our robot perceives the environment with an RGB-D sensor, segments the pile into objects using non-parametric surface models, computes the affordances of each object, and selects the best affordance and its associated action to execute. The robot then instantiates the appropriate compliant motion primitive to safely execute the desired action. For efficient and reliable action selection, we developed a framework for supervised learning of manipulation expertise. We conducted dozens of trials and report on several hours of experiments involving more than 1500 interactions. The results show that our learning-based approach for pile manipulation outperforms a common-sense heuristic as well as a random strategy, and is on par with human action selection.
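The perceive-select-act loop summarized above can be sketched in outline. The sketch below is purely illustrative and uses hypothetical names (`segment_pile`, `compute_affordances`, `clear_pile`, and the `Affordance` record are not from the paper); the segmentation and affordance steps are stubbed out, and the learned action-selection step is stood in for by an arbitrary scoring callable.

```python
# Hypothetical sketch of the perceive-select-act loop from the abstract.
# All names are illustrative; perception and execution are stubbed out.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Affordance:
    """An action opportunity (e.g. grasp, push) attached to a segmented object."""
    object_id: int
    action: str            # e.g. "grasp", "push", "pull"
    features: List[float]  # perceptual features consumed by the learned scorer

def segment_pile(rgbd_frame) -> List[int]:
    """Stub: split an RGB-D frame into object hypotheses."""
    return list(rgbd_frame)  # stand-in: treat each element as one object

def compute_affordances(objects: List[int]) -> List[Affordance]:
    """Stub: enumerate candidate actions for each segmented object."""
    return [Affordance(o, "grasp", [float(o)]) for o in objects]

def clear_pile(rgbd_frames, score: Callable[[Affordance], float]) -> List[Affordance]:
    """Repeatedly pick and 'execute' the highest-scoring affordance."""
    executed = []
    for frame in rgbd_frames:                  # one perception cycle per frame
        candidates = compute_affordances(segment_pile(frame))
        if not candidates:
            break
        best = max(candidates, key=score)      # learned action selection
        executed.append(best)                  # stand-in for motion-primitive execution
        # a real system would re-perceive the scene after every action
    return executed
```

In the real system, `score` would be the supervised model trained from demonstrated action choices, and each loop iteration would end with a compliant motion primitive acting on the pile before re-sensing.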