Gesture-Based Programming, Part 1: A Multi-Agent Approach
Gesture-Based Programming is a paradigm for the evolutionary programming of rapidly deployable manipulation systems by human demonstration. The goal is to provide a more natural environment for the user and to generate more complete and successful programs by focusing on task experts rather than programming experts. What distinguishes it from other programming-by-human-demonstration approaches are the same features that make it evolutionary: a composable knowledge base of expertise agents and a facility for supervised practice after initial training. Prior knowledge of previously acquired skills (sensorimotor expertise) facilitates the interpretation of “gestures” during training and then provides closed-loop control of execution during run-time. This paper, part one of two, presents a high-level description of the system as well as descriptions of capabilities we have demonstrated on a PUMA robot and Utah/MIT hand. The companion paper provides a detailed account of one method of acquiring, matching, and even transforming sensorimotor expertise.