Gesture-Based Programming, Part 2: Primordial Learning
In part one of this two-part series, we described our Gesture-Based Programming paradigm for programming by human demonstration. This paradigm depends on a pre-existing knowledge base of capabilities, collectively called "encapsulated expertise," that comprise the real-time sensorimotor primitives from which the run-time executable is constructed, as well as provide the basis for interpreting the teacher's actions during programming. In this paper we present a technique based on principal components analysis, augmentable with model-based information, for learning and recognizing sensorimotor primitives. We describe simple applications of the technique to a mobile robot and a PUMA manipulator. The mobile robot learned to escape from jams, while the manipulator learned guarded moves and accommodations that are composable to allow flat-plate mating operations. While these initial applications are simple, they demonstrate the ability to extract primitives from demonstration, recognize the learned primitives in subsequent demonstrations, and combine and transform primitives to create different capabilities.
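To make the principal-components idea concrete, the following is a minimal sketch (not the paper's implementation) of how a primitive might be learned as a principal subspace of demonstrated sensorimotor data and later recognized by reconstruction error. The data representation, the function names, and the subspace dimension `k` are all illustrative assumptions.

```python
import numpy as np

def learn_primitive(demos, k=2):
    """Learn a primitive as the top-k principal components of
    stacked sensorimotor samples (hypothetical representation)."""
    X = np.vstack(demos)                 # rows: sensorimotor vectors
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal directions
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                  # primitive = (mean, basis)

def recognition_error(primitive, segment):
    """Mean residual of a new segment after projection onto the
    primitive's principal subspace; low error suggests a match."""
    mean, basis = primitive
    Y = np.asarray(segment) - mean
    recon = Y @ basis.T @ basis          # project, then reconstruct
    return float(np.mean(np.linalg.norm(Y - recon, axis=1)))

# Illustrative usage: demonstrations that lie near a 2-D subspace
# of a 5-D sensorimotor space, plus a little noise.
rng = np.random.default_rng(0)
plane = rng.standard_normal((2, 5))
demos = [rng.standard_normal((50, 2)) @ plane
         + 0.01 * rng.standard_normal((50, 5)) for _ in range(3)]
prim = learn_primitive(demos, k=2)

matching = rng.standard_normal((20, 2)) @ plane   # same subspace
unrelated = rng.standard_normal((20, 5))          # arbitrary motion
assert recognition_error(prim, matching) < recognition_error(prim, unrelated)
```

Under this reading, "recognizing the learned primitives in subsequent demonstrations" amounts to scoring each candidate segment against each stored primitive and selecting the lowest-error match; model-based information could augment this by constraining or labeling the recovered subspaces.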