Carnegie Mellon University

Multi-resolution Corrective Demonstration for Efficient Task Execution and Refinement

journal contribution
posted on 1986-01-01, 00:00 authored by Cetin Mericli, Manuela M. Veloso, H. Levent Akin

Computationally efficient task execution is very important for autonomous mobile robots endowed with limited on-board computational resources. Most robot control approaches assume a fixed state and action representation and use a single algorithm to map states to actions. However, not all situations in a given task require equally complex algorithms or equally detailed state and action representations. The main motivation for this work is to reduce the computational footprint of performing a task by allowing the robot to run simpler algorithms whenever possible and to resort to a more complex algorithm only when needed. We contribute the Multi-Resolution Task Execution (MRTE) algorithm, which utilizes human feedback to learn a mapping from a given state to an appropriate detail resolution, consisting of a state and action representation and an algorithm that maps states to actions at that resolution. The robot learns a policy from human demonstration to switch between detail resolutions as needed while favoring lower detail resolutions to reduce the computational cost of task execution. We then present the Model Plus Correction (M+C) algorithm, which improves the performance of an algorithm using corrective human feedback without modifying the algorithm itself. Finally, we introduce the Multi-Resolution Model Plus Correction (MRM+C) algorithm as a combination of MRTE and M+C. MRM+C learns from human demonstration how to select an appropriate detail resolution to operate at in a given state. Furthermore, it allows the teacher to provide corrective demonstration at different detail resolutions to improve overall task execution performance. We provide formal definitions of the MRTE, M+C, and MRM+C algorithms and show how they relate to the general robot control problem and the Learning from Demonstration (LfD) approach. We present experimental results demonstrating the effectiveness of the proposed methods on a goal-directed humanoid obstacle avoidance task.
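
The abstract describes MRTE, M+C, and MRM+C only in prose. The Python fragment below is a minimal, hypothetical sketch of that structure, intended purely as illustration: the names (Resolution, select_resolution, confidence, corrections) and the threshold-based selection rule are assumptions made here, not the authors' implementation.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Resolution:
    """A detail resolution: a state/action representation plus a controller."""
    name: str
    cost: float                              # relative computational cost
    encode: Callable[[dict], tuple]          # raw state -> representation at this resolution
    controller: Callable[[tuple], str]       # representation -> action at this resolution


def select_resolution(state: dict,
                      resolutions: List[Resolution],
                      confidence: Callable[[dict, Resolution], float],
                      threshold: float = 0.8) -> Resolution:
    # MRTE-style selection (sketch): favor the cheapest resolution whose
    # confidence score (a stand-in for the demonstration-learned switching
    # policy) exceeds a threshold; otherwise fall back to the most detailed one.
    for res in sorted(resolutions, key=lambda r: r.cost):
        if confidence(state, res) >= threshold:
            return res
    return max(resolutions, key=lambda r: r.cost)


def act(state: dict,
        resolutions: List[Resolution],
        confidence: Callable[[dict, Resolution], float],
        corrections: Dict[Tuple[str, tuple], str]) -> str:
    # M+C-style execution (sketch): run the selected controller unchanged,
    # then override its output with a stored corrective demonstration, if one
    # was recorded at the same resolution and state representation.
    res = select_resolution(state, resolutions, confidence)
    rep = res.encode(state)
    action = res.controller(rep)
    return corrections.get((res.name, rep), action)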

Publisher Statement

All Rights Reserved

Date

1986-01-01