Online Customization of Teleoperation Interfaces
In teleoperation, the user's input is mapped onto the robot via a motion retargetting function. This function must differ between robots because of their different kinematics, between users because of their different preferences, and even between the tasks that users perform with the robot. Our work enables users to customize this retargetting function and achieve any of these required differences. In our approach, the robot starts with an initial function. As users teleoperate the robot, they can pause and provide example correspondences, which instantly update the retargetting function. We select the algorithm underlying these updates by formulating the problem as an instance of online function approximation. The problem's requirements, together with the semantics and constraints of motion retargetting, lead to an extension of Online Learning with Kernel Machines in which the kernel width can vary across examples. Our central hypothesis is that this method enables users to train retargetting functions to good outcomes. We validate this hypothesis in a user study, which also reveals the importance of providing users with tools to verify their examples: much like an actor needs a mirror to check a pose, a user needs to verify their input before providing an example. We conclude with a demonstration from an expert user that shows the method's potential for more sophisticated customization that makes particular tasks easier to complete once users gain expertise with the system.
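To make the approach concrete, the sketch below illustrates one plausible form of the update described above: an online kernel machine that starts from an initial retargetting function and, for each example correspondence, adds a Gaussian RBF correction term with its own per-example width. All class and parameter names here are illustrative assumptions, and the exact update rule is a simplification, not the paper's implementation.

```python
import numpy as np

class VariableWidthKernelRetargeter:
    """Sketch of an online retargetting function f: user pose -> robot pose.

    Starts from an initial mapping f0 and is corrected with example
    correspondences (x_i, y_i), each adding a Gaussian RBF term whose
    width sigma_i can differ per example (a hypothetical stand-in for
    the paper's variable-width kernel-machine extension).
    """

    def __init__(self, f0, default_sigma=1.0):
        self.f0 = f0                  # initial retargetting function
        self.centers = []             # user-pose examples x_i
        self.alphas = []              # correction weights alpha_i
        self.sigmas = []              # per-example kernel widths sigma_i
        self.default_sigma = default_sigma

    def predict(self, x):
        """Initial mapping plus the accumulated kernel corrections."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(self.f0(x), dtype=float)
        for c, a, s in zip(self.centers, self.alphas, self.sigmas):
            y = y + a * np.exp(-np.sum((x - c) ** 2) / (2.0 * s ** 2))
        return y

    def add_example(self, x, y_desired, sigma=None):
        """Instant update: add one kernel so f(x) hits y_desired exactly."""
        x = np.asarray(x, dtype=float)
        residual = np.asarray(y_desired, dtype=float) - self.predict(x)
        self.centers.append(x)
        self.alphas.append(residual)  # kernel value at its own center is 1
        self.sigmas.append(sigma if sigma is not None else self.default_sigma)


# Usage: start from the identity mapping, then correct one pose.
f = VariableWidthKernelRetargeter(f0=lambda x: x)
x0 = np.array([0.5, 0.5])
f.add_example(x0, np.array([0.7, 0.4]), sigma=0.3)  # narrow, local correction
print(f.predict(x0))  # now reproduces the example's desired output
```

A small width localizes the correction to poses near the example, while a large width spreads it broadly, which is one way per-example widths let users control how far each correction generalizes.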