Coordinated Multi-Agent Teams and Sliding Autonomy for Large-Scale Assembly
Recent research in human-robot interaction has investigated the concept of Sliding (or Adjustable) Autonomy, a mode of operation bridging the gap between explicit teleoperation and complete robot autonomy. This work has largely been confined to single-agent domains, involving only one human and one robot, and has not examined the issues that arise in multiagent domains. We discuss the issues involved in adapting Sliding Autonomy concepts to coordinated multiagent teams. In our approach, remote human operators can join or leave the team at will to assist the autonomous agents with their tasks (or aspects of their tasks) without disrupting the team's coordination. Agents model their own and the human operator's performance on subtasks, enabling them to determine when to request help from the operator. To validate our approach, we present the results of two experiments. The first evaluates the human/multirobot team's performance under four different collaboration strategies: complete teleoperation, pure autonomy, and two distinct versions of Sliding Autonomy. The second compares a variety of user interface configurations to investigate how quickly a human operator can attain situational awareness when asked to help. The results of these studies support our belief that incorporating a remote human operator into a multiagent team makes the team as a whole more robust and efficient.
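To make the help-request mechanism concrete, the following is a minimal Python sketch of one plausible way an agent could model its own and the operator's performance on subtasks and decide when to ask for help. The class names, the prior estimates, and the expected-completion-time decision rule (including a fixed overhead for the operator to gain situational awareness) are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict


class PerformanceModel:
    """Running estimate of expected completion time for one performer on one subtask."""

    def __init__(self, prior_time):
        self.mean_time = prior_time  # prior estimate in seconds (assumed value)
        self.samples = 1

    def update(self, observed_time):
        # Incremental mean over observed attempts.
        self.samples += 1
        self.mean_time += (observed_time - self.mean_time) / self.samples


class SlidingAutonomyAgent:
    """Hypothetical decision rule: attempt a subtask autonomously unless the
    operator's modeled expected time (plus request overhead) is lower."""

    def __init__(self, request_overhead=10.0):
        self.robot_model = defaultdict(lambda: PerformanceModel(prior_time=30.0))
        self.operator_model = defaultdict(lambda: PerformanceModel(prior_time=45.0))
        # Time for the operator to join and gain situational awareness (assumed).
        self.request_overhead = request_overhead

    def should_request_help(self, subtask):
        robot_cost = self.robot_model[subtask].mean_time
        operator_cost = self.operator_model[subtask].mean_time + self.request_overhead
        return operator_cost < robot_cost

    def record_outcome(self, subtask, performer, observed_time):
        model = self.operator_model if performer == "operator" else self.robot_model
        model[subtask].update(observed_time)


# Example: after repeated slow autonomous attempts at "dock_beam",
# the agent starts preferring operator assistance for that subtask.
agent = SlidingAutonomyAgent()
for t in (60.0, 70.0, 65.0):
    agent.record_outcome("dock_beam", "robot", t)
print(agent.should_request_help("dock_beam"))  # True once robot estimate exceeds operator's
```

In this sketch the models are simple running means; the key point is only that each agent keeps per-subtask estimates for both itself and the operator and requests help when the operator is expected to be faster, net of the cost of bringing the operator up to speed.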