Carnegie Mellon University

Visual Sensing for Developing Autonomous Behavior in Snake Robots

journal contribution
posted on 2014-06-01, authored by Hugo Ponte, Max Queenan, Chaohui Gong, Christoph Mertz, Matthew Travers, Florian Enner, Martial Hebert, Howie Choset

Snake robots are uniquely qualified to investigate a large variety of settings, including archaeological sites, natural disaster zones, and nuclear power plants. For these applications, modular snake robots have been tele-operated to perform specific tasks using images returned to the operator from an onboard camera in the robot's head. To give the operator an even richer view of the environment and to enable the robot to perform autonomous tasks, we developed a structured light sensor that can build three-dimensional maps of the environment. This paper presents a sensor uniquely suited to the severe constraints on size, power, and computational footprint of snake robots. Using range data, in the form of 3D point clouds, we show that it is possible to pair high-level planning with mid-level control to accomplish complex tasks without operator intervention.
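The paper's sensor hardware and software are not reproduced here. As a rough, hypothetical sketch of the structured-light principle the abstract refers to, the Python snippet below triangulates a single pixel observed on a calibrated light plane into a 3D point; all names and calibration values are illustrative assumptions, not the authors' implementation. Repeating this over every illuminated pixel yields range data of the kind described, in the form of a 3D point cloud.

import numpy as np

def pixel_to_point(u, v, K, plane_n, plane_d):
    # Back-project the pixel (u, v) into a viewing ray in camera coordinates.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Intersect the ray t * ray with the light plane  plane_n . x + plane_d = 0.
    t = -plane_d / (plane_n @ ray)
    return t * ray  # 3D point in the camera frame

# Illustrative use: assumed intrinsics and a light plane one meter from the camera.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
point = pixel_to_point(330.0, 250.0, K, np.array([0.0, 0.0, 1.0]), -1.0)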

History

Publisher Statement

© 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Date

2014-06-01
