Posted on 1995-01-01. Authored by Andrew Johnson, Patrick Leger, Regis Hoffman, Martial Hebert, James Osborn.
This paper describes a system that semi-automatically
builds a virtual world for remote operations by
constructing 3-D models of a robot’s work environment.
With a minimum of human interaction, planar and quadric
surface representations of objects typically found in man-made
facilities are generated from laser rangefinder data.
The surface representations are used to recognize complex
models of objects in the scene. These object models are
incorporated into a larger world model that can be viewed
and analyzed by the operator, accessed by motion planning
and robot safeguarding algorithms, and ultimately used by
the operator to command the robot through graphical
programming and other high-level constructs. Limited
operator interaction, combined with assumptions about the
robot's task environment, makes the problem of modeling
and recognizing objects tractable and yields a solution that
can be readily incorporated into many telerobotic control
schemes.
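The planar-surface extraction described in the abstract can be illustrated with a minimal least-squares plane fit to 3-D range points. The function name and the use of NumPy's SVD here are illustrative assumptions, not the paper's actual implementation, which operates on segmented laser rangefinder data and also handles quadric surfaces:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) array of range points.

    Illustrative sketch: returns (centroid, unit normal). The plane
    passes through the centroid of the points, and the normal is the
    direction of least variance.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centered points: the last right-singular vector is
    # the direction of smallest variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return centroid, normal

# Synthetic test data: noisy samples from the plane z = 0.5x + 0.25y + 1.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.5 * xy[:, 0] + 0.25 * xy[:, 1] + 1 + rng.normal(0, 0.01, 200)
pts = np.column_stack([xy, z])

c, n = fit_plane(pts)
# Signed distances of all points from the fitted plane stay near zero.
residuals = (pts - c) @ n
```

A full pipeline of this kind would first segment the range image into surface patches, fit a plane or quadric to each patch, and then match the fitted surfaces against stored object models.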