Posted on 2003-01-01, 00:00, authored by Caroline Pantofaru, Ranjith Unnikrishnan, Martial Hebert
This paper addresses the problem of extracting
information from range and color data acquired by a
mobile robot in urban environments. Our approach extracts
geometric structures from clouds of 3-D points and regions
from the corresponding color images, labels them based on
prior models of the objects expected in the environment
(buildings in the current experiments), and combines
the two sources of information into a composite labeled
map. Ultimately, our goal is to generate maps that are
segmented into objects of interest, each of which is labeled
by its type, e.g., buildings or vegetation. Such a map
provides a higher-level representation of the environment
than the geometric maps normally used for mobile robot
navigation. The techniques presented here are a step toward
the automatic construction of such labeled maps.
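To make the pipeline described above concrete, the sketch below shows one minimal, hypothetical instantiation of it: segment 3-D points into geometric structures, segment the color image into regions, label each source against a simple prior model, and fuse the two label streams into a composite label. The function names, the planarity test standing in for a "building" prior, the color heuristic standing in for an image-region prior, and all thresholds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def fit_plane(points):
    """Least-squares plane fit; returns (normal, residual RMS)."""
    centroid = points.mean(axis=0)
    _, s, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    rms = s[-1] / np.sqrt(len(points))
    return normal, rms


def label_structure(points, planar_tol=0.05):
    """Toy geometric prior: near-planar point clusters are labeled 'building'."""
    _, rms = fit_plane(points)
    return "building" if rms < planar_tol else "unknown"


def label_region(mean_rgb):
    """Toy image prior: predominantly green regions are labeled 'vegetation'."""
    r, g, b = mean_rgb
    return "vegetation" if g > r and g > b else "unknown"


def fuse_labels(geom_label, color_label):
    """Compose the two sources: prefer the geometric label, else the image one."""
    return geom_label if geom_label != "unknown" else color_label


if __name__ == "__main__":
    # Synthetic inputs: a noisy planar wall patch and a green image region.
    rng = np.random.default_rng(0)
    wall = np.column_stack([
        rng.uniform(0, 5, 200),                          # x along the wall
        np.full(200, 2.0) + rng.normal(0, 0.01, 200),    # y, nearly planar
        rng.uniform(0, 3, 200),                          # z (height)
    ])
    geom = label_structure(wall)
    color = label_region((0.2, 0.6, 0.25))
    print("composite label:", fuse_labels(geom, color))  # -> building
```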