Posted on 2002-01-01, 00:00. Authored by Mahesh Saptharishi, C. Spence Oliver, Christopher P. Diehl, Kiran S. Bhat, John M. Dolan, Ashitey Trebi-Ollennu, Pradeep K. Khosla
The objective of the CyberScout project is to
develop an autonomous surveillance and reconnaissance system
using a network of all-terrain vehicles. In this paper, we focus on
two facets of this system: 1) vision for surveillance and 2)
autonomous navigation and dynamic path planning.
In the area of vision-based surveillance, we have developed
robust, efficient algorithms to detect, classify, and track moving
objects of interest (person, people, or vehicle) with a static
camera. Adaptation through feedback from the classifier and
tracker allows the detector to use grayscale imagery yet
perform as well as prior color-based detectors. We have
extended the detector using scene mosaicing to detect and index
moving objects when the camera is panning or tilting. The
classification algorithm performs well (less than 8% error rate for
all classes) with coarse inputs (20x20-pixel binary image chips),
has unparalleled rejection capabilities (rejects 72% of spurious
detections), and can flag novel moving objects. The tracking
algorithm achieves highly accurate (96%) frame-to-frame
correspondence for multiple moving objects in cluttered scenes by
determining the discriminant relevance of object features.
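To make the relevance-weighted tracking idea concrete, the sketch below shows one way frame-to-frame correspondence could be computed: each feature dimension is weighted by a discriminant score (between-object versus within-object variance), and new detections are greedily matched to the previous frame's tracks by weighted distance. The function names, the variance-ratio score, and the greedy assignment are illustrative assumptions, not the exact formulation used in CyberScout.

```python
import numpy as np

def discriminant_relevance(features_by_object):
    """Score each feature dimension by how well it separates tracked objects
    (between-object variance relative to within-object variance).
    `features_by_object` maps an object id to an (n_samples, n_features) array."""
    all_samples = np.vstack(list(features_by_object.values()))
    grand_mean = all_samples.mean(axis=0)
    within = np.zeros(all_samples.shape[1])
    between = np.zeros(all_samples.shape[1])
    for samples in features_by_object.values():
        mean = samples.mean(axis=0)
        within += ((samples - mean) ** 2).sum(axis=0)
        between += len(samples) * (mean - grand_mean) ** 2
    relevance = between / (within + 1e-9)
    return relevance / relevance.sum()  # normalized per-feature weights

def match_frame_to_frame(prev_tracks, detections, weights):
    """Greedy frame-to-frame correspondence: assign each new detection to the
    previous track with the smallest relevance-weighted feature distance."""
    assignments, used = {}, set()
    for det_id, det_feat in detections.items():
        best, best_dist = None, np.inf
        for track_id, track_feat in prev_tracks.items():
            if track_id in used:
                continue
            dist = np.sqrt(np.sum(weights * (det_feat - track_feat) ** 2))
            if dist < best_dist:
                best, best_dist = track_id, dist
        if best is not None:
            assignments[det_id] = best
            used.add(best)
    return assignments
```

The point of the weighting is that features which happen to discriminate well between the currently tracked objects dominate the matching distance, while uninformative features are suppressed.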
We have also developed a novel mission coordination
architecture, CPAD (Checkpoint/Priority/Action Database),
which performs path planning via checkpoint assignment and
dynamic priority assignment, using statistical estimates of the
environment’s motion structure. The motion structure is used to
make both preplanning and reactive behaviors more efficient by
applying global context. This approach is more computationally
efficient than centralized approaches and exploits robot
cooperation in dynamic environments better than decoupled
approaches.
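As a rough illustration of checkpoint/priority-based planning, the sketch below assigns a robot its next checkpoint by trading off a dynamic priority (derived from an estimated rate of moving-object activity, standing in for the environment's motion structure) against travel cost. The checkpoint records, priority formula, and selection rule are hypothetical; the abstract does not specify CPAD's actual database schema or assignment policy.

```python
import math

# Hypothetical checkpoint records: a location and an estimated rate of
# moving-object activity, from which a dynamic priority is derived.
checkpoints = {
    "gate":    {"pos": (0.0, 10.0), "activity_rate": 0.8},
    "loading": {"pos": (15.0, 5.0), "activity_rate": 0.3},
    "lot":     {"pos": (7.0, 20.0), "activity_rate": 0.5},
}

def priority(cp, time_since_visit):
    # Priority grows with estimated activity and with time since the last visit.
    return cp["activity_rate"] * time_since_visit

def next_checkpoint(robot_pos, last_visited, now):
    """Pick the checkpoint with the best priority-to-travel-cost ratio."""
    best_name, best_score = None, -math.inf
    for name, cp in checkpoints.items():
        dist = math.dist(robot_pos, cp["pos"])
        score = priority(cp, now - last_visited.get(name, 0.0)) / (dist + 1e-6)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Example: a robot at the origin chooses where to go at t = 100 s.
print(next_checkpoint((0.0, 0.0), {"gate": 90.0, "loading": 0.0, "lot": 50.0}, 100.0))
```

In this sketch the statistical activity estimates supply the global context: checkpoints in historically busy regions regain priority quickly after a visit, so reactive replanning naturally concentrates coverage where motion is expected.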