Posted on 2003-01-01, 00:00. Authored by Yaron Rachlin, John M. Dolan, Pradeep K. Khosla.
Deployed vision systems often encounter image
variations poorly represented in their training data.
While observing their environment, such vision systems
obtain unlabeled data that could be used to compensate
for incomplete training. To exploit these relatively
cheap and abundant unlabeled data, we present
a family of algorithms called λMEEM. Using these
algorithms, we train an appearance-based people
detection model. In contrast to approaches that rely on a
large number of manually labeled training points, we use
a partially labeled data set to capture appearance
variation. One can both avoid the tedium of additional
manual labeling and obtain improved detection
performance by augmenting a labeled training set with
unlabeled data. Further, enlarging the original training
set with new unlabeled points enables the update of
detection models after deployment without human
intervention. To support these claims, we show people
detection results and compare our performance to a
purely generative Expectation Maximization-based
approach to learning over partially labeled data.
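The baseline the abstract compares against, EM over partially labeled data, can be illustrated with a minimal sketch. The code below is not the authors' λMEEM algorithm or their appearance model; it is a generic semi-supervised EM fit of a two-component 1-D Gaussian mixture, where labeled points keep fixed one-hot responsibilities and unlabeled points receive soft responsibilities re-estimated at each E-step. All function names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def semi_supervised_em(X_lab, y_lab, X_unl, n_iter=50):
    """Fit a two-class 1-D Gaussian mixture by EM over partially
    labeled data: labeled points have fixed (one-hot) responsibilities;
    unlabeled points get soft responsibilities each E-step.
    Illustrative sketch only -- not the lambda-MEEM algorithm."""
    X = np.concatenate([X_lab, X_unl])
    n_lab = len(X_lab)
    # Initialize parameters from the labeled points alone.
    mu = np.array([X_lab[y_lab == k].mean() for k in (0, 1)])
    sigma = np.array([X_lab[y_lab == k].std() + 1e-3 for k in (0, 1)])
    pi = np.array([np.mean(y_lab == 0), np.mean(y_lab == 1)])
    R = np.zeros((len(X), 2))
    R[np.arange(n_lab), y_lab] = 1.0   # labels stay fixed throughout
    for _ in range(n_iter):
        # E-step: update responsibilities for unlabeled points only.
        for k in (0, 1):
            R[n_lab:, k] = (pi[k] / sigma[k]) * np.exp(
                -0.5 * ((X_unl - mu[k]) / sigma[k]) ** 2)
        R[n_lab:] /= R[n_lab:].sum(axis=1, keepdims=True)
        # M-step: weighted parameter updates over all points.
        Nk = R.sum(axis=0)
        mu = (R * X[:, None]).sum(axis=0) / Nk
        sigma = np.sqrt((R * (X[:, None] - mu) ** 2).sum(axis=0) / Nk) + 1e-6
        pi = Nk / len(X)
    return mu, sigma, pi
```

In the spirit of the abstract, a handful of labeled points fixes which component corresponds to which class, while the abundant unlabeled points refine the component parameters, so the model adapts without additional manual labeling.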