
The Dependent Dirichlet Process Mixture of Objects for Detection-free Tracking and Object Modeling

Journal contribution posted on 2014-04-01, authored by Willie Neiswanger, Frank Wood, and Eric P. Xing.
This paper explores how to find, track, and learn models of arbitrary objects in a video without a predefined method for object detection. We present a model that localizes objects via unsupervised tracking while learning a representation of each object, avoiding the need for pre-built detectors. Our model uses a dependent Dirichlet process mixture to capture the uncertainty in the number and appearance of objects, and requires only spatial and color video data that can be efficiently extracted via frame differencing. We give two inference algorithms for use in online and offline settings, and use them to perform accurate detection-free tracking on multiple real videos. We demonstrate our method in difficult detection scenarios involving occlusions and appearance shifts, on videos containing a large number of objects, and on a recent human-tracking benchmark, where we show performance comparable to state-of-the-art detector-based methods.
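The abstract states that the model consumes only spatial and color video data gathered via frame differencing. The Python sketch below (not the authors' code) illustrates one plausible way to extract such per-pixel features; the function name, the intensity threshold value, and the use of NumPy are assumptions for illustration only.

```python
import numpy as np

def frame_difference_features(prev_frame, frame, threshold=30):
    """Return an (N, 5) array of [x, y, r, g, b] rows, one per pixel whose
    mean intensity changed by more than `threshold` between two frames.

    `prev_frame` and `frame` are uint8 RGB arrays of shape (H, W, 3).
    The threshold is an illustrative choice, not a value from the paper.
    """
    gray_prev = prev_frame.astype(np.int16).mean(axis=2)
    gray_curr = frame.astype(np.int16).mean(axis=2)
    changed = np.abs(gray_curr - gray_prev) > threshold   # motion mask
    ys, xs = np.nonzero(changed)                          # spatial features
    colors = frame[ys, xs].astype(np.float64)             # color features
    return np.column_stack([xs, ys, colors])              # one row per moving pixel
```

Each row of the returned array is a spatio-color observation of the kind a dependent Dirichlet process mixture could cluster into object components across frames.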

History

Publisher Statement

Copyright 2014 by the authors

Date

2014-04-01
