Improving the Design and Use of Correlation Filters in Visual Tracking
The volume of video data collected will only increase as automated systems grow more prevalent and rely more heavily on vision sensors to make decisions. Visual tracking is the process of automatically estimating the location of an object through the course of a video. Tracking objects in video is useful in applications such as autonomous driving, surveillance, and robotics, where it enables more effective decision making for tasks such as predictive driving, anomaly detection, and face recognition. Given the amount of data to parse and the benefits of parsing it accurately, the need for fast and reliable visual tracking is clear. Correlation filters, previously used for detection and recognition tasks within single images, have become a popular approach to visual tracking because they can efficiently match and align two images. They have been adapted to tracking through incremental learning techniques that allow the filter to be updated efficiently as new frames arrive. Additional tracker elements, such as more powerful feature representations and improved scale tolerance, have led to state-of-the-art tracking performance. Still, despite these recent improvements, aspects of the union of correlation filters and visual tracking remain unexplored. This work explores alternative correlation filter designs that have not previously been adapted to visual tracking. We also introduce an occlusion detection system to address situations where the target is temporarily not visible, one of the most challenging aspects of tracking. We validate our approaches on widely used benchmarks and introduce a new evaluation metric that reflects the amount of activity that occurs within a given video.
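As a rough illustration of why correlation filters are fast at matching and aligning two images (a generic NumPy sketch, not the specific tracker developed in this work), correlation in the spatial domain becomes element-wise multiplication in the frequency domain, so a target's translation can be recovered from the peak of an FFT-computed response map:

```python
import numpy as np

def correlation_peak(template, image):
    """Estimate the circular shift that best aligns `image` with `template`.

    Cross-correlation is computed in the frequency domain: multiplying the
    image spectrum by the conjugate of the template spectrum, then inverting.
    This costs O(N log N) instead of O(N^2) for a direct sliding search.
    """
    response = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(template))))
    # The location of the response peak is the estimated translation.
    peak = np.unravel_index(np.argmax(response), response.shape)
    return tuple(int(p) for p in peak)

# Shift a random patch and recover the offset from the response peak.
rng = np.random.default_rng(0)
patch = rng.random((64, 64))
shifted = np.roll(patch, shift=(5, 12), axis=(0, 1))
print(correlation_peak(patch, shifted))  # → (5, 12)
```

Tracking-grade filters (e.g. MOSSE-style trackers) build on this same frequency-domain machinery, but learn the filter from many training patches and update it incrementally frame to frame.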