Carnegie Mellon University

Video Analytics for Conflict Monitoring and Human Rights Documentation

Technical report
posted on 2015-07-01, authored by Jay Aronson, Shicheng Xu, Alex Hauptmann

In this technical report, we describe how a machine learning and computer vision-based video analysis system called Event Labeling through Analytic Media Processing (E-LAMP) can be used to monitor conflicts and human rights abuse situations. E-LAMP searches large volumes of video for objects (e.g., weapons, military vehicles, buildings), actions (e.g., explosions, tank movement, gunfire, structures collapsing), written text (provided it can be processed by optical character recognition systems), speech acts, and human behaviors (e.g., running, crowd formation, crying, screaming) without recourse to metadata. It can also identify particular classes of people, such as soldiers, children, or corpses. We first describe the history of E-LAMP and explain how it works. We then provide an introduction to building novel classifiers (search models) for use in conflict monitoring and human rights documentation. Finally, we offer preliminary accuracy data on four test classifiers we built in the context of the Syrian conflict (helicopter, tank, corpse, and gunshots), and highlight E-LAMP's current limitations. Moving forward, we will be working with several conflict monitoring and human rights organizations to help them identify the benefits and challenges of integrating E-LAMP into their workflows.
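The report itself details the classifier-building workflow; as a rough, hypothetical illustration of the general idea (training a per-concept detector over pooled video features and ranking unseen videos by score), the sketch below uses synthetic frame features and a linear SVM. The feature pipeline, dimensions, and all names here are assumptions made for illustration, not E-LAMP's actual implementation.

# Hypothetical sketch of a video concept detector in the spirit of
# semantic concept search over video, as described in the abstract.
# Frame features are assumed to come from some upstream extraction
# step; synthetic features stand in here so the example runs end to end.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
FEAT_DIM = 128  # assumed dimensionality of per-frame feature vectors

def pool_video(frame_features):
    """Average-pool per-frame features into one video-level descriptor."""
    return frame_features.mean(axis=0)

# Synthetic training videos: label 1 if the target concept (e.g., a
# "helicopter" classifier) appears, 0 otherwise.
pos = [rng.normal(0.5, 1.0, size=(rng.integers(20, 60), FEAT_DIM)) for _ in range(30)]
neg = [rng.normal(0.0, 1.0, size=(rng.integers(20, 60), FEAT_DIM)) for _ in range(30)]
X = np.stack([pool_video(v) for v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

# One linear SVM per concept (helicopter, tank, corpse, gunshots, ...).
clf = LinearSVC(C=1.0).fit(X, y)

# Rank unseen videos by decision score so reviewers can inspect the
# most likely matches first, without relying on metadata.
test = [rng.normal(0.5, 1.0, size=(40, FEAT_DIM)) for _ in range(5)]
scores = clf.decision_function(np.stack([pool_video(v) for v in test]))
for i in np.argsort(scores)[::-1]:
    print(f"video {i}: score {scores[i]:.2f}")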
