Posted on 2009-01-01. Authored by Ekaterina H. Spriggs, Fernando de la Torre, Martial Hebert.
Temporal segmentation of human motion into actions is
central to the understanding and building of computational
models of human motion and activity recognition. Several
issues contribute to the challenge of temporal segmentation
and classification of human motion. These include the
large variability in the temporal scale and periodicity of
human actions, the complexity of representing articulated
motion, and the exponential nature of all possible movement
combinations. We provide initial results from investigating
two distinct problems: classification of the overall
task being performed, and the more difficult problem of
classifying individual frames over time into specific actions.
We explore first-person sensing through a wearable camera
and Inertial Measurement Units (IMUs) for temporally segmenting
human motion into actions and performing activity
classification in the context of cooking and recipe preparation
in a natural environment. We present baseline results
for supervised and unsupervised temporal segmentation,
and for recipe recognition, on the CMU Multimodal Activity
Database (CMU-MMAC).
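As a rough illustration of the frame-level classification problem described above, the sketch below classifies each frame of a synthetic multichannel "IMU" stream into one of several actions. The feature layout (six channels standing in for 3-axis accelerometer plus 3-axis gyroscope), the nearest-centroid classifier, and the action labels are all invented for the demo; this is not the method or data of the paper.

```python
# Illustrative sketch only: frame-wise action classification from IMU-style
# features, in the spirit of the supervised baseline described in the abstract.
# Channels, window layout, and labels are invented; this is NOT CMU-MMAC data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "IMU" stream: 300 frames x 6 channels (3-axis accel + 3-axis gyro),
# with three ground-truth action segments that differ in mean signal level.
frames_per_action = 100
means = [0.0, 1.5, -1.0]
X = np.vstack(
    [m + rng.normal(0, 0.4, size=(frames_per_action, 6)) for m in means]
)
y = np.repeat([0, 1, 2], frames_per_action)

# Nearest-centroid classifier trained on every other frame.
train = np.arange(0, len(y), 2)
test = np.arange(1, len(y), 2)
centroids = np.vstack(
    [X[train][y[train] == c].mean(axis=0) for c in range(3)]
)

def classify(frame):
    # Assign the frame to the action whose centroid is closest in L2 distance.
    return int(np.argmin(np.linalg.norm(centroids - frame, axis=1)))

pred = np.array([classify(x) for x in X[test]])
accuracy = (pred == y[test]).mean()
print(f"frame-level accuracy: {accuracy:.2f}")
```

Real IMU streams would of course need temporal features (windows, derivatives) rather than per-frame means, but the sketch shows the shape of the task: one label per frame, evaluated against a ground-truth segmentation.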