Human Action Recognition and Localization in Video using Structured Learning of Local Space-Time Features
This paper presents a unified framework for human action classification and localization in video using structured learning of local space-time features. Each human action class is represented by its own compact set of local patches. In our approach, we first use a discriminative hierarchical Bayesian classifier to select the space-time interest points that are informative for each particular action. These compact local features are then passed to a Support Vector Machine, preceded by a Principal Component Analysis projection, for the classification task. Action localization is performed with Dynamic Conditional Random Fields designed to incorporate the spatial and temporal structure constraints of superpixels extracted around those features. Each superpixel in the video is defined by the shape and motion information of its corresponding feature region. Compelling results from experiments on the KTH, Weizmann, HOHA, and TRECVid datasets demonstrate the efficiency and robustness of our framework for human action recognition and localization in video.
Keywords: Action Recognition and Localization, Structured Learning
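The classification stage described above (a PCA projection of selected local feature descriptors followed by an SVM) can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: the descriptors here are synthetic random vectors, and the descriptor dimensionality, number of PCA components, and SVM kernel are all assumptions for the sketch.

```python
# Minimal sketch of the PCA + SVM classification stage, assuming
# fixed-length descriptors for the selected space-time interest points.
# All data and dimensions below are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 local-feature descriptors of dimension 162,
# split across two hypothetical action classes with shifted means.
X = np.vstack([rng.normal(0.0, 1.0, (100, 162)),
               rng.normal(0.5, 1.0, (100, 162))])
y = np.array([0] * 100 + [1] * 100)

# Project descriptors onto a low-dimensional PCA subspace,
# then classify them with a linear SVM.
clf = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
clf.fit(X, y)
acc = clf.score(X, y)
print(f"training accuracy: {acc:.2f}")
```

In a full pipeline, per-feature predictions like these would then be aggregated per video clip to produce the final action label, while the localization stage operates separately on the superpixels around the selected features.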