IMPLICIT MOTION-SHAPE MODEL: A GENERIC APPROACH FOR ACTION MATCHING
We develop a robust technique for searching videos with similar motion patterns. From a query video, we construct a motion history image (MHI) of the main action within the search region. Dividing the MHI into concise space-time regions allows us to analyze the action as a dynamic 3D structure of sparse motion patches. We adopt the idea of the Generalized Hough Transform to integrate statistics of all these motion shapes into an Implicit Model that describes the dynamic characteristics of the query action. Motion segments extracted in the same way from candidate videos are projected onto the Hough hyperspace of the query model. Matching scores are then derived by running Parzen window density estimation at different scales. Empirical results on the KTH and Weizmann datasets demonstrate the effectiveness of this approach, which returns highly accurate matches within acceptable processing time. In addition, the nonparametric nature of the modeling algorithm makes it highly generic and adaptive to a variety of video search applications.
Keywords: Video Content Retrieval, Action Recognition, Implicit Motion-Shape Model, Motion History Image
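For illustration, the motion history image underlying this approach can be sketched in a few lines of NumPy. This is a generic MHI update rule (moving pixels are stamped with the current duration value tau, all others decay toward zero); the threshold and decay scheme shown here are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def update_mhi(mhi, frame_diff, tau, threshold=30):
    """Generic MHI update: pixels where the inter-frame difference
    exceeds the threshold are set to tau; all other pixels decay
    by one step toward zero, encoding how recently motion occurred."""
    moving = frame_diff > threshold
    return np.where(moving, tau, np.maximum(mhi - 1, 0))

# Toy example: a 4x4 frame pair with motion at one pixel.
mhi = np.zeros((4, 4), dtype=np.int32)
diff = np.zeros((4, 4), dtype=np.int32)
diff[0, 0] = 255  # hypothetical moving pixel
mhi = update_mhi(mhi, diff, tau=10)
print(mhi[0, 0])  # → 10 (freshly moving pixel holds the full value)
print(mhi[1, 1])  # → 0  (static pixel stays at zero)
```

Stacking such updates over a clip yields the single grayscale image whose bright regions trace the most recent motion, which the method then carves into the sparse motion patches described above.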