Chan-Su Lee, Ahmed M. Elgammal

Abstract. In this paper we introduce a novel approach for learning a view-invariant gait representation that requires neither synthesizing particular views nor any camera calibration. Given walking sequences captured from multiple views for multiple people, we fit a multilinear generative model using higher-order singular value decomposition (HOSVD), which decomposes view factors, body-configuration factors, and gait-style factors. Gait style is a view-invariant, time-invariant, and speed-invariant gait signature that can then be used in recognition. In the recognition phase, a new walking cycle of an unknown person in an unknown view is automatically aligned to the learned model, and an iterative procedure is then used to solve for both the gait-style parameter and the view. The proposed framework scales to adding a new person to an already learned model even if only a single cycle from a single view is available.
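To make the decomposition concrete, below is a minimal sketch, not the authors' implementation, of HOSVD applied to a three-mode data tensor of stacked walking data. The function names (`hosvd`, `unfold`), the tensor shapes, and the toy data are illustrative assumptions; the actual paper factorizes real gait sequences into view, gait-style, and body-configuration modes.

```python
# Minimal HOSVD sketch (assumed, not the authors' code): decompose a
# 3-mode tensor D (views x people x body-configuration features) into
# a core tensor Z and orthonormal per-mode factor matrices U_n, so that
# D ~= Z x_1 U_view x_2 U_style x_3 U_config (mode-n products).
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: bring `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor):
    """Return the core tensor and a list of per-mode factor matrices."""
    factors = []
    for mode in range(tensor.ndim):
        # Left singular vectors of each mode-n unfolding form the
        # orthonormal basis for that mode.
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U)
    # Core tensor: project the data onto each mode's basis (mode-n
    # product with U_n^T for every mode).
    core = tensor
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Toy example (hypothetical sizes): 5 views, 8 people, 30-dim
# body-configuration features per sample.
rng = np.random.default_rng(0)
D = rng.standard_normal((5, 8, 30))
Z, (U_view, U_style, U_config) = hosvd(D)
# Each row of U_style is one person's gait-style coefficient vector and
# each row of U_view is one view's coefficient vector, mirroring the
# factorization described in the abstract.
print(Z.shape, U_view.shape, U_style.shape, U_config.shape)
```

In the setting described above, a new person would not require refitting this decomposition: a single new cycle is aligned to the learned model and the style and view coefficients are solved for iteratively.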