Modeling Shape, Appearance and Motion for Human Movement Analysis
dc.contributor.advisor | Davis, Larry S | en_US |
dc.contributor.author | Lin, Zhe | en_US |
dc.contributor.department | Electrical Engineering | en_US |
dc.contributor.publisher | Digital Repository at the University of Maryland | en_US |
dc.contributor.publisher | University of Maryland (College Park, Md.) | en_US |
dc.date.accessioned | 2009-07-02T06:10:32Z | |
dc.date.available | 2009-07-02T06:10:32Z | |
dc.date.issued | 2009 | en_US |
dc.description.abstract | Shape, appearance, and motion are the most important cues for analyzing human movements in visual surveillance. Representations of these visual cues should be rich, invariant, and discriminative. We present several approaches to modeling and integrating them for human detection and segmentation, person identification, and action recognition. First, we describe a hierarchical part-template matching approach to simultaneous human detection and segmentation that combines local part-based and global shape-based schemes. For learning generic human detectors, a pose-adaptive representation is developed based on a hierarchical tree matching scheme and combined with a support vector machine classifier to perform human/non-human classification. We also formulate the detection of multiple occluded humans in a Bayesian framework and optimize it through an iterative process. We evaluated the approach on several public pedestrian datasets. Second, given regions of interest provided by human detectors, we introduce an approach that iteratively estimates segmentation via a generalized Expectation-Maximization algorithm. The approach incorporates local Markov random field constraints and global pose inferences to propagate beliefs over the image space iteratively and determine a coherent segmentation. Additionally, a layered occlusion model and a probabilistic occlusion reasoning scheme are introduced to handle inter-person occlusion. The approach is tested on a wide variety of real-life images. Third, we describe an approach to appearance-based person recognition. In learning, we perform discriminative analysis through pairwise coupling of training samples and estimate a set of normalized invariant profiles by marginalizing likelihood ratio functions that reflect local appearance differences. In recognition, we calculate discriminative information-based distances using a soft voting approach and combine them with appearance-based distances for nearest neighbor classification. We evaluated the approach on videos of 61 individuals under significant illumination and viewpoint changes. Fourth, we describe a prototype-based approach to action recognition. During training, a set of action prototypes is learned in a joint shape and motion space via $k$-means clustering. During testing, humans are tracked while a frame-to-prototype correspondence is established by nearest neighbor search, and actions are then recognized using dynamic prototype sequence matching; the similarity matrices used for sequence matching are obtained efficiently by look-up table indexing (a minimal sketch of this pipeline follows the record below). We evaluated the approach on several action datasets. | en_US |
dc.format.extent | 9364046 bytes | |
dc.format.mimetype | application/pdf | |
dc.identifier.uri | http://hdl.handle.net/1903/9279 | |
dc.language.iso | en_US | |
dc.subject.pqcontrolled | Engineering, Electronics and Electrical | en_US |
dc.subject.pqcontrolled | Computer Science | en_US |
dc.subject.pquncontrolled | Action Recognition | en_US |
dc.subject.pquncontrolled | Appearance Matching | en_US |
dc.subject.pquncontrolled | Human Detection | en_US |
dc.subject.pquncontrolled | Human Segmentation | en_US |
dc.subject.pquncontrolled | Video Surveillance | en_US |
dc.title | Modeling Shape, Appearance and Motion for Human Movement Analysis | en_US |
dc.type | Dissertation | en_US |
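Sketch of the prototype-based action recognition pipeline referenced in the abstract. This is a minimal, hypothetical Python illustration of the general scheme the abstract describes: k-means prototypes learned from per-frame shape-motion descriptors, frame-to-prototype assignment by nearest-neighbor search, and sequence matching over a cost matrix filled by look-up-table indexing. The descriptor format, the Euclidean distance, the DTW-style aligner, and all function and variable names (learn_prototypes, sequence_distance, gallery, etc.) are illustrative assumptions, not the dissertation's exact formulation.

# Hypothetical sketch, not the dissertation's implementation:
# k-means action prototypes, frame-to-prototype assignment, and
# look-up-table indexed dynamic sequence matching.
import numpy as np
from scipy.cluster.vq import kmeans2

def learn_prototypes(train_frames, k=16):
    """Cluster per-frame joint shape-motion descriptors into k action prototypes."""
    X = np.asarray(train_frames, dtype=np.float64)
    centroids, _ = kmeans2(X, k, minit='++')
    return centroids

def prototype_distance_table(prototypes):
    """Precompute pairwise prototype distances once; reused as a look-up table."""
    diff = prototypes[:, None, :] - prototypes[None, :, :]
    return np.linalg.norm(diff, axis=2)

def frame_to_prototype(frames, prototypes):
    """Assign each frame descriptor to its nearest prototype (nearest-neighbor search)."""
    d = np.linalg.norm(frames[:, None, :] - prototypes[None, :, :], axis=2)
    return d.argmin(axis=1)

def sequence_distance(labels_a, labels_b, lut):
    """DTW-style alignment over a cost matrix built purely by table look-up."""
    cost = lut[np.ix_(labels_a, labels_b)]   # similarity/cost matrix via LUT indexing
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],
                                                 acc[i, j - 1],
                                                 acc[i - 1, j - 1])
    return acc[n, m] / (n + m)               # length-normalized alignment cost

def recognize(test_frames, prototypes, lut, gallery):
    """Label a test sequence by its nearest gallery action sequence."""
    test_labels = frame_to_prototype(np.asarray(test_frames), prototypes)
    best = min(gallery, key=lambda g: sequence_distance(test_labels, g['labels'], lut))
    return best['action']

In this sketch, gallery would hold one entry per training sequence ({'labels': prototype indices, 'action': class name}); because every frame is reduced to a prototype index, the per-pair cost matrix requires no descriptor distance computation at test time, which is the efficiency argument the abstract attributes to look-up table indexing.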