Show simple item record

dc.contributor.author: Guerra-Filho, Gutemberg
dc.contributor.author: Aloimonos, Yiannis
dc.date.accessioned: 2006-12-12T20:01:57Z
dc.date.available: 2006-12-12T20:01:57Z
dc.date.issued: 2006-10-15
dc.identifier.uri: http://hdl.handle.net/1903/4013
dc.description.abstract: Human-centered computing (HCC) is centered on humans and what they do, i.e., human actions. Thus, developing an infrastructure for HCC requires understanding human action at some level of detail. We need to be able to talk about, synthesize, recognize, manipulate, imitate, imagine, and predict actions. How could we achieve this in a principled fashion? This paper proposes that the space of human actions has a linguistic structure. This is a sensory-motor space consisting of the evolution of the joint angles of the human body in movement. The space of human activity has its own phonemes, morphemes, and sentences. We present a Human Activity Language (HAL) for symbolic, non-arbitrary representation of visual and motor information. In phonology, we define atomic segments (kinetemes) that are used to compose human activity. In morphology, we propose parallel learning to incorporate associative learning into a language inference approach. Parallel learning solves the problem of overgeneralization and is effective in identifying the active joints and motion patterns in a particular action. In syntax, we point out some of the basic constraints for sentence formation. Finally, we demonstrate this linguistic framework on a praxicon of 200 human actions (motion capture data obtained by a suit) and we discuss the implications of HAL on HCC.
dc.format.extent: 445455 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.relation.ispartofseries: UM Computer Science Department
dc.relation.ispartofseries: CS-TR-4838
dc.title: A Language for Human Action
dc.type: Technical Report
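The abstract's phonology step — discretizing a continuous joint-angle trajectory into atomic symbolic segments (kinetemes) — can be illustrated with a minimal sketch. The segmentation rule below (labeling each stretch as increasing, decreasing, or steady) is a toy stand-in chosen for clarity, not the paper's actual kineteme definition, and the function and threshold names are hypothetical.

```python
def segment_to_kinetemes(angles, eps=1.0):
    """Toy discretization of a 1-D joint-angle trajectory into symbols:
    'u' (angle increasing), 'd' (decreasing), 's' (steady within eps),
    merging consecutive identical symbols into one atomic segment."""
    symbols = []
    for prev, curr in zip(angles, angles[1:]):
        delta = curr - prev
        sym = 'u' if delta > eps else 'd' if delta < -eps else 's'
        # Merge runs: a kineteme spans a maximal stretch of one behavior.
        if not symbols or symbols[-1] != sym:
            symbols.append(sym)
    return ''.join(symbols)

# A toy knee-angle trace for one movement: flex, hold, extend.
knee = [0, 10, 20, 30, 30, 30, 20, 10, 0]
print(segment_to_kinetemes(knee))  # → "usd"
```

Strings produced this way, one per joint, give the kind of symbolic material over which morphological learning (finding recurring substrings across actions) can operate.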

