Multi-Level Audio-Visual Interactions in Speech and Language Perception

dc.contributor.advisor: Idsardi, William J
dc.contributor.author: Rhone, Ariane E.
dc.contributor.department: Linguistics
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2011-10-08T06:24:10Z
dc.date.available: 2011-10-08T06:24:10Z
dc.date.issued: 2011
dc.description.abstract: That we perceive our environment as a unified scene rather than as individual streams of auditory, visual, and other sensory information has recently motivated researchers to move past the long-held tradition of studying these systems separately. Although the senses each have unique transduction organs, neural pathways, and primary cortical areas, they are ultimately merged in a meaningful way that allows us to navigate the multisensory world. How the senses are merged has become an increasingly broad field of research in recent decades, with the introduction and increased availability of neuroimaging techniques. Areas of study range from multisensory object perception to cross-modal attention, multisensory interactions, and integration. This thesis focuses on audio-visual speech perception, with special attention to the facilitatory effects of visual information on auditory processing. When visual information is concordant with auditory information, it provides an advantage that is measurable in behavioral response times and evoked auditory fields (Chapter 3) and in increased entrainment to multisensory periodic stimuli, reflected in steady-state responses (Chapter 4). When the audio-visual information is incongruent, the two streams can often, but not always, combine to form a third, physically absent percept (known as the McGurk effect). This effect is investigated (Chapter 5) using real-word stimuli. McGurk percepts were not robustly elicited for a majority of stimulus types, but the patterns of responses suggest that the physical and lexical properties of the auditory and visual stimuli may affect the likelihood of obtaining the illusion. Together, these experiments add to the growing body of knowledge suggesting that audio-visual interactions occur at multiple stages of processing.
dc.identifier.uri: http://hdl.handle.net/1903/12039
dc.subject.pqcontrolled: Linguistics
dc.subject.pqcontrolled: Neurosciences
dc.subject.pquncontrolled: Auditory Evoked Fields
dc.subject.pquncontrolled: Magnetoencephalography
dc.subject.pquncontrolled: McGurk illusion
dc.subject.pquncontrolled: Multisensory
dc.subject.pquncontrolled: Steady-State Response
dc.title: Multi-Level Audio-Visual Interactions in Speech and Language Perception
dc.type: Dissertation

Files

Original bundle
Name: Rhone_umd_0117E_12621.pdf
Size: 8.04 MB
Format: Adobe Portable Document Format