Investigations into the Neural Basis of Structured Representations

dc.contributor.advisor: Weinberg, Amy
dc.contributor.author: Whitney, Carol
dc.contributor.department: Neuroscience and Cognitive Science
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2005-02-02T06:32:11Z
dc.date.available: 2005-02-02T06:32:11Z
dc.date.issued: 2004-11-22
dc.description.abstract: The problem of how the brain encodes structural representations is investigated by formulating computational theories constrained from the bottom up by neurobiological factors and from the top down by behavioral data. This approach is used to construct models of letter-position encoding in visual word recognition and of hierarchical representations in sentence parsing. The problem of letter-position encoding entails specifying how the retinotopic representation of a stimulus (a printed word) is progressively converted into an abstract representation of letter order. Consideration of the architecture of the visual system, letter-perceptibility studies, and form-priming experiments led to the SERIOL model, which comprises five layers: (1) a (retinotopic) edge layer, in which letter activations are determined by the acuity gradient; (2) a (retinotopic) feature layer, in which letter activations conform to a monotonically decreasing activation gradient, dubbed the locational gradient; (3) an abstract letter layer, in which letter order is encoded sequentially; (4) a bigram layer, in which contextual units encode letter pairs that fire in a particular order; and (5) a word layer. Because the acuity and locational gradients are congruent with each other in one hemisphere but not the other, formation of the locational gradient requires hemisphere-specific processing. It is proposed that this processing underlies visual-field asymmetries associated with word length and orthographic-neighborhood size. Hemifield lexical-decision experiments in which contrast manipulations were used to modify activation patterns confirmed this account.

In contrast to the linear relationships between letters, a parse of a sentence requires hierarchical representations. Consideration of a fixed-connectivity constraint, brain-imaging studies, sentence-complexity phenomena, and insights from the SERIOL model led to the TPARRSE model, in which hierarchical relationships are represented by a predefined distributed encoding. This encoding is constructed with the support of working memory, which encodes relationships between phrases via two synchronized sequential representations. The model explains complexity phenomena through specific proposals as to how information is represented and manipulated in syntactic working memory, and it provides a more comprehensive account of these phenomena than capacity-based metrics do.
dc.format.extent: 1018824 bytes
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/1903/2030
dc.language.iso: en_US
dc.subject.pqcontrolled: Psychology, Cognitive
dc.subject.pquncontrolled: computational modelling
dc.subject.pquncontrolled: visual word recognition
dc.subject.pquncontrolled: psycholinguistics
dc.title: Investigations into the Neural Basis of Structured Representations
dc.type: Dissertation
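
The SERIOL layer sequence described in the abstract can be made concrete with a small illustration. The Python sketch below walks a printed word through the later stages of the pipeline: a locational gradient over letter positions, sequential readout into abstract letter order, ordered bigram units, and word-level matching. The gradient values, the bigram separation limit, the toy lexicon, and all function names are illustrative assumptions, not parameters or code from the dissertation.

    # Minimal sketch of a SERIOL-style layer sequence (illustrative assumptions
    # throughout). The edge layer (acuity-gradient activations) is omitted.
    from itertools import combinations

    def locational_gradient(word, top=1.0, step=0.1):
        # Feature layer: monotonically decreasing activations across letter
        # positions (the "locational gradient"); values here are arbitrary.
        return {i: top - step * i for i in range(len(word))}

    def serial_letter_order(word, gradient):
        # Abstract letter layer: read letters out in order of decreasing
        # activation, yielding a sequential, retinotopy-free encoding.
        order = sorted(gradient, key=gradient.get, reverse=True)
        return [word[i] for i in order]

    def ordered_bigrams(letters, max_separation=2):
        # Bigram layer: contextual units for ordered letter pairs separated by
        # at most max_separation intervening letters (an illustrative limit).
        pairs = set()
        for i, j in combinations(range(len(letters)), 2):
            if j - i <= max_separation + 1:
                pairs.add(letters[i] + letters[j])
        return pairs

    def best_word(active_bigrams, lexicon):
        # Word layer: score each lexical entry by its overlap with the
        # currently active bigram units.
        scores = {}
        for entry in lexicon:
            entry_bigrams = ordered_bigrams(list(entry))
            scores[entry] = len(active_bigrams & entry_bigrams) / len(entry_bigrams)
        return max(scores, key=scores.get), scores

    if __name__ == "__main__":
        stimulus = "cart"
        gradient = locational_gradient(stimulus)            # feature layer
        letters = serial_letter_order(stimulus, gradient)   # abstract letter layer
        bigrams = ordered_bigrams(letters)                  # bigram layer
        winner, scores = best_word(bigrams, ["cart", "card", "trap"])
        print(letters)          # ['c', 'a', 'r', 't']
        print(sorted(bigrams))  # ordered pairs such as 'ca', 'cr', 'ar', 'rt'
        print(winner, scores)   # 'cart' wins on full bigram overlap

In the full model, the locational gradient is what must be reconstructed, hemisphere-specifically, from the acuity-driven edge-layer activations; here it is simply assumed as a given starting point.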

Files

Name: umi-umd-1988.pdf
Size: 994.95 KB
Format: Adobe Portable Document Format