Hearing & Speech Sciences Theses and Dissertations
Permanent URI for this collection: http://hdl.handle.net/1903/2776
8 results
Item ISOLATING EFFECTS OF PERCEPTUAL ANALYSIS AND SOCIOCULTURAL CONTEXT ON CHILDREN’S COMPREHENSION OF TWO DIALECTS OF ENGLISH, AFRICAN AMERICAN ENGLISH AND GENERAL AMERICAN ENGLISH (2023) Erskine, Michelle E.; Edwards, Jan; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

There is a long-standing gap in literacy achievement between African American and European American students (e.g., NAEP, 2019, 2022). A large body of research has examined the different factors that continue to reinforce performance differences across students. One variable that has been of long-term interest to sociolinguists and applied scientists is children’s use of different dialects in the classroom. Many African American students speak African American English (AAE), a rule-governed, but socially stigmatized, dialect of English that differs in phonology, morphosyntax, and pragmatics from General American English (GAE), the dialect of classroom instruction. Empirical research on dialect variation and literacy achievement has demonstrated that linguistic differences between dialects make it more difficult to learn to read (Buhler et al., 2018; Charity et al., 2004; Gatlin & Wanzek, 2015; Washington et al., 2018, inter alia) and, more recently, more difficult to comprehend spoken language (Byrd et al., 2022; Edwards et al., 2014; Erskine, 2022a; Johnson, 2005; de Villiers & Johnson, 2007; Terry, Hendrick, Evangelou, et al., 2010; Terry, Thomas, Jackson, et al., 2022). The prevailing explanation for these results has been the perceptual analysis hypothesis, a framework that asserts that linguistic differences across dialects create challenges in mapping variable speech signals to listeners’ stored mental representations (Adank et al., 2009; Clopper, 2012; Clopper & Bradlow, 2008; Cristia et al., 2012).
However, spoken language comprehension is more than perceptual analysis; it requires the integration of perceptual information with communicative intent and sociocultural information (speaker identity). In effect, the perceptual analysis hypothesis treats dialect variation as just another form of signal degradation. Reducing dialect variation to a signal-mapping problem potentially limits our understanding of its contribution to spoken language comprehension. This dissertation proposes that research on spoken language comprehension should integrate frameworks that are more sensitive to the sociocultural aspects of dialect variation, such as the role of linguistic and nonlinguistic cues associated with speakers of different dialects. This dissertation includes four experiments that use the visual world paradigm to explore the effects of dialect variation on spoken language comprehension among children between the ages of 3;0 and 11;11 (years;months) from two linguistic communities: European American speakers of GAE and African American speakers with varying degrees of exposure to AAE and GAE. Chapter 2 (Erskine, 2022a) investigates the effects of dialect variation in auditory-only contexts in two spoken word recognition tasks that vary in linguistic complexity: a) word recognition in simple phrases and b) word recognition in sentences that vary in semantic predictability. Chapter 3 (Erskine, 2022b) examines the effects of visual and auditory speaker identity cues on the comprehension of dialect variation in a literal semantic comprehension task (i.e., word recognition in semantically facilitating sentences). Lastly, Chapter 4 (Erskine, 2022c) examines the effects of visual and auditory speaker identity cues on children’s comprehension of different dialects in a task that evaluates pragmatic inferencing (i.e., scalar implicature).
Each of the studies evaluates the perceptual analysis hypothesis against sociolinguistically informed hypotheses that account for the integration of linguistic and nonlinguistic speaker identity cues as explanations for the relationships observed between dialect variation and spoken language comprehension. Collectively, these studies address the question of how dialect variation impacts spoken language comprehension. This dissertation provides evidence that traditional explanations that focus on perceptual costs are limited in their ability to account for the correlations typically reported between spoken language comprehension and dialect use. Additionally, it shows that school-age children rapidly integrate linguistic and nonlinguistic socioindexical cues in ways that meaningfully guide their comprehension of different speakers. The implications of these findings and future research directions are also addressed within.

Item EFFECTS OF INTERRUPTING NOISE AND SPEECH REPAIR MECHANISMS IN ADULT COCHLEAR-IMPLANT USERS (2020) Jaekel, Brittany Nicole; Goupell, Matthew J.; Newman, Rochelle S.; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The long-term objective of this project is to help cochlear-implant (CI) users achieve better speech understanding in noisy, real-world listening environments. The specific objective of this research is to evaluate why speech repair (“restoration”) mechanisms are often atypical or absent in this population. Restoration allows for improved speech understanding when signals are interrupted with noise, at least among normal-hearing listeners. These experiments measured how CI device factors (such as noise-reduction algorithms and compression) and listener factors (such as peripheral auditory encoding and linguistic skills) affected restoration mechanisms.
We hypothesized that device factors reduce opportunities to restore speech: noise in the restoration paradigm must act as a plausible masker in order to prompt the illusion of intact speech, and CIs are designed to attenuate noise. We also hypothesized that CI users, when listening with an ear with better peripheral auditory encoding and provided with a semantic cue, would show improved restoration ability. The interaction of high-quality bottom-up acoustic information with top-down linguistic knowledge is integral to the restoration paradigm, and thus restoration could be possible if CI users listen to noise-interrupted speech with a “better ear” and have opportunities to utilize their linguistic knowledge. We found that CI users generally failed to restore speech regardless of device factors, ear presentation, and semantic cue availability. For CI users, interrupting noise apparently serves as an interferer rather than a promoter of restoration. The most common concern among CI users is difficulty understanding speech in noisy listening conditions; our results indicate that one reason for this difficulty could be that CI users are unable to utilize tools like restoration to process noise-interrupted speech effectively.

Item AN ANALYSIS OF CODE SWITCHING EVENTS IN TYPICALLY DEVELOPING SPANISH-ENGLISH BILINGUAL CHILDREN (2020) Guevara, Sandra Stephanie; Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Code-switching (CS) patterns were investigated in language samples of 14 typically developing Spanish-English bilingual preschool-aged children. CS occurred primarily when the children spoke in Spanish. We investigated code-switched events, vocabulary measures, and disfluencies to better understand whether children use code-switching to fill lexical gaps in Spanish, as measured by disfluencies surrounding the code-switch.
Results indicate that children’s spoken vocabulary diversity is not related to code-switching frequency, although their receptive vocabulary skills are negatively correlated with the proportion of code-switched events. We also found no significant relationship between code-switched events and disfluencies across participants. Findings suggest clinical implications for best practice when speech-language pathologists work with bilingual children, as they observe language attrition and code-switching related to language proficiency and dominance.

Item Language Outcomes of the Play and Language for Autistic Youngsters (PLAY) Project Home Consultation model—An Extended Analysis (2016) Catalano, Allison; Bernstein-Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The current study is a post-hoc analysis of data from the original randomized control trial of the Play and Language for Autistic Youngsters (PLAY) Home Consultation program, a parent-mediated, DIR/Floortime-based early intervention program for children with ASD (Solomon, Van Egeren, Mahone, Huber, & Zimmerman, 2014). We examined 22 children from the original RCT who received the PLAY program. Children were split into two groups (higher and lower functioning) based on the ADOS module administered prior to intervention. Fifteen-minute parent-child video sessions were coded using CHILDES transcription software. Child and maternal language, communicative behaviors, and communicative functions were assessed in the natural language samples both pre- and post-intervention. Results demonstrated significant improvements in both child and maternal behaviors following intervention. There was a significant increase in child verbal and non-verbal initiations and verbal responses in whole-group analysis.
Total number of utterances, word production, and grammatical complexity all significantly improved when viewed across the whole group of participants; however, lexical growth did not reach significance. Changes in child communicative function were especially noteworthy, demonstrating a significant increase in social interaction and a significant decrease in non-interactive behaviors. Further, mothers demonstrated an increase in responsiveness to the child’s conversational bids, an increased ability to follow the child’s lead, and a decrease in directiveness. When separated for analyses within groups, trends emerged for child and maternal variables, suggesting greater gains in use of communicative function in both the higher- and lower-functioning groups than in linguistic structure. Additional analysis also revealed a significant inverse relationship between maternal responsiveness and child non-interactive behaviors; as mothers became more responsive, children’s non-engagement decreased. Such changes further suggest that skills learned through PLAY parent training may result in improvements in child social interaction and language abilities.

Item Fast mapping in linguistic context: Processing and complexity effects (2015) Arnold, Alison Reese; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Young children readily use syntactic cues for word learning in structurally simple contexts (Naigles, 1990). However, developmental differences in children's language processing abilities might interfere with their access to syntactic cues when novel words are presented in structurally challenging contexts. To understand the role of processing in syntactic bootstrapping, we used an eye-tracking paradigm to examine children's fast-mapping abilities in active (structurally simple) and passive (structurally complex) sentences.
Children's actions after hearing the sentences indicated that they were more successful at mapping words in passive sentences when novel words were presented in NP2 ("The seal will be quickly eaten by the blicket") than when novel words were presented in NP1 ("The blicket will be quickly eaten by the seal"), indicating that presenting more prominent nouns in NP1 strengthens children's agent-first bias and sabotages interpretation of passives. Later recall data indicate that children were less likely to remember new words in structurally challenging contexts.

Item The role of executive functions in typical and atypical preschoolers' speech sound development (2014) Eaton, Catherine Torrington; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

For most children, the acquisition of adult-like speech production is a seamless process. Yet for children with cognitive-linguistic speech sound disorder (SSD), in the absence of any obvious etiology such as hearing-related or motor processing deficits, the rules that govern their native phonology or speech sound system must be explicitly taught in speech therapy. A fundamental question asks why children with SSD are often unable to transition to adult-like production without direct therapy. One plausible, yet relatively unexplored, explanation for this difficult transition is that executive function abilities (EFs) differ in children with SSD as compared to typically developing (TD) children. The core EFs (inhibitory control, cognitive flexibility, and working memory) are the cognitive functions needed to control initial or habituated impulses, shift flexibly between rule sets, and store and manipulate information; these could logically be involved in the process of replacing early, inaccurate production patterns with adult phonology.
For this study, 4- to 5-year-old children, 20 with SSD and 45 with TD speech, participated in a battery of EF, speech production, and speech perception tasks. In addition, children were assessed using a modified version of the Syllable Repetition Task (SRT; Shriberg et al., 2009), a variant of non-word repetition for children with SSD. Performance accuracy was compared across groups and also correlated with speech sound accuracy from a single-word naming task. Children with SSD performed more poorly than the TD speech group on the forward digit span, SRT, and Flexible Item Selection (FIST; Jacques & Zelazo, 2001) tasks. Only forward digit span and SRT performance were positively correlated with speech production accuracy. Factor and regression analyses suggested that phonological memory capacity, but not inhibitory control, cognitive flexibility, or mental manipulation, is likely impaired in this population. Results from the SRT suggest that an additional cognitive component, such as phonological encoding or the quality of underlying representations, may also be implicated. Interpretations of these and other results, as well as their clinical implications, are discussed.

Item Maternal Voice Onset Time in Infant- and Adult-Directed Speech: Characteristics and Possible Impacts on Language Development (2013) Sampson, Julia Lauren; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Infant-directed speech (IDS) contains many unique characteristics that may facilitate language development. One acoustic cue that may differ in IDS compared to adult-directed speech (ADS) is voice onset time (VOT). The present study examines the VOT of open- and closed-class words in speech to infants at 10/11, 18, and 24 months of age, as well as in speech to adults.
This study also examines correlations between VOT clarification in speech to infants and language outcomes at 2 years. Results show that VOT clarification in IDS did not differ significantly at any of the ages. Overlap between voicing categories for open-class words was significantly smaller in ADS than in IDS. The overlap for closed-class words at 18 months was significantly related to language outcomes, with lower overlap relating to higher outcome scores. Possible explanations are discussed.

Item EFFECTS OF COGNITIVE DEMAND ON WORD ENCODING IN ADULTS WHO STUTTER (2011) Tsai, Pei-Tzu; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The etiology of persistent stuttering is unknown. Stuttering has been attributed to multiple potential factors, including difficulty in processing language-related information, but findings remain inconclusive regarding any specific linguistic deficit potentially causing stuttering. One particular challenge in drawing conclusions is the highly variable task demands across studies; different tasks could reflect either different processes or different levels of demand. This study examined the role of cognitive demand in semantic and phonological processes to evaluate the role of linguistic processing in the etiology of stuttering. The study examined concurrent processing of picture naming and tone identification in typically fluent young adults, adults who stutter (AWS), and matched adults who do not stutter (NS), with varying temporal overlap between the dual tasks as a manipulation of cognitive demand.
The study found 1) that in both AWS and NS, semantic and phonological encoding both interacted with non-linguistic processing during concurrent processing, suggesting that both linguistic processes are demanding of cognitive resources; 2) that there was no observable relationship between dual-task interference with word encoding and stuttering; 3) that AWS and NS showed different trends of phonological encoding under high but not low cognitive demand, suggesting a subtle phonological deficit in AWS; and 4) that the phonological encoding effect correlated with stuttering rate, suggesting that a phonological deficit could potentially play a role in the etiology or persistence of stuttering. Additional findings include potential differences in semantic encoding between typically fluent young adults and middle-aged adults, as well as potential strategic differences in processing semantic information between AWS and NS. These findings support stuttering theories that posit specific deficits in phonological encoding, and they argue against a primary role for semantic encoding deficiency or lexical access deficit in stuttering.