College of Behavioral & Social Sciences

Permanent URI for this community: http://hdl.handle.net/1903/8

The collections in this community comprise faculty research works, as well as graduate theses and dissertations.

Search Results

Now showing 1 - 7 of 7
  • Item
    Evaluating the role of acoustic cues in identifying the presence of a code-switch
    (2024) Exton, Erika Lynn; Newman, Rochelle S.; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Code-switching (switching between languages) is a common linguistic behavior in bilingual speech directed to infants and children. In adult-directed speech (ADS), acoustic-phonetic properties of one language may transfer to the other language close to a code-switch point; for example, English stop consonants may be more Spanish-like near a switch. This acoustically natural code-switching may be easier for bilingual listeners to comprehend than code-switching without these acoustic changes; however, it effectively makes the languages more phonetically similar at the point of a code-switch, which could make them difficult for an unfamiliar listener to distinguish. The goal of this research was to assess the acoustic-phonetic cues to code-switching available to listeners unfamiliar with the languages by studying the perception and production of these cues. In Experiment 1, Spanish-English bilingual adults (particularly those who hear code-switching frequently), but not English monolingual adults, were sensitive to natural acoustic cues to code-switching in unfamiliar languages and could use them to identify language switches between French and Mandarin. Such cues were particularly helpful when they allowed listeners to anticipate an upcoming language switch (Experiment 2). In Experiment 3, monolingual children appeared unable to continually identify which language they were hearing. Experiment 4 provides some preliminary evidence that monolingual infants can identify a switch between French and Mandarin, though it does not address the utility of natural acoustic cues for infants. The acoustic detail of code-switched speech to infants was then investigated to evaluate how acoustic properties of bilingual infant-directed speech (IDS) are affected by the presence of, and proximity to, code-switching. Spanish-English bilingual women narrated wordless picture books in IDS and ADS, and the voice onset times (VOTs) of their English voiceless stops were analyzed in code-switching and English-only stories in each register. In ADS only, English voiceless stops that preceded an English-to-Spanish code-switch and were closer to that switch point were produced with more Spanish-like VOTs than more distant tokens. This effect of distance to Spanish on English VOTs did not hold for tokens that followed Spanish in ADS, or for tokens in either direction in IDS, suggesting that parents may avoid producing these acoustic cues when speaking to young children.
  • Item
    Utterance-level predictors of stuttering-like, stall, and revision disfluencies in the speech of young children who do and do not stutter
    (2021) Garbarino, Julianne; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Disfluencies are generally divided into two types: stuttering-like disfluencies (SLDs), which are characteristic of the speech of people who stutter, and typical disfluencies (TDs), which are produced by nearly all speakers. In several studies, TDs have been further divided into stalls and revisions; stalls (fillers, repetitions) are thought to be prospective, occurring due to glitches in planning upcoming words and structures, while revisions (word and phrase revisions, word fragments) are thought to be retrospective, occurring when a speaker corrects language produced in error. This dissertation involved the analysis of 15,782 utterances produced by 32 preschool-age children who stutter (CWS) and 32 matched children who do not stutter (CWNS). The first portion of this dissertation focused on how syntactic factors relate to disfluency. Disfluencies of all three types were more likely to occur when utterances were ungrammatical. The disfluency types thought a priori to relate to planning (SLDs and stalls) occurred significantly more often before errors, which is consistent with these disfluencies occurring, in part, due to difficulty planning the error-containing portion of the utterance. Previous findings of a distributional dichotomy between stalls and revisions were not replicated. Both stalls and revisions became more likely in ungrammatical utterances, as utterance length increased, and as the language level of the child who produced the utterance increased. This unexpected result suggests that both stalls and revisions are more likely to occur in utterances that are harder to plan (those that are ungrammatical and/or longer), and that as children’s language develops, so do the skills they need to produce both stalls and revisions. The second part of this dissertation assessed the evidence base for the widespread recommendation that caregivers of young CWS should avoid asking them questions, as CWS have been thought to stutter more often when answering questions. CWS were, in fact, less likely to stutter when answering questions than in other utterance types. Given this finding, the absence of previous evidence connecting question-answering to stuttering, and the potential benefits of asking children questions, clinicians should reconsider the recommendation that caregivers of CWS reduce their question-asking.
  • Item
    Syntactic Processing and Word Learning with a Degraded Auditory Signal
    (2017) Martin, Isabel A.; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The current study examined real-time processing and word learning in children receiving a degraded audio signal, similar to the signal children with cochlear implants hear. Using noise-vocoded stimuli, this study assessed whether increased uncertainty in the audio signal alters the developmental strategies available for word learning via syntactic cues. Normal-hearing children receiving a degraded signal differentiated between active and passive sentences nearly as well as those hearing natural speech. However, they had the most difficulty when correct interpretation of a sentence required revision of initial misinterpretations, a pattern similar to that found with natural speech. While further testing is needed to confirm these effects, the current evidence suggests that a degraded signal may make revision even harder than it is with natural speech. This provides important information about language learning with a cochlear implant, with implications for intervention strategies.
  • Item
    Early Phonological Predictors of Toddler Language Outcomes
    (2015) Gerhold, Kayla; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Several studies have explored relationships between children's early phonological development and later language performance. This literature has included a more recent focus on the potential for early vocalization profiles in infancy to predict later language outcomes, including those characterized by delay or disorder. The present study examines phonetic inventories and syllable structure patterns in a large cohort of infants as they relate to expressive language outcomes at 2 years of age. Results suggest that, as early as 11 months, phonetic inventory and mean syllable structure level are related to expressive language outcomes at 2 years (MLU, MCDI, and types). If specific patterns of production can be established for a typically developing population, this will further inform clinical decision-making. Possible applications are discussed.
  • Item
    Fast mapping in linguistic context: Processing and complexity effects
    (2015) Arnold, Alison Reese; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Young children readily use syntactic cues for word learning in structurally simple contexts (Naigles, 1990). However, developmental differences in children's language processing abilities might interfere with their access to syntactic cues when novel words are presented in structurally challenging contexts. To understand the role of processing in syntactic bootstrapping, we used an eye-tracking paradigm to examine children's fast-mapping abilities in active (structurally simple) and passive (structurally complex) sentences. Children's actions after hearing the sentences indicated that they were more successful at mapping words in passive sentences when the novel word appeared in NP2 ("The seal will be quickly eaten by the blicket") than when it appeared in NP1 ("The blicket will be quickly eaten by the seal"), indicating that presenting more prominent nouns in NP1 increases children's agent-first bias and sabotages interpretation of passives. Later recall data indicated that children were less likely to remember new words in structurally challenging contexts.
  • Item
    Patterns and Possible Influences of Maternal Vowel Clarification on Child Language Development
    (2013) Hartman, Kelly Marie; Ratner, Nan B; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    There have been many studies examining the differences between infant-directed speech (IDS) and adult-directed speech (ADS). However, very few longitudinal studies have explored how patterns of maternal vowel articulation in IDS change as children get older, or whether these changes have any effect on a child's developing language skills. This study examines the vowel clarification of mothers' IDS when their children were 10-11 months, 18 months, and 24 months old, as compared with the mothers' vowel production in ADS. Relationships between maternal vowel space, vowel duration, and vowel variability and child language outcomes at 2 years are also explored. Results show that vowel space and vowel duration tend to be greater in IDS than in ADS, and that a mother's vowel space at 18 months is significantly related to the child's expressive and receptive language outcomes at 2 years. Possible explanations are discussed.
  • Item
    Parental Perceptions of Children’s Communicative Development at Stuttering Onset
    (American Speech-Language-Hearing Association, 2000-10) Ratner, Nan Bernstein; Silverman, Stacy
    There has been clinical speculation that parents of young stuttering children have expectations of their children’s communication abilities that are not well-matched to the children’s actual skills. We appraised the language abilities of 15 children close to the onset of stuttering symptoms and 15 age-, sex-, and SES-matched fluent children using an array of standardized tests and spontaneous language sample measures. Parents concurrently completed two parent-report measures of the children’s communicative development. Results indicated generally depressed performance on all child speech and language measures by the children who stutter. Parent report was closely attuned to child performance for the stuttering children; parents of nonstuttering children were less accurate in their predictions of children’s communicative performance. Implications for clinical advisement to parents of stuttering children are discussed.