College of Behavioral & Social Sciences

Permanent URI for this community: http://hdl.handle.net/1903/8

The collections in this community comprise faculty research works, as well as graduate theses and dissertations.


Search Results

Now showing 1 - 10 of 11
  • Item
    Adult discrimination of children’s voices over time: Voice discrimination of auditory samples from longitudinal research studies
    (2024) Opusunju, Shelby; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The human voice is subject to change over the lifespan, and these changes are even more pronounced in children. Acoustic properties of speech, such as fundamental frequency, amplitude, speech rate, and fluency, change dramatically as children grow and develop (Lee et al., 1999). Previous studies have established that listeners have a generally strong capacity to discriminate between adult speakers, as well as to identify the age of a speaker, based solely on the voice (Kreiman and Sidtis, 2011; Park, 2019). However, few studies have examined listeners' capacity to discriminate between the voices of children, particularly as the voice matures over time. This study examines how well adult listeners can discriminate between the voices of young children of the same age and at different ages. Single-word child language samples from different children (N = 6) were obtained from Munson et al. (2021) and used to create closed-set online AX voice discrimination tasks for adult listeners (N = 31). Three tasks examined listeners' accuracy and sensitivity in identifying whether a voice was that of the same child or a different child under three conditions: 1) between two three-year-old children, 2) between two five-year-old children, and 3) between two children of different ages (three vs. five years old). Listeners performed with above-chance accuracy and sensitivity when discriminating between the voices of two three-year-old children and between those of two five-year-old children, and performance did not differ significantly between these two tasks. No listeners demonstrated above-chance accuracy in discriminating a single child's voice at two different ages, and performance in this task was significantly poorer than in the other two. The findings demonstrate a sizable gap between adults' ability to recognize child voices at a single age and across two different ages. Possible explanations and implications for understanding child talker discrimination across different ages are discussed.
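    A minimal sketch of how accuracy and sensitivity might be scored in an AX task of this kind, using the common simplification of computing a yes/no-style d' from "different" responses; the trial records and the log-linear correction are illustrative assumptions, not the study's actual data or analysis.
    ```python
    # Score a same/different (AX) voice discrimination task.
    from scipy.stats import norm

    # Hypothetical trials: (true condition, listener response).
    trials = [
        ("different", "different"), ("different", "same"),
        ("same", "same"), ("same", "different"),
        ("different", "different"), ("same", "same"),
    ]

    hits = sum(t == "different" == r for t, r in trials)            # correct "different"
    fas = sum(t == "same" and r == "different" for t, r in trials)  # false alarms
    n_diff = sum(t == "different" for t, _ in trials)
    n_same = len(trials) - n_diff

    # Log-linear correction keeps z-scores finite at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (n_diff + 1)
    fa_rate = (fas + 0.5) / (n_same + 1)

    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    accuracy = sum(t == r for t, r in trials) / len(trials)
    print(f"accuracy = {accuracy:.2f}, d' = {d_prime:.2f}")
    ```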
  • Item
    Evaluating the role of acoustic cues in identifying the presence of a code-switch
    (2024) Exton, Erika Lynn; Newman, Rochelle S.; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Code-switching (switching between languages) is a common linguistic behavior in bilingual speech directed to infants and children. In adult-directed speech (ADS), acoustic-phonetic properties of one language may transfer to the other language close to a code-switch point; for example, English stop consonants may be more Spanish-like near a switch. This acoustically natural code-switching may be easier for bilingual listeners to comprehend than code-switching without these acoustic changes; however, it effectively makes the languages more phonetically similar at the point of a code-switch, which could make them difficult for an unfamiliar listener to distinguish. The goal of this research was to assess the acoustic-phonetic cues to code-switching available to listeners unfamiliar with the languages by studying the perception and production of these cues. In Experiment 1, Spanish-English bilingual adults (particularly those who hear code-switching frequently), but not English monolingual adults, were sensitive to natural acoustic cues to code-switching in unfamiliar languages and could use them to identify language switches between French and Mandarin. Such cues were particularly helpful when they allowed listeners to anticipate an upcoming language switch (Experiment 2). In Experiment 3, monolingual children appeared unable to continuously identify which language they were hearing. Experiment 4 provided some preliminary evidence that monolingual infants can identify a switch between French and Mandarin, though it did not address the utility of natural acoustic cues for infants. Finally, the acoustic detail of code-switched speech to infants was investigated to evaluate how acoustic properties of bilingual infant-directed speech (IDS) are affected by the presence of and proximity to code-switching. Spanish-English bilingual women narrated wordless picture books in IDS and ADS, and the voice onset times (VOTs) of their English voiceless stops were analyzed in code-switching and English-only stories in each register. In ADS only, English voiceless stops that preceded an English-to-Spanish code-switch and were closer to that switch point were produced with more Spanish-like VOTs than more distant tokens. This effect of distance to Spanish on English VOTs did not hold for tokens that followed Spanish in ADS, or in either direction in IDS, suggesting that parents may avoid producing these acoustic cues when speaking to young children.
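    To make the proximity analysis concrete, here is a hedged sketch of regressing English VOT on a token's distance from the code-switch point; the token values, and the word-count distance measure, are fabricated for illustration rather than drawn from the study.
    ```python
    # Does English VOT lengthen (become less Spanish-like) with distance
    # from an English-to-Spanish code-switch point?
    import numpy as np
    from scipy import stats

    distance_words = np.array([1, 2, 3, 5, 8, 12, 15, 20])   # tokens' distance to switch
    vot_ms = np.array([35, 42, 48, 55, 60, 62, 65, 64])      # voice onset times (ms)

    # A positive slope means more distant tokens are more English-like.
    res = stats.linregress(distance_words, vot_ms)
    print(f"slope = {res.slope:.2f} ms/word, p = {res.pvalue:.4f}")
    ```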
  • Item
    Understanding and remembering pragmatic inferences
    (2018) Kowalski, Alix; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation examines the extent to which sentence interpretations are incrementally encoded in memory. While traditional models of sentence processing assume that comprehension results in a single interpretation, evidence from syntactic parsing indicates that initial misinterpretations are sometimes maintained in memory along with their revised counterparts (e.g., Christianson, Hollingworth, Halliwell, & Ferreira, 2001). However, this evidence has largely come from experiments featuring sentences presented in isolation and words biased toward incorrect syntactic analyses. Because there is typically enough sentential context in natural speech to avoid the incorrect analysis (Roland, Elman, & Ferreira, 2006), it is unclear whether initial interpretations are incrementally encoded in memory when there is sufficient context. The scalar term "some" provides a test case where context is necessary to select between two interpretations, one based on semantics (some and possibly all) and one based on pragmatic inference (some but not all) (Horn, 1989). Although listeners strongly prefer the pragmatic interpretation (e.g., Van Tiel, Van Miltenburg, Zevakhina, & Geurts, 2016), prior research suggests that the semantic meaning is considered before the inference is adopted (Rips, 1975; Noveck & Posada, 2003; Bott & Noveck, 2004; Breheny, Katsos, & Williams, 2006; De Neys & Schaeken, 2007; Huang & Snedeker, 2009, 2011). I used a word-learning and recall task to show that there is evidence of the semantic meaning in the memory representation of sentences featuring "some," even when the pragmatic interpretation is ultimately adopted. This raises two possibilities: either the memory representation was of poor quality because both interpretations were available during encoding, or the semantic meaning was computed and encoded first and lingered even after the pragmatic interpretation was computed and encoded. Data from a conflict-adaptation experiment revealed a facilitating effect of cognitive-control engagement; however, there was still a delay before the pragmatic inference was adopted. This suggests that only the semantic meaning is available initially and that the system failed to override it in memory when the pragmatic interpretation was computed. Taken together, these findings demonstrate the incrementality of memory encoding during sentence processing.
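    As a rough illustration of the conflict-adaptation logic, the sketch below computes the congruency effect separately by the previous trial's congruency; a smaller effect after incongruent trials is the usual signature of cognitive-control engagement. The trial data and column names are fabricated.
    ```python
    # Conflict adaptation: congruency effect conditioned on the previous trial.
    import pandas as pd

    trials = pd.DataFrame({
        "congruency": ["C", "I", "I", "C", "I", "C", "C", "I"],
        "rt_ms": [520, 640, 600, 530, 610, 525, 540, 655],
    })
    trials["prev"] = trials["congruency"].shift(1)

    table = trials.dropna().groupby(["prev", "congruency"])["rt_ms"].mean().unstack()
    print(table["I"] - table["C"])  # smaller I-minus-C effect after "I" = adaptation
    ```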
  • Item
    Syntactic Processing and Word Learning with a Degraded Auditory Signal
    (2017) Martin, Isabel A.; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The current study examined real-time processing and word learning in children receiving a degraded audio signal, similar to the signal heard by children with cochlear implants. Using noise-vocoded stimuli, this study assessed whether increased uncertainty in the audio signal alters the developmental strategies available for word learning via syntactic cues. Normal-hearing children receiving a degraded signal were able to differentiate between active and passive sentences nearly as well as those hearing natural speech. However, they had the most difficulty when correct interpretation of a sentence required revision of an initial misinterpretation, a pattern also found with natural speech. While further testing is needed to confirm these effects, the current evidence suggests that a degraded signal may make revision even harder than it is in natural speech. This provides important information about language learning with a cochlear implant, with implications for intervention strategies.
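    For readers unfamiliar with noise vocoding, here is a minimal sketch of the general technique used to simulate cochlear-implant hearing: split the signal into frequency bands, extract each band's amplitude envelope, and use the envelopes to modulate band-limited noise. The channel count, band edges, and 50 Hz envelope cutoff are illustrative assumptions, not the study's parameters.
    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def vocode(signal, fs, n_channels=4, lo=100.0, hi=6000.0):
        edges = np.geomspace(lo, hi, n_channels + 1)          # log-spaced band edges
        env_lp = butter(4, 50.0, btype="lowpass", fs=fs, output="sos")
        noise = np.random.default_rng(0).standard_normal(len(signal))
        out = np.zeros_like(signal)
        for f1, f2 in zip(edges[:-1], edges[1:]):
            band = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
            # Envelope = rectified band signal, smoothed by the low-pass filter.
            env = sosfiltfilt(env_lp, np.abs(sosfiltfilt(band, signal)))
            out += np.clip(env, 0, None) * sosfiltfilt(band, noise)
        return out

    fs = 16000
    t = np.arange(fs) / fs
    demo = np.sin(2 * np.pi * 440 * t)   # stand-in for a speech recording
    vocoded = vocode(demo, fs)
    ```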
  • Item
    SES-RELATED DIFFERENCES IN WORD LEARNING: EFFECTS OF COGNITIVE INHIBITION AND WORD LEARNING
    (2016) Hollister, Erin Marie; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Socioeconomic status (SES) influences language and cognitive development, with discrepancies particularly noticeable in vocabulary development. This study examines how SES-related differences affect the development of syntactic processing, cognitive inhibition, and word learning. Thirty-eight 4- to 5-year-olds from higher- and lower-SES backgrounds completed a word-learning task in which novel words were embedded in active and passive sentences. Critically, unlike the active sentences, all passive sentences required a syntactic revision. Measures of cognitive inhibition were obtained through a modified Stroop task. Results indicate that lower-SES participants had more difficulty using inhibitory functions to resolve conflict than their higher-SES counterparts. However, SES did not affect language processing, as language outcomes were similar across SES backgrounds. Additionally, stronger inhibitory processes were related to better language outcomes in the passive-sentence condition. These results suggest that cognitive inhibition impacts language processing, but that this function may vary across children from different SES backgrounds.
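    A toy sketch of the kind of inhibition measure a modified Stroop task yields: each child's interference score is the mean incongruent-trial RT minus the mean congruent-trial RT. All values and column names are fabricated.
    ```python
    import pandas as pd

    rts = pd.DataFrame({
        "child": ["a", "a", "a", "a", "b", "b", "b", "b"],
        "condition": ["cong", "incong"] * 4,
        "rt_ms": [610, 760, 640, 790, 590, 680, 600, 700],
    })
    means = rts.groupby(["child", "condition"])["rt_ms"].mean().unstack()
    means["interference"] = means["incong"] - means["cong"]  # higher = weaker inhibition
    print(means)
    ```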
  • Item
    Fast mapping in linguistic context: Processing and complexity effects
    (2015) Arnold, Alison Reese; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Young children readily use syntactic cues for word learning in structurally simple contexts (Naigles, 1990). However, developmental differences in children's language-processing abilities might interfere with their access to syntactic cues when novel words are presented in structurally challenging contexts. To understand the role of processing in syntactic bootstrapping, we used an eye-tracking paradigm to examine children's fast-mapping abilities in active (structurally simple) and passive (structurally complex) sentences. Children's actions following the sentences indicated that they were more successful at mapping words in passive sentences when novel words were presented in NP2 ("The seal will be quickly eaten by the blicket") than when novel words were presented in NP1 ("The blicket will be quickly eaten by the seal"), indicating that presenting more prominent nouns in NP1 increases children's agent-first bias and sabotages interpretation of passives. Later recall data indicate that children were less likely to remember new words in structurally challenging contexts.
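    A minimal sketch of a standard eye-tracking fast-mapping measure: the proportion of gaze samples on the target object within an analysis window after noun onset. Sample times, gaze codes, and the window are fabricated for illustration.
    ```python
    import numpy as np

    times = np.arange(0, 2000, 50)   # ms from noun onset, one gaze sample per 50 ms
    gaze = np.random.default_rng(3).choice(
        ["target", "distractor", "away"], size=times.size, p=[0.5, 0.3, 0.2])

    window = (times >= 300) & (times <= 1800)   # illustrative analysis window
    prop_target = np.mean(gaze[window] == "target")
    print(f"proportion of looks to target = {prop_target:.2f}")
    ```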
  • Item
    Effects of Statistical Learning on the Acquisition of Grammatical Categories through Qur'anic Memorization: A Natural Experiment
    (2013) Zuhurudeen, Fathima Manaar; Huang, Yi Ting; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This study investigated the effects of ambient exposure to Arabic through Qur'anic memorization, versus formal classroom exposure to Arabic, on the ability to acquire knowledge of Arabic grammatical categories. To do this, we exposed participants to a 5-minute familiarization language of Arabic phrases. Then, we measured accuracy on a two-alternative forced-choice grammaticality judgment task, which required participants to identify a grammatical phrase based on rules that followed the statistical properties of items in the familiarization language. We compared results of this task with language background surveys and found that memorizers were more accurate than non-memorizers in distinguishing between novel grammatical phrases and ungrammatical phrases. Classroom experience had no effect on accuracy, and even naïve listeners showed evidence of statistical learning. Thus, semantic representations are not required to abstract rules of Arabic grammar. We discuss possible explanations for these findings and implications for language acquisition.
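    A small sketch of the above-chance comparison implied by a two-alternative forced-choice task: testing one participant's accuracy against the 50% guessing baseline. The counts are invented, and a per-participant binomial test is only one reasonable way to operationalize "above chance."
    ```python
    from scipy.stats import binomtest

    n_correct, n_trials = 26, 36   # hypothetical participant
    result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
    print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.4f}")
    ```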
  • Item
    Effects of Acoustic Perception of Gender on Nonsampling Errors in Telephone Surveys
    (2012) Kenney McCulloch, Susan; Kreuter, Frauke; Survey Methodology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Many telephone surveys require interviewers to observe and record respondents' gender based solely on respondents' voices. Researchers may rely on these observations to: (1) screen for study eligibility; (2) determine skip patterns; (3) foster interviewer tailoring strategies; (4) contribute to nonresponse assessment and adjustments; (5) inform post-stratification weighting; and (6) design experiments. Gender is also an important covariate for understanding attitudes and behavior in many disciplines. Yet, despite this fundamental role in research, survey documentation suggests there is significant variation in how gender is measured and collected across organizations. Methods of collecting respondent gender include: (1) asking the respondent; (2) interviewer observation only; (3) a combination of observation aided by asking when needed; or (4) another method. But what is the efficacy of these approaches? Are there predictors of observational errors? What are the consequences of interviewer misclassification of respondent gender for survey outcomes? Measurement error in interviewers' observations of respondent gender has never been examined by survey methodologists. This dissertation explores the accuracy and utility of interviewer judgments specifically with regard to gender observations. Using recent paradata work and the linguistics literature as a foundation for exploring acoustic gender determination, the goal of my dissertation is to identify the implications for survey research of using interviewers' observations collected in a telephone interviewing setting. The dissertation is organized into three journal-style papers. Through a survey of survey organizations, the first paper finds that more than two-thirds of firms collect respondent gender by some form of interviewer observation; placement of the observation, rationale for chosen collection methods, and uses of these paradata are documented. In paper two, utilizing existing recordings of survey interviews, the experimental research finds that the accuracy of interviewer observations improves with increased exposure, and that the noisy environment of a centralized phone room does not appear to threaten the quality of gender observations; interviewer- and respondent-level covariates of misclassification are also discussed. Analyzing secondary data, the third paper finds that there are some consequences of incorrect interviewer observations of respondents' gender for survey estimates. Findings from this dissertation will contribute to the paradata literature and provide survey practitioners with guidance in the use and collection of interviewer observations, specifically gender, to reduce sources of nonsampling error.
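    As a toy illustration of the accuracy checks described in paper two, the sketch below scores interviewer gender observations against respondent self-reports; the records are fabricated, and a real analysis would also model the interviewer- and respondent-level covariates of misclassification.
    ```python
    from collections import Counter

    # Hypothetical (interviewer observation, respondent self-report) pairs.
    pairs = [("F", "F"), ("F", "M"), ("M", "M"), ("F", "F"),
             ("M", "F"), ("M", "M"), ("F", "F"), ("M", "M")]

    errors = Counter((obs, actual) for obs, actual in pairs if obs != actual)
    print(f"misclassification rate = {sum(errors.values()) / len(pairs):.2f}")
    for (obs, actual), n in errors.items():
        print(f"  recorded {obs}, respondent was {actual}: {n}")
    ```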
  • Item
    The use of acoustic cues in phonetic perception: Effects of spectral degradation, limited bandwidth and background noise
    (2011) Winn, Matthew Brandon; Chatterjee, Monita; Idsardi, William J; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Hearing impairment, cochlear implantation, background noise and other auditory degradations result in the loss or distortion of sound information thought to be critical to speech perception. In many cases, listeners can still identify speech sounds despite degradations, but understanding of how this is accomplished is incomplete. Experiments presented here tested the hypothesis that listeners would utilize acoustic-phonetic cues differently if one or more cues were degraded by hearing impairment or simulated hearing impairment. Results supported this hypothesis for various listening conditions that are directly relevant for clinical populations. Analysis included mixed-effects logistic modeling of contributions of individual acoustic cues for various contrasts. Listeners with cochlear implants (CIs) or normal-hearing (NH) listeners in CI simulations showed increased use of acoustic cues in the temporal domain and decreased use of cues in the spectral domain for the tense/lax vowel contrast and the word-final fricative voicing contrast. For the word-initial stop voicing contrast, NH listeners made less use of voice-onset time and greater use of voice pitch in conditions that simulated high-frequency hearing impairment and/or masking noise; influence of these cues was further modulated by consonant place of articulation. A pair of experiments measured phonetic context effects for the "s/sh" contrast, replicating previously observed effects for NH listeners and generalizing them to CI listeners as well, despite known deficiencies in spectral resolution for CI listeners. For NH listeners in CI simulations, these context effects were absent or negligible. Audio-visual delivery of this experiment revealed enhanced influence of visual lip-rounding cues for CI listeners and NH listeners in CI simulations. Additionally, CI listeners demonstrated that visual cues to gender influence phonetic perception in a manner consistent with gender-related voice acoustics. All of these results suggest that listeners are able to accommodate challenging listening situations by capitalizing on the natural (multimodal) covariance in speech signals. Additionally, these results imply that there are potential differences in speech perception by NH listeners and listeners with hearing impairment that would be overlooked by traditional word recognition or consonant confusion matrix analysis.
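    The cue-weighting analysis can be pictured as a logistic model in which each acoustic cue's coefficient indexes how strongly listeners rely on that cue. The dissertation used mixed-effects logistic models; this fixed-effects sketch on fabricated data omits the by-listener random effects for brevity.
    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 400
    df = pd.DataFrame({
        "spectral_cue": rng.normal(size=n),   # e.g., a formant-based cue
        "temporal_cue": rng.normal(size=n),   # e.g., a duration-based cue
    })
    # Simulate listeners who weight the temporal cue more heavily.
    xb = 0.5 * df["spectral_cue"] + 1.5 * df["temporal_cue"]
    df["resp_tense"] = (rng.random(n) < 1 / (1 + np.exp(-xb))).astype(int)

    fit = smf.logit("resp_tense ~ spectral_cue + temporal_cue", data=df).fit(disp=0)
    print(fit.params)   # larger |coefficient| = greater reliance on that cue
    ```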
  • Item
    EFFECTS OF COGNITIVE DEMAND ON WORD ENCODING IN ADULTS WHO STUTTER
    (2011) Tsai, Pei-Tzu; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The etiology of persistent stuttering is unknown. Stuttering has been attributed to multiple potential factors, including difficulty in processing language-related information, but findings remain inconclusive regarding any specific linguistic deficit that might cause stuttering. One particular challenge in drawing conclusions is the highly variable task demands across studies: different tasks could reflect either different processes or different levels of demand. This study examined the role of cognitive demand in semantic and phonological processes to evaluate the role of linguistic processing in the etiology of stuttering. The study examined concurrent processing of picture naming and tone identification in typically fluent young adults, adults who stutter (AWS), and matched adults who do not stutter (NS), with varying temporal overlap between the dual tasks as a manipulation of cognitive demand. The study found 1) that in both AWS and NS, semantic and phonological encoding interacted with non-linguistic processing during concurrent processing, suggesting that both linguistic processes are demanding of cognitive resources; 2) that there was no observable relationship between dual-task interference in word encoding and stuttering; 3) that AWS and NS showed different trends in phonological encoding under high but not low cognitive demand, suggesting a subtle phonological deficit in AWS; and 4) that the phonological encoding effect correlated with stuttering rate, suggesting that a phonological deficit could play a role in the etiology or persistence of stuttering. Additional findings include potential differences in semantic encoding between typically fluent young adults and middle-aged adults, as well as potential strategic differences in processing semantic information between AWS and NS. These findings support stuttering theories that posit specific deficits in phonological encoding and argue against a primary role for semantic encoding deficiency or lexical access deficits in stuttering.
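    A toy sketch of the dual-task comparison: picture-naming latencies at a short versus a long temporal overlap (stimulus onset asynchrony, SOA) with the tone task, where greater slowing at short SOA suggests that word encoding competes for central resources. The latencies are fabricated and the paired t-test is a simplification of the study's analyses.
    ```python
    import numpy as np
    from scipy import stats

    short_soa = np.array([820, 860, 905, 880, 915, 870])   # naming RTs (ms)
    long_soa = np.array([700, 730, 690, 745, 720, 705])

    t, p = stats.ttest_rel(short_soa, long_soa)
    print(f"dual-task cost = {short_soa.mean() - long_soa.mean():.0f} ms, p = {p:.4f}")
    ```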