Hearing & Speech Sciences Theses and Dissertations
Permanent URI for this collection: http://hdl.handle.net/1903/2776
Item Adult discrimination of children’s voices over time: Voice discrimination of auditory samples from longitudinal research studies (2024) Opusunju, Shelby; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The human voice is subject to change over the lifespan, and these changes are even more pronounced in children. Acoustic properties of speech, such as fundamental frequency, amplitude, speech rate, and fluency, change dramatically as children grow and develop (Lee et al., 1999). Previous studies have established that listeners have a generally strong capacity to discriminate between adult speakers, as well as to identify a speaker’s age, based solely on the voice (Kreiman and Sidtis, 2011; Park, 2019). However, few studies have examined listeners’ capacity to discriminate between the voices of children, particularly as the voice matures over time. This study examines how well adult listeners can discriminate between the voices of young children of the same age and at different ages. Single-word child language samples from different children (N = 6) were obtained from Munson et al. (2021) and used to create closed-set online AX voice discrimination tasks for adult listeners (N = 31). Three tasks examined listeners’ accuracy and sensitivity in identifying whether a voice was that of the same child or a different child under three conditions: 1) between two children who are both three years old, 2) between two children who are both five years old, and 3) between two children of different ages (three vs. five years old). Listeners showed above-chance accuracy and sensitivity when discriminating between the voices of three-year-old children and between the voices of five-year-old children, and performance did not differ significantly between these two tasks.
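In AX (same/different) tasks like these, listener sensitivity is commonly indexed by d′ from signal detection theory, computed from hit and false-alarm rates. The abstract does not state which sensitivity index was used, so the following is only a generic sketch of the standard yes/no d′ formula, with hypothetical rates for a single illustrative listener:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n_trials):
    """Yes/no d' with a standard correction for rates of exactly 0 or 1."""
    clamp = lambda p: min(max(p, 1 / (2 * n_trials)), 1 - 1 / (2 * n_trials))
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(clamp(hit_rate)) - z(clamp(fa_rate))

# Hypothetical listener: 80% hits ("different" trials called different),
# 25% false alarms ("same" trials called different), 40 trials of each type.
print(round(d_prime(0.80, 0.25, 40), 2))  # → 1.52
```

Note that same/different designs strictly call for a modified d′ computation (e.g., the independent-observation model); the plain formula above is shown only to make the hit/false-alarm logic concrete.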
No listeners demonstrated above-chance accuracy in discriminating between the voices of a single child at two different ages, and performance in this task was significantly poorer than in the previous two. These findings demonstrate that adults are considerably poorer at recognizing child voices across two different ages than at a single age. Possible explanations and implications for understanding child talker discrimination across different ages are discussed.

Item Age Effects on Perceptual Organization of Speech in Realistic Environments (2017) Bologna, William Joseph; Dubno, Judy R; Gordon-Salant, Sandra; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Communication often occurs in environments where background sounds fluctuate and mask portions of the intended message. Listeners use envelope and periodicity cues to group together audible glimpses of speech and fill in missing information. When the background contains other talkers, listeners also use focused attention to select the appropriate target talker and ignore competing talkers. Whereas older adults are known to experience significantly more difficulty with these challenging tasks than younger adults, the sources of these difficulties remain unclear. In this project, three related experiments explored the effects of aging on several aspects of speech understanding in realistic listening environments. Experiments 1 and 2 determined the extent to which aging affects the benefit of envelope and periodicity cues for recognition of short glimpses of speech, phonemic restoration of missing speech segments, and/or segregation of glimpses with a competing talker. Experiment 3 investigated effects of age on the ability to focus attention on an expected voice in a two-talker environment.
Twenty younger adults and 20 older adults with normal hearing participated in all three experiments and also completed a battery of cognitive measures to examine contributions from specific cognitive abilities to speech recognition. Keyword recognition and cognitive data were analyzed with an item-level logistic regression based on a generalized linear mixed model. Results indicated that older adults were poorer than younger adults at glimpsing short segments of speech but were able to use envelope and periodicity cues to facilitate phonemic restoration and speech segregation. Whereas older adults performed more poorly than younger adults overall, the groups did not differ in their ability to focus attention on an expected voice. Across all three experiments, older adults were poorer than younger adults at recognizing speech from a female talker, both in quiet and with a competing talker. Results of the cognitive tasks indicated that faster processing speed and better visual-linguistic closure were predictive of better speech understanding. Taken together, these results suggest that age-related declines in speech recognition may be partially explained by difficulty grouping short glimpses of speech into a coherent message, which may be particularly difficult for older adults when the talker is female.

Item Age-related Effects on the Threshold Equalizing Noise (TEN) Test (2008-05-30) Gmitter, Christine; Gordon-Salant, Sandra; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Some individuals with sensorineural hearing loss have certain places along the basilar membrane where inner hair cells and/or neurons are damaged or destroyed and consequently have ceased to function. These regions have been referred to as "dead regions" in the literature. The TEN (HL) test is a relatively quick behavioral test designed to identify cochlear dead regions.
The test relies on the detection of pure-tone signals in the presence of a specially designed broadband noise (threshold equalizing noise) masker. The TEN (HL) test was validated on young to middle-aged adult listeners, an age group that does not represent all adults with hearing loss. The goal of this study was to evaluate the effects of age on the TEN (HL) test. The TEN (HL) test was administered to 18 younger and 18 older adults with normal to near-normal hearing sensitivity at seven different frequencies in three different levels of TEN noise. These measures were conducted twice to assess test-retest reliability. The older group demonstrated significantly poorer (higher) SNRs compared to the younger group at all three TEN noise levels and for all seven test frequencies. The greatest difference between groups was observed for the highest level of TEN noise. For both groups, SNRs at 4000 Hz differed most from those at the other test frequencies, and both groups performed best (lowest SNRs) at 4000 Hz. Finally, a main effect of trial was found, revealing that both groups performed significantly better (lower SNRs) on the second trial; however, the small magnitude of this improvement (0.37 dB) suggests that the TEN (HL) test has good repeatability for clinical use, at least within the time period assessed. Although there were significant differences between the two groups, overall the TEN (HL) test yielded accurate results in classifying all normal to near-normal hearing participants as not having a dead region. The significantly higher (poorer) SNRs associated with age, combined with the expected difference in SNRs associated with hearing loss, may allow older hearing-impaired individuals to demonstrate abnormally high SNRs on the TEN (HL) test in the absence of a cochlear dead region.
Future studies that include younger and older participants with normal hearing and hearing loss are needed to assess these differences and examine whether different norms are needed for this older population.

Item AN ANALYSIS OF CODE SWITCHING EVENTS IN TYPICALLY DEVELOPING SPANISH-ENGLISH BILINGUAL CHILDREN (2020) Guevara, Sandra Stephanie; Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Code-switching (CS) patterns were investigated in language samples of 14 typically developing Spanish-English bilingual preschool-aged children. CS occurred primarily when the children spoke in Spanish. We investigated code-switched events, vocabulary measures, and disfluencies to better understand whether children use code-switching to fill lexical gaps in Spanish, as measured by disfluencies surrounding the code-switch. Results indicate that children’s spoken vocabulary diversity is not related to code-switching frequency, although their receptive vocabulary skills are negatively correlated with the proportion of code-switched events. We also found no significant relationship between code-switched events and disfluencies across participants. Findings suggest clinical implications for best practice when speech-language pathologists work with bilingual children, as they observe language attrition and code-switching related to language proficiency and dominance.

Item Auditory Temporal Processing Ability in Cochlear-Implant Users: The Effects of Age and Peripheral Neural Survival (2019) Shader, Maureen Joyce; Goupell, Matthew J; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Cochlear implants (CIs) are a valuable tool in the treatment of hearing loss and are considered a safe and effective option for adults of all ages.
Nevertheless, older adults with CIs do not always achieve comparable speech recognition performance to younger adults following implantation. The mechanism(s) underlying this age limitation are unknown. It was hypothesized that older CI users would demonstrate age-related deficits in auditory temporal processing ability, which could contribute to an age limitation in CI performance. This is because the ability to accurately encode temporal information is critical to speech recognition through a CI. The current studies were aimed at identifying age-related limitations for processing temporal information using a variety of electrical stimulation parameters with the goal of identifying parameters that could mitigate the negative effects of age on CI performance. Studies 1 and 2 measured auditory temporal processing ability for non-speech signals at the single-electrode level for various electrical stimulation rates. Specifically, Study 1 measured gap detection thresholds, which constitutes a simple, static measurement of temporal processing. Study 2 measured amplitude-modulation detection thresholds, which utilized relatively more complex and dynamic signals. Peripheral neural survival was estimated on each electrode location that was tested in Studies 1 and 2. Study 3 measured phoneme recognition ability for consonant contrasts that varied in discrete temporal cues at multiple stimulation rates and envelope modulation frequencies. Results demonstrated significant effects of age and/or peripheral neural survival on temporal processing ability in each study. However, age and the degree of neural survival were often strongly correlated, with older participants exhibiting poorer neural survival compared to younger participants. This result suggested that a substantial reduction in peripheral neural survival accompanies aging in older CI users, and that these factors should be considered together, rather than separately. 
Parametric variation in the stimulation settings impacted performance for some participants, but this effect was not consistent across participants, nor was it predicted by age or peripheral neural survival.

Item Automatic Syntactic Processing in Agrammatic Aphasia: The Effect of Grammatical Violations (2020) Kim, Minsun; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This study aimed to examine syntactic processing in agrammatic aphasia. We hypothesized that agrammatic individuals’ automatic syntactic processing would be preserved, as measured by a word-monitoring task; that their knowledge of syntactic constraints would be impaired, as measured by a sentence-judgment task; and that their performance would vary by type of syntactic violation. The study found that sentence processing in agrammatism differed based on the type of violation in both tasks: it was preserved for semantic and tense violations and impaired for word-category violations. However, there was no correlation between the two tasks. Furthermore, single-subject analyses showed that automatic syntactic processing for word-category violations does not seem to be impaired in aphasia. Based on these findings, the study supports the view that knowledge of syntactic constraints and automatic syntactic processing may be relatively independent abilities.
Findings suggest that individuals with agrammatic aphasia may have preserved automatic syntactic processing.

Item The benefits of acoustic input to combined electric and contralateral acoustic hearing (2008-08-01) Zhang, Ting; Gordon-Salant, Sandra; Dorman, Michael F.; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
With the extension of cochlear implant candidacy, more and more cochlear-implant listeners fitted with a traditional long electrode array or a partial-insertion electrode array have residual acoustic hearing in the nonimplanted ear or in both ears, and these listeners have been shown to receive significant speech-perception benefits from the low-frequency acoustic information provided by residual acoustic hearing. The aim of Experiment 1 was to assess the minimum amount of low-frequency acoustic information required to achieve speech-perception benefits, both in quiet and in noise, from combined electric and contralateral acoustic stimulation (EAS). Speech-recognition performance for consonant-nucleus-consonant (CNC) words in quiet and AzBio sentences in competing babble at +10 dB SNR was evaluated in nine cochlear-implant subjects with residual acoustic hearing in the nonimplanted ear in three listening conditions: acoustic stimulation alone, electric stimulation alone, and combined contralateral EAS. The results showed that adding low-frequency acoustic information to electrically stimulated information led to an overall improvement in speech-recognition performance for both words in quiet and sentences in noise. This improvement was observed even when the acoustic information was limited down to 125 Hz, suggesting that the benefits were primarily due to the voice-pitch information provided by residual acoustic hearing.
For sentences in noise, a further improvement in speech-recognition performance was observed with additional low-frequency acoustic information, suggesting that part of the improvement was likely due to the improved spectral representation of the first formant. The aims of Experiments 2 and 3 were to investigate the psychophysical mechanisms underlying the contribution of acoustic input to electric hearing. Temporal modulation transfer functions (TMTFs) and spectral modulation transfer functions (SMTFs) were measured in three stimulation conditions: acoustic stimulation alone, electric stimulation alone, and combined contralateral EAS. The results showed that the temporal resolution of acoustic hearing was as good as that of electric hearing and that the spectral resolution of acoustic hearing was better than that of electric hearing, suggesting that the speech-perception benefits were attributable to the normal temporal resolution and better spectral resolution of residual acoustic hearing. This dissertation research provides important information about the benefits of adding low-frequency acoustic input to electric hearing in cochlear-implant listeners with some residual hearing. The overall results reinforce the importance of preserving residual acoustic hearing in cochlear-implant listeners.

Item The benefits of closed captioning for elderly hearing aid users (2007-08-02) Callahan, Julia Susan; Gordon-Salant, Sandra; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The purpose of this study was to determine the effects of closed captioning and hearing aid use on word recognition of televised materials in a sample of 15 older adults with hearing loss who use hearing aids. Participants viewed television segments in four viewing conditions: 1) without hearing aids or closed captioning (BSLN), 2) with hearing aids (HA), 3) with closed captioning (CC), and 4) with hearing aids and closed captioning (HA+CC).
Three types of programming (game show, drama, and news) comprised the stimulus sentences. Anecdotal reports suggest that older hearing-impaired people do not use closed captioning, despite its potential benefit for understanding television. The extent to which listeners use closed captioning and hearing aids on a daily basis was examined. It was expected that listeners would have considerable difficulty in the BSLN condition, because the primary cue is speechreading alone. The HA condition was expected to produce significantly higher scores, because listeners would be able to combine information from two modalities: vision (speechreading) and hearing. It was predicted that CC would yield higher scores than these two conditions, because the visual text signal provides unambiguous information, and that the combined HA+CC condition would produce the highest scores. In addition, differences in speech recognition scores were expected for different program types. One prediction was that drama programming would result in consistently lower speech recognition scores due to reduced availability of visual cues compared to game show or news programming. Results indicated that 77% of participants reported never using closed captioning when watching television, although most wore hearing aids during television viewing. There was a significant effect of listening/viewing condition for all three program types. For all program types, participants achieved higher word recognition scores in the CC and HA+CC conditions than in the HA or BSLN conditions. There was no significant difference in performance between the BSLN and HA conditions. These findings indicate that older people with hearing loss do not receive significant benefit from hearing aid use while watching television.
However, closed captioning appears to provide significant benefit to older hearing aid users, even though they seldom use this technology.

Item The Broad Autism Phenotype Within Mother-Child Interactions (2012) Royster, Christina; Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This study sought to identify features of the Broad Autism Phenotype (BAP) expressed by mothers during interactions with their infants, to further understand how these features relate to early indicators of autism. Twelve mothers who had an older child with autism were selected, and the control group included twelve mothers who did not. Results demonstrated that the two groups of mothers did not have significantly different responses on the BAP assessment, and they did not differ in any features of their interactions except that the experimental group used less inhibitory language. Children in the experimental group had lower language scores than the controls. When subjects were divided into groups based on both child responsiveness and maternal BAP traits, the resulting patterns indicated four mother-child profiles, suggesting that a combination of maternal BAP characteristics and child behavior might influence interaction outcomes. Further research regarding BAP features as an early indicator of autism is discussed.

Item Characterizing the Auditory Phenotype of Niemann-Pick, Type C Disease: A Comparative Examination of Humans and Mice (2011) King, Kelly Anne; Gordon-Salant, Sandra; Brewer, Carmen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Niemann-Pick, type C disease (NPC) is a rare (1:120,000-150,000) autosomal recessive lysosomal lipidosis resulting in progressive and fatal neurological deterioration. Much about the pathogenesis and natural history of this complex, heterogeneous disorder remains unknown.
The limited literature suggests that auditory dysfunction is part of the phenotype, but it is an aspect of the disease process that is poorly understood and has likely been underreported. Experiment one includes auditory data from 55 patients with NPC seen at the National Institutes of Health between 8/14/2006 and 12/27/2010. These data confirm a prevalent high-frequency hearing loss that progressively worsens in at least some individuals. Retrocochlear involvement is common, with abnormalities that suggest a profile of auditory neuropathy spectrum disorder in some patients. Analysis of late-onset cases suggests hearing loss is a premonitory symptom in this disease subcategory. The investigation was expanded to include the mouse model for NPC (BALB/cNctr-Npc1m1N/J), in which symptomatology is clinically, biochemically, and morphologically comparable with that of affected humans. There have been no previous reports of auditory function in NPC mice, although brainstem histopathology has been localized to the auditory pathway. Experiment two includes auditory brainstem response (ABR) and otoacoustic emission (OAE) data revealing a high-frequency hearing loss in mutant NPC mice as early as postnatal day (p) 20, which becomes progressively poorer across the experimental lifespan. With support for both a cochlear and a retrocochlear site of lesion, OAE level and ABR latency data provide surprising evidence for a disruption in the maturational development of the auditory system in diseased animals, which may add a unique perspective on the role of NPC pathogenesis. This comparative, translational study has, for the first time, comprehensively addressed the existence of, and implications for, auditory dysfunction in NPC. Similar auditory phenotypes between affected humans and mutant mice should aid future efforts to refine the site of lesion.
In combination, these data support the auditory system as a useful marker for disease status and provide valuable prognostic and quality-of-life information for patients and their families.

Item A communication partner training program: Assessing conversational behaviors and attitudes towards communication in Persons with Aphasia and their Communication Partners (2016) Yutesler, Allison E. Carlson; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This study examined the conversational behaviors of eleven dyads, each consisting of a person with aphasia (PWA) and a familiar communication partner (CP), and investigated changes in behaviors as a result of attending a communication partner training program (CPT). Attitudes about communication were examined and related to conversational behaviors observed pre- and post-training. Results indicated that CPs and PWAs used significantly more facilitating behaviors than barrier behaviors, although most dyads exhibited some barriers. A comparison of pre- and post-CPT conversations revealed a significant interaction between time and type of behavior, with the increase in the number of facilitators approaching significance. Overall, persons with aphasia and their conversation partners expressed positive attitudes about communication. There were no significant correlations between scores on the attitude surveys and behaviors pre- or post-training.
This study demonstrated that these dyads employed facilitative conversational behaviors even before CPT, and that facilitative behaviors can increase after a one-day training workshop.

Item A comparison of lexical access in adults who do and do not stutter (2015) Howell, Timothy Andrew; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Previous work has postulated that a deficit in lexicalization may be an underlying cause of stuttering (Prins, Main, & Wampler, 1997; Wingate, 1988). This study investigates the time course of lexicalization of nouns and verbs in adults who stutter. A generalized phoneme monitoring (GPM) paradigm was used. Both groups showed a significant effect of word class (verbs yielded slower and less accurate monitoring than nouns), as well as of phoneme position (word-medial/final phonemes yielded slower and less accurate monitoring than word-initial phonemes). Few significant differences were found between groups, although the experimental group showed poorer performance in all conditions except null trials, where the experimental group actually outperformed the control group. These trends provide some support for the notion that people who stutter have a deficit in lexicalization, although this conclusion is tempered by the lack of statistical significance.

Item Connected Language in Primary Progressive Aphasia: Testing the Utility of Linguistic Measures in Differentially Diagnosing PPA and its Variants (2017) Vander Woude, Ashlyn; Faroqi-Shah, Yasmeen; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Difficulty in using language is the primary impairment in Primary Progressive Aphasia (PPA).
Individuals with different variants of PPA have been shown to have unequal deficits in various domains of language; however, little research has focused on finding common deficits in PPA that could aid in the differential diagnosis of PPA relative to healthy aging and age-related neurodegenerative conditions. The commonality of deficits across variants of PPA was explored in this study by examining the connected speech of 26 individuals with PPA (10 with PPA-G, 9 with PPA-L, 7 with PPA-S), compared to 25 neurologically healthy controls, 20 individuals with Mild Cognitive Impairment (MCI), and 20 individuals with Alzheimer’s Dementia (AD). Measures of fluency, word retrieval, and syntax were used to assess linguistic ability in a between-groups comparison, in addition to a within-groups comparison of the same linguistic measures among the PPA variants. Participants with PPA showed significant deficits on certain measures of fluency, word retrieval, and syntax. These findings support the idea that a brief language sample has clinical utility in contributing to the differential diagnosis of PPA.

Item Determining the Mechanisms of Spoken Language Processing Delay for Children with Cochlear Implants (2023) Blomquist, Christina Marie; Edwards, Jan R; Newman, Rochelle S; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The long-term objective of this project was to better understand how shorter auditory experience and spectral degradation of the cochlear implant (CI) signal impact spoken language processing in deaf children with CIs. The specific objective was to use psycholinguistic methods to investigate the mechanisms underlying observed delays in spoken word recognition and in access to networks of semantically related words in the lexicon, both vital components of efficient spoken language comprehension.
The first experiment used eye-tracking to investigate the contributions of early auditory deprivation and the degraded CI signal to spoken word recognition delays in children with CIs. Performance of children with CIs was compared to that of various typical-hearing (TH) control groups, matched for either chronological age or hearing age, who heard either clear or vocoded speech. The second experiment investigated semantic processing in the face of a spectrally degraded signal (TH adult listeners presented with vocoded speech) by recording event-related potentials, specifically the N400. Results showed that children with CIs exhibit slower lexical access and less immediate lexical competition; although early hearing experience supports more efficient recognition, much of the observed delay can be attributed to listening to a degraded signal in the moment, as children with TH demonstrate similar patterns of processing when presented with vocoded speech. However, some group differences remain: children with CIs show slower lexical access and longer-lasting competition, suggesting potential effects of learning from a degraded speech signal. With regard to higher-level semantic processing, TH adult listeners demonstrated more limited access to semantic networks when presented with a degraded speech signal. This finding suggests that uncertainty due to the degraded speech signal may lead to less immediate cascading processing at both the word level and the semantic level. Clinically, these results highlight the importance of early cochlear implantation and of maximizing access to spectral detail in the speech signal for children with CIs.
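The vocoded speech used in these experiments is typically produced with a noise-excited channel vocoder: the signal is split into a small number of frequency bands, each band's temporal envelope is extracted, and the envelopes modulate band-limited noise carriers, preserving envelope cues while degrading spectral detail. A minimal sketch with NumPy/SciPy follows; the band count, frequency edges, and filter order here are illustrative assumptions, not the parameters used in the studies above:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_bands=8, lo=100.0, hi=7000.0):
    """Noise-excited channel vocoder: per-band envelope times noise carrier."""
    # Log-spaced band edges between lo and hi (illustrative choice).
    edges = np.logspace(np.log10(lo), np.log10(hi), n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                     # analysis band
        env = np.abs(hilbert(band))                    # Hilbert envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * carrier                           # envelope-modulated noise
    return out

# One second of an amplitude-modulated tone as a stand-in for speech.
fs = 16000
t = np.arange(fs) / fs
speechlike = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
y = noise_vocode(speechlike, fs)
```

Fewer bands yield coarser spectral resolution, which is how such studies titrate the degree of degradation presented to TH listeners.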
Additionally, it is possible that some of the delays in spoken language processing are the result of an alternative listening strategy that may be engaged to reduce the chance of incorrect predictions, thus preventing costly revision processes.

Item Development of an Evidence Based Referral Protocol for Early Diagnosis of Vestibular Schwannomas (2008-09-03) Barrett, Jessica Ann; Gordon-Salant, Sandra; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The purpose of this investigation was to identify the presenting symptoms and testing outcomes most suggestive of a potential vestibular schwannoma and to propose an audiological referral protocol for MRIs. To that end, a retrospective chart review was conducted to examine radiologic, audiometric, and case history information from patients at Walter Reed Army Medical Center who were referred to the Department of Radiology to rule out retrocochlear pathology. Charts of 628 patients were reviewed from their electronic medical records, although the final sample comprised the 328 patients who had complete audiologic data. Analyses compared the unaffected and affected ears of the positive-MRI group to the better and poorer ears of the negative-MRI group. Differences were significant between the affected ear of the positive group and the poorer ear of the negative group for pure-tone thresholds, speech discrimination scores, and acoustic reflex thresholds. Significant differences between the groups were generally not seen for the comparison of the unaffected ear to the better ear, with the exception of acoustic reflex thresholds. The interaural difference between ears differed significantly between the two groups for pure-tone thresholds and speech discrimination scores; however, the difference was not significant for acoustic reflex thresholds.
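Referral protocols of this kind are conventionally evaluated by their sensitivity (proportion of positive-MRI patients correctly flagged) and specificity (proportion of negative-MRI patients correctly not flagged). A minimal sketch of the computation, using hypothetical confusion-matrix counts rather than this study's data:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for illustration only.
sens, spec = sens_spec(tp=40, fn=10, tn=90, fp=10)
print(f"sensitivity={sens:.2%}, specificity={spec:.2%}")  # sensitivity=80.00%, specificity=90.00%
```

A referral rule trades these two quantities off: flagging more patients for MRI raises sensitivity at the cost of specificity, and vice versa.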
For all significant differences between the groups, the positive-MRI group evidenced poorer audiological results. Additionally, three symptoms/outcomes that led to the patients' referral differed significantly between the two groups: unilateral tinnitus, asymmetrical word recognition, and positive rollover in speech recognition scores. Logistic regression was applied to the audiological tests and symptoms to determine the set of variables that best differentiated between the patients with positive and negative MRIs. The most predictive model yielded a sensitivity of 81.25% and a specificity of 82.59% when applied to the current patient sample. The audiological profile identified may be useful for clinicians in deciding whether a patient should be referred for an MRI to rule out the presence of a vestibular schwannoma.

Item The Development of Syntactic Complexity and the Irregular Past Tense in Children Who Do and Do Not Stutter (2009) Bauman, Jessica; Ratner, Nan B; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This study examined spontaneous language samples and standardized test data obtained from 31 pairs of children who stutter (CWS), ages 25-59 months, and age-matched children who do not stutter (CWNS). Developmental Sentence Scores (DSS; Lee, 1974), as well as the relationships among age, DSS, and other standardized test scores, were compared for both groups. No substantial differences were found between groups in the syntactic complexity of spontaneous language; however, the two groups showed different relationships between age and DSS and between test scores and DSS.
Additionally, observed differences between CWS and CWNS in patterns of past-tense errors and usage are discussed in light of a recent theoretical model of language performance in populations with suspected basal ganglia involvement (Ullman, 2004).

Item Early Phonological Predictors of Toddler Language Outcomes (2015) Gerhold, Kayla; Bernstein Ratner, Nan; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Several studies have explored relationships between children's early phonological development and later language performance. This literature includes a more recent focus on the potential for early vocalization profiles in infancy to predict later language outcomes, including those characterized by delay or disorder. The present study examines phonetic inventories and syllable structure patterns in a large cohort of infants as they relate to expressive language outcomes at 2 years of age. Results suggest that, as early as 11 months, phonetic inventory and mean syllable structure level are related to two-year expressive language outcomes (MLU, MCDI, and types). If specific patterns of production can be established for a typically developing population, these findings could additionally inform clinical decision-making. Possible applications are discussed.

Item Early Understanding of Negation: The Word "Not" (2006-06-05) Loder, Lisa Sue; Newman, Rochelle; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Few experimental studies provide data on early comprehension of negation. Commonly accepted norms are based on parental report and on observational studies with small numbers of participants. The purpose of this study was to determine whether 18-month-olds (n = 24) understand the word "not." The study used a preferential looking paradigm in which children saw two video screens showing a puppet performing a different action in each video.
They then heard a voice telling them to "Look! The ____'s not ____ing." For the three sets of videos used in the study, children looked significantly longer at the matching video during only one set of trials. However, for no set of trials did the children look longer at the puppet overtly named in the auditory stimulus. These results suggest that although children did not demonstrate a clear understanding of the word "not," they may be developing an understanding of it at this age.

Item The effect of a citrus tastant on pill swallowing (2010) Albert, Amy Beth; Sonies, Barbara C; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

If using a sweetened citrus tastant (i.e., a chemical that stimulates the taste buds and produces a sense of taste) to coat a pill could make swallowing pills easier, this could have a considerable positive impact on pill swallowing both in healthy adults and in people with identified swallowing difficulties who need to take a variety of oral medications. It was predicted that pills would be cleared from the pharynx more quickly and efficiently if coated with a tastant. Accordingly, this study examined the effect of a pleasant citrus tastant on pill swallowing in healthy individuals (7 male, 17 female) aged 19-49 years (M = 27.83 years). Durational measures of swallowing were obtained from real-time ultrasound images of the oropharyngeal swallow. It was hypothesized that swallow durations would be shortest for citrus-coated tablets, followed by water swallows and then plain pills.
Although the statistical analyses did not support a quicker oropharyngeal swallow for one stimulus over another, rationales for the lack of significant findings, such as a ceiling effect for healthy pill swallowing, are provided.

Item The Effect of Body Position on Distortion Product Otoacoustic Emissions Testing in Neonates (2008-08-15) Heinlen, Krista; Fitzgerald, Tracy; Hearing and Speech Sciences; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The current study investigated the effects of body position on the measurement of distortion product otoacoustic emissions (DPOAEs) in newborns. DPOAE measurements are commonly used to screen for hearing loss in newborn hearing screening programs conducted in hospitals nationwide. To measure DPOAEs, a small probe is placed in the external ear canal and a series of tone pairs is presented to the ear. The ear's acoustic response to these tones is measured to determine whether the infant is at risk for hearing loss. Research in adults has indicated effects of body position on DPOAE levels and noise floor levels (Driscoll et al., 2004). However, no information is available on the effects of body position on DPOAE testing in infants, despite the fact that newborn screening is one of the primary clinical applications of DPOAEs. Participants were 47 full-term newborns recruited from the well-baby nursery. DPOAEs were measured from the right ear while the infants were in each of three body positions: lying on the left side, supine, and head raised 45 degrees from supine. DPOAE levels, noise floor levels, DPOAE/noise levels, test time, and pass/fail rates were compared across body positions to determine whether there is an optimal body position for newborn hearing screenings that would minimize test time and/or increase specificity. No statistically significant differences were found in the various DPOAE measures or screening results across body positions or between genders.
Significant effects of frequency on DPOAE levels and noise floor levels were similar to those expected based on the literature (e.g., Gorga et al., 1993). The results suggest that newborn hearing screenings on infants in the well-baby nursery can be conducted in different body positions without significantly influencing the screening outcome or measurements obtained.