Languages, Literatures, & Cultures Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/2785

Search Results

Now showing 1 - 3 of 3
  • Item
    Individual Variables in Context: A Longitudinal Study of Child and Adolescent English Language Learners
    (2023) Struck, Jason; Jiang, Nan; Clark, Martyn; Second Language Acquisition and Application; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Millions of public-school students in the United States are identified as English language learners (ELLs), whose academic success is tied to their second language (L2) English education. Previous research in adult populations indicates that L2 proficiency is related to the prevalence of one’s first language (L1) among one’s peers, a contextual variable called L1 density, which may also moderate the effects of individual variables such as age and exposure to the L2. Despite its substantial impact on the amount and quality of adult learners’ exposure to the L2, L1 density has received little attention in child and adolescent populations, and it remains unknown what role, if any, it plays in L2 acquisition in a school context. Other outstanding questions concerning individual variables include the nature of the purported rate advantage of later starters and whether the similarity of one’s L1 and L2 is related to L2 proficiency. The current study addressed these questions by analyzing longitudinal L2 proficiency assessment records of 10,879 ELLs in grades 1–12 in the United States. The assessment was WIDA’s ACCESS for ELLs Online Test, a national, standardized test with scores for each of the four domains of listening, reading, speaking, and writing. Multilevel models were used to estimate the effects of several variables: age of enrollment in a United States school, length of enrollment, language similarity, and L1 density. In the fitted model estimates, age of enrollment had a small, positive effect. Length of enrollment had a sizable, positive effect that attenuated over time. ELLs enrolling at a later age progressed slightly more slowly than ELLs enrolling at an earlier age, contrary to the widely accepted notion that later starters enjoy a rate advantage. Little to no evidence was found for a relationship between test scores and language similarity or L1 density, or for any moderation of the effects of age of enrollment or length of enrollment by L1 density. The results provide evidence for the following conclusions for ELLs in United States schools: an earlier age of enrollment is associated with greater gains in L2 proficiency over time, speakers of different L1s are not expected to become differentially proficient in L2 English, and ELLs’ levels of L2 proficiency are not expected to vary with how many of their peers speak the same L1.
  • Item
    COMPREHENSION OF CONVERSATIONAL IMPLICATURE: EXAMINING EVIDENCE OF ITS SEPARABILITY AS A LISTENING SUBSKILL
    (2019) O'Connell, Stephen Patrick; Ross, Steven J; Second Language Acquisition and Application; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Understanding the inferences that speakers rely on to communicate is a core part of listening comprehension and is, more broadly, an important aspect of communicative ability. As a result, theories of communicative language ability account for it, and language testers who try to gauge the proficiency of learners of a second language include it in their assessments. Within the field of language testing, much research has been conducted to better understand how different aspects of listening may contribute to difficulty for second-language learners. One area of investigation has been the notion that listening is separable into different subskills, such as listening for inferences as opposed to listening for specific explicit details or listening for main ideas. However, attempts to determine the psychological reality of these subskills have produced mixed results. This study attempts to clarify the question using a listening comprehension instrument designed specifically to assess the comprehension of conversational implicature, or pragmatic inferencing, in contrast to non-implicature, or general, comprehension. This balanced instrument was administered to 255 language learners in two item formats, multiple choice and constructed response. In addition, participants were administered short-term memory and working memory measures. A variety of analyses, including item response theory (Rasch), logistic regression, and confirmatory factor analysis, were used to seek evidence for (1) the existence of a separable listening-for-conversational-implicature subskill and (2) the validity of assessing this subskill through a multiple-choice format. The results from the analyses generally converged to indicate that while conversational implicature contributes to difficulty, it is not a separable subskill. However, the results did show that the multiple-choice item format is a defensible method for targeting this skill. This leads to the conclusion that expending effort on assessing comprehension of conversational implicature in general language proficiency tests may not be necessary unless the test-use context places particular emphasis on this ability. Although implicature comprehension is an integral aspect of listening, from an assessment standpoint, performance on general listening items will likely give test users the information they need to make predictions about examinees’ ability to comprehend conversational implicature.
  • Item
    ITEM-ANALYSIS METHODS AND THEIR IMPLICATIONS FOR THE ILTA GUIDELINES FOR PRACTICE: A COMPARISON OF THE EFFECTS OF CLASSICAL TEST THEORY AND ITEM RESPONSE THEORY MODELS ON THE OUTCOME OF A HIGH-STAKES ENTRANCE EXAM
    (2011) Ellis, David P.; Ross, Steven J; Second Language Acquisition and Application; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The current version of the International Language Testing Association (ILTA) Guidelines for Practice requires language testers to pretest items before including them on an exam or, when pretesting is not possible, to conduct post-hoc item analysis to ensure that any malfunctioning items are excluded from scoring. However, the guidelines offer no guidance on which item-analysis method is appropriate for a given examination. The purpose of this study is to determine what influence the choice of item-analysis method has on the outcome of a high-stakes university entrance exam. Two types of classical-test-theory (CTT) item analysis and three item-response-theory (IRT) models were applied to responses from a single administration of a 70-item, dichotomously scored, multiple-choice test of English proficiency taken by 2,320 examinees applying to a prestigious private university in western Japan. Results show that the choice of item-analysis method greatly influences the ordinal ranking of examinees. The implications of these findings are discussed, and recommendations are made for revising the ILTA Guidelines for Practice to delineate more explicitly how language testers should apply item analysis in their testing practice.
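
The first abstract above describes fitting multilevel models to longitudinal proficiency scores. The sketch below is purely an illustration of how such a random-intercept-and-slope growth model could be fit with statsmodels; the file name and column names (student_id, age_at_enrollment, years_enrolled, scale_score) are hypothetical and are not taken from the dissertation.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per student per annual test administration.
scores = pd.read_csv("access_scores.csv")

# Fixed effects for age at enrollment and length of enrollment (with a quadratic
# term so growth can attenuate over time); random intercepts and slopes for
# length of enrollment, grouped by student.
model = smf.mixedlm(
    "scale_score ~ age_at_enrollment + years_enrolled + I(years_enrolled ** 2)",
    data=scores,
    groups=scores["student_id"],
    re_formula="~years_enrolled",
)
result = model.fit()
print(result.summary())
```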
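The second abstract lists logistic regression among the analyses comparing implicature and non-implicature items. As a hedged illustration only, the sketch below predicts item correctness from item type and a working-memory score; the file and column names (correct, is_implicature, working_memory) are hypothetical stand-ins, and the model ignores the person- and item-level structure that the dissertation's Rasch and factor analyses address.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per person-item response, with a 0/1 outcome.
responses = pd.read_csv("item_responses.csv")

# Does an implicature item lower the odds of a correct response, and does
# working-memory capacity moderate that effect?
logit = smf.logit(
    "correct ~ is_implicature + working_memory + is_implicature:working_memory",
    data=responses,
).fit()
print(logit.summary())
```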
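The third abstract contrasts classical-test-theory item analysis with IRT models. The sketch below illustrates one common form of post-hoc CTT item analysis of the kind the ILTA guidelines call for: item facility and corrected item-total (point-biserial) discrimination computed from a 0/1 response matrix, with flagging thresholds chosen only for illustration; the file name and thresholds are assumptions, not values from the study.

```python
import pandas as pd

# Hypothetical data: rows are examinees, columns are items, values are 0/1 scores.
responses = pd.read_csv("responses.csv")

rows = []
total = responses.sum(axis=1)
for item in responses.columns:
    rest = total - responses[item]  # total score with the item itself removed
    rows.append({
        "item": item,
        "facility": responses[item].mean(),            # proportion answering correctly
        "discrimination": responses[item].corr(rest),  # corrected point-biserial
    })
item_stats = pd.DataFrame(rows)

# Flag candidate malfunctioning items using arbitrary illustrative thresholds.
flagged = item_stats[(item_stats["discrimination"] < 0.15)
                     | (item_stats["facility"] < 0.10)
                     | (item_stats["facility"] > 0.95)]
print(flagged)
```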