College of Education
Permanent URI for this community: http://hdl.handle.net/1903/1647
The collections in this community comprise faculty research works, as well as graduate theses and dissertations.
6 results
Search Results
Item
Improving Science Assessments by Situating Them in a Virtual Environment (MDPI, 2013-05-30)
Ketelhut, Diane Jass; Nelson, Brian; Schifter, Catherine; Kim, Younsu

Current science assessments typically present a series of isolated fact-based questions, poorly representing the complexity of how real-world science is constructed. The National Research Council asserts that this needs to change to reflect a more authentic model of science practice. We strongly concur and suggest that good science assessments need to combine several key factors: integration of science content with scientific inquiry, contextualization of questions, efficiency of grading, and statistical validity and reliability. Through our Situated Assessment using Virtual Environments for Science Content and inquiry (SAVE Science) research project, we have developed an immersive virtual environment to assess middle school children's understanding of science content and processes that they have been taught through typical classroom instruction. In the virtual environment, participants complete a problem-based assessment by exploring a game world, interacting with computer-based characters and objects, and collecting and analyzing possible clues to the assessment problem. Students can solve the problems situated in the virtual environment in multiple ways; many of these are equally correct, while others uncover misconceptions regarding inference-making. In this paper, we discuss stage one in the design and assessment of our project, focusing on our design strategies for integrating content and inquiry assessment and on early implementation results.
We conclude that immersive virtual environments do offer the potential for creating effective science assessments based on our framework, and that engagement needs to be considered as part of that framework.

Item
TOWARD A DATA LITERACY ASSESSMENT THAT IS FAIR FOR LANGUAGE MINORITY STUDENTS (2023)
Yeom, Semi; O'Flahavan, John; Education Policy, and Leadership; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Data literacy is crucial for adolescents to access and navigate data in today's technology-driven world. Researchers emphasize the need for K-12 students to attain data literacy; however, few available instructional programs have incorporated validated assessments. Therefore, I developed and implemented the Data Literacy Assessment for Middle Graders (DLA-M), which can diagnose students' current stages fairly and support future practices regardless of their language backgrounds. I initiated the study with two research questions: a) How valid is a newly developed assessment for measuring middle-grade students' data literacy? b) How fairly does the new assessment measure data literacy regardless of students' language backgrounds? The new assessment purported to measure two competencies of data literacy among 6th to 9th graders: a) interpreting data representations and b) evaluating data and data-based claims. I used Evidence-Centered Design (ECD) as a methodological framework to increase the validity of the assessment, following the five layers of the ECD framework to develop and implement the DLA-M. I then analyzed the data from implementing the assessment and gathered five types of validity evidence for validation. Based on the collected evidence, I concluded that the assessment was designed to represent the content domain it purports to measure. The assessment had internal consistency in measuring data literacy except for nine eliminated items, and the data literacy scores from the overall assessment were reliable as well.
Regarding item quality, item discrimination parameters met the quality criteria, but difficulty estimates for some items did not match the intended design. Empirical cluster analyses revealed two performance levels among the participants. Differential item functioning analyses showed that item discrimination and difficulty did not differ between language minority students (LMSs) and their counterparts at the same data literacy level. These results revealed no evidence of unfair interpretations or uses of this assessment for LMSs. Lastly, I found significant interaction effects between the DLA-M scores and two variables: students' English reading proficiency and use of technology. This study delineated how to develop and validate a data literacy assessment that could support students from different linguistic backgrounds. The research also facilitated the application of data literacy assessment in school settings by scrutinizing and defining target competencies that could benefit adolescents' data literacy. The findings can inform future research implementing data literacy assessments in broader contexts, and this study can serve as a springboard for providing inclusive data literacy assessments to diverse student populations.

Item
Exploring the Contributions of Word Knowledge and Figural Reasoning Ability to College Students' Performance on a Measure of Relational Reasoning with Words (2021)
Zhao, Hongyang; Alexander, Patricia; Human Development; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Word knowledge has long been considered one of the most important predictors of reading comprehension, academic achievement, and social development. However, it has been relatively narrowly conceptualized and measured as either the number of words individuals know the general meaning of (i.e., breadth) or the simple association between two words (i.e., depth).
The problem with such a one-sided view is that the existing measures of word knowledge are limited in revealing the quality of word knowledge, which is characterized by both depth and breadth. In this investigation, by comparison, quality of word knowledge was conceptualized as individuals' fine-grained understanding of word meanings as they systematically identified semantic similarities and differences among a group of words. It is believed that through deliberate comparisons among word sets, an individual's understanding of the intricacy and subtlety of word meanings can be revealed. Therefore, this study offered a new approach to assessing word knowledge quality, informed by a theoretical model of relational reasoning and its four resulting forms (Alexander & DRLRL, 2012). A novel measure of word knowledge quality, Relational Reasoning with Words (R2W2), was developed and validated in this study. Moreover, the unique contributions of relational reasoning ability and word knowledge to college students' performance on R2W2 were also analyzed. With a sample of 338 participants from four US universities, the study found that R2W2 was a reliable and valid measure of word knowledge quality with sound psychometric properties at the item level. In addition, word knowledge was found to contribute more to college students' performance on R2W2 than relational reasoning ability. Implications for future research and practice are also presented and discussed.

Item
Development and Initial Validation of the Work Addiction Inventory (2009)
Bryan, Nicole A.; Lent, Robert W.; Counseling and Personnel Services; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The purpose of the study was to develop and validate the Work Addiction Inventory (WAI). The WAI is designed to assess individuals' addiction to work via self-report.
Data were collected from 127 working professionals employed on at least a part-time (20 hours per week) basis. An exploratory factor analysis retained 24 items and indicated that the WAI consists of three underlying factors. The WAI subscale and total scores showed adequate internal consistency reliabilities. Convergent and discriminant validity were initially supported by the relationships between WAI scores, an existing measure of workaholism, and social desirability. WAI scores also correlated highly with several criterion variables. Finally, evidence was found to suggest that the WAI accounts for unique variance beyond an existing measure of workaholism. In conclusion, the psychometric properties of the WAI were initially supported by the findings of the study.

Item
Measures of Writing Skills as Predictors of High Stakes Assessments for Secondary Students (2008-01-24)
Jones, Karen Anne; Rosenfield, Sylvia; Counseling and Personnel Services; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

This study examined the potential utility of written expression scoring measures, developed in the curriculum-based measurement research, to monitor student progress and predict performance on a high-stakes, state-mandated assessment for high school students. In response to a teacher-generated prompt, 10th-grade students completed 3 brief constructed response (BCR) and 2 extended constructed response (ECR) writing samples throughout the academic year. Writing samples were scored for total words written (TWW), words spelled correctly (WSC), correct writing sequences (CWS), correct minus incorrect writing sequences (CMIWS), percentage of words spelled correctly (%WSC), percentage of correct writing sequences (%CWS), a production-dependent index, and a production-independent index. The average time to score a BCR for TWW, WSC, CWS, and CMIWS was over 7 minutes, and the average time to score an ECR was over 16 minutes.
Alternate-form reliability coefficients between scoring measures were only in the weak to moderate range. Results revealed that girls wrote more words, spelled more words correctly, produced more correct writing sequences, and produced more correct minus incorrect writing sequences. Across writing samples, statistically significant but small increases were found on the scoring measures. Results of multiple regression and logistic regression analyses failed to provide a model that accurately predicted student outcomes.

Item
Student Teacher Exit Portfolios: Is It an Appropriate Measure and a Unique Contribution Toward the Assessment of Highly Qualified Teacher Candidates? (2004-04-26)
Simpson, Leslie Jackson; Dudley, James; Education Policy, and Leadership

Leslie Ann Jackson Simpson, Doctor of Philosophy, 2004. Dissertation directed by Dr. James Dudley, Professor Emeritus, College of Education, Department of Education Policy and Administration.

The student teacher portfolio, at the forefront of teacher education assessment issues during the past decade, was the topic of this study. The teacher education community has moved beyond its initial concerns about defining a teacher portfolio, identifying appropriate contents of a teacher portfolio, and determining the place of portfolios in a program's assessment system. It is now concerned with whether the student teacher exit portfolio is an appropriate measure for all teacher candidates and contributes possibly unique information to the assessment of teacher candidates' competency. This study investigated the possible influence of the demographic factors of gender, age, and certification level of the teacher candidates on the assessment outcomes of student teacher exit portfolios.
It also compared the outcomes of traditionally accepted assessments (student teaching grade, Praxis I tests, Praxis II tests, and overall grade point average) with the outcomes of the exit portfolio assessment. This was an ex post facto study, based upon existing data collected about each teacher candidate (n=76), with no treatment afforded the teacher candidates as part of the study. Two conclusions were drawn from the findings. First, the demographic factors of gender, age, and choice of certification level did not appear to influence the outcomes of the exit portfolio, and the teacher candidates noted that they valued the portfolio process; because of these two findings, the exit portfolio was deemed an appropriate assessment tool at this institution. Second, the exit portfolio results, compared with the four other assessments, did not yield correlations of a predictive quality. Therefore, the exit portfolio was considered to contribute information not offered by the other, more traditional assessments of teacher candidates' competencies.