Human Development & Quantitative Methodology Theses and Dissertations
Permanent URI for this collection: http://hdl.handle.net/1903/2779
Search Results
229 results
Item Multivariate Multilevel Value-Added Modeling: Constructing a Teacher Effectiveness Composite (2019) Lissitz, Anna; Stapleton, Laura; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This simulation study presents a justification for evaluating teacher effectiveness with a multivariate multilevel model. It was hypothesized that the multivariate model leads to more precise effectiveness estimates than separate univariate multilevel models. The study then investigated how the multiple effectiveness estimates produced by the multivariate multilevel model and by separate univariate multilevel models can be combined. Given that the models could produce significantly different effectiveness estimates, it was hypothesized that the composites formed from the results of the multivariate multilevel model would differ, in terms of bias, from the composites formed from the results of the separate univariate models. The correlations between the composites from the different models were very high, providing no evidence that the model choice was impactful, and the differences in bias and fit were slight. While the findings do not strongly support the use of the more complex multivariate model over the univariate models, the increased theoretical validity gained from adding outcomes to the value-added model (VAM) does.
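For readers unfamiliar with this class of models, the sketch below is one schematic way to write a two-level multivariate value-added model and a composite of the resulting teacher effects; the notation, covariates, and weighting scheme are illustrative assumptions, not the dissertation's exact specification. For student i of teacher j on outcome k,

    y_{ijk} = \beta_{0k} + \boldsymbol{\beta}_k^{\top}\mathbf{x}_{ij} + u_{jk} + e_{ijk}, \qquad (u_{j1}, \ldots, u_{jK})^{\top} \sim N(\mathbf{0}, \boldsymbol{\Sigma}_u), \qquad c_j = \sum_{k=1}^{K} w_k \, \hat{u}_{jk},

where the multivariate model estimates the teacher effects u_{jk} jointly with a full covariance matrix \Sigma_u, separate univariate models in effect fix its off-diagonal elements at zero, and c_j is a weighted composite of the estimated effects.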
Item A Latent Factor Approach for Social Network Analysis (2019) Zheng, Qiwen; Sweet, Tracy M.; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Social network data consist of entities and the relational information between pairs of entities. Observations in a social network are dyadic and interdependent; therefore, making appropriate statistical inferences from a network requires specifying these dependencies in a model. Previous studies suggested that latent factor models (LFMs) for social network data can simultaneously account for stochastic equivalence and transitivity, the two primary dependency patterns observed in real-world social networks. One particular LFM, the additive and multiplicative effects network model (AME), accounts for the heterogeneity of second-order dependencies at the actor level. However, current latent variable models have not considered the heterogeneity of third-order dependencies, such as actor-level transitivity. Failure to model third-order dependency heterogeneity may result in worse fits to local network structures, which in turn may result in biased parameter inferences and may negatively influence the goodness of fit and prediction performance of a model. Motivated by this gap in the literature, this dissertation proposes incorporating a correlation structure between the sender and receiver latent factors in the AME to account for the distribution of actor-level transitivity. The proposed model is compared with the existing AME in both simulation studies and real-world data analyses. Models are evaluated via multiple goodness-of-fit techniques, including mean squared error, parameter coverage rate, information criteria, receiver operating characteristic (ROC) curves based on K-fold cross-validation or the full data, and posterior predictive checking. This work may also contribute to the literature on goodness-of-fit methods for network models, an area that has not yet been unified. Both the simulation studies and real-world data analyses showed that adding the correlation structure provides a better fit to network data as well as higher prediction accuracy. The proposed method performs comparably to the AME when the underlying correlation is zero, with regard to the mean squared error of tie probabilities and the widely applicable information criterion. The present study did not find any significant impact of the correlation term on the estimation of node-level covariate coefficients. Future studies could investigate more types of covariates, such as subgroup-related covariate effects.
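As a point of reference, the AME framework for a binary network can be written schematically as below; the exact link function, priors, and dyadic error structure used in the dissertation may differ, and the closing sentence is a paraphrase of the proposed extension rather than its formal statement.

    z_{ij} = \boldsymbol{\beta}^{\top}\mathbf{x}_{ij} + a_i + b_j + \mathbf{u}_i^{\top}\mathbf{v}_j + \varepsilon_{ij}, \qquad y_{ij} = \mathbf{1}\{z_{ij} > 0\},

where a_i and b_j are additive sender and receiver effects and u_i and v_j are multiplicative latent factors. The proposed extension allows the sender factor u_i and the receiver factor v_i of the same actor to be correlated rather than assumed independent, so that actor-level transitivity can vary across actors.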
Item The development of symbolic magnitude understanding in early childhood (2019) Scalise, Nicole Rose; Ramani, Geetha B; Human Development; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The path toward mathematics success starts early, as initial numerical knowledge sets the foundation for children’s later mathematics learning. In particular, young children’s knowledge of numerical magnitudes, like knowing that seven is more than three, is theorized to play an important role in their mathematical development. In support of this perspective, there is consistent evidence that symbolic magnitude skills, or knowledge of how written numerals and number words can be ordered and compared, predict mathematical achievement in childhood and adulthood. Yet less is known about the antecedents and consequents of symbolic magnitude understanding in preschool. The goal of the present study was to understand whether symbolic magnitude knowledge in early childhood relates to later math achievement and to identify the foundational numerical and general cognitive skills that underlie the development of symbolic magnitude knowledge. One hundred forty Head Start preschoolers aged 3–5 years were assessed in the winter and spring of the school year to test a theory-driven conceptual model of symbolic magnitude development. Specifically, children’s knowledge of the cardinal value represented by numbers, such as knowing that the number word “four” can be represented with four objects, was hypothesized to predict their symbolic magnitude understanding, with children’s symbolic magnitude understanding in turn predicting their symbolic addition skills, controlling for children’s executive functioning skills, age, and gender. There was significant evidence in favor of the proposed conceptual model: children’s cardinality skills predicted their concurrent and later symbolic magnitude understanding; children’s symbolic magnitude understanding predicted their later addition skills; and children’s executive functioning skills uniquely predicted each of their numerical skills. Findings suggest that symbolic magnitude understanding fully mediates the relation between children’s cardinality and addition skills, and that both domain-general executive functioning and domain-specific cardinality and magnitude skills assessed in the winter explain a similar amount of variability in children’s spring addition skills. These findings will be used to inform the design of comprehensive early numeracy interventions to help parents, teachers, and researchers best support the mathematical development of young children.

Item READING IN PRINT AND DIGITALLY: PROFILING AND INTERVENING IN UNDERGRADUATES’ MULTIMODAL TEXT PROCESSING, COMPREHENSION, AND CALIBRATION (2019) Singer Trakhman, Lauren Melissa; Alexander, Patricia A; Human Development; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
As a consequence of today’s fast-paced society and ever-changing technologies, students are frequently called upon to process texts both in print and digitally. Further, multimodal texts are standard in textbooks and foundational to learning. Nonetheless, little is understood about the effects of reading multimodal texts in print or digitally. In Study I, students read weather and soil passages in print and digitally. These readings were taken from an introductory geology textbook that incorporated various graphic displays. While reading, novel data-gathering measures and procedures were used to capture real-time behaviors. As students read in print, their behaviors were recorded by a GoPro® camera and tracked by the movement of a pen. When reading digitally, students’ actions were recorded by Camtasia® Screen Capture software and by the movement of the screen cursor used to indicate their position in the text. After reading, students answered comprehension questions that differed in specificity (i.e., from main idea to key concepts) and covered content from three sources: text only, visual only, and both text and visual. Finally, after reading in each medium, undergraduates rated their performance on the comprehension measure on a scale of 0-100 for each passage. The accuracy of these ratings formed the basis of the calibration score. The processing data were analyzed using latent class analysis (LCA). In Study II, an intervention aimed at improving students’ comprehension and calibration when reading digitally was introduced to participants from Study I, who returned to the lab about two weeks later. The undergraduates then repeated the digital reading procedure outlined in Study I with a passage on volcanoes. In Study I, students performed better and spent more time with the text when reading in print, but were better calibrated when reading digitally. Three clusters were identified for the print data, and three clusters were identified for the digital data. Cluster movement across mediums suggests that some participants process digital texts differently than print texts. After the intervention in Study II, comprehension scores and reading duration increased, but calibration accuracy scores worsened. The LCA revealed three clusters, each showing improvement in processing behaviors, comprehension, or reading duration.
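As a concrete illustration of the calibration measure described above, the short sketch below computes an absolute calibration error from a 0-100 performance judgment and a percent-correct comprehension score. This is a common operationalization of calibration accuracy; the function name and example values are hypothetical and may not match the dissertation's exact scoring.

    def calibration_error(judgments, scores):
        """Absolute difference between each 0-100 performance judgment and the
        corresponding percent-correct comprehension score; smaller = better calibrated."""
        return [abs(j - s) for j, s in zip(judgments, scores)]

    # A reader predicts 85 and 60 on two passages but scores 70% and 65% correct.
    print(calibration_error([85, 60], [70, 65]))  # -> [15, 5]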
Item Developing characterizations of problem-solving processes, strategies, and challenges from process and product data in digitally delivered interactive assessments: case study (2019) Caliço, Tiago Alexandre; Harring, Jeffrey; Human Development; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Games and simulation-based assessments (GSBAs) are the focus of increased interest in educational assessment given their ability to operationalize assessment tasks that mimic real-world scenarios. Combined with the capacity to unobtrusively collect data on task-solving behavior, sometimes referred to as process or event data, GSBAs have the potential to expand the scope and nature of inferences about students’ skills, knowledge, and abilities. A case study and a simulation study explored the viability of using concepts and analytical tools from the field of Business Process Mining (BPM) to facilitate the generation of evidence identification rules from behavioral, event-based data generated in the context of a GSBA. The case study demonstrates the utility of a process guided by the principles of Evidence-Centered Design (ECD) to define and refine Student, Task, and Evidence Models. The BPM conceptual and analytical tools made it possible to economically investigate the feasibility of using aspects of task-solving behavior, such as differences in targeted event sequences, as evidentiary sources. Bayesian networks were then used to aggregate traditional score data with behavioral data in order to predict students’ membership in latent classes. Given the novel nature of the analytical method used to identify evidence rules, known as the Fuzzy Miner, a simulation study investigated the impact of sample size, expert classification of a training sample, behavioral variability, and modeling parameters on the ability of the method to identify differences in process structure across groups. The simulation results show the method’s robustness to several sources of noise, suggesting its utility as an exploratory tool to be integrated with expert judgment when generating evidence identification rules.
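Process-discovery methods such as the Fuzzy Miner are typically built on counts of how often one event directly follows another across task-solving traces. The sketch below shows that basic building block in Python; the trace data and event names are hypothetical, and this is an illustration of the general idea, not the dissertation's implementation.

    from collections import Counter

    def directly_follows_counts(event_logs):
        """Count how often event A is immediately followed by event B,
        aggregated over all traces (one list of event names per task attempt)."""
        counts = Counter()
        for trace in event_logs:
            counts.update(zip(trace, trace[1:]))
        return counts

    # Hypothetical traces from a simulation-based task
    logs = [["open_tool", "run_trial", "read_output", "run_trial", "submit"],
            ["open_tool", "submit"]]
    print(directly_follows_counts(logs))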
Item Handling of Missing Data with Growth Mixture Models (2019) Lee, Daniel Yangsup; Harring, Jeffrey R; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The recent growth of applications of growth mixture models for inference with longitudinal data has prompted a wide range of research dedicated to testing different aspects of the model. One area that has not drawn much attention, however, is the performance of growth mixture models with missing data and under the various methods for handling them. Missing data are an inconvenience that must be addressed in any data analysis scenario, and growth mixture modeling is no exception. While the literature on various other aspects of growth mixture models has grown, little research has been conducted on the consequences of mishandling missing data. Although the missing data literature has generally accepted the use of modern missing data handling techniques, these techniques are not free of problems, nor have they been comprehensively tested in the context of growth mixture models. The purpose of this dissertation is to apply the various missing data handling techniques to growth mixture models and, using Monte Carlo simulation, to provide guidance on the specific conditions under which certain missing data handling methods will produce accurate and precise parameter estimates, estimates that are typically compromised when simple, ad hoc, or incorrect missing data handling approaches are used.

Item THE INFLUENCE OF STRESS AND SOCIAL SUPPORT ON PARENTING BEHAVIORS AMONG LOW-INCOME FAMILIES: MEDIATIONAL PATHWAYS TO CHILDREN’S SOCIAL DEVELOPMENT (2019) Kuhns, Catherine Emily; Cabrera, Natasha; Human Development; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Economic stress has been shown to compromise children’s social development and undermine parenting behaviors in mothers of young children. A separate literature suggests that social support may attenuate the negative effects of maternal stress on parenting behaviors. Guided by the Family Stress Model and the Stress Buffering Model, this study examined the indirect pathways from maternal experiences of stress (economic and parenting) to children’s social competencies and behavior problems longitudinally in a sample of children from the Early Head Start Family and Child Experiences Survey (Baby FACES). It also tested the moderating effects of two types of social support (instrumental and emotional) on the negative association between stressors (economic and parenting) and children’s social skills. Using structural equation modeling (SEM), results demonstrated support for the Family Stress Model, such that economic stress (at age 1) was longitudinally and indirectly related to children’s social competencies and problem behaviors (at age 3) via observed maternal sensitivity (at age 2). That is, higher levels of economic stress were related to elevated levels of behavior problems and lower levels of social competencies because economic stress increased parenting stress and decreased maternal sensitivity. However, there was no evidence that social support moderated the association between either type of stress and parenting. Findings are discussed in light of policy and programmatic efforts to broaden support for families and children by incorporating services that promote sensitive parent-child interactions and reduce maternal parenting stress.
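For readers who want the pathway spelled out, the serial mediation chain described above can be written schematically as follows; the coefficient labels are illustrative and are not taken from the dissertation.

    M_{1i} = a_1 X_i + e_{1i}, \qquad M_{2i} = a_2 X_i + d\,M_{1i} + e_{2i}, \qquad Y_i = c' X_i + b_1 M_{1i} + b_2 M_{2i} + e_{3i},

where X is economic stress at age 1, M_1 is parenting stress, M_2 is observed maternal sensitivity at age 2, and Y is a child outcome at age 3; the longitudinal indirect effect running through both mediators is the product a_1 d b_2.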
Item The Performance of Balance Diagnostics for Propensity-Score Matched Samples in Multilevel Settings (2019) Burnett, Alyson; Stapleton, Laura M; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The purpose of the study was to assess and demonstrate the use of covariate balance diagnostics for samples matched with propensity scores in multilevel settings. A Monte Carlo simulation was conducted to assess the ability of different balance measures to identify the correctly specified propensity score model and to predict bias in treatment effect estimates. The balance diagnostics included absolute standardized bias (ASB) and variance ratios calculated across the pooled sample (pooled balance measures), as well as the same balance measures calculated separately for each cluster and then summarized across the sample (within-cluster balance measures). The results indicated that, overall across conditions, the pooled ASB was most effective for predicting treatment effect bias, but the within-cluster ASB (summarized as a median across clusters) was most effective for identifying the correctly specified model. However, many of the within-cluster balance measures were not feasible with small cluster sizes. Empirical illustrations from two distinct datasets demonstrated different approaches to modeling, matching, and assessing balance in a multilevel setting depending on the cluster size. The dissertation concludes with a discussion of limitations, implications, and topics for further research.
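For reference, the two balance diagnostics named above are commonly defined as follows for a covariate X; details such as which standard deviation enters the denominator vary across authors, and the within-cluster versions apply the same formulas within each cluster before summarizing (e.g., taking the median) across clusters.

    \mathrm{ASB} = \frac{\lvert \bar{X}_{\mathrm{treated}} - \bar{X}_{\mathrm{control}} \rvert}{\sqrt{\left(s^2_{\mathrm{treated}} + s^2_{\mathrm{control}}\right)/2}}, \qquad \mathrm{VR} = \frac{s^2_{\mathrm{treated}}}{s^2_{\mathrm{control}}},

with ASB values near zero and variance ratios near one indicating good covariate balance after matching.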
Item AN INITIAL EVALUATION OF IBI VIZEDIT: AN RSHINY APPLICATION FOR OBTAINING ACCURATE ESTIMATES OF AUTONOMIC REGULATION OF CARDIAC ACTIVITY (2018) Barstead, Matthew; Rubin, Kenneth H; Human Development; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Photoplethysmogram (PPG) sensors are increasingly used to collect individual heart rate data during laboratory assessments and psychological experiments. PPG sensors are relatively cheap, easy-to-use, non-invasive alternatives to the more common electrodes used to produce electrocardiogram recordings. The downside is that these sensors are more susceptible to signal distortion. Often, the most relevant measures for understanding the psychological processes that underlie emotions and behaviors are measures of heart rate variability. As with all measures of variability, outliers (i.e., signal artifacts) can have outsized effects on the final estimates; and, given that these scores represent a primary variable of interest in many research contexts, successfully eliminating artifactual points is critical to the ability to make valid inferences with the data. Prior to the development of IBI VizEdit, there was no single, integrated processing and editing pipeline for PPG data. The present pair of studies offers an initial evaluation of the program’s performance. Study 1 focuses on the efficacy of a novel approach to imputing sections of particularly corrupted PPG signal. Study 2 tests the ability of trained editors to use IBI VizEdit reliably, as well as the validity of estimates of cardiac activity during a prescribed set of laboratory tasks. Study 1 suggests that the novel imputation approach, under certain conditions and using certain parameterizations, may hold promise as a means of accurately imputing missing sections of data. However, Study 1 also clearly demonstrates the need for further refinement and the consideration of alternative implementations. The results from Study 2 indicate that IBI VizEdit can be used reliably by trained editors and that estimates of cardiac activity derived from its output are likely valid.

Item A FRAMEWORK FOR THE PRE-CALIBRATION OF AUTOMATICALLY GENERATED ITEMS (2018) Sweet, Shauna Jayne; Hancock, Gregory R; Harring, Jeffrey R; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This paper presents a new conceptual framework and corresponding psychometric model designed for the pre-calibration of automatically generated items. The model utilizes a multilevel framework and a combination of crossed fixed and random effects to capture key components of the generative process, and it is intended to be broadly applicable across research efforts and contexts. Unique among models proposed within the automatic item generation (AIG) literature, this model incorporates specific mean and variance parameters to support the direct assessment of the quality of the item generation process. The utility of this framework is demonstrated through an empirical analysis of response data collected from the online administration of automatically generated items intended to assess young students’ mathematics fluency. Limitations in the application of the proposed framework are explored through targeted simulation studies, and future directions for research are discussed.
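To make the idea of pre-calibration with crossed fixed and random effects concrete, one schematic way such a model could be written is shown below; this sketch is an assumption in the spirit of the framework described above, not the dissertation's actual model, and the symbols are illustrative. For person p responding to generated item i from item family f(i),

    P(y_{pi} = 1) = \mathrm{logit}^{-1}\!\left(\theta_p - b_i\right), \qquad b_i = \gamma_0 + \sum_{k} \gamma_k q_{ik} + u_{f(i)} + \varepsilon_i, \qquad u_f \sim N(0, \tau^2), \quad \varepsilon_i \sim N(\mu_\varepsilon, \sigma^2_\varepsilon),

where the fixed effects \gamma_k capture design features q_{ik} of the generation templates, u_f is a random family effect, and the mean and variance of the residual item effects index how tightly the generation process controls item difficulty.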