Human Development & Quantitative Methodology

Permanent URI for this community: http://hdl.handle.net/1903/2248

The departments within the College of Education were reorganized and renamed as of July 1, 2011. This department incorporates the former Department of Measurement, Statistics & Evaluation, the Department of Human Development, and the Institute for Child Study.


Search Results

Now showing 1 - 3 of 3
  • Item
    A hierarchical latent space network model for mediation
    (Cambridge University Press, 2022-05-30) Sweet, Tracy M.; Adhikari, Samrachana
    For interventions that affect how individuals interact, social network data may aid in understanding the mechanisms through which an intervention is effective. Social networks may even be an intermediate outcome observed prior to the end of the study. In fact, social networks may also mediate the effects of the intervention on the outcome of interest, and Sweet (2019) introduced a statistical model for social networks as mediators in network-level interventions. We build on that approach and introduce a new model, based on a latent space approach, in which the network is a mediator. We investigate our model through a simulation study and a real-world analysis of teacher advice-seeking networks. (A generic code sketch illustrating the latent space idea follows this listing.)
  • Item
    SENSITIVITY ANALYSIS OF STRUCTURAL PARAMETERS TO MEASUREMENT NONINVARIANCE: A BAYESIAN APPROACH
    (2014) Kang, Yoon Jeong; Hancock, Gregory R.; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Most previous studies have argued that the validity of group comparisons of structural parameters depends on the extent to which measurement invariance is met. Although some researchers have supported the concept of partial invariance, there is still no clear-cut level of partial invariance needed to make valid group comparisons. In addition, relatively little attention has been paid to the implications of failing to achieve measurement invariance (e.g., partial measurement invariance) for group comparisons of the underlying latent constructs in the multiple-group confirmatory factor analysis (MGCFA) framework. Given this, the purpose of the current study was to examine the extent to which measurement noninvariance affects structural parameter comparisons across populations in the MGCFA framework. In particular, this study takes a Bayesian approach to investigate the sensitivity of the posterior distribution of the structural parameter difference to varying types and magnitudes of noninvariance across two populations. A Monte Carlo simulation was performed to empirically investigate the sensitivity of structural parameters to varying types and magnitudes of noninvariant measurement models across two populations. In order to assess the sensitivity to noninvariance conditions, three outcome variables were evaluated: (1) accuracy of the statistical conclusion on the structural parameter difference, (2) precision of the estimated structural parameter difference, and (3) bias in the posterior mean of the structural parameter difference. Inconsistent with findings of previous studies, the results of this study showed that the three outcome variables were not sensitive to varying types and magnitudes of noninvariance across all conditions. Instead, the three outcome variables were sensitive to sample size, factor loading size, and prior distribution. These results indicate that even under a large magnitude of measurement noninvariance, accurate conclusions and inferences on structural parameter differences across populations can be obtained in the MGCFA framework. Implications for practice are discussed for applied researchers who wish to conduct group comparisons of structural parameters across populations under measurement noninvariance. (A brief data-generation sketch for this kind of two-group design follows this listing.)
  • Item
    REWEIGHTING DATA IN THE SPIRIT OF TUKEY: USING BAYESIAN POSTERIOR PROBABILITIES AS RASCH RESIDUALS FOR STUDYING MISFIT
    (2010) Dardick, William Ross; Mislevy, Robert J; Measurement, Statistics and Evaluation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    A new variant of the iterative "data = fit + residual" data-analytic approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed as weights for the original data. The data weighted by the units' posterior probabilities for the unscalable class are used for further exploration of structures. Data were generated in accordance with departures from the Rasch model that have been studied in the literature. Factor analysis models are compared for the original data and for the data reweighted by the posterior probabilities for the unscalable class. Eigenvalues are compared with Horn's parallel analysis for each class of factor models to determine the number of factors in a dataset. In comparing the two weighted data sets, the data weighted toward the Rasch class and the data weighted toward the unscalable class, clear differences are manifest. Pattern types are detected for the Rasch baselines that differ from the patterns under random or systematic contamination. The Rasch baseline patterns are strongest around item difficulties closest to the mean generating value of the θ's. Patterns in the baseline conditions weaken as item difficulties depart from zero and move toward the extreme values of ±6. The random contamination factor patterns are typically flat and near zero regardless of the item difficulty with which they are associated. Systematic contamination using reversed Rasch-generated data produces patterns alternative to the Rasch baseline condition and in some conditions shows an opposite effect when compared to the Rasch patterns. Differences can also be detected within the residually weighted data between the Rasch-generated subtest and the contaminated subtest. In conditions with identified factors, the Rasch subtest often shows Rasch patterns and the contaminated subtest shows some form of random/flat or systematic/reversed pattern. (A brief sketch of this reweighting idea follows this listing.)
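
A minimal, generic sketch of a latent space network model, illustrating the first item above: edge probabilities fall off with the distance between nodes' latent positions, and a shift in the baseline tie propensity (alpha) stands in for an intervention effect. This is a hedged illustration of the general model class, not the authors' hierarchical mediation specification; the function name, dimensions, sample sizes, and alpha values are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_latent_space_network(n_nodes, alpha, latent_dim=2, rng=rng):
    """Simulate a directed network from a basic latent space model:
    logit P(edge i -> j) = alpha - ||z_i - z_j||."""
    z = rng.normal(size=(n_nodes, latent_dim))            # latent positions
    dist = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    probs = 1.0 / (1.0 + np.exp(-(alpha - dist)))         # edge probabilities
    adj = rng.binomial(1, probs)
    np.fill_diagonal(adj, 0)                              # no self-ties
    return adj, z

# Hypothetical setup: the intervention raises the overall tie propensity,
# so treated networks are denser; downstream outcomes could then depend on
# network features such as density (the mediation idea, schematically).
treated, _ = simulate_latent_space_network(n_nodes=20, alpha=0.5)
control, _ = simulate_latent_space_network(n_nodes=20, alpha=-0.5)
print(treated.mean(), control.mean())   # density higher under treatment
```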
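
A rough data-generation sketch for the second item: a two-group, one-factor model in which two loadings are noninvariant in the focal group, while the structural parameter of interest is the factor mean difference. The loading values, sample sizes, variances, and the naive composite-mean comparison are illustrative assumptions only; they do not reproduce the study's simulation conditions or its Bayesian estimation step.

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_group(n, loadings, factor_mean, factor_var=1.0, resid_var=0.5, rng=rng):
    """One-factor CFA data: x = loading * eta + e, with a group-specific
    factor mean (the structural parameter compared across groups)."""
    eta = rng.normal(factor_mean, np.sqrt(factor_var), size=n)
    e = rng.normal(0.0, np.sqrt(resid_var), size=(n, len(loadings)))
    return eta[:, None] * np.asarray(loadings) + e

# Reference group: fully invariant loadings; focal group: two loadings
# lowered (a hypothetical partial-noninvariance condition).
loadings_ref   = [0.8, 0.8, 0.8, 0.8, 0.8, 0.8]
loadings_focal = [0.8, 0.8, 0.8, 0.8, 0.5, 0.5]

x_ref   = generate_group(500, loadings_ref,   factor_mean=0.0)
x_focal = generate_group(500, loadings_focal, factor_mean=0.3)

# A naive composite-mean difference, which loading noninvariance can distort;
# the study instead examines the posterior of the latent difference in MGCFA.
print(x_focal.mean() - x_ref.mean())
```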
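
A sketch of the reweighting idea in the third item: posterior probabilities from a mixture of a Rasch class and an unscalable class serve as weights for each respondent. Treating the unscalable class as random responding (p = .5 per item) and marginalizing the Rasch likelihood over a coarse ability grid are simplifying assumptions for illustration; the mixing proportion and item difficulties below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def rasch_likelihood(responses, theta, difficulties):
    """Likelihood of a response vector under the Rasch model at ability theta."""
    p = 1.0 / (1.0 + np.exp(-(theta - difficulties)))
    return np.prod(np.where(responses == 1, p, 1.0 - p))

def unscalable_posterior(responses, difficulties, pi_unscalable=0.2,
                         theta_grid=np.linspace(-4, 4, 41)):
    """Posterior probability that a respondent belongs to the unscalable class,
    with random responding (p = .5) as that class's model and the Rasch
    likelihood marginalized over a standard-normal ability grid."""
    prior = np.exp(-0.5 * theta_grid**2)
    prior /= prior.sum()
    lik_rasch = sum(w * rasch_likelihood(responses, t, difficulties)
                    for t, w in zip(theta_grid, prior))
    lik_unscalable = 0.5 ** len(responses)
    num = pi_unscalable * lik_unscalable
    return num / (num + (1.0 - pi_unscalable) * lik_rasch)

difficulties = np.linspace(-2, 2, 10)
responses = rng.binomial(1, 0.5, size=10)     # one illustrative response vector
weight = unscalable_posterior(responses, difficulties)
print(weight)   # posterior weight used to reweight this respondent's data
```

Weights of this kind would then reweight the response data before the factor analyses and parallel analyses described in the abstract.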