|dc.description.abstract||This study explored the relationship between test-wiseness and the validity of standardized achievement test scores. R.L. Thorndike (1951) suggested that test-wiseness contributes invalid, systematic variance to test scores. An attempt was made, through training, to reduce this invalid true score variance. Since a reduction in true score variance without a concomitant reduction in error variance should lower reliability, one might expect that validity would therefore also be reduced. For the purposes of this study, it was assumed that test-wiseness variance functions in a manner similar to a suppressor variable: when invalid, systematic variance in a predictor is reduced, the true score relationship between predictor and criterion is enhanced. The test thereby becomes a "purer" predictor of the criterion, and validity increases. In this study, reduction of invalid, systematic true score test-wiseness variance in the predictor (the Comprehensive Tests of Basic Skills, CTBS) was thus expected to result in a higher correlation between the CTBS and a criterion of teacher report card marks. Subjects were fourth graders from a low socio-economic area of a large metropolitan city. The experimental group consisted of 401 children from seventeen classrooms in five elementary schools, while the control group numbered 410 children from seventeen classrooms in nine schools. The seventeen teachers taking part in the research attended an introductory seminar and monthly meetings aimed at teaching them about test-wiseness and the specific test simulations they would lead. Teachers led nineteen test simulations, one each week over a six-month period. Each lesson involved a practice test, and children were expected to learn pre-determined test-taking skills and attitudes by experiencing the simulations aimed at those skills and attitudes.
Experimental group subjects took a test-wiseness pretest and a test-wiseness posttest, while control group subjects took the test-wiseness test at the same time as the experimental group took the posttest. Internal consistency of these results was low for the pretest and the control group test (stratified alphas of .1541 and .2007, respectively) but moderate (.3966) for the posttest. A one-way analysis of variance on classroom means yielded a significantly larger mean for the posttest than for the control group test. A one-way repeated measures analysis on classroom means yielded a significantly greater mean for the posttest than for the pretest.
Two-sample homogeneity of variance tests and Levene's tests yielded significantly greater variances on the posttest results than on either the pretest or the control group results. All children took the CTBS at the end of the year. There was no significant difference in mean classroom score between the experimental and control groups. Achievement test scores were correlated with teacher report card marks in reading. Fisher's Z was used to test for a significant difference in validity coefficients at the .05 level. The difference approached significance, reaching the .07 level. It was suggested that the study be replicated in order to control for variables which might have contributed to or caused the lack of significant difference in validity found in this study. Larger experimental groups (to control for mortality) might be used. A more reliable (or more valid) criterion of achievement than teachers' grades might be employed. A different research design might allow for study of individual rather than group differences.||en_US
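The comparison of validity coefficients described in the abstract can be sketched with Fisher's r-to-z procedure for two independent correlations. This is a minimal illustration, not the study's analysis: the sample correlations (0.45 and 0.30) are hypothetical stand-ins, since the abstract does not report the coefficients themselves, and the raw group sizes (401 and 410) are used here only for the standard-error term.

```python
import math

def fisher_z(r):
    """Fisher r-to-z transformation: z = atanh(r) = 0.5 * ln((1+r)/(1-r))."""
    return 0.5 * math.log((1 + r) / (1 - r))

def compare_correlations(r1, n1, r2, n2):
    """Test the difference between two independent correlation
    coefficients using Fisher's Z.  Returns the z statistic and its
    two-tailed p-value under the standard normal distribution."""
    z1, z2 = fisher_z(r1), fisher_z(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of z1 - z2
    z = (z1 - z2) / se
    # two-tailed p-value; the standard normal CDF is built from math.erf
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Hypothetical coefficients for illustration only; .05 is the study's alpha.
z_stat, p_value = compare_correlations(0.45, 401, 0.30, 410)
print(f"z = {z_stat:.3f}, p = {p_value:.3f}")
```

With these invented inputs the difference would be significant at the .05 level; the study's own comparison reached only the .07 level.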