DETERMINANTS OF COLLEGE GRADE POINT AVERAGES
Bailey, Paul Dean
Hellerstein, Judith K
Wallis, John J
<bold>Chapter 2: The Role of Class Difficulty in College Grade Point Averages.</bold> Grade Point Averages (GPAs) are widely used as a measure of college students' ability. A low GPA can remove a student from eligibility for scholarships, and even for continued enrollment at a university. However, GPAs are determined not only by student ability but also by the difficulty of the classes the student takes. When class difficulty is correlated with student ability, GPAs are biased estimates of students' abilities. Using a fixed effects model on eight years of transcript data from one university, with fixed effects for student ability and for class difficulty, I decompose grades at the individual student-class level and find that GPAs are largely unbiased. Eighty percent of the variation in GPAs is explained by student ability, while only three percent is explained by class difficulty. The estimation uses an ordered logit estimator to account for the ordered but non-cardinal nature of grades. <bold>Chapter 3: Are Low Income Students Diamonds in the Rough?</bold> Consider two students who earn the same SAT score, one from a lower-income household and the other from a higher-income household. Since educational expense is a normal good, the lower-income student will, on average, have had a less well-resourced primary and secondary education. The lower-income student may therefore be stronger than their higher-income counterpart, having earned an equally high SAT score despite a lower-quality pre-collegiate environment. If this is the case, once the two students start attending the same college---and school spending becomes more similar---the lower-income student's in-college performance should be relatively higher. I test this theory by using eight years of data from one university to compare the grade point averages of students from various family income levels.
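The decomposition described in Chapter 2 can be illustrated with a minimal sketch. The dissertation fits an ordered logit with student and class fixed effects; the version below is a simplified linear analogue on simulated data (all variable names and parameter values here are illustrative, not taken from the dissertation), showing how grades regressed on student and class dummies recover a separate ability effect per student and difficulty effect per class:

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_classes = 50, 20
ability = rng.normal(0.0, 1.0, n_students)      # student fixed effects (true)
difficulty = rng.normal(0.0, 0.3, n_classes)    # class fixed effects (true)

# Each student takes 8 classes; grade = ability + class effect + noise.
rows = []
for i in range(n_students):
    for c in rng.choice(n_classes, 8, replace=False):
        rows.append((i, c, ability[i] + difficulty[c] + rng.normal(0.0, 0.5)))
students, classes, grades = (np.array(v) for v in zip(*rows))

# Design matrix: one dummy per student, one per class (dropping class 0
# for identification, since student dummies already span the intercept).
n_obs = len(grades)
X = np.zeros((n_obs, n_students + n_classes - 1))
X[np.arange(n_obs), students] = 1.0
mask = classes > 0
X[np.arange(n_obs)[mask], n_students + classes[mask] - 1] = 1.0

beta, *_ = np.linalg.lstsq(X, grades, rcond=None)
ability_hat = beta[:n_students]
print("corr(true ability, estimated ability):",
      round(np.corrcoef(ability, ability_hat)[0, 1], 3))
```

With class difficulty uncorrelated with ability by construction, the recovered student effects track true ability closely; the dissertation's point is precisely to test whether such a correlation exists in real transcript data.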
Results show that lower-income students appear to be "diamonds in the rough": they have surprisingly high outcomes conditional on their SAT scores. However, unconditional on SAT score, the lower-income students also outperform their higher-income counterparts. This suggests that a single university's data is inappropriate for answering this question. I also discuss how this type of regression can give insight into the production function of human capital. Specifically, a common assumption in the economics of education literature is that first-differenced human capital accumulation rates are independent of ability, because ability is already represented in the base-period test. A "diamonds in the rough" result would contradict that assumption and show that the SAT is not a perfect measure of underlying ability. <bold>Chapter 4: Estimation of Large Ordered Multinomial Models.</bold> Decomposing grades data into class fixed effects and student fixed effects is difficult, and the estimator's accuracy is unknown. I describe the successful application of the L-BFGS algorithm for fitting these data and propose a new convergence criterion. I also show that when the number of classes per student is about 32 (slightly fewer than is typical at the University of Maryland), the estimator performs well at estimating correlations and the non-parametric statistics used in Chapter 2 of this dissertation. Some issues with significance testing of the sets of fixed effects are also considered, and I show that when the number of classes per student is 32, the significance tests are not sufficiently protective against false rejection of the null hypothesis. The jackknifed likelihood ratio test is shown to be only modestly biased towards false rejection regardless of the number of classes per student.
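Chapter 4's estimation strategy, fitting an ordered multinomial (ordered logit) likelihood by L-BFGS, can be sketched on toy data. The example below is a minimal single-covariate version (the true coefficient, cutpoints, and sample size are arbitrary choices for the simulation, not values from the dissertation); the full problem replaces the single covariate with thousands of student and class fixed effects, which is what makes a quasi-Newton method like L-BFGS attractive:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic CDF

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
true_cuts = np.array([-1.0, 0.0, 1.0])       # 4 ordered grade categories
latent = 1.5 * x + rng.logistic(size=n)      # latent achievement index
y = np.searchsorted(true_cuts, latent)       # observed category 0..3

def neg_loglik(params):
    """Ordered logit: P(y=k) = F(c_k - b*x) - F(c_{k-1} - b*x)."""
    b, cuts = params[0], np.sort(params[1:])
    bounds = np.concatenate(([-np.inf], cuts, [np.inf]))
    upper = expit(bounds[y + 1] - b * x)
    lower = expit(bounds[y] - b * x)
    return -np.sum(np.log(upper - lower + 1e-12))

res = minimize(neg_loglik, x0=np.array([0.0, -0.5, 0.5, 1.0]),
               method="L-BFGS-B")
print("estimated slope:", round(res.x[0], 3))  # true value is 1.5
```

L-BFGS stores only a limited history of gradient updates rather than a full Hessian, so memory grows linearly in the number of parameters, which matters when every student and every class contributes its own fixed effect.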