6 Mar 2014

The SAT and SES

Submitted by Karl Hagen
Everyone seems to be talking about the new SAT. I'm going to reserve judgment until I see really specific information about the new test. The rather vague descriptions so far sound fine, but details are very, very important on standardized tests.

The New York Times article on the changes has a lot of interesting stuff. But one comment about the relationship between the SAT and socioeconomic status (SES) caught my attention:

"The test highly correlates with family income," says Soares, who also edited a book that, in part, examines the effects of making the SAT optional at the University of Georgia, Johns Hopkins University and Wake Forest. "High-school grades do not."

Soares's remark is, depending on how you want to look at it, either false, or true only in a highly misleading context. Yes, it's true that the SAT is correlated with family income. But the assertion that GPA is uncorrelated is only true if you lump together grades from many different schools and compare SAT scores with grades from everywhere in the country. As should be obvious, though, schools don't all have equivalent grading standards. Work that routinely merits a B at one school might result in a C at another, and so on. That's the whole motivation for standardized tests in the first place. These variations mean that high school grades (unlike SAT scores) are not really all measured on the same scale, so pooling all these different sources of grades is statistically invalid. Garbage in, garbage out.

There are statistical methods for controlling for the grading differences between schools, and once you apply them, GPA suddenly becomes strongly correlated with SES too. In some studies, it appears to be even more strongly correlated with SES than the SAT (Zwick and Green, 2007).
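The pooling problem is easy to see with a toy simulation. The numbers below are invented for illustration (a simple linear SES-grade relationship, two schools with different grading baselines), not real data: within each school, grades track family SES closely, but because the lower-SES school grades more leniently, dumping all the raw GPAs into one pile nearly erases the correlation.

```python
# Toy illustration (invented data): two schools where grades track SES
# within each school, but different grading baselines hide the
# relationship when raw GPAs are pooled across schools.
import random
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def simulate_school(n, ses_low, ses_high, baseline):
    """GPA rises with SES, but each school sets its own grading baseline."""
    ses = [random.uniform(ses_low, ses_high) for _ in range(n)]
    gpa = [baseline + 0.02 * s + random.gauss(0, 0.15) for s in ses]
    return ses, gpa

random.seed(42)

# School A: affluent students, strict grading (low baseline).
ses_a, gpa_a = simulate_school(500, 60, 100, 2.0)
# School B: less affluent students, lenient grading (high baseline).
ses_b, gpa_b = simulate_school(500, 0, 40, 3.4)

within_a = pearson(ses_a, gpa_a)   # strong within School A
within_b = pearson(ses_b, gpa_b)   # strong within School B
pooled = pearson(ses_a + ses_b, gpa_a + gpa_b)  # pooled: near zero

print(f"SES-GPA correlation within School A: {within_a:+.2f}")
print(f"SES-GPA correlation within School B: {within_b:+.2f}")
print(f"SES-GPA correlation, raw GPAs pooled: {pooled:+.2f}")
```

With these made-up parameters, each within-school correlation comes out above 0.8 while the pooled figure hovers near zero — the correlation hasn't vanished, it's just been buried by incompatible grading scales.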

Most people would agree that minimizing bias is desirable, so what are we to do to improve the situation? The critics' usual solution, abandoning standardized tests and relying on grades, doesn't seem to improve things and may actually make the problem worse. Indeed, I would argue that any assessment that measures ability in a way that society would see as useful will inevitably be correlated with wealth. To see why, you only need to accept a few premises, all of which strike me as virtually certain to be true:

  1. People can develop skills through education.
  2. Education can be of varying qualities.
  3. More money can buy you a better quality education.
  4. Many people with the means to do so devote extra resources to teaching their children.

Now assume we are somehow able to construct an ideal assessment that accurately and fairly measures a particular developed skill. For this thought experiment, the form of the assessment is irrelevant. It could be a single standardized test, it could be a summative grade at the end of the course, or any other method you choose.

If greater wealth gives you, on average, a better education, then average scores on this ideal test would still be higher for the wealthy. The wealthy can afford to send their children to better schools or to move to areas with better schools. They can hire tutors to help their children, and so on.

If there is no correlation, that could only mean that one of our premises doesn't apply for that particular skill. Perhaps the skill is one where the rich and the poor have equal access to education of the same quality. Perhaps the wealthy simply have no interest in devoting their resources to developing this skill. Perhaps the assessment measures an innate quality that could not be improved with study. That, of course, is exactly what the original creators of the SAT wanted to do. They thought they could measure inborn intelligence and help level the playing field. But no one in the testing industry today really believes that's what the SAT can, or should, do.

What we really want to know from any assessment used for college admissions is whether this measurement tells us something useful about how well prepared a student is to do college-level work at a particular institution. And all of the things that we could plausibly measure to that end are unquestionably skills that can be developed through education.

Leaving the ideal for the real world, the same conclusion holds even with imperfect assessments, as long as they measure developed abilities with even modest power. They will still correlate with SES simply by virtue of the fact that you can buy a better education, which in turn will make you perform better on a test that measures aspects of your education.

I want to be clear that I'm not arguing for the status quo here. I'm not suggesting that the SAT, or other standardized tests, are close to perfect, nor that there's no way to improve upon things. Some SES correlation is inevitable, but that doesn't mean we can't work to reduce the degree of that correlation by improving the educational opportunities for less affluent students. The wealthy will always be able to buy extra tutoring, etc., but if we chose to do so as a society, we could reduce the marginal payoff of such activities by devoting enough resources to make public education better for the less advantaged.

The fact that there is a correlation between the SAT and SES does not mean that the SAT is fatally flawed. It measures (with all the caveats of error margins, etc.) what it's broadly supposed to measure. At the same time it provides evidence that we live in a society where the rich have better educations than the poor. That inequity won't be reduced by jettisoning standardized tests. They're the messenger of the problem, not a cause. Currently, wealthy parents spend a lot of money on SAT prep programs and the like. But even if you got rid of all standardized tests tomorrow, the wealthy would simply reallocate their resources to prepare their children for whatever alternative assessment was used.

Reference: Zwick, Rebecca, and Jennifer Greif Green (2007), "New Perspectives on the Correlation of Scholastic Assessment Test Scores, High School Grades, and Socioeconomic Factors," Journal of Educational Measurement, vol. 44, pp. 1-23.
