Goodbye to Sentence Completions
Submitted by Karl Hagen
This statement, which had been telegraphed by David Coleman even before the announcement of the changes, was rhetorically unfortunate because it gave the impression that a strong vocabulary wouldn't count. It led to a spate of commentary (see, for example, this piece in The Atlantic for a relatively moderate take) bemoaning a slipping of standards.
I call the announcement unfortunate because it left the impression that vocabulary wouldn't matter anymore on the SAT. If that were true, complaints would be justified. All the research indicates that a strong vocabulary is the single biggest predictor of overall reading ability. What the College Board should have said was that out-of-context testing of vocabulary meaning would be dropped.
For sensible commentary on the topic, see this article by Georgia Scurletis, who points out that the panic is overblown and that the reading passages will still contain plenty of challenging vocabulary.
Dropping sentence completions completes a gradual process by which, over the last three decades, the SAT has progressively abandoned context-free testing of vocabulary items. Once upon a time, the SAT had antonym and analogy questions. Antonyms were dropped when the SAT was revised in 1994. Analogies were dropped in the 2005 revisions. Both of these changes objectively reduced what I call the "vocabulary loading" of the test, that is, the sheer number of less common vocabulary items.
To create a rough metric of this change, I calculated an index that I call the COCA-5000 ratio. This is the ratio of words (excluding proper names) that are not found in the top 5000 words of the Corpus of Contemporary American English to the words that are in that list. With this ratio, a larger number indicates a text with a greater number of uncommon words. My logic for excluding proper names is that very few appear on the COCA-5000 list, but since the SAT is designed not to depend on external knowledge, proper names are unlikely to increase the difficulty of the text. For example, you may never have heard of Hrotsvitha, but should the SAT mention her, as it did on one occasion, your appalling lack of historical knowledge (kidding) won't count against you. You won't need prior knowledge to answer any questions.
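The computation behind this index can be sketched as follows. Note that this is my own rough reconstruction for illustration: the word list here is a tiny hypothetical stand-in for the actual COCA top-5000 list, and the proper-name filter is a crude capitalization heuristic, not necessarily the method used for the charted data.

```python
import re

# Hypothetical stand-in for the COCA top-5000 word list (the real list
# contains 5,000 lemmas; these few words are for demonstration only).
COMMON_WORDS = {"the", "of", "a", "to", "you", "may", "have", "never",
                "heard", "but", "her", "lack", "knowledge", "historical"}

def coca5000_ratio(text, common_words=COMMON_WORDS):
    """Ratio of words NOT in the common-word list to words that ARE,
    excluding proper names (approximated here as capitalized words
    that are not sentence-initial)."""
    # Split into sentences so sentence-initial capitals count as ordinary words.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    common = uncommon = 0
    for sentence in sentences:
        words = re.findall(r"[A-Za-z']+", sentence)
        for i, w in enumerate(words):
            if i > 0 and w[0].isupper():
                continue  # crude proper-name filter: skip it entirely
            if w.lower() in common_words:
                common += 1
            else:
                uncommon += 1
    return uncommon / common if common else float("inf")

sample = "You may never have heard of Hrotsvitha, but the test will not punish you."
print(coca5000_ratio(sample))  # Hrotsvitha is excluded, not counted as uncommon
```

In the sample sentence, "Hrotsvitha" is dropped from both the numerator and the denominator, so an obscure proper name leaves the index unchanged, which is the point of the exclusion.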
I calculated this index for the verbal/critical reading sections of 134 SATs that were made public between 1979 and 2012, and the results are charted below. Each dot represents the COCA-5000 ratio for a single SAT. The three versions of the SAT are distinguished by color, and I have added a regression line for each of the three revisions. While the overall trend is a decrease in vocabulary loading, it's clear that within the period of each revision, there is no strong trend. The decreases are almost certainly primarily the result of removing antonyms and analogies, both of which have very high COCA-5000 ratios when taken alone.
Interestingly, when we consider the reading passages (and their questions) in isolation, the vocabulary here has been growing somewhat more difficult. It appears that the test makers have compensated for fewer direct questions testing vocabulary knowledge by choosing passages and writing questions that feature such words more prominently than they did in the past. Update: I should make it clear that when I say "growing more difficult" I do not mean that there's a significant trend within each revision of the test, but that successive revisions appear to have increased the average complexity of the vocabulary in the reading passages.
In short, the forthcoming revision will certainly decrease the density of difficult words. But by no means is the SAT shying away from featuring passages that deploy complex and nuanced vocabulary. If history is any guide, we can probably expect that the difficulty of texts will be somewhat greater after the revision.
Whether or not this is a good change depends on your perspective. Why should we measure vocabulary directly? Presumably because we believe that it's a good proxy for the skill that we're really interested in: the ability to read and comprehend sophisticated texts. In that case, though, why not simply present such texts and ask questions about them?
The specifications for the new SAT lay out explicit metrics for assessing the difficulty of the texts they will use. This raises the question of how reading passages on previous tests measure up. More on that in a later post.
This is part 4 of my analysis of the upcoming changes to the SAT. Part 1. Part 2. Part 3.