30 Mar 2017

Most Test-Prep Material Sucks (Part 1)

Submitted by Karl Hagen
Yesterday, I picked up a recent edition of the Kaplan SAT Subject Test Literature guide (the 2015-2016 edition, which I gather is actually identical to the more recent edition) and went through the diagnostic test in it. Unfortunately, this mock test had so many flaws and outright errors, and was so unrepresentative of the actual Literature Test, that it's hard to see how it could diagnose much of anything. Just how terrible was it? Here's just one example:

Included in this test was an excerpt from John Dryden's play All for Love, and the test gives its publication date as 1678. (Dryden actually wrote and first staged the play in 1677, but that's close enough.) One question focuses on that date:

42. Judging by the year it was completed, this selection was written in what period of English history?

(A) Edwardian era
(B) Elizabethan era
(C) Victorian era
(D) Cromwellian era
(E) Georgian era

If you know your English history, you may see a problem already. But before giving my own reaction, I'll give you the explanation found in the answer key.

42. B
Though the SAT Literature test will not have many questions pertaining to history, students should be familiar with the major literary eras. The Elizabethan era (B), in which Dryden wrote Antony and Cleopatra, was also the age of Shakespeare. It ran from 1558-1603, the years of Elizabeth I's reign. The Edwardian era, which covered the reign of King Edward VII, ran from 1901-1910. This followed the Victorian era (C), 1837-1901, spanning the reign of Queen Victoria. Oliver Cromwell (D) ruled from 1653-1658, but his influence spanned several years before and after he served as Lord Protector of England. The Georgian era (E), 1714-1830, included the reigns of kings George I, II, III, and IV.

W! T! F! The problems with this question are so numerous that it's almost comical in its ineptitude. First, none of the answer choices are even grammatical in context. Every option needs to start with a definite article. Much more seriously, though, the keyed answer is wrong even by its own assertions. The key correctly says that the Elizabethan era refers to the reign of Elizabeth I and gives the right dates for her reign, but then it claims that Dryden wrote this play during that period even though it has already given an essentially correct date of 1678, long after the close of the Elizabethan era. Moreover, it gets the title of the play wrong, calling it "Antony and Cleopatra." That's Shakespeare's play on the same subject, and it seems as if whoever wrote this question was mixing up the two plays, which doesn't inspire much confidence in the question writer's knowledge of English literature, or even in their ability to stay focused on the single text in front of them.

Not only is the keyed answer wrong, but none of the choices given are correct. The one that comes closest is what the question writer terms the "Cromwellian era," although from the explanation, they seem to have in mind the period more normally called the Interregnum (1649-1660). The real answer is that Dryden's play was written during the Restoration (1660-1689), but that's not an option.

Even if we were to edit this question to provide a correct answer, it would still be a terrible question. The explanatory key claims that the Literature Subject Test "will not have many questions pertaining to history." That's a true statement only if "not many" actually means zero. Literary history is beyond the scope of the exam specification, although conceivably you could find something like it on the GRE subject test in literature. The types of questions that you can be asked about passages are not secret. They're clearly spelled out in the material that College Board provides students, and none of the question types involve knowing literary history.

Although this question was particularly egregious, the whole test was full of ambiguous, badly written questions.

The experience was infuriating but not particularly surprising. It's an open secret in the test-prep industry that the practice problems and mock tests found in the many commercially available books are significantly inferior to the material produced by the actual testing organizations. The problem is severe enough that on forums like College Confidential, the standard advice is to ignore books like those from Kaplan or Princeton Review and use only official material. And smaller test-prep outfits often make a virtue of necessity by boasting that they drill their students only on official test material.

If you want significant practice, however, you don't always have the luxury of avoiding commercial test-prep material. For the SAT Literature Subject Test, as for many other subject tests, College Board keeps in print a grand total of one old test. The ACT's official book has only three tests in it. Even the SAT Reasoning Test, which has traditionally had more official practice material available than other tests, may still not provide enough to satisfy some students. (At the time I'm writing this, College Board has made seven full-length practice tests available for the new incarnation of the SAT, along with additional practice problems at Khan Academy.)

The high demand for such practice goes a long way toward explaining the crappy quality of commercial test-prep material. Because there's a voracious market for content, people will buy these books regardless. Publishers pay no significant penalty in the marketplace if a book is badly edited, so why pay for a competent editor? Students studying on their own won't even notice most of the problems anyway. They're likely to interpret their struggles with a flawed problem as the result of their own shortcomings rather than the test writer's. And there appears to be no incentive for publishers to fix the problems. Princeton Review and Kaplan routinely repackage the same flawed material year after year, updating only the year on the cover. Given that both organizations run their own classes, and presumably give this practice material to thousands of students every year, you would think they could gather feedback from students and instructors to improve flawed questions, but that never happens. The only thing that provokes change is when the underlying test changes out from under them, as recently happened with the SAT.

That is not to say that commercial test-prep material is universally dreadful. It's possible to do a better job, and even to come close to the quality of the authentic material, if you know what you're doing and are willing to devote real resources to the task. The problem is that writing good items for a standardized test is hard, far harder than most people, even most teachers who write their own classroom tests, fully appreciate. Next time, I'll take up the question of just what standards we should expect from developers of test-prep material.

Comments

I work for a small test prep company, one that prepares all of its own textbooks for ACT and SAT prep, and I'm constantly running into this very problem. I seem to be the only critic of the books in the company, and the person in charge is wholly unconvinced that they need to be improved. It's a constant source of frustration.