It is traditional in schools to evaluate students through assessments, usually in the form of exams. Most colleges use assessments such as the SAT or ACT to evaluate incoming freshmen. For most students, scores on the SAT or ACT largely determine which college they will end up attending. More prestigious colleges set SAT or ACT requirements in place to ensure that only the brightest students are accepted. But are these traditional educational assessments accurate? Research psychologist Gerald Bracey argues in his article “Test Scores in the Long Run” that they are not as accurate as most would think. After carefully reviewing the information presented in the article, and after consulting outside sources, it is evident that Gerald W. Bracey makes a valid argument against traditional educational assessments, and I fully agree with Bracey that such assessments are not an indication of future success or failure.
Bracey’s most powerful evidence is the statistics and surveys he uses to support his thesis. Bracey draws on information about careers and their connection to pay raises to show that test scores mean nothing once a person has entered the workplace. He suggests that:
Job success doesn’t involve taking tests. People who get their wages increased please their supervisors, work well with groups, treat customers and colleagues politely, show up reliably and on time, display a sense of humor, and in some jobs might rely more or less heavily on critical thinking, perseverance, motivation, enthusiasm, and “emotional” intelligence. No doubt that is why supervisors’ ratings of job performance show correlations with test scores as low as those shown by earnings. (Bracey 637+)
This means that it doesn’t matter how well a student or worker does on any particular test, because so many other factors in the workplace and on college campuses determine success. This is a common view, one held also by Abigail Thernstrom of the Manhattan Institute, who, reporting on the SAT I, says: “The basic SAT is viewed . . . as ‘akin to an IQ test, a measure of innate intelligence’” (Thernstrom 42+). A person who scores highly on a test in a particular field may know what to do, but could lack the ability to use that knowledge effectively. This is the point Bracey stresses most successfully, and it is his most powerful argument.
Bracey’s article has another great strength: its structure and coherence allow readers to follow his ideas easily. He begins by discussing how colleges use the SAT and ACT for admissions, and continues by examining the problems with test scores: why test scores don’t work when predicting job performance, and how standardized testing is biased against women and minorities. In every section, he backs up his ideas with hard evidence, which adds legitimacy to his claims. Michael Harvey, in The Nuts and Bolts of College Writing, says that “Good structure helps make an essay easier to follow” (Harvey 1). This may seem oversimplified, but the more smoothly an essay flows, the more interested the reader will be, and the more thoroughly the reader will understand it. An essay that is difficult to follow will not hold the reader’s interest, and can confuse the reader and fail in its purpose. A good example of how easily poor structure can undermine an otherwise strong essay is “Test Flight” by Marcia Yablon of the New Republic. Yablon discusses how Mt. Holyoke College stopped demanding SAT scores, then how that decision will affect the college’s ranking in U.S. News, and how dropping the SAT requirement affects a college overall. Only then does she report Mt. Holyoke College’s view of colleges that drop the requirement to gain a better U.S. News ranking (Yablon 24+). That last point would have been better placed before the discussion of the effects of dropping the requirement, in order to help the essay flow more smoothly and without interruption. A simple mistake in structure had me questioning the writer’s credibility, and forced me to read the article twice before fully understanding it.
Unfortunately, Bracey fails to bring up several key points about standardized testing that could have made his argument much more powerful and convincing. He never mentions that many colleges are doing away with traditional assessment requirements such as the SAT and ACT. According to Rebecca Zwick of Phi Delta Kappan, many colleges, including the University of California, have found that standardized tests such as the SAT are biased in favor of white students and have done away with the requirement in order to increase minority enrollment (320+). An example of these reforms within colleges would have strengthened Bracey’s point dramatically. Another very interesting point that Bracey did not address is how wrong SAT predictions can actually be, as Rebecca Zwick reports: “. . . SAT scores tended to predict higher college grades than were actually attained by African American, Latino, and American Indian students and lower grades than were actually attained by Asian American and white students” (320+). This is an interesting finding, one that would have benefited Bracey’s argument that standardized test scores, specifically SAT scores, are inaccurate.
Another problem with Bracey’s article is that it is very one-sided; he never once brings up any arguments in favor of traditional educational assessments that he could then refute, nor does he offer any solutions to the race and gender bias of standardized tests. One very interesting pro-SAT point Bracey failed to mention is that, according to the College Board, women and minorities make up the majority of the highest percentile of SAT math scores (Techniques 15). This contradicts Bracey’s argument that standardized testing is biased against women and non-whites. Bracey should have researched this statistic and attempted to refute the finding in order to strengthen his own thesis. This is a small problem that could have been easily fixed. More importantly, the question of how standardized tests could be changed to remove bias is never answered. Ben Wildavsky of U.S. News & World Report asks how certain minorities should be compensated on tests that favor whites, or how an overachieving student should be rewarded for doing better on a standardized test than predicted (53+). Although Bracey touches upon how tests such as the SAT and ACT favor richer students who can afford tutors, he never develops the point. It is a very important one; Dan Seligman of Forbes wrote an entire essay on the subject, entitled “High Noon for the SAT.” Seligman makes a pointed argument about class discrimination that Bracey should have addressed: “The test discriminates against dumb kids. The dumb kids wind up in less prestigious colleges and less prosperous careers. The critics can’t quite utter that complaint, so they argue that the test discriminates against poor kids” (78+). Bracey’s argument could have been much more powerful had he raised this issue and explained why pro-SAT advocates like Seligman are wrong or their arguments invalid.
Another interesting point, raised in reporter Aldric Hama’s article “Demographic Change and Social Breakdown: The Role of Intelligence” in Mankind Quarterly but absent from Bracey’s article, is the direct relation between mean intelligence and crime:
There was a positive correlation between percentage of blacks and rate of crime and a negative correlation between percentage of blacks and SAT scores. In contrast, a positive correlation was observed between percentage of whites and SAT scores and a negative correlation between percentage of whites and crime. A negative correlation was observed between percentage of Asians and SAT scores and rate of crime. (Hama 41+)
This is just one of many examples of how mean intelligence has been linked to crime and poverty, and it should have been addressed by Bracey.
Bracey’s article, although coherent and supported by interesting and informative evidence, could have been much better. While its coherence and structure made it easy to follow, I believe Bracey should have taken the time to research the other side of the argument in order to refute the opposing position and thus strengthen his own. Still, Bracey’s article was strong enough to convince me that standardized testing has problems and does not work well enough to determine future success.
Works Cited

Bracey, Gerald W. “Test Scores in the Long Run.” Phi Delta Kappan 82.8 (2001): 637+.
Hama, Aldric. “Demographic Change and Social Breakdown: The Role of Intelligence.”
Mankind Quarterly 40.1 (1999): 41+.
Harvey, Michael. The Nuts and Bolts of College Writing. Online.
Seligman, Dan. “High Noon for the SAT.” Forbes 167.11 (14 May 2001): 78+.
Thernstrom, Abigail, and Stephan Thernstrom. “Admissions Impossible.” National Review
53.5 (2001): 42+.
Yablon, Marcia. “Test Flight.” New Republic 223.18 (2000): 24+.
Zwick, Rebecca. “Eliminating Standardized Tests in College Admissions.” Phi Delta
Kappan 81.4 (1999): 320+.
“SAT Math Scores Reach Record High.” Techniques: Connecting Education & Careers
75.8 (2000): 15.