Will MIT change the test-optional debate? (opinion)


Someone once said (OK, maybe it was me) that when Harvard University sneezes, the rest of higher education catches a cold. Does the same apply to the Massachusetts Institute of Technology?

Maybe we’ll get an answer to that question soon. MIT’s announcement that it will reinstate its requirement that applicants submit test scores next year has already sparked reactions both from those who believe it’s time to do away with admissions tests altogether and from advocates for the testing industry.

Is the MIT decision the start of a trend, an anomaly, or neither? We probably already have an answer, at least in one respect: MIT’s announcement has not prompted a wave of other institutions reversing their test-optional policies. MIT’s decision was thoughtful and appropriate for its mission, although test skeptics may disagree with its decision, its reasoning, or its interpretation of the evidence. But MIT’s decision doesn’t translate to lots and lots of other colleges and universities. This is not the beginning of the end of test-optional admission.

Of course, there are some larger reasons why test-optional policies won’t go away. One is the decision by the University of California and California State University systems to no longer use test scores in their admissions processes. As a result, colleges that recruit heavily in California will find it difficult to reinstate test score requirements. But students outside California may also push back against colleges re-requiring test scores. The Ivies may be able to get away with it, but two years ago, when the pandemic accelerated the number of colleges going the test-optional route, an admissions dean friend speculated that colleges further down the food chain might find that students simply refuse to apply to colleges that are not test optional.

Then there is the elephant in the room. Perhaps the only force stronger than the desire to use test scores as an insurance policy in assessing students’ academic preparation is the pressure on colleges to increase applications and lower acceptance rates. Can colleges afford to lose application numbers in a climate where selectivity is revered as a proxy for quality?

I’m not particularly interested in weighing in on the testing culture wars. I don’t think admissions tests are bad, just flawed. But I am also disturbed by the worship and misuse of test scores.

“Ethical College Admissions” is always on the lookout for bigger-picture questions, and there are a few here I want to look into.

One is the idea that test scores are an engine of diversity. I’ve seen proponents of admissions tests make this argument many times, and I wonder whether there is any evidence for it or whether it’s a suburban legend.

One argument I’ve seen for using test scores holds that without them, preferences like legacy admission carry more weight. Maybe, but that’s not evidence that test scores increase diversity.

A more common articulation is the diamond-in-the-rough argument: that test scores identify able students from diverse backgrounds who would otherwise be overlooked. The diamond-in-the-rough argument was one of the justifications for switching from College Board exams to the SAT nearly 100 years ago, at a time when the Ivies and other elite Northeastern colleges were looking to expand their student bodies geographically and enroll more public school students. That was also a time when the SAT was considered an objective measure of intelligence.

Today, we understand that test scores are strongly correlated with family income, and it is far from clear exactly what the tests measure. But the diamond-in-the-rough argument persists.

So is there any proof that the diamond in the rough is a real phenomenon? I reached out to Oregon State’s Jon Boeckenstedt, who has taken on the diamond-in-the-rough argument on Twitter and is also a guru when it comes to aggregating and analyzing data on college admissions and higher education in general. Jon knew of no data supporting the diamond-in-the-rough hypothesis, but he also suggested that the argument is tautological on some level, in that high-test-scoring diamonds in the rough are the only diamonds highly selective colleges tend to take a chance on.

I also reached out to Stuart Schmill, MIT’s dean of admissions, to ask if he had data on how many MIT students qualify as diamonds in the rough. Schmill responded, first admitting that he is a regular reader of “Ethical College Admissions.” He said he can’t give a specific number, but he is confident there are students whose test scores helped MIT admit them. He also stated that many MIT students have expressed their belief that they fall into this category.

His answer made me wonder whether we all have the same definition of a diamond in the rough. In an op-ed for The Washington Post, Bob Schaeffer, executive director of FairTest: The National Center for Fair and Open Testing, defined diamonds in the rough as “applicants with modest high school achievements but high SAT scores,” noting that these students are more likely to be affluent Asian or Caucasian males. MIT apparently doesn’t accept those students, which makes me think it uses a different definition (the use of the term “diamond in the rough” was probably mine, not Stu Schmill’s). His response suggests that MIT uses test scores not to identify students with poor academic performance but high test scores, but rather as validation for high-performing students from educational backgrounds without much access to advanced coursework or high-level math.

I am still wondering whether anyone can point me to evidence that diamonds in the rough actually exist.

There is another question I want to consider: Under what conditions should colleges use test scores? I suspect my mastery of the obvious will become apparent in my answers.

  1. Ensure that test scores add predictive validity to admissions decisions. The test-optional movement has shown us that it is possible to make decisions without test scores, although there are concerns about grade inflation following COVID-19. There are a number of colleges for which test scores were an insurance policy rather than an added value in decision making, and one article I saw suggested that only about half of the institutions that require test scores have actual validity studies to support their use. At best, test scores provide a small increase in predictive value over a high school transcript alone, and that value is only for predicting freshman grades. Shouldn’t we be looking for tools that predict success in college and beyond?
  2. Don’t fall for the false precision that test scores convey. Test scores are too often treated as precise measures, which they are not. Do we measure the things we value, or do we value the things we can measure? The standard error of measurement for each section of the SAT is more than 30 points, so there is no meaningful difference between a 600 and a 630. Test score cutoffs for institutional scholarship consideration or for National Merit Scholarship eligibility (even though the College Board is a National Merit partner) are an inappropriate use of testing.
  3. Consider test scores in context. Note that test scores, even if valid, may not be equally predictive for different groups of students. A 2008 NACAC Commission on Testing report cited research showing that test scores over-predict first-year grades for some minority students and may under-predict first-year grades for some female students. Then there’s the impact of test prep. If two students have the same test scores, and one had hours of expensive test prep and the other didn’t, those scores don’t mean the same thing. Special care must be taken to ensure that test scores do not become a barrier to access for students whose high school records are otherwise promising. A year ago, I heard about (and wrote about) a leading public institution denying admission to students from underprivileged backgrounds it wanted to admit because they submitted test scores, not realizing that submitting scores was optional. In my opinion, there is no excuse for denying applicants admission based on test scores alone.
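The standard-error point in item 2 can be made concrete with a short sketch. This is illustrative only: it treats each observed score as a rough one-SEM band around the true score, which is an assumption for the sake of the example, not an official College Board comparison rule.

```python
SEM = 30  # approximate standard error of measurement per SAT section

def score_band(score: int, sem: int = SEM) -> tuple[int, int]:
    """Return a rough one-SEM band around an observed score."""
    return (score - sem, score + sem)

def meaningfully_different(a: int, b: int, sem: int = SEM) -> bool:
    """Treat two scores as distinguishable only if their bands don't overlap."""
    lo_a, hi_a = score_band(a, sem)
    lo_b, hi_b = score_band(b, sem)
    return hi_a < lo_b or hi_b < lo_a

print(score_band(600))                   # (570, 630)
print(meaningfully_different(600, 630))  # False: the bands overlap
print(meaningfully_different(600, 700))  # True: the bands are disjoint
```

Under this sketch, a 600 and a 630 fall inside each other’s measurement bands, which is exactly why treating the 30-point gap as a real difference (or as a scholarship cutoff) misreads what the score can tell you.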

I don’t have test scores to rely on as (controversial) predictive tools, but that won’t stop me from predicting that the MIT decision won’t end the testing culture wars.

