“Whether a student will succeed at NIU, whether they will graduate within a reasonable amount of time with a degree in hand, those scores tell us nothing. We’ve done mathematical analysis looking at the various things that go into predicting whether a student will succeed at NIU and those scores have no value,” said Northern Illinois University President Lisa Freeman in a January 2020 interview, as the university announced it would drop standardized assessments from its admissions requirements and move to a “holistic application review,” while guaranteeing admission to any student with a cumulative GPA of 3.0 or higher. NIU is just one of many schools dropping standardized assessments from their admissions process, and this particular decision coincides with the release of a new peer-reviewed study, published in Educational Researcher, finding that high school GPAs are five times stronger than ACT scores in predicting college graduation.
In this article I’ll look at the movement of undergraduate and graduate programs away from standardized assessments, unpack new research into our assumptions about standardized test scores as predictors of college completion, and challenge readers to rethink “What’s in a grade?”
The policy shift at Northern Illinois follows what the Washington Post calls a turning tide as a “record number of schools have decided to accept all or most of their freshmen without requiring test results” amid a growing disenchantment with standardized tests. Between September 2018 and September 2019, they write, nearly 50 accredited colleges and universities announced that they were dropping SAT or ACT scores as an admissions requirement, bringing the total to 1,050, or nearly 40% of the total number of accredited universities that award bachelor’s degrees.
With public attention still on the 2019 college admissions bribery scandal, disenchantment with standardized tests also stems from research pointing to growing score gaps across demographic groups, “with the average scores of students from historically disenfranchised groups falling further behind students from more privileged families,” according to the National Center for Fair and Open Testing, which has found a consistent link between SAT and ACT scores and family income, mother’s education level, and race.
While it may be up to the governing bodies of major universities to make these decisions, Cecilia Estolano, Vice Chair of the University of California Board of Regents, has made her opinion clear: the tests use a “clearly flawed methodology that has a discriminatory impact,” adding, “we don’t need any more studies on the issue.”
This disillusionment isn’t confined to undergraduate institutions. In a movement dubbed GRExit, a wave of graduate programs has also dropped the GRE as an admissions requirement. According to ScienceMag, while most PhD programs still require a GRE score, life science programs are leading the GRExit push: it’s estimated that more than 50% of molecular biology PhD programs have dropped the GRE, along with one-third of neuroscience and ecology programs.
These decisions challenge public and faculty assumptions that the GRE is an accurate predictor of how students will perform in graduate programs, while a growing body of evidence seems to indicate that GRE scores aren’t correlated with how long it takes to complete a degree program, whether students will graduate, or their accomplishments in post-graduate fieldwork. A study released in January 2019 concluded that for physics PhD programs, which the study calls the least diverse of the sciences, GRE scores were not only insignificant in predicting program completion but, “aside from these limitations in predicting PhD completion overall, over-reliance on GRE scores in admissions processes also selects against underrepresented groups.”
The authors conclude, and I’ll quote at length:
“Excellence as a researcher is likely also a function of research mentoring and experience (both before and in graduate school) and socioemotional/noncognitive competencies (e.g., initiative, conscientiousness, accurate self-assessment, and communication), which scholars have linked to performance in other professional and educational domains. It is time to think creatively about both assessing these qualities alongside academic preparation as part of a holistic approach to graduate admissions and identifying strategies that connect prospective students to graduate programs in which they will thrive.”
Like Vice Chair Estolano in the University of California system, the Chair of the physics and astronomy department at the University of Pittsburgh agrees with the conclusions, saying, “Dropping the GRE just seems like a no-brainer. The test is both not really measuring something useful…and at the same time discriminating against students who we are trying to work very hard to increase the numbers of in our program.”
At the crest of this tide, a new study, authored by Elaine Allensworth and Kallie Clark of the University of Chicago and titled “High School GPAs and ACT Scores as Predictors of College Completion: Examining Assumptions About Consistency Across High Schools,” is, according to phys.org, the first of its kind to “explicitly test whether standardized assessments are comparable across high schools as measures of college readiness.” Allensworth and Clark examined over 55,000 graduates of Chicago Public Schools who immediately enrolled in college. The sample size and the diversity of school environments in the Chicago public school system make the study of particular interest.
The authors point out that all 50 states use standardized assessments as an indicator of college readiness on the assumption that these college-entrance exams are “strong and consistent measures of readiness” because they assess all students under identical conditions, time limits, and tasks, in contrast to the use of GPA and subjective course grades in an era of supposedly rampant grade inflation. And of course, there is concern about the alignment and comparability of school coursework to the content and skills measured by exams like the SAT and ACT.
Much of the research cited in the paper questions the objectivity and validity of exams as predictive of college outcomes. For example:
- A large 2009 study found that the relationship of SAT and ACT scores to college outcomes was small and sometimes not significant.
- A 2014 study concluded that students in test-optional colleges who did not submit test scores had similar or better college outcomes than students in the same colleges with similar high school GPAs.
- A 2004 study examined California university data and found that much of the relationship between SAT scores and college GPA could be attributed to high school poverty, school racial composition, and student background. In fact, this study concluded that SAT scores were such a proxy for student background and socioeconomic status that “the background variables themselves can provide much of the information contained in the SAT score” and suggests sarcastically that “admissions offices could admit better-prepared entering classes by giving explicit admissions preferences to high-SES students and to students from high-SES high schools,” which the author calls “affirmative action for high SES children.”
- And lastly, the authors point to a body of evidence that factors other than high school GPA can account for college success and explain the variation in the predictive ability of GPA across high schools; these factors go beyond college knowledge to include things like access to diverse environments and enrichment opportunities outside the classroom that help students respond to new and challenging situations.
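As an aside, the predictive-strength comparisons these studies report often come down to measuring how strongly a continuous measure (GPA, a test score) correlates with a binary outcome (graduated or not). A minimal sketch of that idea, using entirely invented data rather than anything from the studies above:

```python
# Point-biserial correlation: Pearson's r between a continuous predictor
# and a 0/1 outcome. All student data below is synthetic, for illustration only.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical students: (high school GPA, ACT score, graduated college?)
students = [
    (3.8, 24, 1), (3.5, 30, 1), (3.9, 21, 1), (2.1, 28, 0),
    (2.4, 26, 0), (3.6, 19, 1), (2.0, 23, 0), (3.7, 27, 1),
    (2.3, 31, 0), (3.4, 20, 1), (1.9, 25, 0), (3.2, 22, 1),
]
gpas  = [s[0] for s in students]
acts  = [s[1] for s in students]
grads = [s[2] for s in students]

r_gpa = pearson(gpas, grads)  # strong in this invented sample
r_act = pearson(acts, grads)  # weak in this invented sample
```

In this toy sample, GPA separates graduates from non-graduates almost perfectly while ACT scores barely track the outcome, which mirrors (but does not reproduce) the kind of gap in predictive power the research describes; the actual studies use far richer models that control for school and student background.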
About this body of evidence and the results of their own statistical analysis, Allensworth and Clark conclude:
“There is little evidence that students will have more college success if they work to improve their ACT score because most of the signal from the ACT score seems to represent factors associated with the student’s school rather than the students. In contrast, students’ efforts to improve their high school GPAs would seem to have considerable potential leverage for improving college readiness.”
And they recommend that schools rely less on test scores in accountability systems and in school improvement plans, adding that the factors that may work to improve test scores are not necessarily the same that improve grades and college readiness.
So doesn’t this work point to the centrality and importance of grades and GPAs in the work of teachers and schools? Doesn’t this have implications for Teachers Going Gradeless and those, like myself, who have worked to diminish and decenter grades in their classrooms? I think the answer is yes, but maybe not in the way that the authors of the study propose.
Clark, one of the co-authors, was quoted in a separate interview as saying, “Extensive time spent preparing for standardized tests will have less pay-off for postsecondary success than effort put into courses, as reflected in students’ grades.” This first point, I think, is where the research confirms what educators have long understood about the flaws of deriving predictive value from high-stakes standardized assessments, with support for this kind of singular decision-making originating primarily from the marketing departments at the College Board and ACT.
Where I think the researchers overextend themselves is in the second half of this claim, where Clark continues:
“The more that middle and high school educators can support strong engagement in school — helping students overcome barriers to engagement in class, helping them succeed at different types of academic tasks, so that they earn strong grades — the better these educators are supporting academic skills broadly and preparing students for college.”
The question we should be asking is “Well, what’s in a grade?” The qualities Clark mentioned — engagement, overcoming barriers, succeeding at different academic tasks — do not require grades, and in some contexts grading practices actually diminish engagement, erect barriers, prevent students from demonstrating success, and contribute to inequitable outcomes. In fact, the authors also conclude that “students are more likely to graduate college if they come from some high schools rather than others. These school effects may be the result of more rigorous academic programs at some high schools, different non-academic supports for preparing students for college, or simply a tendency of families with more resources for college to send their students to particular schools.”
For as much as their statistical analysis linked GPA to college completion, it seems the authors also rediscovered another piece of conventional wisdom: students from families with more resources for college are more likely to have higher GPAs because those families are able to send their kids to high schools with a better track record of college attendance and completion, similar to the phenomenon in our discussion of SAT scores as a predictable proxy for socioeconomic status.
But we know from socioeconomic data that it’s just as likely these families come from educated backgrounds themselves, ensuring that they understand how to navigate the systems of school and legacy admissions; that they live in safe communities with stable family situations; and that they have access to preschool and summer enrichment, to personal libraries of books and the internet at home, to adequate and nutritious meals, healthcare, braces, and glasses: all of the benefits that socioeconomic status buys you in America. Meanwhile, students with low GPAs are more likely to struggle with mental and physical health, to live in segregated, underserved, and historically disadvantaged communities, to attend school with undiagnosed learning disabilities, to be suspended and expelled at higher rates, or to deal with inadequate transportation and transient housing situations. If motivation, engagement, and success in school, as measured by GPA, are all positively correlated, I guess it doesn’t come as a surprise to me that our education system selects for those with the most advantages and privileges.
When the authors write that decreasing reliance on test scores will increase the salience of GPA in college admissions, I think we’re just as correct to interrogate the factors that contribute to GPA as a measure of school success. Since we know the limitations and harms of grades and understand how GPA measures individual and school factors beyond the curriculum, we need to do better than a single averaged number to describe student abilities. It turns out, we can do better.
The Mastery Transcript provides an example of exactly such a model for how college admissions could look different without reducing students to just a number. The Mastery Transcript Consortium is a partnership between member schools and the Mastery Transcript team to develop mastery-based rubrics and curate an alternative representation of a student’s interests and abilities beyond a single numerical average. The sample on their website visually captures and communicates students’ strengths and progress in areas for which a GPA is a poor proxy anyway: things like citizenship and decision-making, problem-solving and critical analysis, communication and self-expression, and social, cultural, and historical fluency. The Mastery Transcript also features selected work by the individual student to support their credentials in those areas. Even for this sample student, a much more holistic and authentic picture of their interests and abilities emerges. In both look and purpose it seems more like a resume than a traditional transcript of credits.
Assessment scores and GPAs alike have shown themselves to be outmoded tools from an age in which the linkage between our values, their reductive measurements, and our children's future success seemed obvious. Now the only obvious thing seems to be a clear movement away from discriminatory and dehumanizing calculations that once gave institutions a kind of technical cover to select for privilege.
If we desire more than a number from our kids, we have to figure out what it is we value and develop a means of communicating those values, not compromise our values for the convenience and comfort of those with a vested interest in keeping our kids at arm’s length.
I’ve told this story in several places now, but over the last several years, Iowa high schools have largely transitioned away from reporting class rank. Where it was once assumed vital to the college admissions process that some chosen student be number one and everyone else fall in place behind them, this year the state Board of Regents eliminated class rank from its Regents Admissions Index calculation, which determined a cutoff score for automatic acceptance into the state’s public universities. It’s just not the case that we have to settle for what colleges demand of us and our kids. Iowa high schools changed their practice, and the university system was forced to listen.
I for one can’t wait to see what vital and sacred number we make them drop next.