The Data on Test-Optional Outcomes — What Actually Happens to Non-Submitters

You've heard the arguments on both sides. Test-optional is a gift to students. Test-optional is a marketing scam. Scores still matter. Scores are dead. Everyone has a take, and most of those takes are built on vibes rather than data. The actual research on what happens to students who apply without test scores is more nuanced, more interesting, and more useful than either camp admits. Here's what the numbers say, what they don't say, and what you should actually take from them.

The Reality

The most cited research on test-optional outcomes comes from William Hiss and Valerie Franks, whose line of work culminated in Syverson, Franks, and Hiss (2018), "Defining Access: How Test-Optional Works," published through the National Association for College Admission Counseling. That study examined 28 institutions over multiple years, covering 955,000 student records. The headline finding was striking: students who did not submit test scores had college GPAs only 0.05 points lower than students who did submit, and their graduation rates were within 0.6 percentage points of submitters. In practical terms, non-submitters performed almost identically to submitters once enrolled.

That finding matters because it addresses the central fear behind the test-optional debate — the worry that students who don't submit scores are somehow less prepared for college-level work. The Hiss data says no. At the 28 schools in the study, the academic outcomes were essentially the same. These weren't marginal students sneaking through a loophole. They were students whose high school GPAs, course rigor, and other application components predicted college success just as well as a test score would have.

But before you take that finding and run with it, you need to understand what's underneath it. The 28 schools in the study were not a random sample of American higher education. They were schools with established test-optional policies — some going back decades. Bowdoin has been test-optional since 1969. Bates since 1984. These are institutions that built their entire admissions infrastructure around evaluating students without scores. Their admissions officers have years, sometimes decades, of calibration. The outcomes at these schools tell you what's possible when a school genuinely commits to test-optional admissions. They don't necessarily tell you what happens at a school that went test-optional in 2020 because of a pandemic and hasn't figured out its evaluation rubric yet.

The Play

When you look at admit rate data, the picture gets more complicated. At many long-term test-optional schools, the admit rates for non-submitters are similar to those for submitters. Belasco, Rosinger, and Hearn (2015) examined the effects of test-optional policies and found that these policies did increase applications from underrepresented groups, but the admit rate dynamics varied significantly by institution. Some schools admitted non-submitters at roughly the same rate. Others showed meaningful gaps. The variation depends on the school's selectivity, its institutional priorities, and how long the policy has been in place.

FairTest.org, which maintains the most comprehensive database of test-optional schools, characterizes non-submitter outcomes as broadly positive across schools with mature policies. But "broadly positive" is doing a lot of work in that sentence. It means the aggregate trend looks good. It doesn't mean every school, every year, for every type of student. You need to look at the data school by school, which means pulling Common Data Sets and reading each institution's published admissions statistics. The aggregate is reassuring. The individual school data is where your strategy lives.

Here's where the selection effect comes in, and it's the most important caveat in this entire discussion. Students who choose not to submit test scores are not a random group. They're students who looked at their score, looked at the school's middle-50% range, and made a strategic calculation. The ones who skip submission tend to be students with strong GPAs, solid extracurriculars, and the awareness to know that their score would hurt rather than help. In other words, non-submitters self-select well. They're making a smart play, and that smart play inflates the outcomes data. When you see that non-submitters graduate at the same rate as submitters, part of what you're seeing is that strategically savvy students do well in college — which isn't exactly a surprise.
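If you want to see how much self-selection can distort a comparison like this, here's a minimal simulation. Every distribution, cutoff, and variable name in it is invented for illustration; nothing is calibrated to the studies discussed above.

```python
import random

# Toy simulation of the selection effect behind test-optional outcome data.
# All numbers here are invented for illustration, not taken from any study.
random.seed(42)

students = []
for _ in range(100_000):
    readiness = random.gauss(0, 1)                  # latent college readiness
    score = readiness + random.gauss(0, 0.8)        # test score: noisy proxy
    rest_of_app = readiness + random.gauss(0, 0.8)  # GPA, rigor, activities
    students.append((readiness, score, rest_of_app))

LOW_SCORE = -0.3   # hypothetical "below the middle-50%" line
STRONG_APP = 0.3   # hypothetical "rest of the file is strong" line

low_scorers = [s for s in students if s[1] < LOW_SCORE]
strategic_non_submitters = [s for s in low_scorers if s[2] > STRONG_APP]

def mean_readiness(group):
    return sum(s[0] for s in group) / len(group)

print(f"All low scorers:          {mean_readiness(low_scorers):+.2f}")
print(f"Strategic non-submitters: {mean_readiness(strategic_non_submitters):+.2f}")
# The strategic group comes out far stronger than the average low scorer,
# so its good college outcomes can't be read as proof that any student
# could skip the score without cost.
```

Run it and the strategic non-submitters land well above the typical low scorer on the latent readiness measure. That gap is exactly the inflation you should mentally subtract when you read non-submitter outcome data.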

This doesn't mean the data is useless. It means you need to read it correctly. The Hiss study doesn't prove that any student can skip scores and be fine. It proves that students who strategically choose not to submit — because they have other strengths that compensate — tend to do well. That's a different, narrower claim, and it's actually more useful for you because it tells you exactly what makes test-optional work: it works when the rest of your application is strong enough to carry the file.

The Math

Let's get specific about what the numbers can and can't tell you. The Syverson, Franks, and Hiss (2018) study found the 0.05 GPA gap and the 0.6 percentage point graduation rate gap across its 28-school sample. Those are small numbers. For context, a 0.05 GPA difference is the gap between a 3.25 and a 3.30. A 0.6 percentage point graduation rate difference means that out of 1,000 students, 6 fewer non-submitters graduated. Both gaps sit within normal variation for most institutional metrics, and the researchers treated them as practically insignificant.
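To make those magnitudes concrete, here's a back-of-the-envelope check. The cohort size and baseline graduation rate are assumptions chosen for illustration, not figures from the study.

```python
# Back-of-the-envelope check on the reported gaps. The cohort size and
# baseline graduation rate below are illustrative assumptions, not
# figures from Syverson, Franks, and Hiss (2018).

cohort = 1_000               # hypothetical entering class
baseline_grad_rate = 0.85    # assumed graduation rate for submitters

gpa_gap = 0.05               # reported GPA difference
grad_gap_pp = 0.6            # reported gap, in percentage points

submitter_grads = cohort * baseline_grad_rate
non_submitter_grads = cohort * (baseline_grad_rate - grad_gap_pp / 100)

print(f"GPA: 3.30 vs. {3.30 - gpa_gap:.2f}")
print(f"Graduates per {cohort}: {submitter_grads:.0f} vs. {non_submitter_grads:.0f}")
# Output: 850 vs. 844, i.e. six fewer graduates per thousand students.
```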

Belasco, Rosinger, and Hearn (2015) focused on a different question: what happens to the applicant pool when a school goes test-optional. They found that test-optional policies increased application volume, with larger increases reported from students of color and first-generation students. But — and this is the nuance that gets lost — they also found that the increase in applications didn't always translate into increased enrollment of those groups. Some schools saw real diversity gains. Others just got more applications and admitted the same profile of students they always had, minus the test score requirement.

NACAC's longitudinal data on test-optional outcomes supports the general finding that non-submitters perform comparably in college, but their data also highlights the yield question. Some schools report higher enrollment yield from test-optional admits — meaning students admitted without scores are actually more likely to enroll than students admitted with scores. The theory is that students who apply test-optional are choosing the school based on fit rather than prestige-chasing with a high score, so they're more committed when admitted. This is good news for schools looking to fill seats with engaged students, and it's good news for you if it means test-optional admits aren't treated as second-class enrollees.

But here's the question the data cannot answer, and it's the one that matters most for your individual decision: what would have happened if the non-submitters had submitted? We don't know the counterfactual. Maybe a student who was admitted test-optional with a 3.8 GPA would also have been admitted with their 1180 SAT score. Maybe they wouldn't have been. The data shows outcomes for students who made a choice. It can't show outcomes for the choice they didn't make. This is a fundamental limitation of observational research, and anyone who tells you the data "proves" that going test-optional has no cost is overreading the evidence.
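The limitation is easiest to see as a data structure. In the sketch below, the applicant and every recorded value are hypothetical; the point is which field stays empty.

```python
# The counterfactual problem in one record. All values are hypothetical.
# Each applicant has two potential outcomes, but any real dataset
# observes only the one matching the choice the student actually made.

applicant = {
    "hs_gpa": 3.8,
    "sat": 1180,
    "admitted_if_submitted": None,       # unobserved: they withheld the score
    "admitted_if_not_submitted": True,   # observed: applied test-optional
}

# Observational studies compare students who made different choices.
# They never see both fields filled in for the same person, which is
# why the outcome data cannot price the choice itself.
print(applicant)
```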

What we can say is this: at schools with mature test-optional policies — schools that have been doing this for five or more years, have published outcome data, and have admissions offices calibrated to evaluate files without scores — the evidence consistently shows that non-submitters do fine. They get in, they succeed academically, and they graduate. The system works. At schools that adopted test-optional policies recently, especially during or after the pandemic, the data is thinner, the policies are less settled, and the outcomes are less predictable. This doesn't mean you should avoid going test-optional at newer schools. It means you should do it with your eyes open and your application especially polished.

What Most People Get Wrong

The first mistake is treating the Hiss study as a universal guarantee. It's not. It's a study of 28 specific schools with established policies. The finding that non-submitters perform comparably is robust within that sample, but extending it to every school with a test-optional checkbox is a leap the data doesn't support. You need to evaluate each school individually. Check its Common Data Set. Look at what percentage of admitted students submitted scores. Look at how long the policy has been in place. A school where 55% of admits submitted scores is a different environment than a school where 88% submitted.
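If you want to systematize that school-by-school check, a sketch like the one below works. The CSV file, its column names, and the thresholds are all hypothetical; Common Data Set figures (section C9 reports the share of enrolled first-years who submitted scores) have to be assembled by hand from each school's published documents.

```python
import csv

# School-by-school screen. The file, columns, and thresholds here are
# hypothetical; you build the CSV yourself from each school's Common
# Data Set and published admissions statistics.

with open("cds_summary.csv", newline="") as f:
    schools = list(csv.DictReader(f))
    # expected columns: name, pct_submitted, years_test_optional

for school in schools:
    pct = float(school["pct_submitted"])
    years = int(school["years_test_optional"])

    if pct >= 80:
        note = "scores still dominate; submit if you possibly can"
    elif years >= 5 and pct <= 60:
        note = "mature policy; non-submitters are a normal part of the pool"
    else:
        note = "newer or mixed policy; read the school's own data closely"

    print(f"{school['name']}: {pct:.0f}% submitted, "
          f"test-optional {years} yrs: {note}")
```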

The second mistake is ignoring the selection effect. The students in the Hiss study who didn't submit weren't randomly assigned to the non-submission group. They chose it. And they chose it because they were the kind of students who make strategic decisions about their applications — which is exactly the kind of student who tends to succeed in college. The data tells you that strategic non-submission works. It doesn't tell you that non-submission is universally harmless. If you're going test-optional, you need to be one of those strategic students. That means having a strong GPA, meaningful extracurriculars, and an application that doesn't need a test score to make the case for your admission.

The third mistake is dismissing the data entirely because it's imperfect. Some people — often people who sell test prep — argue that because we can't prove the counterfactual, the test-optional data is meaningless. That's too cynical. The Hiss study is one of the largest observational studies in admissions research. The Belasco team's work is published in peer-reviewed journals. NACAC's data represents broad institutional reporting. These aren't perfect studies, but they're the best evidence we have, and they consistently point in the same direction: at schools with genuine test-optional commitments, students who don't submit scores perform academically at the level of students who do.

The honest summary is this. Test-optional admissions works well at schools that have built their processes around it. The research, most notably Syverson, Franks, and Hiss (2018), shows comparable outcomes for non-submitters at these institutions, and the data has been consistent over two decades of study. At schools that are newer to test-optional policies, the outcomes are less documented and less predictable, which doesn't mean the policy is fake — it means you have less evidence to rely on and should compensate with a stronger application. The data supports test-optional as a real option. It doesn't support it as a magic shield that protects any application from any weakness. Like every other strategic tool in your admissions toolkit, it works when you use it well.


This article is part of the Test-Optional Strategy series at SurviveHighSchool.

Related reading: The Test-Optional Essay Trap — Don't Mention Your Score Decision, Your Test-Optional Game Plan — The Full Decision Tree, Schools That Actually Mean "Test-Optional" — The Honest List