The Test is Meant to be Hard

There has been some chatter that the SAT administered a few weeks back was very difficult.

This was the first time the new test went "large," administered in the States as well as internationally.  By making the questions a bit harder, College Board gives itself more room to play with at the top of the curve, keeping the scoring comparable to past tests - a core objective of any metric.  If the test is objectively hard, it will be scored on a more lenient scoring table - a bit like the handicap/slope rating of a golf course.

If the test were really easy, loads of kids would have gotten full marks on the math, and you can't have 10% of test-takers earn a perfect 800. The test being "hard" is kind of the point: if most kids got 90% correct, you would end up with a terrible metric, akin to an inflated GPA.  While the scoring on the new test is indeed a bit of a black box, the headline scores will be statistically massaged to be directly comparable with past iterations.  Even last year the complaint was that the test seemed harder than the practice tests, yet most scores came in as expected.
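College Board does not publish its equating tables, so the exact mechanics are opaque. But the general idea can be sketched. Below is a minimal, hypothetical illustration of linear equating - mapping raw scores from a harder form onto the scale of a reference form so the distributions line up. The data and function are our own invention, not College Board's method.

```python
# Hypothetical sketch of linear equating: map raw scores from a new
# (harder) test form onto the scale of a reference form so the two
# score distributions share the same mean and standard deviation.
# All numbers are invented; College Board's actual procedure is more
# sophisticated and is not public.
from statistics import mean, stdev

def linear_equate(raw_score, new_form_scores, ref_form_scores):
    """Map a raw score on the new form onto the reference form's scale."""
    a = stdev(ref_form_scores) / stdev(new_form_scores)
    b = mean(ref_form_scores) - a * mean(new_form_scores)
    return a * raw_score + b

# On a harder form, the same raw score equates to a higher scaled score:
new_form = [30, 35, 40, 42, 45, 48, 50]  # raw scores, harder form
ref_form = [34, 39, 44, 46, 49, 52, 54]  # raw scores, reference form
print(linear_equate(40, new_form, ref_form))  # 44.0 - a "more lenient" table
```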

For any of our many students who sat this test, let us know how you did when the scores come out.

Is this the end of "Test Optional"?

On February 5th, Dartmouth announced that it was reinstating the SAT/ACT requirement for domestic applicants.  Less than three weeks later, Yale followed suit with a rather confusing "test flexible" policy.  These moves surprised few and had been telegraphed well in advance.  Yale even has a miniseries of podcasts queued up detailing the rationale behind the move.

Both schools cite internal studies showing that the test-optional policy was discouraging economically disadvantaged students from submitting scores that could have helped them gain admission, even when those scores fell below the 25th-75th percentile range of accepted students.  They were at pains to say that all scores are considered in the context of the student's situation; hence a 1350 from a kid at a rough urban school could be seen as demonstrating even more academic potential than a 1500 from a student in Greenwich, Connecticut.  Further, both studies showed that test scores are a better predictor of academic success than grades, surprising few who have noted the gross grade inflation now prevalent.

As we've written previously, "test optional" at the most competitive schools has proven to be a bit of a facade.  Well over 60% of accepted students at the 40 most "rejective" American universities with a test-optional policy submitted SAT scores.  These moves by Dartmouth and Yale simply said the quiet part out loud: universities want to see as much data on applications as possible, and test scores are a useful metric in combination with the many other factors considered.

Other schools have "reaffirmed" their commitment to sticking with test optional moving forward.  Yet 78% of successful applicants to the University of Michigan submit scores, as do 69% of applicants to Columbia.  How can something that a large majority do be considered optional?  Great grades, sports, music, and school activities are also technically "optional," are they not?

Speculation is rife about which schools may soon follow.  Harvard (83% submitting) has remained test optional, but what about Brown (81%), Duke (93%), or UChicago (84%)?  While these schools represent an insanely small percentage of students in America, they tend to take the lead, and others follow.

Students who do well in school and who would excel at a top university will do well on the SAT - even better with a bit of help.

Common Data Set

We've spent a fair number of hours scraping data from Common Data Sets.  These are standardized data submissions that most universities post yearly.  A good directory can be found here.  The files contain a host of data on enrollment, applications, and retention, but we chose to pluck out three key numbers:

- Admit rate (percent of completed applications accepted)
- SAT/ACT submission rate (percent of enrolled students who submitted standardized test scores)
- Yield (percent of admitted students who enrolled)

We compiled this data for over 200 American universities, using the US News rankings as a point of departure for selection.  This month we will look at admit rates and submission rates.  To keep the data as clean as possible, we stripped out schools that are test blind (the UCs) and those that are test required (MIT, the Georgia and Florida publics, Georgetown, the service academies).  We also excluded schools that didn't report testing data (ahem, Dartmouth et al.).
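For the curious, the per-school arithmetic is simple once the CDS fields are scraped. Here is a minimal sketch; the file name and column names are hypothetical stand-ins for the underlying CDS fields, not the official field labels.

```python
# Minimal sketch of the three metrics we pull from each Common Data Set.
# The CSV layout and column names are hypothetical stand-ins for the
# underlying CDS fields (applications, admits, enrollees, submitters).
import csv

def cds_metrics(row):
    applied   = int(row["applied"])            # completed applications
    admitted  = int(row["admitted"])           # offers of admission
    enrolled  = int(row["enrolled"])           # admits who enrolled
    submitted = int(row["submitted_scores"])   # enrollees who sent SAT/ACT
    return {
        "admit_rate":  admitted / applied,     # % of applications accepted
        "submit_rate": submitted / enrolled,   # % of enrollees with scores
        "yield":       enrolled / admitted,    # % of admits who enrolled
    }

with open("cds_2023.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["school"], cds_metrics(row))
```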

| By admit % | Submit rate | Admit rate |
|---|---|---|
| Top 20 | 69.6% | 6.5% |
| Top 40 | 62.7% | 9.3% |
| Top 60 | 61.3% | 12.8% |
| Top 100 | 57.8% | 23.4% |
| Top 150 | 55.4% | 35.7% |
| Top 200 | 54.3% | 47.5% |

Over half of the enrolled students at the 200 most rejective American universities submitted test scores - nearly 70% at the top 20.

| By admit % | Submit rate | Admit rate |
|---|---|---|
| 1 to 20 | 69.6% | 6.5% |
| 21 to 40 | 55.9% | 12.0% |
| 41 to 60 | 58.5% | 19.9% |
| 61 to 100 | 52.7% | 39.8% |
| 101 to 150 | 50.3% | 60.0% |
| 151 to 200 | 51.1% | 82.6% |

Segmented by group, the number is still well north of 50%.  So among all enrolled students in schools from number 41 (Wesleyan U) to 60 (Scripps), 58.5% submitted test scores and the average admit rate among those schools was about 20%.

How is something that over half of people do really "optional"?  Seems that a better word might be "preferred".  

We'll be doing some analysis on yield and also on how these numbers differ among large publics, privates, and smaller colleges in the months to come.

The SAT Still Matters

The number of SATs administered is rising back toward pre-pandemic levels, mainly thanks to the rapid spread of in-school weekday testing. A few universities like Georgetown and MIT have bravely started requiring scores from all applicants again. MIT has a very robust defense of the requirement here. Quite simply: the SAT provides a standard measure of student ability with a proven correlation to eventual academic success. Grade inflation accelerated with Covid, and it is hard for a university to compare a 4.1 GPA at high school A with a 4.2 from high school B without massive resources and past data sets. A student scoring a 1450 on the SAT is nearly always going to do much better academically than a student with a 1200.

A common criticism of the SAT (and all standardized testing) is that it is biased towards affluent students who attend high schools with greater funding. We would contend that the test favors students at better-funded high schools because those students have received a superior education and are therefore better prepared for university, as revealed by higher scores. Blaming the SAT for educational inequality is like blaming a thermometer for hot weather, to paraphrase Freddie deBoer.

More recent research has revealed that university admissions is indeed a corrupt process biased toward the affluent. A summary of Chetty et al. is here and offers a damning indictment of the industry, backed by statistically robust data. Beyond the fact that the system is very much biased towards rich tennis players whose Mom and Dad went to the same university, there is also a clear correlation between SAT scores and eventual academic and career success: "Among available indicators, SAT/ACT scores remain one of the best predictors of student success."

Small liberal arts colleges with 1,200 students can take the time to go through each aspect of an application carefully and can proudly be test blind in admissions. Large schools with upwards of 100,000 applications have no such luxury, and the SAT provides a reasonably accurate metric to be viewed alongside academic transcripts and other data. NYU doesn't require the SAT, but its reported median score is 1540. Why report this information if it wasn't considered?  NYU and many similar schools have enjoyed a massive surge in applications by going test optional.

SAT scores may be optional, but so, technically, are high grades, good recommendations, strong essays, and outside activities. Students who submit high SAT scores have a better chance of acceptance at the most rejective universities. This is especially true for students from affluent international schools, where there are no barriers to accessing the test.

Thoughts on the dSAT

We've been preparing students for the newest version of the SAT, the Digital SAT (dSAT), since January.  The dSAT is taken on the student's own device and is adaptive by section: you must do well on the first module to reach the second, harder module and challenge for a higher score.  We did a deep dive on the new format here, and that analysis is still broadly valid.  Here are some further thoughts:

- The questions have different weights!
Getting an easy question wrong will really lower a score, while getting a super-hard question wrong has much less impact.  Hence, students need to work hard not to make silly errors on the easier questions in the first module.  If you don't reach the harder second module, scores are effectively capped at about 580/800.
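College Board has not published its scoring algorithm (it is IRT-based and a black box), but a toy sketch can illustrate the two mechanics at play: unequal question weights and module routing. Every weight and threshold below is invented for illustration.

```python
# Toy illustration of section-adaptive scoring - NOT College Board's
# actual (unpublished, IRT-based) algorithm. Two ideas only:
# (1) easier questions carry more weight, so silly errors are costly;
# (2) a weak first module routes you to the easier second module,
#     which caps the maximum achievable score.
EASY_WEIGHT, HARD_WEIGHT = 2.0, 1.0  # invented weights
ROUTING_THRESHOLD = 0.70             # invented cut-off for the hard module
EASY_ROUTE_CAP = 580                 # rough cap noted above

def module_score(easy_right, easy_total, hard_right, hard_total):
    """Weighted fraction correct for one module."""
    earned = easy_right * EASY_WEIGHT + hard_right * HARD_WEIGHT
    possible = easy_total * EASY_WEIGHT + hard_total * HARD_WEIGHT
    return earned / possible

def section_score(m1_frac, m2_frac, routed_hard):
    """Scale the average weighted fraction onto the 200-800 range."""
    raw = round(200 + 600 * (m1_frac + m2_frac) / 2)
    return raw if routed_hard else min(raw, EASY_ROUTE_CAP)

m1 = module_score(easy_right=10, easy_total=12, hard_right=8, hard_total=10)
routed_hard = m1 >= ROUTING_THRESHOLD  # strong module 1 -> harder module 2
print(section_score(m1, 0.75, routed_hard))  # 672 on this toy scale
```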

- The test is shorter!
It now runs just over two hours, and its computer-based nature means much less faffing about at the start with answer sheets and whatnot.  Kids click "Start Test" and they're off to the races.  A much more pleasant, and much shorter, experience for all concerned than the old SAT, and especially the ACT.

- Focus is key!
While shorter, the Reading and Writing half of the test requires students to focus on a brief passage about, say, Native American pottery, answer one question, and then tackle the next question, which may cover an experiment in botany, followed by an interpretation of a Shakespeare quote, and so on.  Switching so quickly is a challenge for many and requires laser-like focus for the full 64 minutes of the two Reading and Writing modules.  We have developed some great timing strategies to maximize Reading/Writing scores.

- DESMOS is a game-changer!
The DESMOS graphing calculator is embedded in the testing app, and on some math modules it can be used on up to a third of the questions to effectively brute-force your way to the correct answer through trial and error.  Learning how to employ this powerful tool is key to a higher score, especially for students who don't remember what a discriminant says about the number of solutions to a quadratic equation.
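For the record, the rule those students are graphing around is a one-liner:

```latex
% For a quadratic equation ax^2 + bx + c = 0 (with a \neq 0):
\Delta = b^2 - 4ac
\quad\Rightarrow\quad
\begin{cases}
\Delta > 0 & \text{two real solutions} \\
\Delta = 0 & \text{one real solution} \\
\Delta < 0 & \text{no real solutions}
\end{cases}
```

DESMOS simply shows the same fact visually: graph the parabola and count where it crosses the x-axis.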

- The test has worked!
We had profound fears about the many, many things that could go wrong in the transition to a new test format.  Thankfully, things have gone largely to plan - a few issues aside.  International students are effectively beta testers before the test rolls out in the States next March, but to give College Board credit, it seems to be working just fine.