Recent changes to the FAFSA mean that predicting your enrollment outcomes has just become that much harder. But there is a clear path forward. It depends on creating more opportunities for interaction with the students you’re hoping to recruit and making the most of what you learn from those interactions.
It’s been a busy couple of months in FAFSA news.
Most recently, we had the U.S. Department of Education’s decision to let students use prior-prior-year income data in their applications. But back in August, the federal government made another announcement that, even though it grabbed fewer headlines, deserves your full attention: its decision to withhold students’ FAFSA-related school rankings from colleges and universities.
This is great news for students, as they can finally stop worrying that their school preferences will disadvantage them in admissions or financial aid decisions.
Not so much for schools.
Suddenly reduced visibility
Colleges have historically depended heavily on students’ FAFSA rankings. Few if any competing measures were as broadly available and as powerfully predictive of applicant behavior. This made rankings indispensable for many schools in figuring out how many students of what sort to admit and how to structure aid packages in order to hit enrollment targets. It’s not much of an exaggeration to say many schools will be flying blind come the next admissions cycle.
But some less so than others. The more data you have on students, the more options you have when it comes to backfilling missing rankings info. And some schools are a lot further ahead than others in this regard. What follows are four important lessons we’ve learned from organizations that have gotten it right.
1. Create more opportunities to learn about student intentions.
It's helpful to understand the challenge as having two parts: first, gathering the data; and second, putting it to use.
As far as the first of these priorities goes, a lot of your success will depend on how much interaction you’re having with prospective enrollees. Each contact is an opportunity to learn more about students’ intentions—information that can go a long way toward improving the power of your predictive models.
Your level of interaction will depend, in turn, on two things: how intensive your outbound messaging is and how good you are at getting students to respond.
2. Be persistent.
One place I see a lot of institutions fall down is in giving up too soon.
This is something my colleagues and I have actually done a lot of research on. We've consistently seen that a lot of students who end up enrolling don't respond to the first five or so attempts at contacting them.
It takes time and experience to know how far to push this dynamic—the last thing you want to do is spam the students you're trying to attract—but once you find that point, it's astonishing just how much inbound communication students will not only tolerate but welcome.
3. Lead with content that students will actually want to read.
It's not only the number of communications you're sending out; it's also your effectiveness in getting students to respond.
Is your message one that resonates with them?
Are your subject lines interesting enough that students actually want to open your emails?
This is also something we've collected data on across dozens of campaigns, and we've found that well-designed outreach can consistently double the response rate you're getting.
4. Make the most of existing student outreach.
Another key lesson is getting more out of the efforts you've already invested in reaching out to students.
Net price calculators (which all schools are required by law to provide anyway) are a great example. Top-performing schools have moved well beyond regulatory box checking here, making their NPCs highly user-friendly and extremely accurate—which is to say, genuinely useful for students.
Not incidentally, their NPCs are also providing a platform for soliciting additional information, which a lot of students do, in fact, willingly provide. We've seen, for example, that students' willingness to share parent email addresses is strongly predictive of likelihood to enroll. Some of the best schools I’ve worked with are using their application process in similar ways.
“Brute force” CRM approach not a viable option for many
I don’t want to suggest all of this is easy. There’s no getting around the fact that organizing and analyzing data collected from student interactions is resource-intensive. Sophisticated CRM systems are increasingly common, which means many schools have at least one key piece of infrastructure in place. A few have accomplished a sort of rocket science with their systems, harnessing automated email outreach and web analytics to produce remarkably detailed pictures of their prospective enrollees.
But schools fitting this description are few and far between, due to the scale of investment and effort required and the as-yet uncertain payoff of this approach. Apart from the heavy technical lift of getting their systems up and running, many institutions are struggling to populate and maintain them.
And once that front-end effort is complete, there comes the hard work of "boiling the ocean" of data you’ve gathered, to figure out what’s actually predictive of the outcomes you're interested in.
For the vast majority of EM departments, it will be more a question of targeted, triaged efforts—collecting a handful of metrics with proven predictive power, and doing so across enough enrollment cycles to build a robust predictive model.
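A targeted model of this kind can start out very simple. The sketch below (in Python, with entirely hypothetical metric names and weights) illustrates the basic idea of scoring prospects on a few high-signal interactions—including the parent-email signal mentioned above. In practice, the weights would be fit on data from past enrollment cycles rather than set by hand.

```python
# Hypothetical sketch: scoring prospects on a handful of engagement
# metrics. The feature names and weights here are illustrative only;
# a real model would fit them (e.g., via logistic regression) against
# actual enrollment outcomes from prior cycles.

WEIGHTS = {
    "opened_email": 0.5,         # opened at least one outreach email
    "used_npc": 1.0,             # completed the net price calculator
    "shared_parent_email": 2.0,  # a strong enrollment signal
    "campus_visit": 1.5,         # attended a visit or virtual event
}

def enrollment_score(prospect: dict) -> float:
    """Sum the weights of the engagement signals this prospect has shown."""
    return sum(w for key, w in WEIGHTS.items() if prospect.get(key))

prospects = [
    {"name": "A", "opened_email": True, "used_npc": True,
     "shared_parent_email": True},
    {"name": "B", "opened_email": True},
    {"name": "C", "used_npc": True, "campus_visit": True},
]

# Rank prospects from most to least likely to enroll.
ranked = sorted(prospects, key=enrollment_score, reverse=True)
print([p["name"] for p in ranked])
```

Even a crude scorecard like this gives admissions staff a defensible way to triage outreach effort; the real value comes from re-fitting the weights each cycle as outcome data accumulates.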
For schools that already have a few years’ worth of experience with the sort of extended data sets described above—and predictive models built from them—the transition to the post-rankings enrollment environment should not be too much of a shock.
For the remainder, the next enrollment cycle may prove to be a nail-biter.