As Business Development Manager at Magoosh, it’s my job to make sure schools and universities get the most out of test prep. Part of that means helping staff members decide on the best prep program for their students. But determining what’s “best” can be complex for the schools I work with — especially when they don’t know what data to look at to make that decision, or when the program they’re currently using doesn’t provide strong metrics.
Moreover, many educators can be resistant to change. Magoosh is disrupting traditional test prep by providing online learning experiences that are both excellent and affordable, but many faculty I speak with are hesitant to change what they think is already working. So how do you know if the college prep program you're using is really working? And if it isn't, how do you know what will?
1. Legitimate score increases
There are lots of ways to quantify test-prep effectiveness, but score improvement is definitely the most common. Give students a diagnostic, help them study, and then see what they get on the actual test. But if we want to make sure those score increases are legitimate, there are some big problems with this method to watch out for.
First, diagnostic tests aren’t the same as real tests. Modern students can feel so over-tested that they are hesitant to try their best on a diagnostic. It’s also possible that the diagnostic could be harder or easier than (or just plain different from) the real exam. In fact, some test prep companies have been accused of making their diagnostics too hard on purpose — a harder pre-test means inflated student growth numbers, after all. Definitely a test-prep ethics no-no.
At Magoosh, we prefer to focus on student growth and conceptual learning instead of unofficial diagnostics. The best before-and-after method is to compare two official exams. Most students take the SAT more than once, as do many GRE and GMAT students, so official pre-scores are often easy to come by. If you can’t get previous scores, use an officially released exam as a benchmark (here are links to free released GRE, SAT, and GMAT exams). Just be careful to use diagnostics the right way.
2. Student engagement
If you don’t have previous real-life scores to compare to, another great metric to determine whether your program is working is student engagement. You can measure this in lots of ways — student attendance, homework completion, or even just watching their faces while they do the work.
Because Magoosh is all online, we can help schools track engagement with a lot of different metrics. How many questions have students answered? How many lessons have they watched? When did they last log in, and how long was their session? These are all important yardsticks to help determine how engaged students are with the material, and to help teachers and facilitators hold students accountable.
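To make this concrete, here is a minimal sketch of how engagement metrics like these could be rolled up into a single per-student score. The field names, dates, and weights are hypothetical illustrations, not Magoosh's actual data schema or scoring method:

```python
# Hypothetical example: combine activity counts and login recency
# into a rough per-student engagement score. All weights are arbitrary.
from datetime import date

students = [
    {"name": "A", "questions_answered": 320, "lessons_watched": 45, "last_login": date(2023, 5, 1)},
    {"name": "B", "questions_answered": 40,  "lessons_watched": 5,  "last_login": date(2023, 3, 15)},
]

def engagement_score(s, today=date(2023, 5, 2)):
    days_idle = (today - s["last_login"]).days
    # Reward activity, penalize long gaps since the last login.
    return s["questions_answered"] * 1.0 + s["lessons_watched"] * 2.0 - days_idle * 5.0

# Rank students from most to least engaged, e.g. to flag who needs a nudge.
for s in sorted(students, key=engagement_score, reverse=True):
    print(s["name"], engagement_score(s))
```

A facilitator could use a ranking like this to spot which students need a check-in before test day, rather than discovering low engagement after scores come back.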
3. Cost efficiency
Let’s face it: if you work in a school or university, you have probably faced budget cuts in recent years.
To determine if your program is cost effective, start by thinking about how much you’d be willing to pay for some incremental improvement per student. How much would a 10 point SAT bump be worth, or a 1 point GRE boost for each of your students? Then compare that to what you’re actually paying per student for the average increases you’re getting. You may have to estimate, but you can at least come up with a ballpark of what you’re paying per point, per student. How satisfied are you with this cost?
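The back-of-the-envelope math above can be sketched in a few lines. The program cost and average score increase below are made-up placeholders, not real prices or results:

```python
# Hypothetical cost-per-point calculation, as described above.
def cost_per_point(cost_per_student, avg_score_increase):
    """Dollars paid per point of average score improvement, per student."""
    return cost_per_student / avg_score_increase

# Example: a $500-per-student program whose students average a 40-point increase.
print(cost_per_point(500, 40))  # 12.5 dollars per point
```

Once you have this number for your current program, you can compare it directly against alternatives, even when the programs quote very different sticker prices.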
At Magoosh, we guarantee that students who fully use our program will earn a 150 point score increase on the SAT, a 5 point increase on the GRE, or a 50 point increase on the GMAT. Divide that by the cost per student (less than $100 for now) and you get a cost per point our customers are very happy with.
Every school program and group of students is different, so the question isn’t which prep program is best — it’s which prep program works best for you. These metrics are great tools for making that determination. Are there others that you use? Please share them in the comments!
Read more education articles by Peter: