Let’s go back to our example of the volunteer maths program, where low-income students who are failing maths are offered a place in an out-of-school program run by volunteer maths teachers.
Suppose the aim of the program is to help students achieve a certain minimum level of maths proficiency, as measured by a target exam result. The outcome metric of the venture is ‘How many students went on to achieve a B grade or higher in Maths?’ The output metric of the venture is ‘How many students enrolled in our out-of-school workshops?’
Effectiveness is the ratio of the outcome to the output. So if 100 failing students enrol, and 50% of them then achieve a B grade or higher, then the effectiveness of this program is ‘50 students in every 100 that enrol in our program achieve a B grade or higher in Maths’.
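The calculation above can be sketched in a few lines of code. This is a minimal illustration, not part of the original text; the function and variable names are my own invention, standing in for whatever bookkeeping a real venture would use.

```python
def effectiveness(enrolled: int, achieved_target: int) -> float:
    """Effectiveness as the ratio of outcome (students achieving a B grade
    or higher) to output (students enrolled in the workshops)."""
    if enrolled == 0:
        raise ValueError("cannot compute effectiveness with no enrolled students")
    return achieved_target / enrolled

# The example from the text: 100 failing students enrol, 50 reach a B or higher.
ratio = effectiveness(enrolled=100, achieved_target=50)
print(f"{ratio:.0%} of enrolled students achieved a B grade or higher")
```

Expressing effectiveness as a ratio like this is what makes comparisons across programs of different sizes possible, since raw counts alone would favour larger ventures.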
By offering this kind of statistic, the venture is able to market itself to funders and to offer comparisons with other programs on the market.
When calculating effectiveness, it is important to be aware of selection bias. How were the students selected? Were they the top-performing students in the school? The lowest-performing? A mix? A group who signed up voluntarily (i.e. who self-selected)? Each of these can skew your results, so it’s important to be explicit and deliberate about these selection effects, as they impact effectiveness.