Why does the counterfactual evaluation principle seem to break down in the following case?
Suppose eleven organisations each bring enough vaccine to immunise 100,000 children, and there are a million children who need the vaccine now. There are enough vaccines to go round, but only just: the surplus is 100,000 doses, just 10% of what is needed. One of the organisations, Organisation X, wants to know whether its intervention had any impact. People on the ground are impressed: “you saved their lives,” they say. But Evelyn the Evaluator says no, we have learned to think in counterfactuals, the World Bank told us to. So Organisation X actually had no impact, because even if it hadn’t come, all the children would still have been vaccinated - the remaining ten organisations would have had just enough vaccines to go round.
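Evelyn’s reasoning is just a subtraction: outcome with Organisation X minus outcome without it. A toy sketch of that calculation, using only the numbers from the story (this is my own illustration, not anything from Mohr):

```python
# Toy sketch of the naive counterfactual impact calculation in the story.
ORGS = 11                # organisations, each bringing doses for 100,000 children
DOSES_PER_ORG = 100_000
CHILDREN = 1_000_000     # children who need the vaccine now

def children_vaccinated(n_orgs: int) -> int:
    """Children reached if n_orgs organisations show up (capped by need)."""
    return min(n_orgs * DOSES_PER_ORG, CHILDREN)

factual = children_vaccinated(ORGS)            # all eleven come: 1,000,000
counterfactual = children_vaccinated(ORGS - 1) # Organisation X stays home: still 1,000,000

impact = factual - counterfactual
print(impact)  # 0 - by this logic, Organisation X saved nobody
```

The same subtraction, of course, assigns zero impact to every one of the eleven organisations individually, which is exactly why the result feels paradoxical.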
So the counterfactual paradigm gives us one answer, whereas most evaluators would probably disagree to some extent, and for a variety of reasons. I guess some would argue that we can’t be certain the other ten organisations would all have been so effective, or that, taken together, the eleven interventions really did have an impact even if no single one did.
But I just read Mohr (1999), who has a more profound objection to the counterfactual argument: he discusses the idea of physical causation, the kind of “hammer hits glass, glass smashes” evidence that we don’t usually doubt. If we saw the vaccinations happening with our own eyes, why would we need any other kind of evidence that the intervention by Organisation X caused the children to be vaccinated? Aren’t the people who say “we saw you saving their lives” right in a way that we might miss if we have been to too many evaluation seminars? And isn’t the counterfactual argument nonsense when used in this way?
Mohr, L. B. (1999). The qualitative method of impact analysis. American Journal of Evaluation, 20(1), 69.