
I only just came across the work of Judea Pearl (which shows how ignorant I am, because he won the Turing Award in 2011). I think his work is sensational and essential reading for all scientists, but in particular for social scientists and evaluators.

Basically, he argues that science has suffered because statistics has failed to deal formally with causation, leaving it as a kind of mythical thing we only talk about in whispers. Pearl provides a robust and practical notation for causation, the do() operator, and develops a complete set of theorems around it. In particular, he shows under what conditions correlational data can indeed be used to draw causal conclusions.
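To see what this looks like in practice, here is a minimal simulation sketch (my own toy example, not one from Pearl's book): a confounder makes two variables correlate even though neither causes the other, and Pearl's "backdoor adjustment" formula recovers the true (zero) effect of do(X) from purely observational data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy model (variable names are mine): Z confounds X and Y.
# X has NO causal effect on Y, but Z makes them correlate.
z = rng.binomial(1, 0.5, n)          # confounder
x = rng.binomial(1, 0.2 + 0.6 * z)   # "treatment", driven by Z
y = rng.binomial(1, 0.1 + 0.7 * z)   # outcome, driven only by Z

# Naive correlational contrast, P(Y=1|X=1) - P(Y=1|X=0): looks like a big effect.
naive = y[x == 1].mean() - y[x == 0].mean()

# Backdoor adjustment: P(Y=1|do(X=x)) = sum over z of P(Y=1|X=x, Z=z) * P(Z=z)
def p_do(x_val):
    return sum(
        y[(x == x_val) & (z == z_val)].mean() * (z == z_val).mean()
        for z_val in (0, 1)
    )

adjusted = p_do(1) - p_do(0)
print(f"naive contrast:    {naive:.3f}")     # clearly positive, but spurious
print(f"backdoor-adjusted: {adjusted:.3f}")  # close to 0, the true causal effect
```

The point is that once you know (or assume) the causal diagram, a purely mechanical formula tells you which correlational quantities add up to a causal one.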

Wait, he does what?

One of the most frustrating paradoxes in the whole of philosophy, and perhaps problem number one in the philosophy of science, is Hume’s depressing observation that we can’t actually observe causation. So how are we to do science? How are we to actually learn anything?

Pearl drops a bombshell. He says that we have just been lazy in assuming this impossibility. He shows how observational, correlational data can, under certain circumstances, provide evidence for causal statements. This is a very big deal.

AI systems can’t sit about getting frustrated by Hume’s paradox any more than a living, learning human can. Pearl and colleagues needed to know how to set up an AI system so that it can sift through observational data and indeed make causal hypotheses on the basis of some of it. What characteristics does observational data need to have in order to support causal hypotheses?

One massively important consequence of Pearl’s approach is that the randomised controlled trial loses its alleged status as the unique and golden road to causal evidence.

But Pearl had plenty of other things to say which should make social scientists, and evaluators in particular, sit up and listen. Against a background of a lot of rather airy discussions of chaos and complexity in the evaluation community, he points out that our knowledge of the way the world works is built up of surprisingly simple yet surprisingly stable chunks.

He isn’t making this up: he is one of the parents of modern AI. Intelligent systems right now are all about how to learn to work out new rules in new situations. Pearl’s algorithms are helping AI systems to do just that. We humans do it all the time. Both humans and AI systems understand the world at least partly in terms of relatively simple rules of thumb - mini-theories.

These mini-theories can, by the way, be seen as grist to the mill of realistic evaluation theory [@pawson_realistic_1997]. Perhaps Pearl also has some ideas for the problem which always faces evaluators (and social scientists in general): how to synthesise the kind of mini-theories from which theories of change are built, and, more generally, how to synthesise qualitative information.

Plus he does it all with structural equation models, which are fun to look at and easy to work with (and which can be seen as the basis for the logframes and logic models which evaluators have to use every day).
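A structural equation model really is as simple as a short list of assignments, one per variable; in Pearl's account, an intervention do(X = x) just means surgically replacing X's equation while leaving the others intact. A tiny sketch, with made-up variable names and coefficients of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative linear SEM: funding -> training -> outcomes,
# with funding also affecting outcomes directly.
def simulate(n, do_training=None):
    funding = rng.normal(0, 1, n)
    if do_training is None:
        training = 0.8 * funding + rng.normal(0, 1, n)  # observational regime
    else:
        training = np.full(n, do_training)              # do(training = t): replace this equation
    outcomes = 0.5 * training + 0.3 * funding + rng.normal(0, 1, n)
    return funding, training, outcomes

# The interventional contrast recovers the structural coefficient on training:
_, _, y1 = simulate(50_000, do_training=1.0)
_, _, y0 = simulate(50_000, do_training=0.0)
print(f"effect of do(training): about {y1.mean() - y0.mean():.2f}")  # roughly 0.5
```

Each line of the model is exactly the kind of mini-theory an arrow in a logic model is meant to stand for.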

Look at the second part of his annotated bibliography to see the sort of things he has been dealing with, e.g. Pearl, J. and E. Bareinboim, “Transportability of causal and statistical relations: A formal approach,” Proceedings, AAAI-11, 2011. That paper reduces the classical problem of external validity to mathematical transformations in the do-calculus, and establishes conditions under which experimental results can be generalized to new environments in which only passive observation can be conducted.

Anyway, the book might seem really hard (I am working through it but very slowly) but I just discovered there is an Epilogue right at the back of the book which provides a great summary. You can read it in an hour or two and it will definitely change your life.
