August 29, 2018
Inspired by Judea Pearl’s new “The Book of Why”, and also by Nicky Case’s work on explorables, I’ve started work on an explorable web app.
Pearl talks about “wiggling” variables in a causal network to look at their causal effects. As someone brought up on Meccano, I find this makes a lot of sense. It’s a philosophically interesting idea too.
So my new app is called “The Wiggle Room”. You can explore the causal effect of one or two influence variables on a consequence variable. Or you can just click the blue button to look at examples.
There are two twists:
There are two main sliders: one for the level under intervention and one for the base level (what would happen without the intervention). The intervention is actually the difference, or “delta”, between the two. The effect on the consequence variable is also a difference, which you can see on the graphs. This helps us to “think in differences”, which I believe is essential for understanding causal networks like theories of change.
The variables are not modelled using continuous numbers. Instead, they are “intensity variables” which I’ve also called “lo-hi variables” elsewhere. They vary between a vague minimum and a vague maximum, a bit like a percentage. In the Wiggle Room, I call them percentages because these are familiar to most people.
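The “think in differences” idea can be sketched in a few lines: the effect of an intervention is the consequence level under the intervention minus the level at the base. This assumes a hypothetical response function `f` standing in for a link; it is not the app’s actual algorithm.

```python
def causal_effect(f, base, intervention):
    """Effect expressed as a difference ("delta"): the consequence
    level under the intervention minus the level that would have
    occurred anyway at the base level."""
    return f(intervention) - f(base)

# Hypothetical linear influence at 50% strength (an assumption,
# not one of the app's actual influence shapes).
def half_strength(x):
    return 0.5 * x

# Base 0.4, intervened up to 0.8: a delta of 0.4 on the influence
# variable produces an effect of 0.2 on the consequence variable.
effect = causal_effect(half_strength, 0.4, 0.8)
print(round(effect, 2))  # → 0.2
```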
The Wiggle Room offers pre-sets to construct the different possible functions between (sets of) influence variables and one consequence variable, in the form of “influence shapes” for specific links and “combination shapes” which govern how these influences are combined.
So these are the features:
Each variable (the one or two influence variables, as well as the consequence variable) can have one of several Types:
- intensity (percentage) or true-false, possibly also probabilistic true-false (e.g. a 75% probability that it is true)
- combined with: normal, or with negative, i.e. including a negative “bottom half”
- = 4 (or 6) possibilities.
- plus, the “base” or non-intervention level of each variable can be specified, and optionally the intervention level. In the case of endogenous variables, these levels are calculated rather than specified.
Each influence can have different shapes and strengths:
- S-shaped (there could be others; these are the most obvious)
- combined with:
- = 12 possibilities.
- if you have two influence variables, reducing the strength of one makes it less important than the other.
- if no influence variable has a strength of 100%, there is some ambiguity left about the level of the consequence variable, so another “residual” slider appears for the consequence variable; see below.
Plus, the consequence variable (in addition to Type) can have:
- Combination (how the influence variables combine):
- the consequence variable can also be “flipped” so 0% becomes 100% and vice-versa.
- = 14 possibilities
- strength of (unspecified) residual influences. This is only relevant if the maximum strength of the influence variables is less than 100%, i.e. they don’t control it completely. The influence variables share influence between them: if there are two, with strengths of 80 and 40, the former gets twice the influence of the latter, and there is 20 left to “explain” with residual influences.
For the second influence variable (on the left) you can also specify what controls it:
- External factor
- Our intervention (this links the variable to the first variable)
- External intervention (in this case, you can explore not only “what if we do x” but also “what if they do y”)
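One plausible reading of the strength-sharing arithmetic above (an assumption; the post doesn’t give the exact formula): the influences split the explained portion in proportion to their strengths, and the residual is whatever the strongest influence leaves unexplained.

```python
def share_influence(strengths):
    """Split influence between variables in proportion to their
    strengths; the residual is 100 minus the maximum strength.
    This is a guess at the scheme described in the post."""
    residual = 100 - max(strengths)
    total = sum(strengths)
    shares = [s / total * (100 - residual) for s in strengths]
    return shares, residual

# Strengths of 80 and 40: the first gets twice the influence of
# the second, and 20 is left to "explain" with residual influences.
shares, residual = share_influence([80, 40])
```

Under this reading, the shares plus the residual always add up to 100.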
I hope these various settings are fairly intuitive. But they give a bewildering variety of different combinations.
So next time you see that someone hasn’t specified the nature of the links leading into a variable in a theory of change, and you’re supposed to guess, ask them which of these possibilities it is!
These variables are mostly intensity variables: they can vary between a rough minimum, which we can think of as 0%, and a rough maximum, which we can think of as 100%. (You can also try variables which have a minimum of -100%.) In the Wiggle Room you can also find true-false variables, which we see as a special case of intensity variables: you can think of them as just having the values 0% and 100%. You can also interpret a value like 50% as meaning a 50% chance of the variable being true.
The variable types also do not need to be restricted to intensity and true-false variables. But these have the significant advantage (especially for people working with theories of change) that interventions and their effects can be conveniently expressed as percentages of the total range of the variable in question.
In practice, intensity variables can be constructed and presented using ‘rubrics’: by describing (using rich and concrete language) four or five of the different levels they can take, from minimum (0%) to maximum (100%). The use of rubrics is described well elsewhere. This is a more fundamental and important task than the usual practice of trying to pin down a variable with just a brief title (e.g. “inter-ethnic trust”) and then defining it post-hoc and implicitly via the indicators which have been selected for it.
Calculating effects in a causal network: Theorymaker
I think this is a unique explanation of causal effects because it does not rely on statistics: it’s about the causal connections which explain the statistics. Evaluators often have direct (if unreliable) information about causal effects which has nothing to do with correlations. Stakeholders talk to us about causal links and only rarely about correlations. We need a way to process this kind of information, and statistics can’t help us.
In the Wiggle Room we only look at simple Theories, single steps in causal networks from one or two variables to another. In a larger causal network like a Theory of Change, there will be several such connections. With intensity variables, we can easily calculate how causal effects ripple through a network. If an intervention on a variable A has a 50% influence on B, and the resulting effect on B has a 50% influence on C, the direct causal influence of the intervention A on C will be precisely 25%.
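The ripple calculation in the paragraph above is just multiplication of link strengths along the chain. A minimal sketch (the function name is mine, not Theorymaker’s):

```python
from functools import reduce

def ripple(*link_strengths):
    """Direct influence of an intervention through a chain of
    causal links, each strength given as a fraction of the
    variable's total range."""
    return reduce(lambda acc, s: acc * s, link_strengths, 1.0)

# A -> B at 50%, B -> C at 50%: the intervention on A moves C by 25%.
print(ripple(0.5, 0.5))  # → 0.25
```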
So I’m integrating the same algorithm into the experimental version of my existing Theory of Change visualisation tool Theorymaker.
In the existing version you can express causal relationships using indentation:
… and you can add styling like this…
In the new version, you can actually build a causal model using the same ideas.
Relationships; base=.5; intervention=.8
Luck; base=.5; intervention=.5
Qualifications; base=.5; intervention=.5
Adding combination=multiply after a variable changes the way that variable combines its influences.
Rick Davies asked about implementing logic gates. All of them are possible with this app.
So a true-false variable like the ones in logic gates is realised just by considering the values 0% and 100% of an intensity variable. (If you want, you can consider the values in between as probabilities of true-false values or fuzzy set membership. I think the generalisation of these combinations to intensity variables is really interesting in terms of Theories of Change and also much more applicable in the real world than just on/off situations.)
In Causal Explorer, you can select the type “true-false” for your variables, but the only thing which changes (at the moment) is that the axis labels switch to false and true. I might consider a more logic-gate-like display. Then:
- NOT: with a single influence variable, select “reversed” in the consequence variable section.
- for all the others, you need to click “Include second influence variable” and then choose a value from “How do the two influence variables combine?”
- AND: multiply or smallest (and select “reversed” for NAND)
- OR: soft add, hard add, or largest (and select “reversed” for NOR)
- XNOR: similarity (and select “reversed” for XOR)
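The gate list above can be tried out directly. A minimal sketch, assuming particular formulas for each combination shape (“soft add” as a probabilistic sum and “similarity” as 1 - |a - b| are my guesses; the post only names the shapes). Restricted to the values 0 and 1, they reproduce the gates:

```python
# Candidate formulas for the combination shapes, restricted here to
# the true-false values 0 and 1. "soft add" and "similarity" are
# assumptions; the post only names the shapes.
def multiply(a, b):   return a * b             # AND
def smallest(a, b):   return min(a, b)         # AND
def largest(a, b):    return max(a, b)         # OR
def soft_add(a, b):   return a + b - a * b     # OR (probabilistic sum)
def similarity(a, b): return 1 - abs(a - b)    # XNOR
def reversed_(x):     return 1 - x             # NOT; NAND/NOR/XOR via the others

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", multiply(a, b), "OR:", largest(a, b),
              "XNOR:", similarity(a, b))
```

Applying `reversed_` to any of these gives the negated gate, matching the “reversed” setting in the consequence variable section.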
In Theorymaker3, you can type something like this (try copying and pasting the whole lot)
variable: type=true-false #this switches all the variables to true-false rather than intensity: the bar at the bottom of the variables changes to show just two values.
variable: base=hide # hide the base values
a; combination=similarity # or whichever of the above combinations you want to try
b; intervention=0 # fiddle with these (set them to 0 or 1) to see what happens
c; intervention=1 # fiddle with these (set them to 0 or 1) to see what happens
I’ve also chosen functions which are more general in the sense that they can combine any number of variables, not just two (though in Causal Explorer you can’t visualise more than two).
These building blocks are certainly not capable of constructing all possible functions, far from it. Alternatives or additions to the influence shapes and the combination shapes are perfectly possible. (The R version of Theorymaker makes it possible to describe a function directly, without any reference to these shapes.)
Most importantly, these tools are primarily for visualising, exploring and “getting a feel for” possible causal relationships. They shouldn’t be used to try to get spurious certainty (“look, I put in a few vague ideas and numbers came out!”) where there is none. In most real-world cases, a causal chain of more than one or two links will have its effects reduced to almost nothing by noise. These tools are supposed to help model this kind of uncertainty.
Thanks to Rick Davies for suggesting adding S-shaped / sigmoid to the list of influence shapes, and for the discussion, also with Martin Klein, which led me to document the logic-gate possibilities.
May 8, 2018
Wouldn’t it be great if, for an evaluation, you could map out the project’s theory of change and then pin the data (the answers to the evaluation questions) directly on the diagram as pop-up notes? Ideally with different colours to mark different kinds of respondent (and other data sources). You could use it interactively with respondents, and/or paste the data on later and elicit comments.
I made a mock-up.
You can interact with it here, and view the comments. If you were logged in, you could also update the diagram, add comments etc.
I made it with Realtimeboard, which is really interesting for this task. Disadvantage: There are only 3 colours for the comments.
You can also embed a diagram in a webpage, like this (you have to click on the symbol) -
It updates live, but you can’t see the comments.