Section 10 The mini-map coding rule

Here is the first rule of Soft Arithmetic for causal maps.

Information like “the influence variables B, C and D all have some kind of causal influence on the consequence variable E” can be coded with a mini-map in which one or more variables (the “influence variables”) are shown with arrows leading to another (“the consequence variable”). The information and the map are equivalent.

Fig xx. A “mini-map”: one or more variables are shown with arrows leading to one other variable.

In this extreme case, the mini-map itself carries no information at all about the content of the causal statement, which could be anything from something as vague as “B and C and D all somehow contribute positively to E” to much more detailed claims. Later we will see different ways in which we might encode that content. But in principle we don’t need any specific way to do that; we can just write what we know on the downstream box:

Fig xx: A mini-map in which the information about the content of the causal influence has been written on E, preceded by three dots (“...”). The three dots indicate that what comes after them is not formally part of our Soft Arithmetic.

Mini-maps are the atoms of causal maps. You can build up any causal network from them.
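One minimal way to sketch this idea in code (a hypothetical representation invented here for illustration, not part of Soft Arithmetic itself): a mini-map is a set of influence variables plus one consequence variable, optionally carrying the free-text content written on the downstream box, and a causal map is just a collection of these atoms.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MiniMap:
    """One or more influence variables with arrows leading to one consequence variable."""
    influences: frozenset  # names of the influence variables, e.g. {"B", "C", "D"}
    consequence: str       # name of the consequence variable, e.g. "E"
    note: str = ""         # free-text content written on the downstream box ("...")

def build_network(mini_maps):
    """Assemble a causal network (adjacency map: variable -> consequences) from mini-map atoms."""
    network = {}
    for mm in mini_maps:
        for v in mm.influences:
            network.setdefault(v, set()).add(mm.consequence)
    return network

atoms = [
    MiniMap(frozenset({"B", "C", "D"}), "E", note="...all somehow contribute positively"),
    MiniMap(frozenset({"E"}), "F"),
]
print(build_network(atoms))
# {'B': {'E'}, 'C': {'E'}, 'D': {'E'}, 'E': {'F'}} (key order may vary)
```

Note that a whole network falls out of nothing but mini-map atoms: chains and shared consequence variables need no extra machinery.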

There are alternative ways of defining the “atoms” of causal maps, for example as multiple causal links forming a causal chain, as QuIP does, or used to. Or you could allow more than one downstream variable within an “atom”. Often just single links between one influence variable and one consequence variable are treated as the atomic units – for example, Pearl does this. But this raises the question of what to do about causal influences which essentially involve two or more influence variables working together, for example synergistically.

Mini-maps, defined as above, are our atomic units.

Many of the example mini-maps I will show contain several influence variables. In 90% of practical applications, 90% of mini-maps will just contain a single influence variable and a single consequence variable. But we don’t want to get stuck when we need two or more influence variables.

It is really important to understand that a mini-map can encode any combined influence of a causal package of variables on another variable, not just the kinds of cases we expect when we come from a background of linear, additive models. From that background, we might remind ourselves: “aha, there might be interactions between the members of the causal package”. But to say this is to show that you have not yet shaken off the shackles of linear thinking, because the usual case is that when B and C are part of a causal package, the influence of B on E depends on the value of C, and vice versa. Independence is the special case.
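A toy numerical illustration of this point (the functional forms here are invented for illustration): in a multiplicative relationship, the effect on E of changing B depends on the value of C, whereas in the additive, independent case it does not. Additivity, not interaction, is the special case.

```python
def e_interacting(b, c):
    # The usual case: the influence of B on E depends on C (and vice versa).
    return b * c

def e_additive(b, c):
    # The special case: independent, additive contributions.
    return b + c

def effect_of_raising_b(f, c):
    """Effect on E of raising B from 1 to 2, holding C fixed at the given value."""
    return f(2, c) - f(1, c)

print(effect_of_raising_b(e_interacting, 1), effect_of_raising_b(e_interacting, 10))  # 1 vs 10
print(effect_of_raising_b(e_additive, 1), effect_of_raising_b(e_additive, 10))        # 1 vs 1
```

With interaction, the same change in B produces an effect of 1 when C = 1 but 10 when C = 10; additively, the effect is 1 regardless of C.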

10.1 Meaning also depends on elicitation context

Markóczy and Goldberg (1995) point out that the context in which causal claims are elicited partly determines their meaning. In particular, it makes a difference whether or not the respondents are asked to “try to make sure you have included all the relevant factors”. It makes a difference whether a predefined set of concepts is used or an open-ended approach as with QuIP. Does the absence of an arrow mean that the respondent believes there is no such causal link, or merely that they hadn’t thought to mention it? This question is crucial to how we merge data from different respondents, see later xx.


Technical note

You could say:

“Aha, but the sentence above contains the words ‘causal influence’, so you have explained one mystery with another. If you take that part out, the diagram could be about anything, and the arrows might mean ‘is larger than’ or ‘is a child of’, or many other things.”

That’s true, but the whole point is that we will specify not just this inference rule for causal maps but enough different rules that causality is the only interpretation of their meaning left. To put it differently: if we have a child who can use + perfectly, we aren’t bothered whether or not they can explain it in words; if you understand the inference rules, you understand enough. If you want to test whether someone understands how to code and interpret QuIP information correctly, you can ask them to make various inferences with the maps.

Again: we are not going to say what “causal influence” means. We are going to show how it works.

Some of these rules will seem pretty trivial and obvious. They should. But they are necessary for building up a complete and consistent system. In any case I need them to make an app which actually works, and we need to spell them out so we can agree how to code with the app.

It’s crucial that a mini-map codes information about causality, not coincidence. So the causal map “C → E” should not be interpreted along the pattern of “if you observe (a high level of) C you are more likely to observe (a high level of) E”; that may or may not be a corollary of the causal information. The strongest and most correct interpretation is “if you intervene in the system and manipulate C, which may involve breaking any causal links from other factors to C itself, then this manipulation will produce a corresponding effect in E”.
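This distinction can be made concrete with a small simulation (a hypothetical structural model invented for illustration, not from the text): a hidden common cause U drives both C and E, and C has no direct effect on E at all. Observing a high level of C then does predict a high level of E, yet intervening on C, which severs U’s link to C, shows that manipulating C produces no effect in E. A map “C → E” should encode the second kind of claim, not the first.

```python
import random

random.seed(0)

def simulate(n=100_000, do_c=None):
    """Sample from a toy structural model: U -> C and U -> E, with NO direct C -> E link.
    If do_c is given, we intervene: C is set to do_c, breaking the U -> C link."""
    rows = []
    for _ in range(n):
        u = random.gauss(0, 1)
        c = u if do_c is None else do_c
        e = u + random.gauss(0, 0.1)
        rows.append((c, e))
    return rows

def mean_e_when_c_high(rows, threshold=0.5):
    highs = [e for c, e in rows if c > threshold]
    return sum(highs) / len(highs)

observed = mean_e_when_c_high(simulate())          # observation: clearly above 0
intervened = mean_e_when_c_high(simulate(do_c=1))  # intervention: close to 0
print(observed, intervened)
```

Here `observed` is large because seeing high C tells us U was probably high, while `intervened` stays near zero: manipulating C does nothing to E, so under the interpretation above no arrow C → E should be drawn.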