Section 30 Combining causal fragments from different sources using propositions

30.1 Same item

There is an easy extension to this approach, as already implemented in “classic QuIP”.

When two propositions from two different causal fragments can be considered more or less the same, create the combined map: join up the pieces on the common items. As discussed elsewhere, there are four possibilities:

  1. Chaining. (The common proposition is an influence in one fragment and a consequence in the other.)
  2. Shared influence. (The proposition is an influence in both fragments; unproblematic.)
  3. Shared consequence. (The proposition is a consequence in both fragments; what arithmetic do we use for this? The result doesn’t have to be consistent: E was caused by B, but no, it was caused by C? The composite map can’t be considered as a larger claim that things are like this, only as an aggregate of different claims.)
  4. Shared arrow. (Combining fragments in which there is a shared influence and a shared consequence; here the issue is how we combine the arrows, e.g. making them fatter and saying “2 citations”.)

The first two are fairly unproblematic.
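
To make the joining step concrete, here is a minimal sketch (not classic QuIP’s actual tooling; the item labels are invented). Fragments are simply lists of (influence, consequence) links; the combined map is their union, with identical proposition labels treated as the same node and a citation count per arrow covering the “shared arrow” case.

```python
from collections import defaultdict

def combine_fragments(*fragments):
    """Merge causal fragments (lists of (influence, consequence) pairs)
    into one aggregate map, counting how many fragments cite each arrow."""
    arrow_counts = defaultdict(int)
    for fragment in fragments:
        for influence, consequence in fragment:
            # Identical labels become the same node, so chaining, shared
            # influences and shared consequences all emerge from the union.
            arrow_counts[(influence, consequence)] += 1
    return arrow_counts

# Chaining: "training" is a consequence in one fragment and an influence in another.
fragment_1 = [("NGO project", "training")]
fragment_2 = [("training", "yields increased")]
# Shared consequence: a different source reports a different cause of the same item.
fragment_3 = [("new seeds", "yields increased")]

combined = combine_fragments(fragment_1, fragment_2, fragment_3)
for (source, target), citations in combined.items():
    print(f"{source} -> {target}  ({citations} citation(s))")
```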

There is also the problem of the meaning of an individual item when it appears in, say, 20 different fragments from 20 different sources. Is it still a proposition, but a new one? Does it say “yields increased (for 20 farmers)”? Does it generalise to “yields in this area increased”? Does it merely say “yields increased (in 20 different cases)”?

[More to discuss here]

30.1.1 Extension 2: Subsuming items under more general items

This is also a part of classic QuIP. It makes it much easier to aggregate fragments, as above, by recoding different but similar items into broader categories.

There are some issues here too though. If we have

Heavier rains have led to worse crops

as well as

Hotter temperatures have led to drier ground

can we recode the left-hand side as “climate change” in both cases? Doing so does not actually reflect what the respondents said; they weren’t talking about climate change, just rains or temperatures.
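
One way of keeping this tension visible, sketched below under the assumption that we store both the original wording and the broader code (the mapping itself is illustrative, not a recommendation):

```python
# Illustrative sketch only: recode specific items into broader categories while
# keeping the respondents' original wording, so the aggregation stays transparent.
# The category names are the analyst's choice, not the respondents' words.

RECODING = {
    "heavier rains": "climate change",
    "hotter temperatures": "climate change",
}

def recode(item: str) -> dict:
    """Return the original item alongside its (possibly broader) category."""
    return {"original": item, "coded_as": RECODING.get(item, item)}

print(recode("heavier rains"))   # coded_as: 'climate change'
print(recode("worse crops"))     # no rule: stays as itself
```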

[More to discuss here]

30.1.2 Extension 3: Recoding items from different sources as gradations of more general items

(Respondent P:) here in village Q, the new seeds have doubled our yields

(Respondent R:) here in village S, the training has slightly increased our yields

I don’t think classic QuIP does this. In classic QuIP, all we can do is this:

… which can then be combined into one map.

This is a bit of a fudge, because the meaning of “increased yields” changes depending on how you look at it. It isn’t clear whether it means “at least some increase, no matter how small” or “some average-ish increase”.

The extension I suggest here allows us to explicitly encode the degree of a relationship:

We have discussed this before, perhaps using a notation like “strength=.3” instead of “weaker”.

Here we have two separate causal fragments from separate sources which we have combined afterwards using the “shared consequence” rule. The important (implied) step was first to recode this

as this

We still have propositions, and causal claims about simple links between them. But in the aggregated map we have combined two propositions into one, creating something like a proto-variable. This trivial-seeming trick allows us to combine maps on propositions which are in a sense common, but differ by degree, without losing the information about the degree.
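
A minimal sketch of what such an encoding might look like, following the “strength=.3” idea above (the field names and numbers are illustrative, not actual QuIP codes):

```python
from dataclasses import dataclass

@dataclass
class Link:
    """One causal claim from one source, with an explicit degree."""
    influence: str
    consequence: str
    strength: float   # e.g. 1.0 for "doubled", 0.3 for "slightly increased"
    source: str

# Two fragments combined on the shared (recoded) consequence, keeping the degree.
links = [
    Link("new seeds", "yields increased", strength=1.0, source="Respondent P"),
    Link("training",  "yields increased", strength=0.3, source="Respondent R"),
]

for link in links:
    print(f"{link.influence} -> {link.consequence} "
          f"(strength={link.strength}, {link.source})")
```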

30.1.3 Extension 3a: allowing negative gradations

If we are going to go along with extension 3, there is no reason not to allow negative strengths too.

So we can encode this

(Respondent P:) here in village Q, the new seeds have doubled our yields

(Respondent R:) here in village S, the weather has made our yields slightly worse

as this:

This extension does however highlight the problem of the meaning of an aggregated item which represents “the same” item in several original sources; we don’t in general even know whether to phrase it as “increased yields” or “decreased yields”. We are really only justified in saying “changed yields”. We should not be trying to make an actual statement about any change in the total yields amongst many farmers. We don’t have enough of the right kind of data for that.
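
Continuing the same sketch, negative strengths (again, illustrative values only) would encode worsening influences, and the shared node is deliberately labelled “yields changed” rather than “yields increased”:

```python
from dataclasses import dataclass

@dataclass
class Link:
    influence: str
    consequence: str
    strength: float   # now allowed to be negative
    source: str

# The aggregate node is "yields changed": we only know the direction per link.
links = [
    Link("new seeds", "yields changed", strength=1.0,  source="Respondent P"),
    Link("weather",   "yields changed", strength=-0.3, source="Respondent R"),
]

for link in links:
    direction = "increased" if link.strength > 0 else "decreased"
    print(f"{link.influence}: yields {direction} "
          f"(degree {abs(link.strength)}, {link.source})")
```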

30.2 What these extensions still don’t do

This model allows for background causal maps in which all influences are of an additive (or subtractive) nature. Things make other things higher, or better, or worse, etc., perhaps more or less likely, and that’s it. There is no notion of the shape of the influence and (more importantly) there is no notion of any kind of interaction between multiple influences. So, for example, there is no room for the difference between AND and OR (see below), and there is no room for necessary or sufficient conditions.

Without wanting to, we have ended up with background theories like those common in natural science: separate, additive influences between numeric variables. This is because we wanted to keep things simple, and these kinds of models have emerged over millennia to fit that bill.

I will finish by pointing out a few of the things we can’t do.

Other vegetables don’t grow in our locality since the soil is swampy. The government agents particularly kebele principals gave support through distributing vegetables and fruits for planting. But we tried it and it couldn’t grow in our locality but it was good for people in other kebeles.

This is an explicit AND claim (you have to have seeds AND good soil). Or it’s an explicit claim that good soil is necessary but not sufficient.

30.2.1 Can’t distinguish between AND and OR

I don’t think this is a big deal. AND claims and OR claims do appear in QuIP transcripts, but not often enough to warrant a lot of attention to the difference between the two. For example, in the cow-and-feedstuff example below, it is probably enough to note that both the cow and the feedstuff have some kind of a PLUS influence on milk (though in fact the influence of one is not independent of the influence of the other).

I got the cow, and I also got the feedstuff, which I couldn’t have afforded. Thanks to both, my family now has milk.

In the example of the cow and the feedstuff, the underlying causal map is based on a conjunction; E is true if and only if B is true and C is true. There is no way we could deduce this underlying map from this single observation, and there is no way the respondent could either. The respondent is gifting us the implied underlying causal knowledge, just as they did in the simpler version of the coding rules, above. The two propositions which they report to us do not include anything about, for example, cases in which there was a cow but no feedstuff; but the implied causal map does.

This is the subtle difference from the case below: here too, all three propositions are true. As in the case above, the propositions only say that B and C are true, and E is true, and that if B and C hadn’t been true, E wouldn’t be true. But the underlying maps which we infer from the narrative and the context give us the additional information which distinguishes between AND and OR: what would happen if B were true and C false, or vice versa.

We had no clean water, and then two different NGOs came and dug wells! We’d only really need one of them, but anyway we are happy now.
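
A sketch of the distinction this simple model cannot express (the variable names are invented for illustration): the cow/feedstuff story implies a conjunction, the two-wells story a disjunction, and the observed case where both influences are present cannot tell them apart.

```python
def and_map(cow: bool, feedstuff: bool) -> bool:
    """Underlying map implied by the cow/feedstuff story: milk needs both."""
    return cow and feedstuff

def or_map(well_1: bool, well_2: bool) -> bool:
    """Underlying map implied by the two-wells story: either well is enough."""
    return well_1 or well_2

# The observed case (both influences present) is identical under AND and OR ...
assert and_map(True, True) == or_map(True, True) == True
# ... only the counterfactual cases, supplied by background knowledge, differ.
assert and_map(True, False) is False
assert or_map(True, False) is True
```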

30.2.2 Necessary / sufficient conditions

Reminder:

  • a report of a necessary condition being fulfilled says: B happened, E happened, and B is a necessary cause of E, which means that E couldn’t have happened without B. Or: if B doesn’t happen, E doesn’t happen, but I don’t know what happens if B does happen.
  • a report of a sufficient condition being fulfilled says: B happened, E happened, and B is a sufficient cause of E, which means that NOT-E couldn’t have happened without NOT-B. Or: if B does happen, E does happen, but I don’t know what happens if B doesn’t happen. So in a sense a sufficient claim, like a necessary claim, is weaker than an ordinary causal claim.

We could argue: we can’t encode this kind of information, because a respondent can’t deduce something as advanced as a necessary or sufficient condition on the basis of a single story. They have to have more information, e.g. more observations, to be sure of this claim.

But we can say the same about ordinary causal claims too: they can’t be justified on the basis of a single observation or report either.

So we could code like this: B happened, and E happened, and B is a necessary cause of E. We encode the necessary/sufficient information not in the way we encode the propositions but in the way we encode the background causal information. As encoding a causal link always implicitly involves encoding the respondent’s background causal knowledge, why shouldn’t that knowledge include necessary or sufficient links as well as ordinary causal links?
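
A sketch of that coding choice (field names are invented, not QuIP’s): the observed propositions stay as simple statements, and the necessary/sufficient distinction lives in the background link.

```python
from dataclasses import dataclass

@dataclass
class BackgroundLink:
    """The respondent's implied causal knowledge about B and E."""
    influence: str    # B
    consequence: str  # E
    link_type: str    # "ordinary", "necessary" or "sufficient"

# The reported propositions: just that B and E happened.
propositions = ["got the cow (B)", "family has milk (E)"]

# The background causal information, coded separately from the propositions.
background = BackgroundLink(influence="got the cow (B)",
                            consequence="family has milk (E)",
                            link_type="necessary")

print(propositions)
print(background)
```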

The biggest problem I have with actually coding these links is that whether you think of a condition as necessary or sufficient depends on the contrast you happen to be thinking of.

We really wanted milk. We have the feedstuff and an empty shed, we just didn’t have the cow. The cow was necessary, the missing piece. No other kind of help (e.g. livelihood training) would have done this.

or

We really wanted milk. We have the feedstuff and an empty shed, we just didn’t have the cow. The cow was sufficient. There are other kinds of help (e.g. deliveries of milk to the door) which would have done this; but the cow was enough.