How to distinguish assumptions and context in logic models? Here is one attempt. It isn’t quite right yet …
A potential assumption in a logic model is a variable A, not under our control, e.g. tomorrow's weather in our town, together with a designated value or set of values d of that variable, e.g. fair weather with no heavy rain or storms, which significantly affects the logic model, i.e. it affects the function which predicts values of the downstream variable(s) given values of the upstream variables. So as long as the weather is fair, our sterling efforts to prepare the football finals will indeed lead to a successful event. But if the weather is unkind, the success of the match is in danger.
So A being not-d rather than d decreases the likelihood that the output variables will take their designated values even if the controllable variables take their designated values. In ordinary language, if an assumption fails to be true, then it is less likely that we will get the desired outcomes even if we do everything we planned.
So A is a kind of moderator, because it affects the effect of one set of variables on another. Though in classical statistics, using the word “moderator” usually presupposes linear functions, normal distributions etc., and we certainly don’t want that. And yes, a moderator is part of the model even if it isn’t always shown as such.
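To make the moderator idea a little more concrete, here is a minimal sketch in Python. It is only an illustration: the function name and the probabilities are invented, not taken from any real model, but it shows how an uncontrolled variable (the weather) changes the effect of what we do control (our preparation) on what we want (a successful match).

```python
# Toy sketch of an assumption as a moderator variable.
# The function and all probabilities are invented purely for illustration.

def probability_of_successful_match(preparation_done: bool, weather: str) -> float:
    """Predict the downstream variable (a successful football final) from an
    upstream variable we control (preparation) and an assumption variable we
    do not control (tomorrow's weather)."""
    if not preparation_done:
        return 0.10                     # little chance without our efforts
    if weather == "fair":               # the assumption holds its designated value d
        return 0.90
    return 0.40                         # the assumption fails: same effort, weaker effect

print(probability_of_successful_match(True, "fair"))    # 0.9
print(probability_of_successful_match(True, "stormy"))  # 0.4 - the moderator at work
```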
However, and thanks to Rick Davies for pointing this out, there are uncountably many factors, like the existence of gravity, which could be considered potential assumptions. We wouldn’t actually call them assumptions unless they are salient.
The salience of an assumption A depends on the risk it represents to the rest of the logic model: the likelihood that A will not take its designated value(s), multiplied by the negative consequences which would ensue. So decent weather is a more salient assumption in winter than in summer, because poor weather is more likely; and it is less salient if the football match is going to be held indoors anyway, as the negative consequences of poor weather are not so significant.
Of course we never know everything with certainty; in fact most of our models are clouded with uncertainty most of the time. So the salience of an assumption depends on our best guess of the negative consequences of it failing - and that guess always has to work with information which is more or less limited or sketchy.
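One hedged way of putting that calculation, sketched below in Python with invented numbers: salience is treated as an expected loss, our rough guess at the probability that the assumption fails multiplied by our rough guess at the damage if it does.

```python
# Toy sketch: the salience of an assumption as an expected loss.
# Both inputs are rough guesses made from limited, sketchy information;
# the numbers below are invented purely for illustration.

def salience(probability_of_failure: float, damage_if_failure: float) -> float:
    """Likelihood that the assumption fails, multiplied by the negative
    consequences which would ensue."""
    return probability_of_failure * damage_if_failure

# Poor weather is more likely in winter, so the weather assumption is more salient:
print(salience(probability_of_failure=0.5, damage_if_failure=0.8))  # winter, outdoors: ~0.40
print(salience(probability_of_failure=0.1, damage_if_failure=0.8))  # summer, outdoors: ~0.08
# Indoors, poor weather does little damage, so salience drops again:
print(salience(probability_of_failure=0.5, damage_if_failure=0.1))  # winter, indoors: ~0.05
```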
((Also, an assumption is an epistemological concept, and in this article I keep switching between talking about what we believe, and our level of evidence for it (our assumptions in our models), and talking about the things themselves. So, say, it is an assumption if we plan for fair weather, but the weather itself isn’t an assumption. I need to tidy up this language.))
Every variable in a logic model (like tomorrow's weather in our town) has a context within which it makes sense. We have consensus about what it means and how, in principle, to check the status of the variable. So a variable like average class size in an educational district is important for educational outcomes and can be recognised and used across a wide range of contexts. But it is not so useful in edge cases like home schooling, and might be completely inapplicable in, say, a post-disaster situation, in which children are coming and going and educational provision is ad hoc and shifting.
Some variables only make sense at a particular time, like yesterday's weather in our town or the number of visitors to the 2016 Olympics. There is no such thing as measuring them now, but we have a clear idea of what we had to do, or will have to do, to measure them. So the time is part of their context.
This is why we often encounter variables for which there is no such thing as a baseline measurement, e.g. the number of visitors to the 2016 Olympics. When logical frameworks insist on a baseline measurement, e.g. 0, in these cases, they are the cause of philosophical headaches.
In the same way, some variables only make sense in certain places. So percentage of people attending Friday prayers makes sense in some towns and cities but not in others, and certainly not at the North Pole.
So time and place and many other things can make up the context of a variable.
Data we have about the likely values of a variable and its connections with other variables also makes sense within a context and doesn’t make much sense outside the context.
Within a certain context, the existence of the variables is given. That is another way of saying, we have agreed on how to measure and record class size, pupil attainment, etc.
Recall that when we make a claim that the values of one set of variables are (at least partially) determined by the values of another, we construct a theory or a logic model. (A theory, in the weak sense of the word, is just such a claim about connections; and a logic model is a theory with two additional properties: first, we have control over some of the variables, and second, some subsets of the values of the upstream and downstream variables are designated in a certain way, i.e. we plan to input at least 10 teacher training sessions of at least a certain quality, and we hope this will lead to educational outcomes of at least a certain quantity and quality.) So every logic model contains a theory, but not vice versa.
So, when we construct a logic model or part of a logic model, the context of the whole model is the intersection of the contexts of the constituent variables. So we might know a whole load of stuff about educational outcomes, but is it useful or even relevant to education provision in a post-disaster situation?
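For readers who like to see structure, here is one possible way of writing these definitions down, as a Python sketch. All the variable names, contexts and designated values are invented; the point is only that a theory is a bare claim about connections, a logic model adds controllable variables and designated values, and the model's context is the intersection of the contexts of its constituent variables.

```python
# Toy sketch of the definitions above. The variables, contexts and designated
# values are all invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Variable:
    name: str
    context: set             # the conditions under which the variable makes sense at all

@dataclass
class Theory:
    upstream: list            # variables claimed to (partly) determine the downstream ones
    downstream: list

@dataclass
class LogicModel(Theory):     # a theory plus two extra properties
    controllable: list = field(default_factory=list)   # variables we can set ourselves
    designated: dict = field(default_factory=dict)      # the values we plan for and hope for

    def context(self) -> set:
        """The whole model only makes sense where all its variables do."""
        contexts = [v.context for v in self.upstream + self.downstream]
        return set.intersection(*contexts)

training = Variable("teacher training sessions", {"stable schooling", "my authority"})
attainment = Variable("pupil attainment", {"stable schooling", "standardised testing"})

model = LogicModel(upstream=[training], downstream=[attainment],
                   controllable=[training],
                   designated={"teacher training sessions": "at least 10, of a certain quality"})
print(model.context())   # {'stable schooling'} - where the whole model applies
```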
The theory within a logic model makes claims about how certain variables depend on others. We can have competing logic models, competing theories, about the same set of variables within a given context.
Most of a context goes without saying and we just don’t need to explain it. You can never spell out the entire context. That is just another way of saying, the further away someone is from the context, the more explaining you will have to do in order to help them understand it. Think how much explaining you would have to do about, say, class size to a being from Mars. In practice we only ever have to spell out just enough context to explain something to someone who comes from a bit outside it.
Sometimes we say that our variables vary over possibilities, from actual to counterfactual. We imagine the counterfactual states of the variables as being like glassy, hardly visible ghosts behind the actual states. Sometimes this picture is useful. But we shouldn’t forget that the variable only makes sense within a certain context or contexts. That is another way of saying, we can only have a theory when we have regularity and repetition in the real world - when we have some degree of certainty about what will lead to what, and we only get that from seeing different, actual cases which are similar to one another in some ways but different in others - for example, by being able to identify different, real school classes and having recorded that they are of different sizes, and in addition noting the connection of class size with other variables of interest. It is this variability in fact, variability across cases, which gives us the idea of a variable varying between actual values and counterfactual values, i.e. which enables us to say it could have been different, e.g. this class could have had a different number of students in it.
The context of a variable is the space within which all these different cases are found.
If the context breaks down, anything goes. Here be dragons. Like trying to measure the variable class size in a post-disaster context.
One of the killer moves in science is to turn some aspects of a context into variables.
So suppose I have a logic model for how reduced class size is associated with improved student attainment, within a given context, say: secondary schools in my education authority in the last ten years. I can rely purely on empirical data and/or I can have theory-based explanations about why this should be the case. Now, I might want to extend the context within which my model applies. I can do this by adding variables. So I can add the variable education authority. I might decide that the influence of class size on attainment is fairly constant but is stronger in authorities B, C and E than in the others. I might decide this on empirical and/or theoretical grounds (and preferably on both). So now the context has lost a distinguishing feature and has become more general: secondary schools *in my country* in the last ten years; and the feature has become a variable within the model.
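A hedged sketch of that move, with invented effect sizes: before, education authority is a fixed feature of the context; afterwards it is a variable inside the model, and the context widens accordingly.

```python
# Toy sketch of turning an aspect of context into a variable.
# The effect sizes are invented purely for illustration.

# Before: 'education authority' is part of the context, not a variable;
# the model only applies to secondary schools in my authority, last ten years.
def attainment_gain_before(class_size_reduction: int) -> float:
    return 0.02 * class_size_reduction

# After: 'education authority' is a variable inside the model, and the context
# widens to secondary schools in my country, last ten years.
def attainment_gain_after(class_size_reduction: int, authority: str) -> float:
    boost = 1.5 if authority in {"B", "C", "E"} else 1.0   # stronger effect in B, C and E
    return 0.02 * class_size_reduction * boost

print(attainment_gain_before(5))        # ~0.10
print(attainment_gain_after(5, "C"))    # ~0.15 - same reduction, stronger effect
print(attainment_gain_after(5, "A"))    # ~0.10
```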
In its messianic mode, the World Bank thinks we only need to transform a couple more aspects of context into variables and we will have no context left, just a pure, context-free model. But there is always context. Any context is always arbitrarily complicated to someone from sufficiently far away. The context we actually spell out is just enough to tell someone from a different context how to get from there to here.
So a logic model has both assumptions and context. We will nearly always identify at least a couple of salient assumptions, and our model must have a context. But the influence of the assumptions is specified - we make an estimate of the damage they might do to the outcomes if they fail. Whereas the influence of the context is, by definition, unspecified. We just know that if the context fails to hold, there be dragons.
In most logic model and logframe templates, assumptions are not additional variables but footnotes to causal links between variables: “what could threaten the if-then statement from outputs to outcomes?”. We are then often asked what risk management activities we will undertake to mitigate these risks. But first, the assumptions should be included in the model, not added as footnotes. Second, the activities we plan in order to reduce the potential threat of failed assumptions are important activities which should be part of your overall plan, i.e. your logic model, and not footnotes to it.
So we don’t need “assumptions” or “risk mitigation” columns in our logic model templates. They should be recast as variables and included in the main model.
True, we need more attention, not less, to assumptions [1]. And having a special column, as we do now, might be a good thing because it forces one’s attention to them. So yes, it might be dangerous in practice to say “we don’t need an assumptions column”. So in terms of logic model checklists, I would have to replace this checkbox:
“Have you filled in your assumptions column?”
with a checkbox like
“Have you included all salient assumptions in your model, i.e. any factors beyond your control which represent a risk of at least X to your planned successes, either because they happen (like earthquakes or foul weather) or because they fail to happen (like economic stability or fair weather)?”
But to be honest, my primary interest right now is in getting clear about the formal logic behind these strange things we call logic models, rather than worrying about how to get people to use them correctly, though of course that is important too.
I don’t think it is helpful to include much formal discussion of context within logic models. There is no formal recipe which will always tell us where there is scope for misunderstandings. But we do need to make sure that we don’t overgeneralise. We need to realise that our model only makes sense in the overlap between the contexts of its parts.
[1] Thanks again to Rick Davies for pointing this out.