Illogical frameworks, composite results, and logframe bloat.
Logframe bloat causes headaches, tiredness and indigestion in M&E staff. This post is about one cause of it.
Sometimes I have to help organisations with their existing results frameworks or logframes, and I am struck by how often I see, below a composite result, what one could call “redundant results”: subsidiary results which are supposed to contribute to the higher, composite result but which are in fact just parts of its definition.
For example, suppose: “Result 1: increased awareness of HIV amongst schoolchildren and their parents”, which we could call a composite result, has below it two subsidiary results: “Result 1.1: increased awareness of HIV amongst schoolchildren” and “Result 1.2: increased awareness of HIV amongst parents”.
Does this kind of combination of results strike anyone else as weird? 1.1 and 1.2 are redundant, aren’t they? Can’t we just strike the two lower-level results? Leaving them in is a major cause of logframe bloat.
Manuals like the USAID TIPS series correctly teach that subsidiary results have to be logically independent of their parent result: they can be measured independently of it, and collectively they can causally contribute to it. In the example, by contrast, 1.1 and 1.2 together are the very same thing as 1: taken together, they logically imply it and are implied by it. If A and B together just are C, they cannot cause it. It can’t be part of our theory of change to achieve C by doing C.
There are many occasions where we just have to have composite results: when we are addressing changes in related but different groups of stakeholders, or geographical areas, or minority and majority groups, etc. Or where we are aiming for a bunch or series of related but different products, achievements, regulations adopted, or whatever. That’s OK. But then isn’t it always a mistake to add a redundant level of superfluous results below such a composite result?
Sure, one can (correctly) respond “oh this just goes to show the fatal flaws in this kind of sequential, deterministic program design” but that doesn’t help the tens of thousands of organisations who are bound by contract to exactly these kinds of designs, and are going to be bound by similar contracts for the foreseeable future too.
This leads to a bunch of other issues.
First, unfortunately, there are many different ways in which subsidiary results can fail to be logically independent of the parent result. Here are some possibilities.
Case A, exact overlap: the subsidiary results overlap exactly with the parent. In this case we can just delete the redundant lower layer, as discussed above.
Case B: the parent can include content which is not covered in the subsidiary results. So if we change Result 1 to “increased awareness of HIV amongst educational stakeholders”, 1.1 and 1.2 are still part of the definition of 1, but we have left out teachers, principals, the education authorities etc. Now, do we assume that the subsidiary results (or any other results elsewhere in the framework) can make a causal contribution to this additional coverage in the parent? There are two cases:
If no, we just have superfluous content in the parent which we are not programming to change and which should just be deleted (and then the subsidiary level is entirely redundant and we can delete that too).
If yes, for example because we believe the children and their parents will influence the other stakeholders, we have got a bit of a mess and we should redesign this part of the framework.
Case C: the subsidiary results can include content which is not covered in the parent. So if we change Result 1.1 to “increased awareness of STDs amongst schoolchildren”, 1.1 is still part of the definition of 1, but it also includes material which is not covered in 1. Now, do we assume that this additional content in the subsidiary results can make a causal contribution to the parent (or to any other results in the framework)? Again, there are two cases:
If no, we just have superfluous content in the subsidiary results which does not lead to higher-level change and which should probably just be deleted (and then the subsidiary level is entirely redundant and we can delete that too).
If yes, for example because we really believe the children’s knowledge of other STDs will influence or reinforce their or other stakeholders’ awareness of HIV, again we have got a bit of a mess and we should redesign this part of the framework.
Of course various combinations of case B and case C are possible too.
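The three cases, and their mixtures, amount to simple set comparisons. Here is an illustrative sketch (my own framing, not from any M&E standard): represent each result’s coverage as a set of stakeholder groups, and compare the parent with the union of its subsidiaries.

```python
# Illustrative sketch: model each result's coverage as a set of
# stakeholder groups and classify how the subsidiary level relates
# to its parent. The function and group names are hypothetical.

def classify(parent: set, subsidiaries: list) -> str:
    """Compare a parent result's coverage with the union of its subsidiaries."""
    combined = set().union(*subsidiaries)
    if combined == parent:
        return "Case A: exact overlap - the subsidiary level is redundant"
    if combined < parent:        # proper subset: parent covers more
        return "Case B: parent includes content the subsidiaries do not cover"
    if combined > parent:        # proper superset: subsidiaries cover more
        return "Case C: subsidiaries include content the parent does not cover"
    return "Mixed: a combination of cases B and C"

# The HIV-awareness example from the text:
parent = {"schoolchildren", "parents"}
subs = [{"schoolchildren"}, {"parents"}]
print(classify(parent, subs))  # Case A: the lower layer adds nothing
```

The point of the sketch is that only logically independent subsidiaries escape all four branches; anything caught by the comparison is definitional overlap, not a causal contribution.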
Second, what are we going to do about the indicators? Composite results need composite indicators. But donors want commitments to simple targets for simple indicators. So for a composite result we have to both define a set of subsidiary indicators for each of the dimensions of the result and specify how to combine them into a single indicator.
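As a minimal sketch of what “specify how to combine them” might mean, assuming a weighted average is the chosen combination rule (the scores, weights, and group names below are all hypothetical):

```python
# Hedged sketch: one common way to combine subsidiary indicators into a
# single composite indicator is a weighted average. The weighting scheme
# would need to be agreed with the donor up front.

def composite_indicator(scores: dict, weights: dict) -> float:
    """Weighted average of subsidiary indicator scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[group] * weights[group] for group in scores)

# e.g. awareness scores (% of correct answers) per stakeholder group
scores = {"schoolchildren": 72.0, "parents": 58.0}
weights = {"schoolchildren": 0.5, "parents": 0.5}
print(composite_indicator(scores, weights))  # 65.0
```

Note that the single number hides exactly the information the subsidiary indicators carry, which is why both levels have to be reported.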
I will look at composite indicators in a subsequent post.