Progress Markers, Boundary Partners and Item-Response Theory

There has recently been a discussion thread on the Outcome Mapping mailing list about progress markers and boundary partners. Briefly, a development project plans progress for/with the key people with whom it directly cooperates, called “boundary partners”, in terms of their progress towards achieving key outcomes called “progress markers”, which are grouped into “expect to see”, “like to see” and “love to see” items. Often these items are visualised as steps up a ladder.

One contributor pointed out that one problem with these metaphors is that a boundary partner can reach a higher point before reaching a lower one. Also, some progress markers which are probabilistic or repetitive in nature might be met one month, not met the next, then met again the month after. So, for example, our boundary partners might start off reaching all the “expect to see” goals like coming to meetings, then stop coming to meetings, then start again, while concurrently (perhaps surprisingly) achieving some of the higher progress markers.

These examples seem to break the ladder metaphor for how we visualise the progress markers. And in fact something very similar applies to the levels of a logical framework or results framework.

My feeling is that the ladder metaphor for progress markers is unhelpful not because it is a ladder but because it implies that every boundary partner (or group of boundary partners) is always at one particular point on the ladder at any one time. So it cannot be improved by other spatial metaphors (journeys, etc.) with the same implication.

I would like to suggest one kind of model which does fit this situation pretty well: the model of students succeeding or failing at a series of more or less difficult exam questions, for example a reading test. Suppose a class of children learning to read are given a list of progressively more difficult words to read aloud. Most likely, the weaker students will get some of the easier words right but few of the more difficult ones, whereas the stronger students will get all of the easier ones right and some of the more difficult ones; but there will always be exceptions and surprises. So the likelihood that student X gets question Y right depends on both the student (whom we can assess as being on a scale from weak to strong) and the question (which we judge to be on a scale from easy to difficult).
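To make the analogy concrete: in the Rasch model, the probability that a student gets a question right is a simple logistic function of the gap between the student’s ability and the question’s difficulty. A minimal sketch in Python (the numbers are purely illustrative, not from any real test):

```python
import math

def p_correct(ability, difficulty):
    """Rasch model: probability that a person at a given ability
    succeeds on an item of a given difficulty (both on a logit scale)."""
    return 1 / (1 + math.exp(-(ability - difficulty)))

# A weakish student (ability -1) on an easy word (difficulty -2)
# versus a hard word (difficulty 2):
print(round(p_correct(-1, -2), 2))  # 0.73 - likely to get it right
print(round(p_correct(-1, 2), 2))   # 0.05 - very likely to get it wrong
```

The same student has a high chance on the easy item and a low chance on the hard one, but neither outcome is certain, which is exactly what allows for the “exceptions and surprises” above.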

From this perspective, the progress markers can indeed be put on a scale from “expect to see” through “like to see” to “love to see”, according to how likely it is that an average boundary partner has achieved, or is achieving, them at some point in time. And we can say that the progress of the boundary partners towards the outcome challenge is some kind of combination of how well they are doing at achieving each of the progress markers - just as we would judge a child’s reading ability by combining their scores on a whole bunch of reading words, perhaps giving a higher score for harder words.

One advantage of this approach is that the progress of boundary partners can still, after all, be expressed as progress up a ladder. We could say that boundary partner X is around the level of progress marker Y, but understand this in detail to mean that they are quite likely to be achieving Y, might be achieving some progress markers higher up the ladder, and should almost certainly be achieving the markers below. The position on the ladder is a summary of how well they are doing, or have been doing, at all of these progress markers.

Of course in some situations, partner X might not (yet) be in a position to attempt some high-up marker Y1. Or they might be far beyond marker Y2 which they long ago finished with. But we can still ask ourselves how likely they would be to achieve each one if they were in fact faced with it.

This kind of test situation is the focus of both Rasch models and Item-Response Theory (IRT), two related approaches in educational science. Both have well-developed mathematical ways of dealing with these situations, and they give us additional food for thought. For example, we don’t need to decide in advance which are the “expect”, “like” and “love to see” markers, because we could just wait and see how easy our partners find each of them to achieve: from a bunch of test data (the answers by the students to the questions) we can use Rasch theory afterwards to work out not only where each student goes on a scale from weak to strong but also where each question goes on a scale from easy to hard. Also, IRT opens up the possibility that a bunch of progress markers cannot be put on one single dimension or ladder of difficulty, but that two or more are necessary - so, for example, some boundary partners might be doing well on progress markers connected with democratisation, but not so well on those which have more to do with communication skills.
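As a rough illustration of how difficulties can be recovered from the data afterwards, here is a toy Python sketch that fits a Rasch model to a small response matrix by gradient ascent on the likelihood. The data are invented, and a real analysis would use a dedicated IRT package with more careful estimation; this is only meant to show the idea:

```python
import math

def fit_rasch(responses, steps=2000, lr=0.05):
    """Toy joint maximum-likelihood fit of a Rasch model by gradient
    ascent. responses[s][i] is 1 if person s achieved item i, else 0.
    Returns (abilities, difficulties) on a shared logit scale."""
    n_people, n_items = len(responses), len(responses[0])
    ability = [0.0] * n_people
    difficulty = [0.0] * n_items
    for _ in range(steps):
        for s in range(n_people):
            for i in range(n_items):
                p = 1 / (1 + math.exp(-(ability[s] - difficulty[i])))
                grad = responses[s][i] - p  # d(log-likelihood)/d(ability)
                ability[s] += lr * grad
                difficulty[i] -= lr * grad
        # Anchor the scale so that mean difficulty is zero.
        m = sum(difficulty) / n_items
        difficulty = [d - m for d in difficulty]
        ability = [a - m for a in ability]
    return ability, difficulty

# Invented data: 4 boundary partners x 3 progress markers, ordered
# "expect to see", "like to see", "love to see".
data = [[1, 0, 0],
        [1, 1, 0],
        [1, 1, 0],
        [1, 1, 1]]
abilities, difficulties = fit_rasch(data)
# The fitted difficulties come out in increasing order, recovering the
# expect / like / love ordering from the response data alone.
```

Nothing in the fitting procedure is told which marker is which; the ordering of the difficulties emerges from how often each marker was achieved, which is the point made above about not needing to classify the markers in advance.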

I would be interested to hear whether this fits with the way you look at progress markers, or indeed with any other way of conceptualising progress in terms of achieving progressively more difficult targets.
