Theory Maker!
Jul 27, 2020
r & software & theorymaker & evaluation & mande
“A theory of change for our project!” “Great, but so fiddly to actually draw.” Theory Maker is designed to meet this need. Theory Maker is a
Inventory & analysis of small conservation grants, C&W Africa - Powell & Mesbach! Lots of charts!
Jul 27, 2020
r & dataviz & conservation & IUCN & Africa & reproducibleResearch & evaluation
Here it is at last. This was an interesting job. We visited three countries, did a lot of interesting interviews and a lot of data analysis. This
Progress on the Causal Map app
Mar 30, 2020
evaluation & theorymaker & causal-mapping
I’ve been working more or less full-time on our app for causal mapping, now called simply “Causal Map”, for over a year now. You can find out more on
Articles and presentations related to Causal Maps and Theorymaker
Mar 26, 2020
evaluation & theorymaker & visualisation
Here are some things I’ve already written as steps on the road to understanding causal mapping. If you’ve read any of these, you might be interested
A starter kit for reproducible research with R
Jan 11, 2019
R & reproducibleResearch & evaluation
This accompanies my draft short article for the UKES Bulletin: A reproducible workflow for evaluation reports. Here are just two files which
A reproducible workflow for evaluation reports
Jan 11, 2019
R & reproducibleResearch & research & evaluation
This is a draft of a short article for the UKES Bulletin. Most evaluators have to produce at least a few tables and graphics in their evaluation
Welcome to the Wiggle Room
Aug 29, 2018
evaluation & theorymaker & visualisation
Inspired by Judea Pearl’s new “The Book of Why”, and also by Nicky Case’s work on explorables, I’ve started work on an explorable web app. Pearl
Realtime comments on a Theory of Change
May 8, 2018
evaluation
Wouldn’t it be great if, for an evaluation, you could map out the project’s theory of change and then pin the data (the answers to the evaluation
A panel on visualising Theories of Change for EES 2018?
Jan 11, 2018
evaluation & theorymaker & visualisation
I am thinking of getting together a panel presentation on this topic for the European Evaluation Society conference in Greece, 1-5 October 2018:
Peer mentoring for evaluators
Jan 7, 2018
evaluation
Peer mentoring – It is good practice for professionals who often work alone to be able to call on a mentor. I am beginning a co-mentoring
How do you explain reproducible research to clients?
Jan 3, 2018
reproducibleResearch & research & evaluation
Most of the statistics work I do now is reproducible research - this can offer a big advantage for clients, but of course that doesn’t necessarily
Links for my AEA eval2017 presentation, Washington DC
Oct 18, 2017
evaluation & research & theorymaker
AEA program listing. A short post here on this blog. The book is not an easy read, but there is an Epilogue right at the back of the book which
Beaufort and Rubrics
Jan 12, 2017
evaluation & research & outcome-mapping & social research
A quick post about the Beaufort scale for wind speed (see below) as a paradigm of a rubric. Rubrics are really important in evaluation. The Beaufort
Judea Pearl
Jun 1, 2016
evaluation & research & social research
I only just came across the work of Judea Pearl (which shows how ignorant I am, because he won the Turing Award in 2011). I think his work is
Everything should be evidence-based - if only the evidence would make up its mind already
May 18, 2016
twitter & evaluation & Theories of Change & social research
This question came up on an Evaluation mailing list and was forwarded to none other than Andrew Gelman, and for our purposes it can be boiled down
Theory Maker and markdown
Feb 8, 2016
r & software & evaluation & mande & markdown & theorymaker
Theory Maker is a free and simple web app for making diagrams of theories of change etc. I already wrote about it. Its special feature is that you
Crowd sourced evaluation: did it work?
Jan 11, 2016
mande & evaluation & crowd-sourced-evaluation
This is a quick report of what happened when I tried to “crowd-source” a recent global evaluation I did for a UN agency. An ordinary evaluation
Crowd-sourced evaluation - the steps.
Jan 10, 2016
mande & evaluation & crowd-sourced-evaluation
At the very least, it has to be clear what is anonymous, what is not, and what the data can be used for. If the client makes any part of the process
Crowd-sourced evaluation - why?
Jan 10, 2016
mande & evaluation & crowd-sourced-evaluation
Very often, evaluation TORs are so comprehensive that it is not really possible in the time available to do proper independent data gathering for
Theory Maker - technical details
Nov 12, 2015
r & software & theorymaker & evaluation & mande
Theory Maker is a free and simple web app for making diagrams of theories of change, logframes, etc. Its special feature is that you can also create
Assumptions and Context in Logic Models
Nov 1, 2014
research & evaluation & social research
How to distinguish assumptions and context in logic models? Here is one attempt. It isn’t quite right yet … A potential assumption in a logic model
Measuring resilience
Aug 10, 2014
evaluation & development
This article is a work in progress! Feel free to add a comment. There should be a PDF at this link. After the terrible earthquake in Haiti in 2010,
Does maintaining supply of water and sanitation in IDP sites after the relief phase encourage people to stay in the sites?
Oct 2, 2013
development & evaluation & reproducibleResearch
Does cutting them
Does it make sense to try to measure progress on the highest levels of a logframe?
Jun 22, 2013
development & evaluation & frameworks
Another interesting discussion on the M&ENews mailing list - does it make sense to try to measure progress on the highest levels of a logframe? A
Haiti: Request for Qualifications for research teams to conduct an impact evaluation of the Integrated Neighborhood Approach (INA)
Feb 22, 2013
evaluation & Haiti & impact & social research
Very proud and happy to see that this idea, which we developed while I was in Haiti with the IFRC, is nearing fruition: 3ie will be issuing a
The five-year plan is dead, long live the five-year plan!
Jan 14, 2013
evaluation
Interesting discussion taking place on the XCEval mailing list. Deborah Rugg, UNEG Chair and OIOS IED Director, posted selective highlights of final
IFRC Haiti Learning Conference and Evaluation Framework - main documents
Aug 10, 2012
evaluation & development & disaster & IFRC
Here they are: https://www.dropbox.com/sh/ytsmeoxg7c6fnnl/AAAqwFMHiQBKsa7MXBUNSPwLa?dl=0
Using R for classification in small-N studies
Jul 14, 2012
development & evaluation & reproducibleResearch
Rick Davies just wrote an interesting post which combined thoughts on QCA (and multi-valued QCA or mvQCA) and classification trees with thoughts on
Impact, Outcome and INUS causes
Jul 12, 2012
research & evaluation
Interesting discussion on the Outcome Mapping mailing list - here is something I just posted. This part of the discussion was about whether Outcome
Progress Markers, Boundary Partners and Item-Response Theory
Jun 8, 2012
research & evaluation
There has recently been a discussion thread on the Outcome Mapping mailing list about progress markers and boundary partners. Briefly, a development
Outcome Mapping book chapter just out
Mar 6, 2012
development & evaluation & research & outcome-mapping & social research
Powell, Steve, Joakim Molander, and Ivona Čelebičić. ‘Assessment of Outcome Mapping as a Tool for Evaluating and Monitoring Support to Civil Society
Illogical frameworks, composite results, and logframe bloat.
Nov 26, 2011
evaluation
Logframe bloat causes headaches, tiredness and indigestion in M&E staff. This post is about one cause of it. Sometimes I have to help organisations
Paradox of Evaluation Counterfactuals
Jul 15, 2011
development & evaluation
Why does the counterfactual evaluation principle seem to break down in the following case? Suppose eleven organisations each bring enough vaccines
My first Reproducible Research Compendium
Dec 14, 2010
evaluation & research & social research
I have just completed my first Reproducible Research Compendium “Analysis of the combined survey datasets from the American Red Cross Tsunami
Value and measurability in development
Oct 19, 2010
development & evaluation & research & social research
There is an interesting discussion on the xceval mailing list right now about value and measurability in development. Seems to me there are four
Google docs a good way of reaching consensus?
Sep 22, 2010
collaboration & evaluation & tech
So we have now finished our Google Doc: four people over three days, and a couple more nights afterwards. Writing it on Google Docs certainly
Three days, six people, three pages, one google doc
Sep 21, 2010
collaboration & development & evaluation & tech & social research
Just got back from Geneva where we - the four authors of the meta-evaluations of the ARC Tsunami Recovery Program - spent three days writing a