Crowd-sourced evaluation - the steps.


Privacy, consent etc.

At the very least, it has to be clear what is anonymous and what is not, what the data can be used for, and, if the client makes any part of the process compulsory for staff, which parts are compulsory and which are not.

Ideally any evaluation will have an evaluation ombudsperson: someone in the know but outside the process, with the ability to address any ethical issues.

If the evaluator plans to link answers in one part of the process with answers from the same individual from another part of the process (or even from other data sources like HR data), this needs to be made clear.
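If such linking is done, one common way to do it without storing identities is pseudonymisation. Here is a minimal, purely illustrative sketch in Python (the salt value and field names are hypothetical, not part of the Focus Doc process): each email address is replaced by a salted hash, so records about the same individual can be joined across datasets without the name being readable.

```python
# Illustrative sketch only: link one individual's answers across data
# sources via a salted hash, so records join without exposing identity.
import hashlib

SALT = "evaluation-2024"  # hypothetical secret, kept by the evaluator only


def pseudonym(email):
    """Stable pseudonymous id for one individual across datasets."""
    return hashlib.sha256((SALT + email.lower()).encode()).hexdigest()[:12]


survey_row = {"id": pseudonym("ana@example.org"), "q1": "agree"}
hr_row = {"id": pseudonym("Ana@example.org"), "role": "field staff"}
assert survey_row["id"] == hr_row["id"]  # same person links across sources
```

The point of the salt is that anyone without it cannot rebuild the mapping from names to ids, which is exactly the guarantee contributors need to be told about.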

Structuring the doc

First, it is necessary to push back gently against the implicit requirement to answer so many evaluation questions and sub-questions, and to suggest breaking questions down into sub-questions only where there is a real need to do so (for example where the positions of women and men obviously differ widely and could be very important, or where the sub-questions are non-negotiable for the donor).

We need to encourage participants to report differences where they are significant and relevant.

The outline for the actual final report will probably include some main headings, like Introduction, which are not included in the online Focus Doc; however, it is also possible to crowd-source these sections too. (After all, the contributing experts working together can certainly give a better outline of the project and its history than the evaluator can.)

The headings and subheadings are expressed in the form of questions - each section and subsection will aim to answer the evaluation question which heads it.

Findings and Recommendations Sections

Often we think of this kind of structure:

  • Findings
    • Subhead A
    • Subhead B
  • Conclusions
    • Summary of both subheads
    • (maybe summary of subhead A)
  • Recommendations
    • Summary of both subheads
    • (maybe summary of subhead A)

The detail doesn’t matter. The point is that we don’t usually see recommendations entering the game until the end: general evidence and recommendations are treated differently. Findings are gathered early on and then synthesised into Conclusions. Recommendations are supposed to appear from nowhere after the completion of the Findings section, the spontaneous product of the evaluator’s mysterious expertise. Or they are partly a synthesis of stakeholders’ recommendations, but then shouldn’t those already be covered in the Findings section?

Instead, I would think of the evaluation process as synthesising general evidence in parallel with its normative counterparts, such as suggestions and lessons learned. The two threads should interact with each other right across whatever structure we give to the evaluation questions and, correspondingly, to the report.

Deciding on contributor groups, privacy etc.

There are lots of possibilities here.

Perhaps the simplest way is to keep all of the Focus Doc and the process private to a single group of contributors.

But it is also possible to define different groups of contributors who can

  • be invited in stages, so that beneficiaries and field staff get a chance to have their say before management are invited.
  • if necessary, be given access to some content only for specific contributor groups (“access control”).
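For illustration only, staged invitations and group-based visibility could be modelled as simple data like this (the group and section names are invented for the example and do not come from any particular platform):

```python
# Illustrative sketch: contributor groups with staged invitations and
# group-based access control over Focus Doc sections.
from dataclasses import dataclass, field


@dataclass
class ContributorGroup:
    name: str
    invite_stage: int                           # 1 = invited first, 2 = later
    can_see: set = field(default_factory=set)   # section ids this group may read


groups = [
    ContributorGroup("beneficiaries", 1, {"relevance", "outcomes"}),
    ContributorGroup("field_staff",   1, {"relevance", "outcomes", "delivery"}),
    ContributorGroup("management",    2, {"relevance", "outcomes", "delivery", "finance"}),
]


def invitees(stage):
    """Groups to invite at a given stage of the live phase."""
    return [g.name for g in groups if g.invite_stage == stage]


def readers(section):
    """Groups allowed to read a given Focus Doc section."""
    return [g.name for g in groups if section in g.can_see]


print(invitees(1))         # ['beneficiaries', 'field_staff']
print(readers("finance"))  # ['management']
```

The design choice to encode visibility per group, rather than per person, matches the idea above: it is groups, not individuals, that are invited in stages and given differing access.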

Posting the preliminary questions and answers

  • The questions themselves may be a mix of mandatory evaluation questions and evaluation questions posed by the stakeholders themselves at the beginning of or during the evaluation process.
  • They are usually posted by the evaluator in the preparation phase, so that the Focus Doc already has an outline and some content.

Stakeholder visits

When using a Focus Doc, face-to-face contact is more important than ever, especially to counterbalance and validate the online process.

One possible procedure is this: The evaluator visits the client for 3-8 days early on in the evaluation, in order to meet the different stakeholder groups.

Kick-off meeting

Ideally these stakeholder visits would be followed up by one big meeting of 1-2 hours in which a good number of all the different stakeholders gather together: I share first impressions, explain how we will be collaborating online and formally launch the online collaboration.

If a prize is to be awarded for the most constructive contributor, the details should be agreed and published.

Live phase

Launch and invitations

Now we are ready to invite stakeholders and other key informants (“contributors”) to contribute to a “crowd-sourced evaluation report” or “Focus Doc”.

Invitation mail

You don’t have to contribute to all the topics, just the ones where you have something to say. And the contributor who gets the most “likes” from other contributors will win an iPad mini! (I am donating this prize because this procedure should speed up the process for me as well as for you.)

We are writing to you early because we are sure you won’t feel intimidated by being among the first to make comments.

The discussion is private but you can register using the email address we sent this mail to (or a connected social media account).

We value your real opinions. We guarantee the opinions you express will not influence any future interactions you may have with Tunza. So please speak up and say what you really think. We might quote what you say in our evaluation report, but we promise not to write your name.

Tips on taking part

Here are some tips on taking part - you can read them again at the site.

  • When you log in, please look at the list of different topics and replies and click on any which interest you.

  • You can comment on the whole topic by clicking “reply” at the bottom of the other replies, or you can reply to an individual comment by clicking the pale blue “reply” button at the end of the comment.

  • You can “like” topics and people’s replies by clicking on the “heart” icon.

  • Please try to write clearly and briefly! Provocative replies are welcome just as much as attempts to find consensus.

  • Your reply will appear at the bottom of the page and also attached to the comment you replied to.

  • Please write at least five replies and “like” at least ten replies. Please add evidence for your views wherever you can!

  • The contributor who gets the most “likes” from other people (and has written at least five contributions and “liked” at least ten other contributions) wins an iPad mini!

  • If you wish, you can also start a new topic with a new question for the other contributors.

  • If you have any questions, you can write to me, the external evaluator, directly at steve@promente.org; if you prefer, you can send me a completely anonymous mail via http://www.sendanonymousemail.net/

Identify “evaluation champions”?

There should be at least one champion representing the evaluation commissioner, and ideally one for each contributor group.

Asking and answering

  • Evaluation questions may be refined, split or reformulated during the process according to the suggestions of the stakeholders and moderated by the evaluator.

During this stage, the evaluator will usually take on an active role as moderator: challenging “received opinion”, asking for evidence, encouraging responses from under-represented groups, explaining the process, etc.

Gathering more evidence

  • Where evidence is missing to answer the questions, the evaluator and/or the stakeholders themselves can be tasked with gathering additional evidence through small pieces of in-depth research (see examples below). When ready, the evidence is added to the discussion.

Fresh evidence to answer tricky questions: how the evaluator can help

This could start before the live phase, addressing evidence gaps noticed by the evaluator in the inception phase. But things get especially interesting when contributors themselves identify evidence gaps.

If at all possible, the evaluator (or other contributors) will post the new evidence, the results of additional investigations and analyses, in the Focus Doc, attached to the questions it addresses, before the end of the live phase, so that the contributors have time to digest it: adding votes and downvotes, comments etc.

The evaluator can address the contributors online like this:

You may find some evaluation questions particularly interesting or important, but contributors are just exchanging opinions because the evidence needed to answer them is lacking.

What can I offer you to get more real, new evidence? Here are a few ideas. You can also make your own suggestions (though I can’t promise to act on them all).

  • benchmarking analysis against similar organisation(s)
  • in-depth, qualitative study to look at say, why something works or doesn’t work and for whom and how
  • analysis of existing questionnaires or other spreadsheet-type data, particularly with a view to estimating effects/impact.
  • where appropriate, and where baseline data exists (e.g. on outcome measures) but an endline has not been conducted, conduct an endline survey and analyse the changes
  • an online survey for a specific purpose, e.g. to gather feedback from users, retrospective assessment of change, etc.
  • a paper-and-pencil questionnaire survey with user groups
  • focus groups e.g. with users or partners to get insight into what the group “thinks” - about the relevance of a programme, about factors like hopes and fears, suspicions and motivation.
  • analysis of social networks, either via online social media or via analysis of data on interactions within a group

Bringing in additional experts

Most evaluations would benefit from specific expertise in areas like

  • group dynamics
  • subject-matter expertise, from forestry to sanitation
  • PR and marketing
  • beneficiary communications
  • fundraising
  • statistics
  • social science e.g. conflict, peacebuilding
  • gender analysis
  • political science

Most evaluators, let’s face it, aren’t experts in all the areas which could be useful for the evaluation.

Synthesis Phase

  • The evaluator synthesises the discussion into the draft evaluation report, adding any additional research carried out as an annex.
  • (If required, the draft can again be reviewed by some or all groups of stakeholders, using familiar tools like “tracked changes” in offline or online docs. However, this is no longer part of the Focus Doc process.)

The prize

The most productive contributor (probably the person whose contributions were “liked” by the most people) is awarded a significant prize (e.g. an iPad or similar), which I will pay for.
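The eligibility rule stated in the invitation (most “likes” received, among contributors who wrote at least five replies and gave at least ten “likes”) can be sketched in a few lines. This is purely illustrative; the function and field names are invented, not taken from any platform:

```python
# Illustrative sketch of the prize rule: most "likes" received, among
# contributors with >= 5 replies written and >= 10 "likes" given.
def pick_winner(contributors):
    """Return the winning contributor dict, or None if nobody is eligible."""
    eligible = [c for c in contributors
                if c["replies_written"] >= 5 and c["likes_given"] >= 10]
    if not eligible:
        return None
    return max(eligible, key=lambda c: c["likes_received"])


contributors = [
    {"name": "Amina", "likes_received": 24, "replies_written": 7, "likes_given": 12},
    {"name": "Boris", "likes_received": 40, "replies_written": 3, "likes_given": 15},
    {"name": "Clara", "likes_received": 18, "replies_written": 9, "likes_given": 11},
]
print(pick_winner(contributors)["name"])  # Amina (Boris wrote too few replies)
```

Note that the activity thresholds matter: the contributor with the most “likes” overall is not necessarily eligible, which nudges people to participate rather than just post one popular comment.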
