
Crowd-sourced evaluation - why?

Some weaknesses in traditional evaluation approaches

Evaluation ToRs are too greedy …

Very often, evaluation ToRs are so comprehensive that it is not really possible, in the time available, to do proper independent data gathering for many or most of the research questions and sub-questions. For example, a ToR might contain eight major headings with, say, five sub-heads each: that makes around 40 sub-questions. Often, just one of these questions, e.g. “achievement of planned objectives”, can break down further into dozens of additional sub-questions, e.g. according to different planned outcomes.

So a top-level question might be, say: Sustainability (one of 8 top-level headings).

A next-level question might be, say: Institutional sustainability (one of, say, 5 kinds of sustainability: 8*5=40 questions at this level).

A level-3 question might be, say: Institutional sustainability of Outcome A1 (one of, say, 3 outcomes: 8*5*3=120 questions at this level).

What is more, the ToR may mandate or imply that answers to these questions have in turn to be broken down by key categories, for example gender, region and ethnicity.

Finally, there are frequently additional lists of “key additional questions” or “cross-cutting issues” and so on, which also have to be addressed alongside the standard questions already mentioned.
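To see how quickly this multiplies out, here is a minimal sketch in Python. The figures are simply the illustrative numbers used above (eight headings, five sub-heads, three outcomes, three breakdown categories), not numbers from any real ToR:

```python
# Illustrative arithmetic only: these figures are the example numbers
# from the text above, not from any real ToR.

top_level_headings = 8   # e.g. relevance, effectiveness, sustainability, ...
sub_heads_each = 5       # e.g. five kinds of sustainability
outcomes_each = 3        # e.g. Outcomes A1, A2, A3
key_categories = 3       # e.g. gender, region, ethnicity

level_2 = top_level_headings * sub_heads_each   # 40 questions
level_3 = level_2 * outcomes_each               # 120 questions
cells = level_3 * key_categories                # 360 separate answers to produce

print(level_2, level_3, cells)  # 40 120 360
```

Even before the extra “cross-cutting” lists are counted, the evaluator is facing hundreds of separate answers.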

… so the evaluator is restricted to balancing opinions rather than gathering fresh evidence

So this understandable desire to have a large number of evaluation questions answered means that, in practice, the evaluator is limited in the quality and depth of the independent data collection and triangulation of data sources which they are able to carry out.

In this kind of case, in order to answer so many questions, evaluators are usually limited to balancing different opinions and evidence given directly by some subset of stakeholders and key informants.

Evaluation matrix: a way to go around the houses

Typically, an evaluator will ask these stakeholders, one by one (or in small focus groups), a series of questions which do not correspond directly to the findings and recommendations sections of the evaluation report, but which can subsequently be synthesised under those headings.

The “evaluation matrix” traditionally shows how this mapping and re-mapping, from evaluation questions to pieces of evidence and back again, is carried out: how each of the evaluation questions is answered by gathering and analysing different data from different sources. So at the start of the evaluation, the evaluation questions are translated into applications of specific tools with specific sources; then at the end of the evaluation process, all this data is re-combined and translated back into the structure of the evaluation report’s Findings, Conclusions and Recommendations sections.
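As a toy illustration of this double mapping (a sketch in Python; the questions, tools and sources are invented, and this is not any official matrix template):

```python
# A toy evaluation matrix: each row links a report section to a question
# and to the tools and sources that will answer it. All entries invented.

evaluation_matrix = [
    {
        "report_section": "Findings: Sustainability",
        "question": "Is Outcome A1 institutionally sustainable?",
        "tools": ["key informant interviews", "document review"],
        "sources": ["ministry staff", "project reports"],
    },
    {
        "report_section": "Findings: Effectiveness",
        "question": "Were planned objectives achieved?",
        "tools": ["survey", "focus groups"],
        "sources": ["beneficiaries", "field staff"],
    },
]

# Forward mapping (start of evaluation): question -> tools and sources.
for row in evaluation_matrix:
    print(row["question"], "->", row["tools"], row["sources"])

# Reverse mapping (end of evaluation): re-combine evidence under report headings.
by_section = {}
for row in evaluation_matrix:
    by_section.setdefault(row["report_section"], []).append(row["question"])
print(by_section)
```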

This is a good approach and ultimately it is in the background of Focus Doc processes too. But it is pretty tricky to manage.

How does Focus Doc address these weaknesses? Advantages of the Focus Doc approach

Feeding the Greedy ToRs

The Focus Doc approach leaves more time for additional primary evidence, whether brought in by contributors or freshly gathered for the evaluation, because less time is needed to balance secondary opinions from contributors.

This procedure would certainly not replace traditional evaluation procedures; the evaluator will still be conducting normal interviews face-to-face and remotely, but he or she will not need to do so many, and can target interviews where the ongoing online discussion reveals a particular need.

Making the synthesis explicit

The Focus Doc process does not suggest that the right way to answer evaluation questions is to take some kind of average of expert opinion on each question. Quite the contrary.

Focus Docs is not about finding an average or a consensus. The Focus Docs process aims to enable rich discussion and to allow different opinions to evolve around the evaluation questions, like a kind of structured focus group discussion unfolding over a few days.

Focus Docs is also not about relying only on opinion. Instead, the process encourages contributors to provide, weigh up and identify gaps in different sources of external evidence: the bedrock on which their opinions are, hopefully, built.

Direct, not around the houses

So, in this case, why not ask these stakeholders to contribute directly to the report?

Why not formulate the main findings and recommendations report headings as questions and ask key stakeholders to collaboratively provide answers to them directly (as well as improving the questions)?
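In practical terms, the direct route is little more than turning each report heading into an open question with space for answers and links to evidence. A hypothetical sketch (the headings and field names are invented for illustration, not taken from any real Focus Doc):

```python
# Hypothetical sketch: turn report headings into a Focus Doc skeleton.
# Headings and field names are invented for illustration.

report_headings = [
    "Sustainability of Outcome A1",
    "Achievement of planned objectives",
    "Recommendations on institutional arrangements",
]

def as_focus_doc(headings):
    """Each heading becomes an open question, with space for
    contributors' answers and for links to supporting evidence."""
    return [
        {
            "question": f"What is your assessment of: {heading}?",
            "answers": [],          # contributors write directly here
            "evidence_links": [],   # contributors attach or cite evidence here
        }
        for heading in headings
    ]

for item in as_focus_doc(report_headings):
    print(item["question"])
```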

Some other advantages of this procedure:

  • facilitates breaking down answers by type of respondent e.g. by gender
  • allows a broad discussion between all respondents almost simultaneously, e.g. a youth leader can comment on what the programme director said
  • allows respondents to quote and/or link to additional evidence which has not been covered so far
  • allows respondents to flag up where evidence seems to be missing, i.e. where the evaluator can help with some focussed additional enquiries or analyses
  • allows the evaluator to focus on issues which are obviously critical and/or controversial for contributors
  • makes it easy to bring in subject experts to contribute additional targeted questions, comments and interpretation
  • allows social network analysis of the responses (e.g. who responded to whom, positively and negatively; whether some people were more or less isolated; see the sketch after this list)
  • importantly, frees up some evaluator time to carry out a small number of more in-depth investigations where more objective data is really needed
  • importantly, allows the evaluator to focus on contentious issues …
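Here is the sketch promised in the list above: a minimal example of the social network analysis idea, using the networkx library. The contributors and reply patterns are invented for illustration:

```python
# Minimal sketch of "who responded to whom" analysis with networkx.
# Contributors and replies are invented for illustration.

import networkx as nx

replies = [  # (responder, responded_to)
    ("youth_leader", "programme_director"),
    ("field_officer", "programme_director"),
    ("programme_director", "youth_leader"),
]
contributors = {"youth_leader", "programme_director",
                "field_officer", "donor_rep"}

G = nx.DiGraph()
G.add_nodes_from(contributors)
G.add_edges_from(replies)

# Contributors who neither replied nor were replied to:
isolated = [n for n in G.nodes if G.degree(n) == 0]
print("Isolated contributors:", isolated)  # ['donor_rep']

# A rough sense of who is most central to the discussion:
print(nx.degree_centrality(G))
```

In a real Focus Doc process, each reply could also carry a positive/negative tone label, stored as an edge attribute.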

Disadvantages of Focus Doc?

  • Low familiarity with web tools
  • Excludes people without (easy) access to internet
  • Security/Privacy challenges

… backups, etc ….

The evaluator …

The evaluator’s role is similar to, but not quite the same as, that of evaluators in traditional evaluations.

… is irreplaceable!

The Focus Docs approach might seem to some to undermine the traditional (rather revered) role and status of the evaluator. But first and foremost, Focus Docs does require the more or less full-time involvement of an experienced evaluator.

Evaluator’s role

  • to make judgements about the value and quality of execution of a declared intention, such as the intention to initiate a social project
  • to synthesise the answers
  • to specifically address the different ways in which contributors’ answers might not fully represent the truth. These issues arise in any evaluation.

Issues connected to lack of anonymity

  • If the evaluator and evaluation commissioner configure the evaluation process so that contributors are not anonymous, i.e. they are “on the record”, issues can include:
    • conflict between different contributors
    • fear of sanctions
    • corruption
    • incompetence

So in the case of a Focus Doc process in which anonymity is not allowed, the group of contributors can very easily see who made what comment.

In a traditional interview, it is easier to let the evaluator know at least that you feel uncomfortable about something ….

Other issues

  • modesty
  • limitations in knowledge/information
  • lack of tools
  • limitations in ability to synthesise complex evidence
  • etc.

Evaluator’s role: Ensuring participation

One way to encourage participation is a simple incentive rule, for example: to be eligible for the prize(s), contributors must make at least 20 contributions in the first two days.

Evaluator’s role: Ensuring equal contribution

After 2-3 days, I analyse the contribution rate to see if there are any subgroups, e.g. men, or older people, or people located more remotely, who are responding proportionately less often. I can send round a general e-mail with this information, asking: “do you want these groups to remain less represented?”
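A sketch of that check in Python with pandas (the column names and figures are invented):

```python
# Invented example data: contributions per person, tagged by subgroup.
import pandas as pd

contributions = pd.DataFrame({
    "contributor": ["A", "B", "C", "D", "E", "F"],
    "gender":      ["f", "m", "f", "m", "m", "f"],
    "n_comments":  [12,  3,   9,   1,   2,   8],
})

# Total and average contributions per subgroup.
rate_by_group = contributions.groupby("gender")["n_comments"].agg(["sum", "mean"])
print(rate_by_group)

# If one group's mean is much lower, that summary can go into the
# general e-mail: "do you want these groups to remain less represented?"
```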

Evaluator’s role: Avoiding avoidance

There are always topics which some or all stakeholders would rather not talk about.

I can address these during one-to-one talks with individuals and with individual stakeholder groups, as well as in the discussion on evalcrowd. On evalcrowd, I can raise these themes ad hoc, but can also pose them as questions:

  • What themes are we not talking about?
  • Are there some themes which some groups would rather not talk about?
  • Are there some themes which some groups are particularly keen to talk about?
  • Are there some groups which tend to get left out? Why?
  • Are there particular reasons why some information might not come to light, e.g. where pay depends on performance? What rewards are there for representing the project in a good light, what sanctions are there for representing the project in a poor light?

Place for anonymous questions?

At the very least, it has to be clear what is anonymous, what is not, and what the data can be used for. If the client makes any part of the process …