At the very least, it has to be clear what is anonymous, what is not, and what the data can be used for. If the client makes any part of the process compulsory for staff, it also has to be clear what is compulsory and what isn’t.
Ideally, any evaluation will have an evaluation ombudsperson - someone in the know but outside the process, with the authority to address any ethical issues.
If the evaluator plans to link answers in one part of the process with answers from the same individual in another part of the process (or even from other data sources like HR data), this needs to be made clear to participants in advance.
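To make that concrete, here is a minimal sketch in Python (with invented names and data) of one transparent way to do such linkage: answers from the same individual are joined via a salted pseudonym rather than via a name or email address, so linkage is possible without the evaluator’s working files ever exposing identities.

```python
import hashlib

# Hypothetical: a salt known only to the evaluator, so that nobody
# else can reproduce the pseudonyms from a list of email addresses.
SALT = "replace-with-a-long-secret"

def pseudonym(email: str) -> str:
    """Derive a stable pseudonymous ID for one individual."""
    data = (SALT + email.strip().lower()).encode("utf-8")
    return hashlib.sha256(data).hexdigest()[:12]

# Invented example records from two parts of the process.
survey_answers = {"ana@example.org": "The training was too short."}
forum_comments = {"ana@example.org": "We need follow-up coaching."}

linked: dict[str, list[str]] = {}
for source in (survey_answers, forum_comments):
    for email, text in source.items():
        linked.setdefault(pseudonym(email), []).append(text)

# Answers from the same individual are now grouped under one
# pseudonym, without the report ever storing the raw address.
print(linked)
```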
First, it is necessary to push back gently against the implicit requirement to answer so many evaluation questions and sub-questions, and to suggest breaking questions down into sub-questions only where there is a real need to do so - for example, where the positions of women and men obviously differ very widely and could be very important, or where the sub-questions are non-negotiable for the donor.
We need to encourage participants to report differences where they are significant and relevant.
The outline for the actual final report will probably include some main headings, like the Introduction, which are not included in the online Focus Doc; however, it is also possible to crowdsource these sections too. (After all, the contributing experts working together can certainly give a better outline of the project and its history than the evaluator can.)
The headings and subheadings are expressed in the form of questions - each section and subsection will aim to answer the evaluation question which heads it.
Often we think of this kind of structure: Findings feed into Conclusions, which feed into Recommendations.
The detail doesn’t matter. The point is that we don’t usually see recommendations entering the game until the end. General evidence and Recommendations are treated differently: Findings are gathered early on and then synthesised into Conclusions, while Recommendations are supposed to appear from nowhere after the Findings are complete, the spontaneous product of the evaluator’s mysterious expertise. Or they are partly a synthesis of stakeholders’ recommendations - but then shouldn’t those already be covered in the Findings sections?
Instead, I would think of the evaluation process as synthesising, in parallel, general evidence alongside its normative counterparts like suggestions and lessons learned. The two threads should interact with each other right across whatever structure we give to the evaluation questions and, correspondingly, to the report.
There are lots of possibilities here.
Perhaps the simplest way is to keep all of the Focus Doc and the process private to a single group of contributors.
But it is also possible to define different groups of contributors who can see and contribute to different parts of the Focus Doc, as sketched below.
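As a rough illustration of what such groups might look like - the group and topic names here are invented, and real forum platforms such as Discourse offer equivalent per-category permissions - access can be thought of as a simple mapping from contributor groups to the topics they may read and the topics where they may post:

```python
# Invented access map: which contributor groups may read and post
# in which sections of the Focus Doc.
ACCESS = {
    "staff": {
        "read": {"relevance", "effectiveness", "management"},
        "post": {"relevance", "effectiveness", "management"},
    },
    "beneficiaries": {
        "read": {"relevance", "effectiveness"},
        "post": {"relevance", "effectiveness"},
    },
    "donor": {
        "read": {"relevance", "effectiveness", "management"},
        "post": {"relevance"},
    },
}

def can_post(group: str, topic: str) -> bool:
    """True if members of `group` may reply in `topic`."""
    return topic in ACCESS.get(group, {}).get("post", set())

assert can_post("staff", "management")
assert not can_post("donor", "management")
```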
When using a Focus Doc, face-to-face contact is more important than ever, especially to counterbalance and validate the online process.
One possible procedure is this: The evaluator visits the client for 3-8 days early on in the evaluation, in order to meet the different stakeholder groups.
Ideally these stakeholder visits would be followed up by one big meeting of 1-2 hours in which a good number of all the different stakeholders gather together. Here the evaluator shares first impressions, explains how the online collaboration will work, and formally launches it.
If a prize is to be awarded for the most constructive contributor, the details should be agreed and published.
Now we are ready to invite stakeholders and other key informants (“contributors”) to contribute to a “crowd-sourced evaluation report” or a “Focus Doc”.
You don’t have to contribute to all the topics, just the ones where you have something to say. And the contributor who gets the most “likes” from other contributors will win an iPad mini! (I am donating this prize because this procedure should speed up the process for me as well as for you.)
We are writing to you early because we are sure you won’t feel intimidated by being among the first to make comments.
The discussion is private, but you can register using the email address we sent this mail to (or a connected social media account).
We value your real opinions. We guarantee that the opinions you express will not influence any future interactions you may have with Tunza. So please speak up and say what you really think. We might quote what you say in our evaluation report, but we promise not to mention your name.
Here are some tips on taking part - you can read them again at the site.
When you log in, please look at the list of different topics and replies and click on any which interest you.
You can comment on the whole topic by clicking “reply” at the bottom of the other replies, or you can reply to an individual comment by clicking the pale blue “reply” button at the end of the comment.
You can “like” topics and people’s replies by clicking on the “heart” icon.
Please try to write clearly and briefly! Provocative replies are welcome just as much as attempts to find consensus.
Your reply will appear at the bottom of the page and also attached to the comment you replied to.
Please write at least five replies and “like” at least ten replies. Please add evidence for your views wherever you can!
The contributor who gets the most “likes” from other people (and has written at least five contributions and “liked” at least ten other contributions) wins an iPad mini!
If you wish, you can also start a new topic with a new question for the other contributors.
If you have any questions, you can write to me, the external evaluator, directly at steve@promente.org.
Or, if you prefer, you can send me a completely anonymous mail via http://www.sendanonymousemail.net/
There should be at least one moderator representing the evaluation commissioner, and ideally one for each contributor group.
During this stage, the evaluator will usually take on an active role as moderator - challenging “received opinion”, asking for evidence, encouraging responses from underrepresented groups, explaining the process etc.
This could start before the live phase, addressing evidence gaps noticed by the evaluator in the inception phase. But things get especially interesting when contributors themselves identify evidence gaps.
If at all possible, the evaluator (or other contributors) will post the new evidence - the results of additional investigations and analyses - in the Focus Doc, attached to the questions it addresses, before the end of the live phase, so that contributors have time to digest it: adding votes and downvotes, comments, etc.
Suppose you find some evaluation questions particularly interesting or important, but contributors are just exchanging opinions because the evidence to answer them is lacking. The evaluator can address the contributors online like this:
What can I offer you to help gather more real evidence? Here are a few ideas - and you can make your own suggestions (though I can’t promise to act on them).
Most evaluations would benefit from specific expertise in particular areas, and most evaluators, let’s face it, aren’t experts in all the areas which could be useful for the evaluation.
The evaluator synthesises the discussion into the evaluation report, adding any additional research carried out as an annex.
(If required, the draft can again be reviewed by some or all groups of stakeholders, using familiar tools like “tracked changes” in offline or online docs. However this is no longer part of the Focus Doc process.)
The most productive contributor (probably the person whose contributions were “liked” by the most people) being awarded a significant prize (e.g. an iPad or similar), which I will pay for.
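Finally, since a prize rule like this has to be transparent, it is worth noting that it is mechanical enough to publish as a few lines of code. Here is a sketch, with invented tallies, of the rule described in the invitation (most “likes” received, among contributors who wrote at least five replies and gave at least ten “likes”):

```python
# Invented per-contributor tallies, as might be exported from the forum.
contributors = {
    "amira": {"replies": 7, "likes_given": 12, "likes_received": 31},
    "jonas": {"replies": 4, "likes_given": 15, "likes_received": 40},
    "lee":   {"replies": 9, "likes_given": 10, "likes_received": 28},
}

def eligible(stats: dict) -> bool:
    # The published rule: at least five replies written and at
    # least ten "likes" given to other contributors.
    return stats["replies"] >= 5 and stats["likes_given"] >= 10

winner = max(
    (name for name, stats in contributors.items() if eligible(stats)),
    key=lambda name: contributors[name]["likes_received"],
)
print(winner)  # "amira": jonas received more likes but wrote too few replies
```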