Reasserting Learning in Peacebuilding Evaluation

The importance of learning is being reasserted in the professional practice of peacebuilding. Several years ago, talk about evaluation in peacebuilding and conflict transformation was largely confined to internal conversations within individual organizations. Only recently has the discussion on evaluation in the professional practice of peacebuilding become a mainstream, inter-organizational conversation.

Major milestones include the publication of two mainstays in the design, monitoring and evaluation of peacebuilding and conflict transformation, Designing for Results: Integrating Monitoring and Evaluation in Conflict Transformation Activities by Cheyanne Church and Mark Rogers and Reflective Peacebuilding: A Planning, Monitoring and Learning Toolkit by John Paul Lederach, Reina Neufeldt and Hal Culbertson, published in 2006 and 2007 respectively, as well as the Reflecting on Peace Practice Project Handbook by CDA Inc. All three publications place particular emphasis on learning, both during implementation and during evaluation, and were critical in stimulating conversation and work to promote learning and better evaluation practice.

Hot Tip! The authors of Designing for Results want to know how you think the handbook could be improved. Take the survey now!

More recently, the conversation regarding evaluation has expanded to include action aimed at transforming dynamics in the professional practice of peacebuilding that affect evaluation: the Peacebuilding Evaluation Project and Women’s Empowerment Demonstration Project at the Alliance for Peacebuilding, and the Learning Portal for DM&E for Peacebuilding.

Part of a learning agenda for the development of a professional field of practice is public access to evaluation reports. Making evaluation reports public opens up the possibility of generating new knowledge from individual pieces of data. Two activities that build on public reports are thematic evaluation and meta-evaluation.

Hot Resource! Evaluation Report Database at the Learning Portal for DM&E for Peacebuilding

Evaluation for Learning

According to the Development Assistance Committee (DAC) at the OECD, a thematic evaluation is an “evaluation of a selection of development interventions, all of which address a specific development priority that cuts across countries, regions, and sectors.” In other words, a thematic evaluation examines a very specific theme, subject or activity across projects, geographies, organizations and sectors. One such theme might be, for example, the inclusion of women in peace processes.

This can be done either by conducting an evaluation of a range of different projects (selected according to pre-determined criteria relevant to the objectives of the evaluation) or by examining existing evaluation reports relating to the theme under investigation (a hybrid thematic-meta evaluation).

Similarly, a meta-evaluation is an evaluation “designed to aggregate findings from a series of evaluations. It can also be used to denote the evaluation of an evaluation to judge its quality and/or assess the performance of the evaluators.” It is a kind of peer review for evaluations and evaluators.

Both of these activities can result in significant learning by producing new knowledge and recommendations for improvement. Thematic evaluation tends to focus on extracting lessons learned about the theme under investigation. It can also produce knowledge that individual project-based evaluations would not: by bringing different evaluations and experiences into conversation with one another, it can generate insights and lessons learned that would not otherwise be possible.

Meta-evaluation, on the other hand, provides feedback on the evaluation itself. It can help organizations determine the quality of their evaluations, systems and processes: how can we improve future evaluations and their utility based on our past experience?

And these activities don’t need to be expensive, or even a concerted effort by an organization. By making evaluation reports public, whether via your organization’s website, the Learning Portal for DM&E for Peacebuilding, or another public forum (such as the Development Evaluation Clearinghouse by USAID), you open your data to the possibility of such activities by researchers, students, and evaluators. There is real value in sharing.

Jonathan White is the Manager of the Learning Portal for DM&E for Peacebuilding at Search for Common Ground. Views expressed herein do not represent SFCG, the Learning Portal or its partners or affiliates.

 

Hi all. My feedback starts from experiences with 'meta-evaluation' and then adds some wider reflections about 'learning' in the context of evaluations.

In recent years I have twice carried out a form of 'meta-evaluation'. The first was in 2006, when I cross-read a large number of programme evaluations in order to distil what 'types of impact' our peacebuilding programmes seem to be able to produce. That could range from, for example, changes in the political discourse, through new policy options, to the creation or reform of an institution. For each type, some examples from different programmes served as illustration. That paper, 'What Types of Impacts do our Programmes Produce', is on the organisational website www.interpeace.org. Some additional examples for the same types were harvested in the context of another paper, two years later.

The second example of a 'meta-evaluation' (or a variation thereon) was a comparative review of four after-action reviews (AARs) of external evaluation experiences, carried out by the partner teams on the ground whose country programme had been evaluated, but also involving the Interpeace programme people who had been involved in the evaluation. That too proved an interesting exercise and gave pointers to good practices.

At the same time, it would be overstating the case to say that 'evaluation moments' and 'reports' are really institutionalised organisationally as a significant learning opportunity. There is, for example, neither a policy nor a practice to ensure that periodic (internal) strategic reviews of programmes are well documented and that those records, or their key messages, are regularly shared. There is no institutionalised practice of, for example, conducting a self-evaluation before any external evaluation, which can take place in a somewhat less 'threatening' atmosphere, nor is there an institutionalised practice of AARs. There is no habit of trying longitudinal meta-evaluations (a cross-reading of the successive evaluations of peacebuilding programmes in the same country, rather than a comparative reading of evaluations across countries), or of providing new programme and field team people with copies of evaluation reports, which are often the best, more reflective documentation available. This lack of 'institutionalisation' is not related to 'evaluation' per se, but to a more general organisational deprioritisation of 'learning' as an investment of time and resources.

Koenraad Van Brabant

Thanks for sharing your thoughts and experiences, Koenraad.

It's true that learning is generally deprioritized by organizations, but I wonder what is being prioritized over learning, and what factors are being weighed. We know that evaluation is rarely done with the explicit purpose of learning; more often it is done to justify programming and funding to the donor or the public, and for upwards accountability. In that light, it's no wonder that evaluation rarely embodies learning. I'd imagine that learning is deprioritized due to misconceptions about evaluation in our work and an emphasis on delivering 'results' (at the output level, with some evidence to suggest, and sometimes demonstrate, results at the outcome level). That's why I think transparency in evaluation processes is so important: it opens up the possibility of external studies producing insight (of course researchers often ask different questions than practitioners do, which complicates this possibility; there are no simple solutions).

'Lessons learned' in evaluation are rarely that: there is often no process to institutionalize, operationalize or share the learning. And oftentimes the learning is so vague or so project-specific that it has very little application beyond the immediate project. Have you come across any broad principles that would help make 'lessons learned' more applicable, Koenraad? It'd be great if TORs explicitly stated the levels at which lessons learned should be applicable (project, context/conflict, theme, subject, etc.). And it's a great idea to include relevant previous evaluations in new employee orientation.

Some of our bad habits aren't entirely our fault either. Donors need to be willing to fund new and emerging methodologies for evaluation, including longitudinal evaluation beyond the scope of the funded program. But it is then up to us to convince donors that these methodologies are worthwhile and can produce usable insight, which is itself complicated by the sometimes rocky relationship between practitioner needs and academic/research interests.

-Jonathan

 

It seems that there might also be a terminology concern: by stating “lessons learned” we are assuming that a change is being made based on an observation or evaluation outcome. In my experience, that is not always the case. We document lessons observed/learned after an evaluation, but the real test of whether lessons were actually learned is to see if steps are taken as a result of the observations. Hence, we have changed the language in our reporting to read “Lessons Observed,” with the express intent of revisiting to see if any action was taken on the recommendations, which could be deemed “learning” from the observations gathered during evaluation. In some cases, it would seem that organizations see a lesson as having been learned simply because insights or areas for improvement were identified through the process (and by the end of the evaluation). A focal point needs to be the implementation of lessons observed/lessons learned, where an organization can truly learn from the information gathered.

Ms. Tardiff brings up a good point and has made me think about the "meta-evaluation". If I understand it correctly, this would be a review of past evaluations with an eye toward commonalities and differences within the evaluations. Is there a component of a meta-evaluation that would assess a) the usefulness of the evaluation and b) its impact on the program/project/organization?


The great fear of any evaluation is that all this time and effort will be spent and a nice glossy report laboriously typed up, only to sit on a shelf and never move. The meta-evaluation method seems like a fine mechanism for highlighting this pitfall.

Hi Kevin,

Correct, meta-evaluation is an 'evaluation of evaluations' that, from my understanding, can serve two potential purposes: A) peer review of the evaluation methodology, validity of findings, etc. across studies, in order to extract lessons and recommendations on how to improve evaluation moving forward; B) the extraction of lessons within a particular type of programming, whether on evaluation methodology, project design/implementation, or other aspects (though this might more closely resemble thematic evaluation, the evaluation of a particular theme of projects).

There's a really useful checklist available on the Learning Portal here.