Design, Monitoring and Evaluation for Peacebuilding

Planning for Evaluation

How does one prepare for an evaluation, whether commissioning or conducting one? Are we ready for an evaluation, and, if not, what can we do to assess whether we are on the right track? What do I need to do to prepare? Why are theories of change important in evaluation, and how do I identify and use them? What evaluation approach or design can I use, and how do I choose? This section provides resources to help answer these questions.

Is the Program or Initiative Ready for Evaluation?

This section features key documents on the importance of considering whether an organization is fit for evaluation, on the practical aspects of assessing evaluability, and on how to build an evaluative culture. Commissioners of evaluations and peacebuilding practitioners tasked with conducting evaluations may find these materials useful, as they explain how the environment in which an evaluation takes place affects its results.

Organizational Readiness
  • Mayne, John. "Building an Evaluative Culture for Effective Evaluation and Results Management." ILAC Brief 20, The Institutional Learning and Change (ILAC) Initiative, 2008.
    • Available here
    • Beginner
    • This brief outlines the main characteristics of an evaluative culture, and suggests some concrete, practical actions that organizations can take to build and support an evaluative culture.
  • Church, Cheyanne. "Effectiveness Assessment Tool for High-Performance Social Change Organizations: An Overview." Besa Working Paper 1, Besa, 2014.
    • Available here
    • Intermediate
    • This working paper presents Besa’s “Effectiveness Assessment Tool”, which analyzes the degree to which an agency has aligned its formal and informal operating environment with change as mission critical, in order to identify ways to maximize agency effectiveness. A recording of a presentation by Cheyanne Scharbatke-Church on Maximizing the Contribution of M&E Systems to Mission Achievement is available here.
  • Preskill, Hallie, and Rosalie T. Torres. "Readiness for Organizational Learning and Evaluation Instrument (ROLE)." FSG.
    • Available here
    • Intermediate
    • Based on the book Evaluative Inquiry for Learning in Organizations (Preskill, H. & Torres, R. T., 1999), the ROLE is an instrument designed to help an organization determine its readiness for implementing organizational learning and the evaluation practices and processes that support it. It does so through a series of questions on Culture, Leadership, Systems and Structures, Communication, Teams, and Evaluation.
Evaluability Assessments
  • Davies, Rick. Planning Evaluability Assessments: A Synthesis of the Literature with Recommendations, Working Paper 40. London: DFID, 2013.
    • Available here
    • Advanced
    • The purpose of this synthesis paper is to provide a short, practically oriented summary of the literature on evaluability assessments, highlighting the main issues to consider when planning one. The paper starts with a discussion of the concepts of evaluability and evaluability assessment, and then moves to the practical aspects of planning and conducting this type of assessment.
  • Reimann, Cordula. "Evaluability Assessments in Peacebuilding Programming." Program Review and Evaluation Working Paper 3, CDA Collaborative Learning Projects, 2012.
    • Available here
    • Intermediate
    • This document explores the different elements of evaluability assessments in relation to peacebuilding, and discusses how Reflecting on Peace Practice concepts and tools can be used to assess evaluability in peacebuilding.
  • UNICEF. Evaluability Assessment of the Peacebuilding, Education and Advocacy Programme (PBEA). New York, NY: UNICEF, 2013.
    • Available here
    • Intermediate, Advanced
    • This example of an evaluability assessment carried out by UNICEF helped gather evidence to answer the question: “To what extent does the PBEA have the technical and strategic elements in place to manage effectively towards results and to credibly demonstrate such results in future evaluations?” This document allows readers, particularly evaluation commissioners and potential evaluators, to get a sense of what an evaluability assessment looks like.

What Options do I have if my Program/Initiative is not Evaluable or we are not Ready?

The following are alternatives to formal evaluation that can be used if it is determined that a formal evaluation—whether formative or summative—is not appropriate or feasible.

Alternatives to Formal Evaluation
  • Reimann, Cordula, Diana Chigas, and Peter Woodrow. "An Alternative to Formal Evaluation of Peacebuilding: Program Quality Assessment." Program Review and Evaluation Working Paper 3, CDA Collaborative Learning Projects, 2012.
    • Available here
    • Intermediate
    • This paper addresses the importance of evaluation in peacebuilding programming, and proposes that, given the particular circumstances under which peacebuilding evaluations take place, a program quality assessment process driven by a commitment to learning might be a better fit than a formal evaluation. The paper discusses the key features of program quality assessments, explains when they should be conducted and by whom, and describes what type of data is needed. It also describes how these assessments can be based on standards drawn from Reflecting on Peace Practice (RPP) findings, tools and concepts.
  • Impact Assessment and Shared Learning (IASL) team. Resource Pack on Systematization of Experiences, edited by Samantha Hargreaves and Mariluz Morgan. ActionAid International, 2012.
    • Available here
    • Intermediate
    • This resource pack gives step-by-step guidance on how to conduct a systematization of experiences: a critical reflection on practice, designed to improve it, that brings together those involved in programming or action with an external agent who provides methodological and analytical support for the reflection.
  • CDA Collaborative Learning Projects' Reflecting on Peace Practice Program. "RPP Program Reflection Exercise." CDA Collaborative Learning Projects, 2014.
    • Available here
    • Beginner, Intermediate
    • This tool is intended for use by program teams as a process for self-assessment or review. It incorporates tools and insights from the Reflecting on Peace Practice Program at CDA Collaborative Learning Projects to reflect on program strategy, design and effectiveness.  It can be used as a basis for an informal participatory self-evaluation with or without external facilitation—at the program development phase to examine underlying assumptions of a program, or later (mid-course or at the end of a program) to reflect on theories of change, assumptions and evidence of effectiveness.  An illustrative application to land-related violence explains how to use the tool in a concrete situation. 

What are Key Preparation Steps? What do I Need to do to Prepare an Evaluation?

Preparing a sound evaluation requires a series of crucial steps. Although the quality of an evaluation is largely determined in the early stages of analysis and design, following the steps and recommendations suggested in the materials referenced in this section will help stakeholders plan and implement high-quality peacebuilding evaluations.

This section is intended mainly for (but is not limited to) peacebuilding practitioners with little or no evaluation experience, and for more seasoned evaluators who can benefit from a more structured approach to planning an evaluation.

Prepare for an Evaluation
  • Managing an Evaluation, available on BetterEvaluation, is of particular interest to evaluation commissioners, managers and evaluators. It details the decisions that need to be made while planning an evaluation.
  • OECD. "Preparing an Evaluation in Situations of Conflict and Fragility." In Evaluating Peacebuilding Activities in Settings of Conflict and Fragility: Improving Learning for Results, DAC Guidelines and Reference Series, 39-55. OECD Publishing, 2012.
    • Available here
    • Intermediate
    • This chapter is specifically dedicated to preparing evaluations in situations of conflict and fragility, and is divided as follows:
      • Summary of key steps for preparing an evaluation
      • Define the purpose of the evaluation
      • Analyze the conflict context
      • Consider gender equality
      • Determine the scope of the evaluation
      • Decide on evaluation criteria
      • Outline key evaluation questions
      • Select evaluation approach and method to fulfill purpose
      • Take timing and logistical issues into consideration
      • Co-ordinate with other actors
      • Determine how the evaluation will be managed
      • Select and contract the evaluation team
      • Prepare to disseminate evaluation results
      • Control quality
  • Church, Cheyanne, and Mark Rogers. "Evaluation Preparation." In Designing for Results: Integrating Monitoring and Evaluation in Conflict Transformation Programs, 96-136. Washington, D.C.: Search for Common Ground, 2006.
    • Available here
    • Intermediate
    • Church and Rogers present a detailed account of the steps needed to conduct an evaluation, covering key aspects that range from core concepts to logistics and budgeting. This chapter is especially relevant for individuals who wish to learn more about the evaluation of peacebuilding programming, as it directly links evaluation planning to peacebuilding programming. The chapter contains the following information:
      • The Actors Involved in Evaluation Preparation
      • Length of the Evaluation Preparation Process
      • The Core Preparation Decisions and How They Relate to Each Other
        • Evaluation Objectives
        • Audience
        • Type of Evaluation
        • Evaluator’s Role
        • Evaluation Approaches
        • Evaluation Scope
        • Type of Evaluator
        • Timing of the Evaluation
        • Budget

What Resources can I Draw on to Identify or Clarify Theories of Change in Evaluation?

This section includes some critical documents that help readers understand the concept of theories of change and the implications of their use, or absence, in evaluation. These resources can be used by program teams in the design phase, in review of and reflection on programs and projects, as well as by evaluators identifying theories of change and program logic during the evaluation process.

Understand the Concept of Theories of Change and the Implications of Their Use
  • Babbitt, Eileen, Diana Chigas, and Robert Wilkinson. Theories and Indicators of Change: Concepts and Primers for Conflict Management and Mitigation. Washington, D.C.: USAID and AMEX International, 2013.
    • Available here
    • Beginner, Intermediate
    • This practice-oriented document can be of use to peacebuilding practitioners engaged in evaluation activities as well as evaluation practitioners who wish to include theories of change in their evaluation of peacebuilding initiatives. The document presents a definition of theories of change and their relevance for peacebuilding, and guides readers in the construction of theories of change through an easy-to-follow, step-by-step explanation. The authors present a Theories and Indicators of Change (THINC) Matrix, which summarizes and organizes the major theories of change in the practice of conflict management and mitigation.
  • Ober, Heidi. Guidance for Designing, Monitoring and Evaluating Peacebuilding Projects: Using Theories of Change. London: CARE International UK, 2012.
    • Available here
    • Beginner, Intermediate
    • This comprehensive and detailed guidance provides a complete overview of the use of theories of change for peacebuilding program design, and of how to do monitoring and evaluation based on theories of change.
  • Woodrow, Peter and Nick Oatley. Practical Approaches to Theories of Change in Conflict, Security & Justice Programmes, Part I: What they are, Different Types, How to Develop and Use them. CCVRI Guidance Series. London: DFID, 2013.
    • Available here
    • Beginner, Intermediate
    • This document touches on the different levels of theories of change (strategic, portfolio, project, activity), and highlights the utility of theories of change for reality checks, which is particularly useful in peacebuilding programming. The document also describes the qualities of a good theory of change, and provides a useful glossary of key terms.
  • Rogers, Patricia. Theory of Change, Methodological Briefs: Impact Evaluation 2. Florence: UNICEF Office of Research, 2014.
    • Available here
    • Beginner, Intermediate
    • In addition to providing guidance on how to develop a theory of change and use theory of change in impact evaluations, this brief gives examples of good practices.

How do I Determine that a Program is Effective and Addressing Drivers of Conflict?

This section provides resources that discuss criteria to evaluate interventions.

Criteria to Evaluate Interventions
  • Rogers, Mark. Evaluating Relevance in Peacebuilding Programs. Program Review and Evaluation Working Paper 1. CDA Collaborative Learning Projects, 2012.
    • Available here
    • Advanced
    • This paper strives to identify processes that can be employed to achieve credible and useful evaluation findings about relevance, focusing on the actual evaluation methods that can be used to examine it. The paper includes a first attempt at proposing standards against which program designs and performance can be compared across six distinct dimensions of relevance.
  • OECD DAC. "Criteria for Evaluating Interventions." In Evaluating Peacebuilding Activities in Settings of Conflict and Fragility: Improving Learning for Results. DAC Guidelines and Reference Series, 65-71. Paris: OECD Publishing, 2012.
    • Available here
    • Intermediate
    • Six criteria for evaluating peacebuilding activities are explained, with illustrative questions: relevance, effectiveness, impact, sustainability, efficiency, and coherence and coordination.
  • van Brabant, Koenraad. Peacebuilding How? Criteria to Assess and Evaluate Peacebuilding. Geneva: Interpeace, 2010.
    • Available here
    • Intermediate
    • This article briefly explains the criteria for peacebuilding effectiveness developed as part of CDA’s Reflecting on Peace Practice Program; these are designed to help program teams and evaluators assess whether small and limited programs have an effect on “Peace Writ Large”.

What Evaluation Approach Should I Use?

Evaluation approaches refer to the principles or framework guiding the design and implementation of an evaluation. This section provides resources on a select number of evaluation approaches that can be, and commonly are, used to evaluate peacebuilding initiatives.

Situational appropriateness is increasingly being seen as the best criterion for choosing approaches and methods. Commissioners of evaluations, together with evaluators, should decide what approach is appropriate for the kinds of evaluation questions being asked, the users’ needs, the nature of the intervention and the context in which the evaluation will be conducted, as well as the availability of resources (both financial and human).  In Designing for Results (Chapter 8), Church and Rogers provide guidance on how to decide what approach to use and summarize the pros and cons of each.

Resources on some key evaluation approaches are listed below.

Developmental Evaluation
  • Gamble, Jamie A. A. A Developmental Evaluation Primer. Canada: The J.W. McConnell Family Foundation, 2008.
    • Available here
    • Beginner, Intermediate
    • This ‘primer’ introduces the concept of developmental evaluation (DE) and provides tools to foster its use. The first part of the book discusses the basics of DE, dispels a series of myths around it, and highlights some of the conditions that indicate whether an organization is in an appropriate space to apply DE. The second part focuses on how to apply developmental evaluation, and discusses the key features of a developmental evaluator, as well as tools, issues and challenges.
Empowerment Evaluation
  • Cox, Pamela J., Dana Keener, Tiffanee L. Woodard, and Abraham H. Wandersman. Evaluation for Improvement: A Seven Step Empowerment Evaluation Approach for Violence Prevention Organizations. Atlanta: Centers for Disease Control and Prevention, 2009.
    • Available here
    • Intermediate
    • This manual is designed to help violence prevention organizations hire an empowerment evaluator who will assist them in building their evaluation capacity through a learn-by-doing process of evaluating their own strategies.
Most Significant Change
  • Davies, Rick, and Jess Dart. The ‘Most Significant Change’ (MSC) Technique. A Guide to Its Use. 2005.
    • Available here
    • Beginner
    • This practical guide walks users through a clear, step-by-step process for implementing MSC, and also provides insights on its history and how it compares to other approaches. In addition to references for further reading, the guide is complemented by sample story collection formats, sample MSC stories, and other annexes that can be useful for practitioners using this approach for the first time.
Goal-Free Evaluation
  • Youker, Brandon W. and Allyssa Ingraham. "Goal-Free Evaluation: An Orientation for Foundations’ Evaluations." The Foundation Review 5, No. 4 (2013): 51-61.
    • Available here
    • Intermediate
    • This paper discusses the concept and main features of Goal-Free Evaluation, demonstrates GFE’s actual use, highlights aspects of its methodology, and details its potential benefits.
Outcome Mapping and Outcome Harvesting
  • Earl, Sarah. "Overview of Outcome Mapping." Filmed 2007 by the Pan Asia Networking project of IDRC at a workshop on Utilization Focused Evaluation (UFE) in Kuala Lumpur. 22:50.
    • Available here
    • Beginner, Intermediate
    • One of the originators of the approach, Sarah Earl, discusses the origins and fundamental features of outcome mapping and how it relates to evaluation, highlighting what can be done in M&E with outcome mapping. This resource is particularly useful for evaluators.
  • White, Jonathan. "Introduction to Outcome Mapping." DM&E for Peace.
    • Available here
    • Beginner
    • This short discussion on outcome mapping as an evaluative methodology provides an overview of OM and what it entails, and its relation to peacebuilding programming. It also provides useful “hot tips” and additional resources (e.g. webinars) for further reference.
  • Wilson-Grau, Ricardo, and Heather Britt. Outcome Harvesting. Cairo: Ford Foundation, 2012 (Revised November 2013.)
    • Available here
    • Intermediate
    • This brief is intended to introduce the concepts and approach used in Outcome Harvesting to grant makers, managers, and evaluators, with the hope that it may inspire them to learn more about the method and apply it to appropriate contexts. Thus, it is not a comprehensive guide to or explanation of the method, but an introduction to allow evaluators and decision makers to determine if the method is appropriate for their evaluation needs.
Participatory Approaches

While other approaches can also be conducted in a participatory manner, participatory approaches are based on the premise that the structured participation of stakeholders throughout the different stages of the evaluation, and in the decision-making process, is essential to the conduct of evaluations. There is little guidance on the use of these approaches in the evaluation of peacebuilding; the resources below offer guidance on participatory approaches in other fields that could be adapted and tested for peacebuilding.

  • KU Work Group for Community Health and Development. "Chapter 36, Section 6: Participatory Evaluation." In the Community Tool Box. Lawrence, KS: University of Kansas, 2015.
    • Available here
    • Beginner, Intermediate
    • This reading introduces the concept of Participatory Evaluation, and explains the reasons for using it (and not using it), and who should be involved in participatory evaluation. Additionally, it provides a series of steps for conducting participatory evaluations.
  • Guijt, Irene. Participatory Approaches, Methodological Briefs: Impact Evaluation 5. Florence: UNICEF Office of Research, 2014.
    • Available here
    • Intermediate
    • This guide explains the use of participatory approaches in impact evaluation, discussing when it is best to use this approach, and how to make the most of it.
  • Catley, Andy, John Burns, Dawit Abebe, and Omeno Suji. Participatory Impact Assessment: A Design Guide. Medford, MA: Feinstein International Center, Tufts University, 2014.
    • Available here
    • Intermediate, Advanced
    • This document provides step-by-step guidance on participatory approaches to measure impacts of livelihoods, development and humanitarian interventions.  While not specifically designed for peacebuilding interventions, it provides helpful guidance on how to organize, prepare and conduct impact evaluations in which local people participate in defining and measuring impact.
Theory-Based Approaches to Evaluation
  • Ober, Heidi. Guidance for Designing, Monitoring and Evaluating Peacebuilding Projects: Using Theories of Change. London: CARE International UK, 2012.
    • Available here
    • Beginner, Intermediate
    • This guide provides useful guidance on using theories of change in the design phase as well as to monitor and evaluate peacebuilding programs.
  • Funnell, Sue C. and Patricia J. Rogers. Purposeful Program Theory: Effective Use of Theories of Change and Logic Models. San Francisco, CA: Jossey-Bass/Wiley, 2011.
    • Available here
    • Intermediate
    • This book discusses ways of developing, representing and using program theory and theories of change to suit particular situations. It discusses how to address complicated and complex aspects of programs in terms of focus, governance, consistency, necessity, sufficiency, and change trajectory. Additional information on the book is available here.
  • White, Howard. Theory-Based Impact Evaluation: Principles and Practice, Working Paper 3. New Delhi: International Initiative for Impact Evaluation, 2009.
    • Available here
    • Advanced
    • This paper identifies six principles for the successful application of theory-based impact evaluation: (1) map out the causal chain (program theory); (2) understand context; (3) anticipate heterogeneity; (4) rigorously evaluate impact using a credible counterfactual; (5) conduct rigorous factual analysis; and (6) use mixed methods.
  • Mayne, John. Contribution Analysis: An Approach to Exploring Cause and Effect. The Institutional Learning and Change (ILAC) Initiative, 2008.
    • Available here
    • Intermediate
    • Contribution analysis is one form of theory-based evaluation. After introducing the concept of contribution analysis, this document explains, step-by-step, how to conduct evaluations based on this approach.
Process Tracing
  • Oxfam GB. Process Tracing: Draft Protocol.
    • Available here
    • Beginner
    • This document presents the concept of process tracing and provides detailed guidance for undertaking evaluations using this approach.
  • Collier, David. "Understanding Process Tracing." Political Science and Politics 44, No. 4 (2011): 823-830.
    • Available here
    • Intermediate
    • This article describes how process tracing works, and the essential elements of the approach.
Utilization Focused Evaluation (UFE)
  • Patton, Michael Q. Utilization-Focused Evaluation Checklist. DM&E, 2002.
    • Available here
    • Beginner, Intermediate
    • After a brief overview of UFE, this checklist explains the 12 steps of UFE, and highlights the premises, primary tasks and challenges of each step that must be taken into account in order to maximize use by intended users.
  • Patton, Michael Q. Essentials of Utilization-Focused Evaluation. Thousand Oaks, CA: SAGE Publications, 2012.
    • Available here
    • Intermediate
    • Based on Michael Quinn Patton's best-selling Utilization-Focused Evaluation, this briefer book provides an overall framework and essential checklist steps for designing and conducting evaluations that actually get used.
  • Ramirez, Ricardo, and Dal Broadhead. Utilization Focused Evaluation: A Primer for Evaluators. Penang: Southbound, 2013.
    • Available here
    • Beginner, Intermediate
    • This primer is designed for practitioner evaluators and project implementers who are interested in using Utilization Focused Evaluation. The primer covers each of the 12 steps of UFE, using case studies to illustrate what it is like to learn to use UFE.
Case Studies
  • Neale, Palena, Shyam Thapa, and Carolyn Boyce. Preparing a Case Study: A Guide for Designing and Conducting a Case Study for Evaluation Input. Watertown, MA: Pathfinder International, 2006.
    • Available here
    • Beginner
    • This short guidance provides basic information on case studies, their purposes and uses, and the elements of a case study conducted as part of an evaluation.
  • Balbach, Edith. Using Case Studies to do Program Evaluation. CA: California Department of Health Services, 1999.
    • Available here
    • Intermediate
    • This guide helps evaluators assess whether to use a case study evaluation approach and explains how to conduct a case study.
  • Goodrick, Delwyn. Comparative Case Studies, Methodological Briefs: Impact Evaluation 9. Florence: UNICEF Office of Research, 2014.
    • Available here
    • Intermediate
    • This methodological brief provides “how to” advice on conducting comparative case studies, especially for evaluation of impacts of interventions, when there is a need to understand and explain how features within the context influence the success of program or policy initiatives.
Experimental/Quasi-Experimental Approaches

These approaches require a high level of technical expertise. The resources below provide a good overview of these approaches, as well as broad guidance on how they are done, when and for what purpose they might be used, and their challenges and ethical considerations. They can be useful for commissioners of evaluations and program teams engaging with evaluators who use these designs.

  • Anderson Moore, Kristin. Quasi-Experimental Evaluations. Part 6 in a Series on Practical Evaluation Methods. Washington, D.C.: Child Trends, 2008.
    • Available here
    • Beginner
    • This short paper presents the concept of quasi-experimental evaluations: what can be learned from them, under what circumstances it is appropriate to conduct one, and the types of quasi-experimental outcome evaluations. The paper concludes by discussing risks and obstacles that may arise when planning or implementing this kind of evaluation.
  • White, Howard, Shagun Sabarwal, and Thomas de Hoop. Randomized Controlled Trials (RCTs), Methodological Briefs: Impact Evaluation 7. Florence: UNICEF Office of Research, 2014.
    • Available here
    • Intermediate
    • This brief explains randomized controlled trials (experimental methods), when to use them, and the basic steps for conducting them. It includes references to more technical guidance, and is useful for commissioners of evaluations and program teams considering such methods to assess impact, especially whether results are attributable to their programs.
  • White, Howard, and Shagun Sabarwal. Quasi-Experimental Design and Methods, Methodological Briefs: Impact Evaluation 8. Florence: UNICEF Office of Research, 2014.
    • Available here
    • Intermediate
    • Quasi-experimental designs test causal hypotheses about whether a program or intervention has produced a particular change, and are used when experimental methods (RCTs) are not possible. This brief explains what quasi-experimental designs are and describes ways to develop comparison groups when random assignment is not possible.
Real Time Evaluation
  • Herson, Maurice, and John Mitchell. "Real-time Evaluation: Where Does Its Value Lie?" Humanitarian Exchange Magazine, No. 32 (2005).
    • Available here
    • Intermediate
    • This paper discusses the history of real-time evaluation (RTE), and addresses some common aspects of RTE methodology and outcomes.
  • Cosgrave, John, Ben Ramalingam, and Tony Beck. Real-Time Evaluations of Humanitarian Action: An ALNAP Guide. London: ALNAP, 2009.
    • Available here
    • Intermediate, Advanced
    • The guide is intended to help both evaluation managers and team leaders in commissioning, overseeing and conducting real-time evaluations (RTEs) of humanitarian operational responses. While not directly applicable to peacebuilding, it has detailed guidance on RTEs that can be adapted for peacebuilding contexts. 
Additional Resources

This list of approaches is by no means exhaustive. The BetterEvaluation site provides a more comprehensive list of Evaluation Approaches, explaining their meanings and main features in varying degrees of detail, and providing additional resources for further reference.