Design, Monitoring and Evaluation for Peacebuilding

Managing and Implementing an Evaluation

How do I manage the evaluation process? How do I make sure ethical and conflict sensitivity considerations are addressed in the evaluation process? This section outlines the various dimensions of evaluation management and implementation, from management issues such as logistics and team development to robust data collection and analysis methods, ethics, and conflict sensitivity. In addition, tools and resources developed for evaluations in specific sectors or topics are included here.

What do I need to do throughout the Evaluation Process to Manage the Evaluation?

For Evaluation Managers
  • Willard, Alice. "Managing and Implementing an Evaluation. Guidelines and Tools for Evaluation Managers." Catholic Relief Services (CRS) and the American Red Cross, 2008.
    • Available here.
    • Intermediate
    • This detailed guide provides evaluation managers with practical guidance on implementing evaluations.  The module focuses on what needs to be done throughout the evaluation process to manage the evaluation team and minimize the inevitable disruptions to the project’s own implementation plan.
For Evaluators and Project Team
  • Church, Cheyanne and Mark Rogers. "Evaluation Management." In Designing for Results: Integrating Monitoring and Evaluation in Conflict Transformation Programs, 137-177. Washington DC: Search for Common Ground, 2006.
    • Available here
    • Beginner, Intermediate
    • Here, Church and Rogers provide a step-by-step explanation of the different items that project teams and evaluators need to take into account as they prepare to implement an evaluation.  The chapter is divided as follows:
      • Developing the Terms of Reference
      • The evaluation plan
      • Frequently asked questions about working with external evaluators
      • Strategies for overcoming common evaluation pitfalls
  • OECD. "Conducting an Evaluation in Situations of Conflict and Fragility." In Evaluating Peacebuilding Activities in Settings of Conflict and Fragility: Improving Learning for Results, DAC Guidelines and Reference Series, 57-75. Paris: OECD Publishing, 2012.
    • Available here
    • Intermediate
    • This chapter is specifically dedicated to conducting evaluations in situations of conflict and fragility, and is divided as follows:
      • Allow an inception phase
      • Identify and assess the theory of change and implementation logic
      • Gather data
      • Criteria for evaluating
      • Draw conclusions and make recommendations
      • Reporting
      • Management response and follow-up action
      • Disseminate findings
      • Feed back into programming and engage in learning

What are Ethical and Conflict Sensitivity Issues in Evaluation, and How do I Manage them?

All parties interested in the evaluation of peacebuilding can benefit from the materials in this section, especially commissioners of evaluations and evaluators.  Commissioners may find these materials valuable because they may want to request that evaluators work within the framework of one of these guidelines.  Evaluators, in turn, will likely have to conduct evaluations under some set of ethical guidelines and, as suggested on BetterEvaluation, “evaluators responding to Requests for Proposal can include the guidelines as part of their proposal for conducting an evaluation.”

Ethics in Evaluation
  • United Nations Evaluation Group (UNEG). "UNEG Ethical Guidelines for Evaluation." Foundation Document, UNEG, 2008.
    • Available here.
    • Beginner
    • Applicable to the conduct of evaluation in all UN Agencies, the UNEG Guidelines for Evaluation highlight the importance of ethical conduct in evaluation, described as a shared responsibility of all relevant stakeholders. In addition to discussing ethical principles in evaluation, the document gives special attention to the duties of evaluation managers and evaluation commissioners.
  • American Evaluation Association. "American Evaluation Association Guiding Principles for Evaluators." 2004.
    • Available here.
    • Beginner
    • First developed in 1994 and revised in 2004, the Guiding Principles for Evaluators serve as a cornerstone of good evaluation practice.  The Guiding Principles are broadly intended to cover all kinds of evaluation and are aimed at guiding the ethical conduct of evaluation.  Additional guidance material on the Guiding Principles for Evaluators can be found here.
  • Duggan, Colleen, and Kenneth Bush. "The Ethical Tipping Points of Evaluators in Conflict Zones." American Journal of Evaluation 35 (2014): 1-22, doi: 10.1177/1098214014535658.
    • Available here.
    • Intermediate
    • This article highlights the specifics of conducting ethical evaluations in settings of conflict and how evaluators can manage the particular challenges that arise in these settings.  This article is relevant for both peacebuilding practitioners without evaluation experience and evaluators without prior experience in conflict zones.
  • Chigas, Diana, Madeline Church, and Vanessa Corlazzoli. "Evaluating Impacts of Peacebuilding Interventions: Approaches and Methods, Challenges and Considerations." CCVRI Guidance Series. London: DFID, 2014.
    • Available here.
    • Intermediate
    • This guidance includes a discussion of ethical and conflict sensitivity considerations for evaluations of peacebuilding impacts, including ethical and conflict sensitivity issues raised in different approaches to evaluation.

How do I Collect and Analyze Data?

These resources are intended to introduce users, especially practitioners without evaluation experience or knowledge, to quantitative and qualitative methods for evaluation.  The suggested readings provide a general overview of evaluation methods and their advantages and disadvantages, and offer guidance on how to conduct evaluations using each.  Using mixed methods (both quantitative and qualitative) is increasingly recommended for robust evaluations; guidance on mixed methods is thus also included below.

Evaluation Methods and Data Analysis
  • Brikci, Nouria, and Judith Green. "A Guide to Using Qualitative Research Methodology." MSF, 2007. URI: http://hdl.handle.net/10144/84230
    • Available here
    • Beginner
    • This practical guide begins with a discussion of what qualitative research is, along with its aims, uses, and ethical issues, and then explains how to develop qualitative research designs.  It also explains how to generate data, with practical tips on asking questions, running a discussion, and other key aspects of qualitative research.  Finally, a discussion of data management and analysis, and of some practical issues, helps users make the most of their data.
  • Centers for Disease Control and Prevention (CDC). "Analyzing Qualitative Data for Evaluation." Evaluation Brief 19, CDC, 2009.
    • Available here
    • Intermediate
    • Relevant for evaluators, this brief focuses on analyzing qualitative data.  It includes an overview of qualitative data; how to plan for qualitative data analysis; how to analyze qualitative data; and the advantages and disadvantages of qualitative data.
  • Ober, Heidi. "Guidance for Designing, Monitoring and Evaluating Peacebuilding Projects: Using Theories of Change." London: CARE International UK, 2012.
    • Available here
    • Beginner, Intermediate
    • This general guidance contains practical tips on data collection methods in Section 4.6 (pp. 18-19).
  • USAID Bureau of Policy, Planning and Learning. "Conducting Mixed-Method Evaluations." Technical Note, Monitoring and Evaluation Series, Washington DC: USAID, 2013.
    • Available here
    • Intermediate
    • This technical note provides guidance on using mixed methods in evaluation, including concrete advice on how to get the most out of a mixed-method evaluation.
  • Resources on sampling are helpful for ensuring representativeness of the data collected:
    • BetterEvaluation.org: An easy-to-access introduction to sampling that covers probability sampling (including random sampling), purposive sampling, and convenience sampling, and suggests further resources.
    • Alexander, Jessica, and John Cosgrave. "Representative Sampling in Humanitarian Evaluation." Improving the Quality of EHA Evidence Discussion Series Method Note 1. London: ALNAP, 2014. Available here
    • Descriptions from the Encyclopedia of Survey Research Methods of purposive sampling (available here) and probability sampling (available here).
  • Alexander, Jessica, and Francesca Bonino. "Ensuring Quality of Evidence Generated through Participatory Evaluation in Humanitarian Contexts." Improving the Quality of EHA Evidence Discussion Series Method Note 3. London: ALNAP, 2014.
    • Available here
    • Intermediate
    • This note presents experience-based lessons about what tactics have been used to ensure accuracy and representativeness of data and analysis generated through participatory approaches and discusses the benefits of participatory evaluations.
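The sampling approaches named in the resources above can be illustrated with a minimal sketch. The example below (in Python, with hypothetical respondent data and function names chosen for illustration) contrasts simple random sampling, where every unit has an equal chance of selection, with stratified sampling, which draws from each subgroup separately so that small groups are not missed by chance; it is a sketch of the general technique, not an implementation from any of the cited guides.

```python
import random

def simple_random_sample(population, n, seed=None):
    """Probability sampling: draw n units, each with an equal
    chance of selection, which supports representativeness."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def stratified_sample(population, strata_key, per_stratum, seed=None):
    """Draw a fixed number of units from each stratum (e.g. district),
    guaranteeing that every subgroup appears in the sample."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(strata_key(unit), []).append(unit)
    sample = []
    for units in strata.values():
        sample.extend(rng.sample(units, min(per_stratum, len(units))))
    return sample

# Hypothetical survey frame: 80 respondents in the North, 20 in the South.
respondents = [{"id": i, "district": d}
               for i, d in enumerate(["North"] * 80 + ["South"] * 20)]

srs = simple_random_sample(respondents, 10, seed=1)
strat = stratified_sample(respondents, lambda r: r["district"], 5, seed=1)
```

In the stratified draw, both districts are guaranteed five respondents each, whereas a simple random sample of ten could, by chance, under-represent the smaller South district; this is the trade-off the sampling resources above discuss.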

I am Interested in Evaluating Peacebuilding Programming in a Particular Sector. Are there Special Considerations and Tools?

These tools and guidance have been developed for evaluation of particular peacebuilding sectors (e.g., media, economic development, security sector reform, dialogue, etc.).  Several articulate specific criteria and considerations for a particular sector.

Integrated Development and Peacebuilding
  • Bayne, Sarah, and Tony Vaux. "Integrated Peacebuilding and Development Programming: Design, Monitoring and Evaluation." CCVRI Guidance Series. London: DFID, 2013.
    • Available here
    • Intermediate, Advanced
Security Sector Reform
  • Johannsen, Agneta M. "Security Sector Reform Assessment, Monitoring and Evaluation." In Gender and Security Sector Reform Training Resource Package, edited by Megan Bastick. Geneva: Geneva Centre for the Democratic Control of Armed Forces (DCAF), 2015.
    • Available here
    • Intermediate, Advanced
  • Rynn, Simon and Duncan Hiscock. "Evaluating for Security and Justice: Challenges and Opportunities for Improved Monitoring and Evaluation of Security System Reform Programmes." London: Saferworld, 2009.
    • Available here
    • Intermediate
Media
  • Költzow, Sarah. "Monitoring and Evaluation of Peacebuilding: The Role of New Media." Paper 9. Geneva: Geneva Peacebuilding Platform, 2013.
    • Available here
    • Intermediate
Transitional Justice
  • Duggan, Colleen. "Show Me Your Impact: Evaluating Transitional Justice in Contested Spaces." Evaluation and Program Planning (2010), doi:10.1016/j.evalprogplan.2010.11.001
    • Available here
    • Intermediate, Advanced
Negotiation