Design, Monitoring and Evaluation for Peacebuilding

Highlights from Day Two (Thursday) at Evaluation 2016 in Atlanta

Author, Copyright Holder: Jack Farrell

Check out our highlights from Day 2 of Evaluation 2016 in Atlanta!


 

Session: Standards for Peacebuilding Evaluation

 

This session looked at the work of the Peacebuilding Evaluation Consortium (PEC) and the potential establishment of common standards for monitoring & evaluation (M&E) in peacebuilding.

 

These standards are still a work in progress, but with respect to monitoring they include:

 
  • Indicators should always be DIRECT MEASURES of the outputs & outcomes identified in the project design phase.

  • Monitor the conflict dynamics identified in the program’s conflict analysis.

  • Indicators should be CONTEXT SPECIFIC.

  • The conflict analysis should be UPDATED ROUTINELY to reflect the changes in the conflict dynamics.

  • It’s VITAL to collect both perception data and data on the actual state of affairs.

  • Data collection processes should be CONFLICT SENSITIVE.

  • Use CREATIVE METHODS to collect data.

  • Data collectors should be selected after the conflict analysis has been carried out.

  • Monitoring data should be reviewed regularly to allow for ADAPTIVE program management.

  • Monitoring data should be SHARED while remaining conflict sensitive.

 

And for evaluation:

 
  • The evaluation should provide a more in-depth explanation of why change DID OR DID NOT occur.

  • Provide evidence-based, HONEST FEEDBACK.

  • Evaluators should be prepared for the MODIFICATION of the original program design.

  • Evaluation field work should pay equitable attention to ALL CONFLICT STAKEHOLDERS.

  • PARTICIPATORY approaches should be used in the evaluation design process.

  • Evaluations should be initially designed with FLEXIBILITY.

 
 

Session: Developmental Evaluation: Where Program Design meets Evaluation

 

Developmental Evaluation (DE) is defined by BetterEvaluation as an evaluation approach that can assist social innovators in developing social change initiatives in complex or uncertain environments. This session focused on the work of the DEPA-MERL Consortium, and below are a few key points that arose from the discussion:

 
  • What are the criteria for deciding whether or not a DE is necessary?

 

1. Are the processes/implementation/objectives expected to change?

2. Is the program operating in a complex environment? Is the program complex?

3. Has everyone bought into the idea of an adaptive process?

4. Does the contracting mechanism allow for some flexibility?

5. Does the timing of the program lend itself to launching a DE?

 
  • Developmental Evaluation needs to be communicated as an integrated process. This is not the normal evaluative process. It needs to be built in from the design phase and be embedded throughout the lifecycle of a program.

  • Points 3 & 5 above are among the most important considerations when deciding whether a DE is necessary. Firstly, it is imperative that everyone (program staff, funders, and evaluators) is on board with the DE process. Secondly, early intervention is vital: for a DE to be successful, it needs to be involved from the project’s beginning.

 

Session: BalanceD-MERL & Establishing a Culture of Learning

 

Check out these fantastic visual notes from Katherine Haugh! You can follow Katherine’s work on her website www.katherinehaugh.com & on Twitter: @Katherine_Haugh!

[Image: BalanceD-MERL visual notes by Katherine Haugh]

 
 

Session: Local Ownership of M&E: Participatory M&E Methods in Design & Practice

 

This session discussed incorporating local partners and beneficiaries throughout the entire life cycle of monitoring and evaluation projects and systems, as a way to alleviate some of the limitations of externally led M&E. Local actors can provide deep contextual knowledge of the operating environment, the cultural setting and norms, and the various actors and the dynamics among them. Drawing on experience within their own organizations, Vanessa Corlazzoli of Search for Common Ground and Natalie Trisilla & Youssra Bakhir of IRI detailed a number of key points with regard to local ownership of M&E.

 

Search for Common Ground:

 
  • There has been a definite shift towards local ownership of M&E, but we are not quite there yet. While it’s not quite local ownership, it can be classified as intentionally inclusive.

  • Search for Common Ground has been sharing all of their evaluations online since 2004 and they can be accessed here.

  • Sharing all of the evaluations does not, however, guarantee that teams & programs implement all of the lessons learned.

  • With regard to coaching staff and local partners in M&E, there is no “one off” training; staff are constantly being updated with new developments.

  • Publishing evaluations has improved their quality.

 

IRI:

 
  • It has become increasingly important to engage local participants in the M&E process in creative and interactive ways.

  • Investment in local M&E capacity has been a game changer for IRI. It allows them to conduct context analyses more effectively and to communicate M&E processes better with local stakeholders.

  • Adaptive processes are vital to the success of programs, as they allow evaluation staff to be responsive.

 

Remember you can follow live updates from Atlanta with DME for Peace on Twitter!