Design, Monitoring and Evaluation for Peacebuilding


DM&E Tip: Evaluation Approaches

In the past, evaluation of international development programs was often a straightforward process in which a basic log frame was used to develop an evaluation methodology; sometimes even a gut feeling or an intuition sufficed as the basis for an evaluation. Peacebuilding DM&E, however, now features an emerging discussion on different approaches to evaluation. With that said, how exactly does an evaluator approach their craft?

An important factor to consider when choosing evaluation methodology is what you are evaluating. The type, level, stage, and theory of change of your program, combined with the purpose of the evaluation (upwards accountability; program performance and improvement; learning; etc.), determine which evaluation approach is the most appropriate. This DM&E Tip quickly goes over the differing schools of thought in evaluation and provides a few tips on when and where to use each approach.

Impact Evaluation

Impact Evaluation is the systematic identification of the effects – positive or negative, intended or not – on individuals, households, institutions, and the environment caused by a given development activity such as a program or project.1

Hot Resource! Impact Evaluation in Practice by Paul J. Gertler, Sebastian Martinez, Patrick Premand, Laura B. Rawlings, Christel M. J. Vermeersch

While the data collection tools Impact Evaluators use are wide ranging, Impact Evaluation seeks to answer one question: What is the impact or causal effect of a program on an outcome of interest?2 In other words, Impact Evaluations answer cause-and-effect questions that determine the extent to which the program was successful. This helps an evaluator measure the direct impacts or effects the program had on its intended outcomes. However, it does not answer the larger questions of how and why change occurs; it is therefore rarely used to measure unintended or behavioral types of change.

Hot Tip! Impact Evaluation usually involves the use of a counterfactual, an estimate of what would have happened had the intervention not occurred. This can be both methodologically challenging and expensive.
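To make the counterfactual idea concrete, here is a toy sketch (with invented numbers, not data from any real program) of a difference-in-differences calculation, one common quasi-experimental way of approximating a counterfactual:

```python
def diff_in_diff(treat_before, treat_after, comp_before, comp_after):
    """Impact = change in the treated group minus change in the comparison group.

    The comparison group's change stands in for the counterfactual: an
    estimate of what the treated group would have experienced anyway,
    had the program not occurred.
    """
    return (treat_after - treat_before) - (comp_after - comp_before)

# Hypothetical survey scores (0-100) on, say, inter-group trust:
impact = diff_in_diff(treat_before=40, treat_after=55,
                      comp_before=42, comp_after=47)
print(impact)  # (55 - 40) - (47 - 42) = 10
```

Here the treated group improved by 15 points, but because the comparison group improved by 5 points on its own, only 10 points are attributed to the program. The estimate is only as good as the assumption that both groups would otherwise have changed in parallel, which is part of why counterfactual designs are methodologically demanding.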

Developmental Evaluation

Developmental Evaluation is an adaptive learning process that seeks to build long-term, partnering relationships between evaluators and those engaged in innovative initiatives, engaging program teams in discussions framed by evaluative questions, data, and logic.3

Hot Resource! A Practitioner's Guide to Developmental Evaluation by Elizabeth Dozois, Marc Langlois, Natasha Blanchet-Cohen

Contrary to impact evaluation, developmental evaluation is highly fluid, adaptive, and contextual. As a result, this approach is often used in complex and emergent situations that involve multiple actors and where change is often non-linear. Often embedded in a program team rather than contracted externally, a developmental evaluator focuses on context and relationships to support innovation and informed decision making.

Hot Tip! Developmental Evaluation is, in many ways, quite similar to reflective practice. Check out our three part discussion on the topic!

Most Significant Change 

Most Significant Change (MSC) is a participatory and process oriented approach to evaluation that involves the collection of stories highlighting “significant change” that emanate from the field level, and the systematic selection of the “most significant” of these stories by panels of designated stakeholders or staff.

Hot Resource! The Most Significant Change Technique: A Guide to Its Use by Rick Davies and Jess Dart

The MSC approach to evaluation works with stories rather than “indicators.” It rests on the idea that informal narrative stories allow all involved parties to develop a discussion about what they see as the impact and performance of the program as a whole. As a result, MSC is valuable for locating undefined outcomes and unexpected change.

Outcome Evaluation

Outcome Evaluation focuses on outcomes, defined as changes in the behavior, relationships, activities, or actions of the people, groups, and organizations with whom a program works directly.

Hot Resource! Outcome Mapping: Building Learning and Reflection into Development Programs by Sarah Earl, Fred Carden, Terry Smutylo

Outcome Evaluation is first and foremost concerned with behavioral change. Here, the evaluation does not pursue the direct impacts of the program, because attribution is often too convoluted; rather, the focus is on contributions to outcomes.4 Accordingly, Outcome Evaluation allows complexity and correlation to take precedence over impact and causality. This approach is particularly well suited to programs that involve capacity building.

Hot Tip! Check out our DM&E Tip on When to Use Outcome Mapping

Utilization-focused Evaluation

Utilization-Focused Evaluation is a process for making decisions about the purpose, data, design, and focus of an evaluation in collaboration with an identified group of primary users, focusing on their intended uses of the evaluation.

Hot Resource! Utilization Focused Evaluation Checklist by Michael Quinn Patton

In accordance with its name, Utilization-Focused Evaluation is first concerned with the utility of an evaluation.  If and how the evaluation is used takes precedence, and therefore, “the focus in utilization-focused evaluation is on intended use by intended users.”5 This approach is highly contextual and personal, as the goal is to develop an end product that is best-suited and customized to the needs of the intended users.


 

Joshua Wunderlich is the Institutional Learning Team Intern at Search for Common Ground. Views expressed herein do not represent SFCG, the Learning Portal, or its partners or affiliates.

 

  • 1. World Bank, Monitoring and Evaluation: Some Tools, Methods, and Approaches, pg. 10.
  • 2. Paul J. Gertler, Sebastian Martinez, Patrick Premand, Laura B. Rawlings, Christel M. J. Vermeersch, Impact Evaluation in Practice, pg. 7.
  • 3. Adapted from Elizabeth Dozois, Marc Langlois, Natasha Blanchet-Cohen, A Practitioner's Guide to Developmental Evaluation, pg. 15.
  • 4. Sarah Earl, Fred Carden, Terry Smutylo, Outcome Mapping: Building Learning and Reflection into Development Programs, pg. 1.
  • 5. Michael Quinn Patton, Utilization-Focused Evaluation Checklist, pg. 1.