Design, Monitoring and Evaluation for Peacebuilding

Learning and Accountability: Behind the Buzzwords and Debate

If you’ve dabbled in M&E literature, you’ve probably noticed the words “learning” and “accountability” pop up time and again. But what’s the real story behind these words? What do donors mean when they say they want accountability? What does learning mean in the context of an evaluation? This post will dig into these terms and discuss the oft-cited tension between the two.

Why Evaluate?

To start, evaluations can be designed and implemented for a whole array of reasons. You might want to know whether or not the program was cost-effective, or if the program was relevant to the conflict at hand. Or perhaps you want to know if the project is sustainable beyond your involvement.

Accountability, as it’s often used in M&E speak, means exploring whether or not a program “worked.” Were the dollars devoted to a program spent in a way that produced the intended results? How effective was the project at achieving its objectives? USIP’s Evaluation Policy, for instance, notes that there are often multiple accountabilities: to the communities in which we work, to funders, to partners, and to ourselves as professionals. Evaluation can be a key tool for understanding the extent to which we live up to our commitments to each of these groups.

On the other hand, when we say an evaluation’s purpose is for learning, we want to understand why certain outcomes occurred, or simply more about the process of implementing a program. Did the program target the appropriate groups or individuals to achieve its stated objectives? Were resources sufficient, and allocated in a way that was effective? In sensitive, dynamic conflict contexts in which peacebuilding groups frequently work, self-reflection and learning are keys to informing how we work, what we do, and where we do it.

To put it succinctly, both “learning” and “accountability” are used to describe different purposes of evaluating.

The Tension

Beyond the concepts themselves, much has been made of a tension between evaluating for the purpose of “learning” and evaluating for the purpose of “accountability.” Many contend that the goal of evaluating to hold organizations accountable is incompatible with wanting to know why something worked, learn from it, and improve in the future.

To highlight a practical example, Claire Hutchings describes Oxfam’s struggle to manage this learning-versus-accountability tug-of-war as it creates an organization-wide system for evaluation. The learning-accountability dichotomy is also felt in other fields. Education policymakers, for instance, have been grappling with the issue for years.

Irene Guijt offers perhaps the best explainer on how the tension plays out day-to-day:

 “The daily reality is that tensions between the two are alive and kicking. This results in major headaches for many organizations and individuals, straining relationships up and down the ‘aid chain’. Official policies that profess the importance of learning are often contradicted by bureaucratic protocols and accounting systems which demand proof of results against pre-set targets. In the process, data are distorted (or obtained with much pain) and learning is aborted (or is too haphazard to make a difference).”

While the tension described here can indeed exist, it is not inherent, and learning and accountability goals don’t have to be mutually exclusive. In fact, there are concrete steps, detailed by Irene Guijt, that can more closely link learning and accountability:

1. Make sure accountability has a learning component. In other words, one of the measures of success is how much has been learned in the process. This wipes out any inherent contradiction between learning and accountability.

2. Clarify what it means to “learn” and what it means to be “held accountable.”

3. Sequence accountability requirements with learning opportunities. For example, evidence gathering can be timed to feed into strategy planning and reflective learning meetings.

Of course, these steps are easier said than done, and they may fall outside of your control or your organization’s scope.

But the takeaway is this: while thinking in terms of “learning” or “accountability” can be helpful for framing priorities and guiding an evaluation, the terms shouldn’t limit your thinking or dampen creativity.

Michael Zanchelli is a Senior Program Assistant for Learning and Evaluation at the U.S. Institute of Peace, where he supports knowledge management, learning, and evaluation initiatives.