Measuring advocacy impacts: Lessons from piloting a hybrid methodology

Author, Copyright Holder: Monica Stephen

This blog was posted with the permission of International Alert. You can view the original post here.

International Alert recently evaluated an area of its peacebuilding advocacy. We experimented with a relatively new methodology for measuring advocacy impacts: a hybrid of contribution analysis and process tracing. Below is a brief introduction to the evaluation and some insights from the experience.

What was evaluated?

The evaluation looked at a specific activity from Alert’s seven-year programme of engagement with the World Bank. An independent consultant examined Alert’s efforts to influence the results monitoring system of the World Bank’s International Development Association (IDA) – a fund for the world's poorest countries – during the negotiations that preceded the 17th replenishment of IDA. Alert’s aim was to develop a deeper understanding of how our approach to advocacy works, how we can strengthen impacts, and how best to evidence those impacts.

How was it evaluated?

Elements from two different approaches to evaluation – contribution analysis and process tracing – were combined. The methodology involved developing and testing four different contribution claims (hypotheses).

First, these contribution claims were tested against a contribution story that took into account the activities of Alert, relevant World Bank teams and other relevant NGOs during the IDA 17 negotiation period. The contribution story provided a detailed picture of the actors and initiatives involved in influencing the outcomes of IDA 17 with regard to engagement in Fragile and Conflict-affected Situations (FCS).

The credibility of Alert’s contribution claim was assessed alongside the other contribution claims documented in the contribution story. Evidence was reviewed, relevant people were interviewed and results chains were tested. The assessment sought to find the main weaknesses in the different contribution claims. Process tracing was then used to further test the claims – to check that outcomes were consistent with the project theory of change and that alternative explanations for the outcomes could be ruled out.
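To make the hybrid logic concrete, here is a minimal sketch in Python of how contribution claims and process-tracing evidence tests could be represented. Process tracing conventionally classifies evidence tests by whether passing them is necessary and/or sufficient to sustain a claim (the ‘straw-in-the-wind’, ‘hoop’, ‘smoking-gun’ and ‘doubly decisive’ tests). The claim, tests and verdicts below are invented for illustration; they are not findings from the evaluation.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceTest:
    description: str
    necessary: bool   # failing a necessary ('hoop') test eliminates the claim
    sufficient: bool  # passing a sufficient ('smoking-gun') test strongly confirms it
    passed: bool

@dataclass
class ContributionClaim:
    hypothesis: str
    tests: list = field(default_factory=list)

    def status(self) -> str:
        if any(t.necessary and not t.passed for t in self.tests):
            return "eliminated"          # failed a hoop test
        if any(t.sufficient and t.passed for t in self.tests):
            return "strongly supported"  # passed a smoking-gun test
        if any(t.passed for t in self.tests):
            return "weakly supported"    # only straw-in-the-wind support
        return "undetermined"

# Hypothetical stand-in for one of the four claims tested in the evaluation.
claim = ContributionClaim(
    hypothesis="Alert's briefings shifted the IDA 17 results framework discourse",
    tests=[
        EvidenceTest("Alert had access to negotiators before key decisions",
                     necessary=True, sufficient=False, passed=True),
        EvidenceTest("Framework language uniquely mirrors Alert's recommendations",
                     necessary=False, sufficient=True, passed=False),
    ],
)
print(claim.status())  # -> "weakly supported"
```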

What did the evaluation conclude?

The evaluation found that Alert had access to key stakeholders during the IDA 17 negotiations thanks to its specific expertise on World Bank policies and practice in FCS. It also found that Alert had influenced the quality of the discourse on FCS among World Bank decision-makers, relevant government agencies such as the Department for International Development (DFID) and NGOs such as Oxfam.

The evaluation found that Alert shifted the way in which World Bank decision-makers and influencers discussed issues related to FCS (i.e. the content of the discourse, its tone, and also its attentiveness to the nuances and complexities of working in FCS).

However, the advocacy approach Alert used – targeting reformers within the institution and aligning its messages to shape internal discourse – made determining its exact contribution difficult. From the material evidence available, the evaluation correctly concluded that Alert’s contribution to the IDA 17 results measurement system was smaller than that of efforts being made within the World Bank itself. Change, after all, can only be driven from within. Yet key World Bank staff pointed out that, “we [bank staff] had been involved in a very controversial and difficult dialogue with [Bank] shareholders about possible significant changes to IDA to strengthen support to FCS… Having a strong group of stakeholders from the outside helps to push the institution… I believe Alert played a role with those shareholders to support those changes.”

Why is evaluating advocacy important?

Why do we want to measure our specific influence? If change happens, is that not enough? No! Primarily we want to measure our influence because we want to be sure that our specific efforts are contributing to making a difference; that we are not wasting valuable resources to no effect.

Secondly, we want to learn about what works and how we can do more with the resources we have to make a bigger difference. And ultimately, we want to be accountable and transparent to the people that benefit from our work and to the people that fund our work. We want to be able to demonstrate that we are using our comparative advantage and limited resources well.

What is unique about evaluating advocacy?

Evaluating advocacy or influence is not like evaluating other development interventions. At its heart, advocacy is a political process. While organisations may seek to be apolitical and draw their recommendations from robust evidence, the reality is that all of us operate in a political environment, and in political processes it is often difficult to unpick causal chains – why one thing led to another.

Indeed it is sometimes in the interests of individuals or organisations to blur causal chains. Furthermore, advocacy impacts can be direct, but more often they are indirect, unfolding over an extended timeframe. And it is widely accepted that advocacy results are rarely the product of one organisation, project or initiative working alone, but instead the product of multiple dynamics, organisations, individuals and initiatives coming together in unpredictable ways.

The findings of Alert’s IDA evaluation demonstrate this. Keeping these realities in mind, we can conclude that successful advocacy is less about individual actors pursuing a predefined plan of action that works in theory but cannot flex to the realities of political processes, and more about actors with a common goal interacting to identify and adapt to changing circumstances and opportunities, making the best use of their respective comparative advantages along the way.

What does this uniqueness mean for designing advocacy evaluations?

A key learning from the literature is that evaluators need to use the longest feasible time horizons to evaluate advocacy impacts. They need to look at the sustainability of hard-won reforms, assessing what is needed to ensure reforms stick and become the foundation for further and deeper reforms.

In some cases it may be better to evaluate advocates or organisations as opposed to individual advocacy initiatives – assessing their ability to adapt to challenges and opportunities, to act strategically and collectively. Sometimes organisations are ‘knowledge suppliers’ and ‘influencers’ of intermediary organisations who in turn have the necessary connections to influence decisions on policy or practice.

In such cases, measuring the interaction between the different entities or approaches that helped to produce the change is important. Network evaluation could be a useful methodology in this regard, assessing the reputation and influence of an organisation in its policy space.

For example, Alert can demonstrate strong network centrality when it comes to understanding and making recommendations on the policies and practices of international development banks operating in FCS. Alert’s comparative advantage comes from its capacity to link local first-hand experience to global policy dialogue through its network of country offices and international policy outreach. By drawing together experiences across different countries, Alert is able to challenge bank policy and practice, supporting shifts at the local and global level simultaneously.
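As a hedged illustration of what such a network evaluation might compute, the sketch below uses the Python networkx library to score organisations in a toy policy-influence network on degree and betweenness centrality. The organisations and ties are invented for the example; they are not data from the evaluation.

```python
import networkx as nx

# Illustrative policy-influence network: nodes are organisations, edges are
# documented working relationships. All ties below are invented.
G = nx.Graph()
G.add_edges_from([
    ("Alert", "World Bank"),
    ("Alert", "DFID"),
    ("Alert", "Oxfam"),
    ("Alert", "Country offices"),
    ("Oxfam", "World Bank"),
    ("DFID", "World Bank"),
])

# Degree centrality: an actor's share of direct ties. Betweenness centrality:
# how often an actor sits on the shortest paths between others (brokerage).
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

for org in sorted(G.nodes, key=degree.get, reverse=True):
    print(f"{org:16} degree={degree[org]:.2f} betweenness={betweenness[org]:.2f}")
```

A real network evaluation would build the edge list from documented interactions (see the documentation sketch further below) and track how an organisation’s centrality on a given policy issue changes over time.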

Would the conclusion of this evaluation have been different if it had had a longer time horizon?

Because the evaluation focused on a single initiative (engagement with the World Bank on the specific question of the IDA 17 results measurement system) within a narrow timeframe (late 2012 to December 2013), its parameters did not allow the evaluator to take into account the cumulative impact of Alert’s engagement with the World Bank over the preceding five years. Nor did it look at the function of the IDA 17 advocacy initiative within Alert’s wider strategy for engagement with the World Bank at the global and local levels.

A longer time horizon would have taken into account Alert’s long-term strategic objective for engagement with the World Bank: “to enhance the Bank’s responsiveness to, and prioritisation of, local peacebuilding needs and priorities in its policy and practice”. It would also have taken into account the fact that Alert has been advocating for the Bank to assess the performance of its country strategies and projects according to their attention to conflict and fragility dynamics and their action to address those dynamics since 2008.

Alert’s IDA recommendations were a version of the organisation’s long-term advocacy messages. A wider time horizon for the evaluation could also have produced deeper insights into the length of time and range of efforts and interactions it takes for a medium-sized, London-based INGO to achieve policy advocacy results with global, multilateral institutions like the World Bank.

What can we learn about advocacy and evaluating advocacy from this evaluation experience?

Expertise and trust matter. No other NGO was able to talk to the same officials about the same policy and practice issues with the same level of confidence and trust.

Timing is critical. Building key relationships and engaging in policy negotiation processes before they formally start reinforces organisational influence. A broader evaluation scope and timeframe could make all the difference to capturing that influence.

Visibility and documentation make a difference. Alert’s ‘insider inspirational’ approach to advocacy – engaging individuals from a range of influential institutions in a process of collaborative critical inquiry, grounded in empirical data – involves close, behind-the-scenes engagement with decision-makers and their advisors, and collaboration with a wide range of networks and allies.

The character of Alert’s approach to advocacy requires investment in systems to document and record these behind-the-scenes interactions. Systematic documentation of interactions would enable evaluators to rapidly track Alert’s networks and influence across a range of issues, organisations and timeframes. However, to be effective, such a system would need to be easy to use for individuals with busy outreach schedules, and it would need to demonstrate its added value early in its roll-out, so that advocates across the organisation see the benefit of using it.
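As a sketch of how lightweight such a documentation system could be, the Python example below appends one interaction record at a time to a shared CSV log. The schema, field names and file name are assumptions made for illustration; they are not Alert’s actual system.

```python
import csv
from dataclasses import dataclass, asdict, fields
from pathlib import Path

# Hypothetical minimal record of one behind-the-scenes advocacy interaction.
@dataclass
class Interaction:
    when: str          # ISO date, e.g. "2013-06-14"
    contact: str       # person or team engaged
    organisation: str
    issue: str         # policy issue discussed
    channel: str       # e.g. meeting, call, briefing note
    outcome: str       # what shifted or was agreed, in one line

def log_interaction(path: str, entry: Interaction) -> None:
    """Append one interaction to a shared CSV log, adding a header if the file is new."""
    file = Path(path)
    is_new = not file.exists()
    with file.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(entry)])
        if is_new:
            writer.writeheader()
        writer.writerow(asdict(entry))

log_interaction("advocacy_log.csv", Interaction(
    when="2013-06-14", contact="IDA results team", organisation="World Bank",
    issue="IDA 17 results measurement system", channel="briefing meeting",
    outcome="Agreed to circulate FCS indicator note internally",
))
```

The design choice here is deliberate friction reduction: one call appends one line to a shared file, so that logging an interaction takes seconds rather than requiring a dedicated database.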

The overall evaluation experience revealed that there’s no cutting corners. Systematically measuring the impact of advocacy is complex; it involves broad timeframes and, as a result, it is costly. We’ve talked above about timeframes and the complexity that comes from the political character of advocacy, but we’ve not looked closely at the complexity that comes from the multitude of stakeholders that play a role in influencing decision-makers and the direction of policy. If in any doubt about this complexity, take a look at a diagram of the policy cycle in this ODI presentation, and imagine the donor box to be the World Bank.

What becomes clear from this diagram, and from our experience of the hybrid contribution analysis – process tracing methodology, is that proving specific and direct influence within this hive of highly political activity over an extended timeframe (six years) would require significant resources to deliver – time, money and a full team of evaluators.

For most organisations, the high cost of conducting such an evaluation would be hard to justify alongside other commitments. Other advocacy evaluation methodologies, such as network analysis at the organisational level, may be more cost-effective while still offering good quality insights into what works and what doesn’t.

Summaries of the evaluation report and learning from the evaluation methodology are available here.

 

Monica Stephen is an independent consultant working on research and evidence-based advocacy for better peacebuilding and international development. Contact her at monicastephen@gmail.com. For more information about our approach to monitoring, evaluation and learning, contact Marie Weiller at MWeiller@international-alert.org.