Design, Monitoring and Evaluation for Peacebuilding


How to Improve the Effectiveness of Evaluations in the Field?

The Peacebuilding Evaluation Consortium (PEC) and the Network for Peacebuilding Evaluation (NPE) were pleased to host the Thursday Talk with Michele Tarsilla on February 19, 2015, on "How to Improve the Effectiveness of Your Evaluations in the Field? Rethinking Your Evaluation Language, from the Design of Your Evaluation Strategy and Terms of Reference to the Write-up of Your Evaluation Reports". If you missed the talk, please view the resources below.

“When we are all speaking the same languages, there are many other ‘languages’ at play behind and within what the speakers mean and what we in turn understand” (Minnich, 1990). Echoing these words, the Talk reviewed a number of implicit assumptions behind the use of language in evaluation, which profoundly affect the equity of evaluation efforts just as much as – if not more than – the degree of rigor associated with the methods and designs being used.

Recording and Transcript:

Due to a technical difficulty, there was no recording of this webinar. Our sincere apologies.

To view the PowerPoint, please click here.

To view the Transcript, please click here. 

About the Speaker:

A field person at heart with a good understanding of HQ processes, Michele has worked on designing performance and impact evaluation strategies at the World Bank; setting up an M&E system for a youth empowerment project implemented in a refugee camp in northwestern Kenya; and evaluating the performance and efficiency of a "Culture and Development" program aimed at rehabilitating street children living in a marginalized neighborhood of Kinshasa through the use of the arts.

More recently, he has been conducting comprehensive research on Evaluation Capacity Development (ECD) in a number of countries, and he is currently developing a new analytical and evaluative framework to assess the performance and effectiveness of national evaluation associations.

Questions from the Talk:

Barb Reed: Where may we find a descriptive guide to how to do Constructive Deconstruction?

 

Tom Grayson: Can you expand on the appropriate use of the term “value” or “values” in your work in the field?

 

Kaseem el Saddik: This is not a question but rather a comment suggesting that the language issue applies to all types of evaluation, not only those related to peace and humanitarian work. I would take the issue further and suggest that evaluations should not lose the human dimension and rapport. I am a passionate advocate of introducing coaching principles into evaluation practice, e.g., listening, curiosity, and deepening the learning, among others. Obviously, there will be no constructive exchange unless we get away from the buzzwords and jargon and explore what people really say.

Dear DM&E for Peace Thursday Talk Participants,

Thank you very much for taking the time to attend my presentation on the use of language in evaluation last week. I am glad that we had a chance to exchange briefly in the course of the Talk, and I look forward to learning more about your use of language within the scope of your own evaluation work. To this end, please look at the two questions shown on the last two slides of the presentation (posted on this website – see link above) and e-mail me your feedback/contribution at mitarsi@hotmail.com.

Below are my responses to the three participants’ questions that were transmitted to me by the DM&E staff:

Response to Barb's question: In response to your question, I am not aware of any descriptive guide on how to go about the “constructive deconstruction” process that I mentioned in the course of my presentation. While I hope that exchanges around the use of language in evaluation among practitioners might encourage our community (within and outside of peacebuilding) to rediscover the intentionality of the language we use in our professional endeavors, it is worth mentioning here Cornwall’s work on “constructive deconstruction”, which should give you a better idea of some of the most important related concepts. In particular, I strongly suggest you read the following: Cornwall, A., “Buzzwords and Fuzzwords: Deconstructing Development Discourse”, Development in Practice, Volume 17, Numbers 4–5, August 2007.

To access the article, please click on the following link: http://policy-practice.oxfam.org.uk/publications/buzzwords-and-fuzzwords-deconstructing-development-discourse-130903

 

Response to Tom's question: Tom, you raised a very important issue: the use of the term “value” in evaluation fieldwork. As mentioned in the course of my Talk, evaluation is often – and erroneously, I may say – defined as a neutral and objective endeavor aimed at providing an impartial assessment of why a certain project, activity, program, or policy works or does not work (among whom and under what conditions). My understanding, instead, is that evaluations (as suggested by the very root of the term, e-valu-ation) have an intrinsic value dimension. First of all, an evaluation (as traditionally juxtaposed with research) is expected to respond to a concrete question (or set of questions) identified by a group of people (often managers, funders, policy makers) in order to inform their decision-making. These are people with clear and distinct sets of values, and often, as in the case of participatory evaluation, an evaluator will need to develop an evaluation strategy that caters to multiple purposes, each featuring a different set of corresponding values, needs, and interests. Evaluators themselves (based on their academic background and the culture of their country of origin) then bring their own set of values into their evaluation work, with inevitable consequences for their use of evaluation methods. The default use of either structured surveys among a randomly selected sample of program participants or participatory learning and action approaches, for instance, is a good illustration of how different evaluators could address the same evaluation questions differently based on their own vision of the world (e.g., positivist or constructivist) and distinct goals (confirmatory, as in the case of surveys testing hypotheses, or exploratory, as in the case of case studies).

Likewise, one needs to take into account the even broader set of values underlying the interactions between the evaluators and the different evaluation stakeholders in the course of their planning, data collection, analysis, and dissemination activities. This is where it gets tricky. It is certainly feasible for a large number of evaluators, familiar with the concept of Utilization-Focused Evaluation, to discriminate between primary and secondary users and, in doing so, to privilege the information needs (and corresponding values) of the former over the latter. In conflict settings, though, this is more difficult to do, as acknowledging a certain set of values associated with one group rather than another might instill among some of the interested parties the sense that the evaluation is partisan from its onset. In this case, the evaluation might become in and of itself the cause of additional and unnecessary conflict and would infringe upon the "Do no harm" principle.

In order to mitigate this risk, what I normally try to do (before starting my fieldwork) is to acknowledge the values of the different groups involved (directly or indirectly) in the project or program that I am asked to evaluate. How do I do this? First of all, I clarify with the client that I am a transformative evaluator (yes, I am upfront about my “political” stance from the very beginning) and, as a result, I explain that I will adopt a transformative evaluation approach (Mertens, 2008). Being transformative means acknowledging the (often unintended) inequities and injustices associated with the specific program or policy being evaluated. Then, I try to learn about the different groups involved in the evaluation: this means that I look for work done by anthropologists and peacebuilding specialists in the very same settings where I am expected to work (qualitative work will indeed allow me to understand the values, the fears, the hopes, and the root causes of the existing conflict). In addition, I look carefully at the language used in the existing program documents (not only those prepared by the funder of the project being evaluated but also, and foremost, those prepared by the local project implementers). Similarly, I ask my colleagues with prior experience in the same sites where I will be conducting my evaluation about sensitivity issues (either language-related or more political in nature). Moreover, I explain to whomever I meet in the course of the evaluation that respect for everyone’s sensitivity is of paramount importance to me and that, therefore, they should feel entitled to correct me any time one of my words, expressions, or gestures makes them feel uncomfortable or appears ambiguous, if not outright disrespectful.

After all, I am perfectly aware that, despite my efforts, I will never gain a perfect level of cultural competence anywhere I end up conducting an evaluation. Entering the field with this sense of being a bit culturally "inadequate" or "incompetent" has proved to be quite effective. Instead of frustrating or discouraging me during my evaluation fieldwork, this thought has contributed to the successful completion of my evaluation assignments in two different ways. On the one hand, it has forced me to be humble and not to give local stakeholders the impression that I have already figured out "what's wrong with them" and how the evaluation will help them fix "their problem". On the other hand, it has pushed me to make a constant effort to be creative, adaptive, and contingent in order to gain a measure of respect and acceptance from local stakeholders (otherwise said, the awareness of my "limitations" has helped me to be sufficiently resourceful to turn my incompetence into an "acceptable" form of incompetence in the eyes of the local stakeholders).

Response to Kaseem's comment: Kaseem, I fully agree with you, and I might even go further by saying that paying attention to language should become a default practice not only in evaluation but in all of international development. Interestingly enough, while researching the use of language in evaluation, I ended up doing some pro bono work for an association serving individuals with developmental disabilities in France. It was incredible how this work experience taught me first-hand (in parallel to my research) how difficult and yet possible it is to reflect upon, and be intentional with, our daily use of words and gestures every time we communicate with individuals who do not share our linguistic codes, mental models, or cultural norms. As stressed in the course of my Thursday Talk, what I am trying to do with my current research is precisely to encourage evaluators to challenge and improve some of the terminology that circulates in evaluation circles. The reason for that is twofold. On the one hand, such reflection on language will allow us to become more culturally competent in our profession and possibly to establish a more effective rapport with the partners and organizations in the field with whom we work in the course of an evaluation and who often look at us with a certain sense of detachment in light of our "alleged" – and often imposed – intellectual and technical superiority. On the other hand, a more intentional and well-thought-out use of communication in the course of our evaluation work will enable us to better define and measure a variety of constructs which would otherwise remain hard to grasp and act upon.

I am grateful to this platform for the interest expressed in my research so far, and I look forward to exchanging further with you on both language-related issues and cultural competence in evaluation in the coming months!