Design, Monitoring and Evaluation for Peacebuilding

Better Strategies for Analyzing Narrative Data

The Peacebuilding Evaluation Consortium (PEC) and the Network for Peacebuilding Evaluation (NPE) were pleased to host this Thursday Talk with Reina Neufeldt of the University of Waterloo, who discussed using and analyzing narrative data.

Interviews can add depth and texture to an evaluation by helping to show how an intervention has affected individuals and how they view the intervention and the changes it brought.  Yet interview data is often underutilized by evaluators. Reina addressed two questions: How can we use interviews to gain a more complex understanding of an intervention?  And what are some better strategies for analyzing interview data in peacebuilding evaluation?

About the Speaker

Dr. Neufeldt is an Assistant Professor in the Peace and Conflict Studies Program at Conrad Grebel University College, University of Waterloo.  She has engaged in program design, monitoring, evaluation and learning in peacebuilding and conflict resolution for over fifteen years.

Reina has an MA in Social Psychology (York University) and a PhD in International Relations (American University); she is trained in both qualitative and quantitative research methods.  Reina’s current research focuses on field learning and explores the role of reflective practice in improved peacebuilding. 

Recording and Transcript

 

To review the accompanying PowerPoint, please click here

To review a written summary of the presentation and discussion, please click here

 

From Valentina Bau: 

Does Professor Neufeldt have any suggested readings on this topic? 

Dear Valentina,

Definitely! Here are some resources - some are online and others are books:

For short overviews on thematic coding I'd recommend:

Colin Robson. Real World Research.  Chapter 17: The Analysis and Interpretation of Qualitative Data. 2011 (or earlier edition). UK: John Wiley & Sons, Ltd.

Bruce Berg and Howard Lune. Qualitative Research Methods for the Social Sciences. Chapter 11: An Introduction to Content Analysis. NY: Pearson. There are many editions of this book; it was in its 8th edition in 2012.

The qualitative software sites (like NVivo) often have examples of coding as well and explanations of coding processes.

For deeper exploration of content analysis a good accessible source is:

Margaret LeCompte  and Jean Schensul's Analyzing and Interpreting Ethnographic Data. Walnut Creek: AltaMira Press (A Division of Sage). 

There are also a lot of texts on grounded theory and quasi-experimental content analysis out there.

If you  want more on how to structure a good interview and set yourself up for good data, the books I mentioned earlier contain guidance on that; there's also a nice short summary on interviews as a data collection methodology on Better Evaluation at:

http://betterevaluation.org/evaluation-options/interviews 

Hope that is a helpful beginning!  Thanks for the question.

Thank you very much for the references.

Thank you everyone!

Thank you for the lively engagement during today's talk and the many interesting questions. As we were unable to take all of the questions during the allotted time, we are posting them here.

----------

Charles Guedenet, (IREX): What kind of training is needed for coders to reduce inconsistent coding and improve reliability?

Miek van Gaalen: Do you have any advice to ensure that you have the right cultural lenses in your interpretations and coding of interviews?

Laura McGrew: How are you able to negotiate adequate time in an evaluation process for the lengthy transcription and coding process?

Annette Ching’andu, (Habitat for Humanity International): What would be a good strategy to add a quantitative element to qualitative data so that those using the data can have a sense of how to generalize the findings to the target population?

Ruth Simpson: Would an additional phase focusing on who we choose for key informant interviews (KIIs) help, i.e. a wider range of narrative data to begin with? For example, bellwether interviews, interviews with underrepresented groups (who might have an alternative view), and interviews with those who were not engaged in the project.

Barb Reed: How do you code "leading questions" that interviewers may use?

Judith Russell: You briefly mention the idea of going back to interviewees/stakeholders to check in with them once the data is in Round 2 of the analysis. Can you provide more details or more ideas on how this process can be more participatory without it becoming too time consuming and expensive?

Mary Dalsin: You mentioned putting quotes into the report to emphasize areas/ideas - do you promote the use of case studies as well?

Peter Woodrow: Can you address the process of capturing actual quotes? You mentioned taking handwritten notes.  In this age of electronics, are people more comfortable with recording? And which method would capture their words more accurately?

Isabella Jean (CDA): Could you give an example of managing a participatory data analysis process where interpretation of data/meaning and deliberation on conclusions featured very different and opposing perspectives, and no consensus was reached among staff/partners/participants and the evaluation team?

Tersoo Yese: Is this the same as quantifying qualitative data? If yes, can we have a practical example?

Sho Igawa: In an organization operating in a local community, is it helpful to divide up roles between someone who engages with locals the most in operations and someone who gathers data for evaluation? What I am asking is how to make clear to participants what the nature of their engagement with your organization is. I feel that if it were me, I would be torn between engaging with local people on a personal level and engaging in order to observe/analyze my organization's intervention's impact… do you believe in dividing up these roles, or is there just a balance to be struck?

Jelena Zelenovic: How do you ensure that the interview process itself, and the time given by the interviewees, does not raise expectations that they will see a concrete quick result, an improvement in their life, or additional support from the project or from someone else? And how do you try to mitigate the possibility that they will be very affected by the detail and emotion they are sharing with you as part of the evaluation, when, as an evaluator, your job is not to offer support?

 

All,

 

Thanks for the thoughtful questions; they offer good additions to my remarks and raise additional points for consideration. I'll do my best to respond to them under a few themes.

 

Choosing Interviewees

Ruth – yes, I agree with those suggestions around a pre-phase process for key informant interviews, as well as developing a larger pool of narrative data from other sources.

 

Transcription

Peter – thanks.  If participants agree, yes, definitely record (although give them the option not to be recorded).  It is hard to code and cross-check audio directly (this has to do with the nature of sound), so I still recommend transcription for coding.  There are programs that can do automatic transcription, which can help speed the process, though you usually still need to go back and clean up the data.
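By way of illustration only (this is not from the talk), a minimal first-pass transcription sketch using the open-source Whisper library might look like the following; the file name, model size, and output path are assumptions, and the raw output would still need the manual clean-up described above.

# Rough automatic transcription of a recorded interview (illustrative sketch only).
# Assumes the openai-whisper package and ffmpeg are installed, and a local file "interview_01.mp3".
import whisper

model = whisper.load_model("base")             # small model; larger models are slower but more accurate
result = model.transcribe("interview_01.mp3")  # returns a dict including the full "text"

with open("interview_01_transcript.txt", "w") as f:
    f.write(result["text"])                    # raw text still needs manual review and speaker labels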

 

The Coding Process

The process I spoke to was an individual coding process, where you do the coding yourself, are transparent about your categories, substantiate where those categories emerged from, and provide the data that supports your interpretation (via quoted material).  Doing it yourself is time-consuming but, from my perspective, well worth it in terms of depth of knowledge.  It is less time-consuming if you have a narrower filter for relevance, that is, a narrower set of coding filters (for example, coding for four dimensions of change is less time-consuming than open-coding the full interview transcript).

When you do the coding and categorization yourself, you will want to check your own filters and knowledge base.  Miek's comment is helpful in pointing out that external evaluators may be challenged to really understand local context, turns of phrase and implied understandings (and silences).  It is worth discussing your interpretations of people's comments with a variety of stakeholders from a given context who can help identify limits to your own thinking (e.g. cultural filters).
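By way of illustration only (the categories and quotes below are invented, and this is not part of the original remarks), a minimal bookkeeping sketch in Python shows one way to keep coded segments linked to their supporting quotes, so that each interpretation can be substantiated with quoted material.

# Minimal, hypothetical bookkeeping for thematic coding (categories and quotes invented for illustration).
from collections import defaultdict

# Each coded segment: (interview id, coding category, supporting quote)
coded_segments = [
    ("interview_01", "personal change",     "I listen differently now when we disagree."),
    ("interview_01", "relationship change", "We invited the other village to the market again."),
    ("interview_02", "structural change",   "The committee now includes two women representatives."),
    ("interview_02", "personal change",     "I am less afraid to speak at meetings."),
]

# Group quotes by category so each interpretation can be backed by quoted material.
by_category = defaultdict(list)
for interview_id, category, quote in coded_segments:
    by_category[category].append((interview_id, quote))

for category, quotes in by_category.items():
    print(f"{category}: {len(quotes)} coded segment(s)")
    for interview_id, quote in quotes:
        print(f'  [{interview_id}] "{quote}"')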

 

You can get others to code interviews. That works best if you have a pre-established coding framework, and it is part of the process of turning interpretive data into data for statistical analysis.  This is the type of coding, Tersoo, where narrative data is turned into coding categories that can be counted and statistical analyses performed.  As Charles notes, you then need to run a training session to ensure your group is coding uniformly based on those categories. In these cases, you can run inter-rater reliability tests on the coding to assess the degree to which coding is consistent between coders (e.g. Goodman and Kruskal's gamma or Pearson's r).
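As a small, hedged sketch of what such a check could look like (the codes below are invented, and Cohen's kappa is included as one additional, commonly used agreement measure not named above), two coders' numeric codes for the same interview segments can be compared directly.

# Hypothetical inter-rater reliability check for two coders (codes invented for illustration).
# Assumes the scipy and scikit-learn packages are installed.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Numeric codes assigned by each coder to the same ten interview segments,
# e.g. 1 = personal change, 2 = relationship change, 3 = structural change.
coder_a = [1, 2, 2, 3, 1, 1, 3, 2, 1, 3]
coder_b = [1, 2, 3, 3, 1, 2, 3, 2, 1, 3]

r, p_value = pearsonr(coder_a, coder_b)       # Pearson's r, as mentioned above
kappa = cohen_kappa_score(coder_a, coder_b)   # Cohen's kappa, a chance-corrected agreement measure

print(f"Pearson's r: {r:.2f} (p = {p_value:.3f})")
print(f"Cohen's kappa: {kappa:.2f}")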

 

Barb, your question about coding "leading questions" in the interview speaks to the importance of developing good questions before the interview.  In the previous post, I noted a link to a website that gives guidance on interviewing more generally; structuring interviews so they do not include leading questions is critical.  If you are analyzing data that was generated in response to a leading question, you have to take that into consideration when interpreting its meaning; if and when you use it, add a caveat noting your own questions about the circumstances in which it was generated.

 

Evaluation Structure and Participation

Several of the questions raise good points about how you structure a team and what design choices you make in evaluation more broadly – more than I can get into here.  Good places to look for more information on design, in addition to those listed in my previous post, are Michael Quinn Patton's Utilization-Focused Evaluation or Developmental Evaluation.  There is also a nice blog post on participatory evaluation approaches on the DM&E for Peace website, which includes links to evaluation resources: http://www.dmeforpeace.org/discuss/dme-tip-participatory-evaluation-designs   Isabella – your question about failed participatory processes is great, and maybe worth its own discussion.

 

Interview Ethics – Expectations

Jelena, you raise really important points.  This is tough because there are competing expectations – sometimes interviewees believe more funding will come if the evaluation is positive (which affects what people say), or expect reimbursement for participation, or other commitments from the evaluator.  There are strong and divergent opinions on these matters, and they are worth exploring in greater detail than I can here.  It is also good to flag the need to protect interviewees as well as the interview and evaluation process; this might include referring people for support after you leave (e.g. for trauma support).  The imperative to treat people as ends in and of themselves, rather than as a means to an evaluator's information ends, comes to mind and may be a good place to end.

 

Thanks again for the great questions and apologies if I've missed some,

 

Reina