The New York Times has a great op-ed by David Brooks today, February 18, entitled ‘What Data Can’t Do’. The underlying message? That mass quantitative data alone cannot inform decision making: it must be contextualized and valued.
In peacebuilding, and in fragile and conflict-affected states in general, contextualizing and valuing data requires first and foremost an intimate reading of the pulse of a country: the conflict and context dynamics, and what is feasible and appropriate within those boundaries. This in turn requires a conflict analysis that logically and realistically separates the conflict from the context—already a challenge in peacebuilding programming, even more so in evaluation.
Feasibility also requires that we identify logical entry points, connectors and dividers, and whether our skillset adds value to those connectors and dividers given on-the-ground realities. Just because an action can be taken doesn’t mean that it should be taken. Familiarity with peacebuilding concepts, processes, values and worldviews is therefore critical for an evaluator to make an effective statement on relevance.
And this is the challenge in peacebuilding, particularly in evaluating relevance: contextualizing data with our own unique worldviews and skillsets, while at the same time recognizing the strengths and weaknesses, reaches and limitations of our actions. This is something data cannot tell us. It is through the innate qualities that make us uniquely human that such difficult decisions are made, and making our values and assumptions explicit allows us to check for another unavoidable occurrence in humanity: fallacy.
Listed here are just a few of the implications of the limits of data in monitoring and evaluation. What others do you experience? And how do you overcome or work within these limitations?
What Data Can’t Do by David Brooks, New York Times, February 18, 2013
Evaluating Relevance in Peacebuilding Programs: CDA Working Paper Series by Mark M. Rogers for CDA Inc
Evaluability Assessments in Peacebuilding Programming: CDA Working Paper Series by Cordula Reimann for CDA Inc
Peacebuilding How? Good Practices in Conflict Analysis by Koenraad Van Brabant for Interpeace
This post raises many interesting and important points. Data can be used to make strong correlations and inferences, and it can influence how and what practitioners choose to do. While I understand the importance and potential uses of data, I also recognize its limitations, as pointed out by this post and Brooks’ op-ed. I agree that data cannot tell us everything and that scholars and practitioners should be wary of depending on, or reading too much into, data. I particularly agree with Brooks’ analysis of why and how data struggles with context. Data can fail to take into consideration the interdependence and interplay between the multiple causes and multiple contexts of any conflict. While I agree with this post, I am also interested in learning how technology has enhanced practitioners’ ability to gather data, and whether technological advances have bridged any of the aforementioned gaps, such as the way data struggles with context.