Complexity is a hot term right now. You hear it used to describe various things throughout the project or programme cycle, but particularly in evaluation in fragile and conflict-affected environments: “the environment and changes we seek to affect are too complex to be readily measured”.
But not everything is as complex as it might seem. There are degrees of difficulty, and measuring results can be aided by distinguishing between that which is simple, complicated and complex in programme theory.
This distinction helps identify the low-hanging fruit that can be easily measured, and the results that require further, more advanced efforts to reach. Making this distinction can also help in preparing for the evaluation, for example in setting budget and time allocations, as well as in data collection, tool design and analysis protocols.
Check out this image from Getting to Maybe: How the World Is Changed for everyday illustrations of the simple, the complicated and the complex.
Simple problems are those where “the destination is known, the path to reach it is known, and it is simple to follow the plan.”1 Simple problems are predictable and the outcome is known. For example, following a recipe to bake a cake is a simple problem, and the result is straightforward to measure. An example from fragile and conflict-affected environments could be knowledge change resulting from a well-tested and well-delivered training.
Complicated problems refer to interventions in which multiple components are required to produce the intended result (multi-step causal chains).
For example, a complicated problem might be an evaluation of a campaign to create favourable public opinion resulting in the passing of an anti-discrimination law. The intervention involves multiple elements, all of which need to mesh together in just the right way in order to arrive at the desired result.2
Such problems may require additional measurement effort, such as the use of multiple tools to triangulate data to arrive at conclusions. This might be applied to multiple causal chains that occur simultaneously, which suggests that the programme employed more than one overall theory of change. Alternatively, conducting data triangulation for complicated problems can mean recognizing different causal mechanisms across a range of interventions in different contexts,3 such as programme-level evaluations.
Complex problems refer to interventions where the causal pathway is adaptive or emergent: “…interventions where it is not possible to set out in advance the details of what will be done.”4 The intervention may not have pre-identified outcomes, but rather a vague, goal-level description of the desired end-result without a clear pathway of how to get there; it is developmental. It might also include investigation of recursive feedback loops and emergent outcomes, such as unintended changes and resulting system dynamics.
Keep in mind that problems are rarely one-dimensional: simple, complicated and complex programming elements frequently overlap.
Patricia Rogers states that “the art of dealing with the complicated and complex real world lies in knowing when to simplify and when, and how, to complicate.”5 Measurement efforts can be improved by knowing when and how to simplify, complicate and ‘complex-ify’ our measurement systems, tools, and approaches.
Patricia Rogers, "Using programme theory to evaluate complicated and complex aspects of interventions," Evaluation 14, no. 1 (2008): 29–48.
Patricia Rogers and Richard Hummelbrunner, "On-line e-Learning Programme: Module 1, Equity-focused Evaluations," presentation.
This post was adapted from Vanessa Corlazzoli and Jonathan White, Measuring the Un-Measurable: Solutions to Measurement Challenges in Fragile and Conflict-affected Environments (DFID: Department for International Development, 2013).
Thanks for your thoughts on how to encourage a culture of response that is appropriate to the kind of problem we are facing. While that seems a natural way to act, I think you’d agree we all find it very difficult to do in the field! Michael Quinn Patton, one of the authors of Getting to Maybe, takes this idea of simple, complicated and complex one (or two) steps further in his book on Developmental Evaluation (DE). He plots these categories on the Stacey Matrix (seen here), which works on axes of certainty and agreement. I think this kind of tool can help groups and evaluators begin to have conversations about the nature of the context they find themselves in and the kind of framework that would be most helpful for navigating their situation toward the intended purpose. Characteristics of the users, context, evaluation and evaluator would also play a hefty role in placing a programme or project on the matrix.
The other tool Patton offers in his book is the Cynefin framework developed by Snowden and Boone, which is also based on the notions of simple (known), complicated (knowable) and complex (unknowable) problems. While it emphasizes similar variations in the nature of causality, it also offers corresponding implications for decision-making and action. You can see the framework here. Patton goes on to apply these kinds of responses to the role of an evaluator in various situations and contexts. The ongoing message about evaluation in complex situations is clear: every evaluation is context-specific, no one size fits all, something unexpected will often emerge, and we must become friends with non-linearity. I think DE offers a tangible strategy for those trying to navigate complex problems and deal with uncertainty: tracking developments in real time, facilitating assessments of what those developments mean, and supporting evaluation of alternatives going forward. This kind of change is more than dynamic (change that is evolutionary and moves in a fairly manageable direction); it is dynamical (change that is unpredictable, turbulent, non-linear and complex). Sounds like fun!
I’ll leave you with a TED talk by Eric Berlow, a wonderful ecologist who asserts that simplicity lies on the other side of complexity. I'd be interested to hear what you think of these tools, strategies and approaches.
Reference: Michael Quinn Patton, Developmental Evaluation (New York: Guilford Press, 2011).