
Owen Barder on Development and Complexity


Owen Barder, Senior Fellow and Director for Europe of the Center for Global Development, last week posted a talk online, adapted from his Kapuściński Lecture of May 2012, in which he explores the implications of complexity theory for development policy (the talk is also available as an audio-only version on the Development Drums podcast).

The talk tells a persuasive story: what has gone wrong in international development and in the various growth models it used; how adopting the concepts of adaptation and co-evolution allows for much more accurate models; a brief description of complex adaptive systems and complexity theory; and what consequences these insights have for development policy. But these positive turns in development come at a price: we can no longer ignore that we – the developed nations – are also part of the larger system and that our (policy) actions strongly influence the development potential of poor countries. It is no longer enough to ‘send money’ and experts and think that this will buy us out of our responsibilities towards those countries.

I want to quickly summarize what I think are the key points of Owen’s presentation, starting with what seems to me an obvious point:

Development is not an increase in output by an individual firm; it’s the emergence of a system of economic, financial, legal, social and political institutions, firms, products and technologies, which together provide the citizens with the capabilities to live happy, healthy and fulfilling lives.

Owen talks about various (economic) models and theories that have neglected this systemic perspective and, consequently, failed to deliver successes in development. Over the years, the focus of these economic models shifted from providing capital and investment to providing technology.

Since this approach of ‘provision’ did not work out, the lack of favorable policies was blamed for preventing the market from achieving its theoretical potential. As a consequence, the Washington Consensus prescribed which policies a country needed to adopt in order to grow. As we know, this did not work out either, although the Washington Consensus did, according to Owen, have some positive impacts in developing countries.

After the Washington Consensus, development agencies focused on weak institutions and spent (and are still spending) huge amounts of money on institutional strengthening and capacity building initiatives. The results have been modest. Adding to the difficulties is the fact that it is still not clear which institutions are really important for development.

Most recently, a book by Daron Acemoglu and James A. Robinson (Why Nations Fail) identifies politics as the culprit of failing development. According to them, institutions are weak because it actually suits the elite in power to run them like this [what an insight …!!!].

All these models were based on traditional economic theory. After seeing these approaches fail, Owen switches to a new way of describing economic development, based on adaptation and co-evolution in complex adaptive systems.

After making a compelling argument for why complexity theory can better describe the real economy out there, Owen derives seven policy implications from that insight.

  1. Resist engineering and avoid isomorphic mimicry. The first point stems from the fact that solutions developed through evolution generally outperform design. The second warns that institutions built from a blueprint of ‘best practices’ but not connected to the local environment will be of little use.
  2. Resist fatalism. Development should not be seen as a pure Darwinian process. Smart interventions by us can accelerate and shape evolution.
  3. Promote innovation.
  4. Embrace creative destruction. Innovation without selection is of no use. Feedback mechanisms that force performance in economic and social institutions are necessary.
  5. Shape development. The fitness function which the selective pressure enforces should represent the goals and values of a community.
  6. Embrace experimentation. Experimentation should become a part of a development process.
  7. Act global. We need to make a bigger effort to change the processes that we can control, for example international trade, the selection of leadership in international organizations, etc.

Owen does not break any news in his presentation, but he succeeds in developing a compelling storyline on why complexity theory is relevant for development and why processes based on adaptation and co-evolution much better describe why some countries develop while others seem stuck in the poverty trap.

In my view, this is an immensely important contribution to the discussion on how we can reform the international aid system to live up to our responsibility of enabling all people on this planet to live happy and fulfilled lives.

Flipping through my RSS feeds

After three weeks of more or less constant work, I’m finally having some time to look at my RSS feeds. After the first shock of seeing more than 3000 new entries, among them over 100 unread blog posts, I just started reading from the top. Here are a couple of things I found interesting (not related to any specific topic):

SciDevNet: App to help rice farmers be more productive – I don’t know about the Philippines, but I haven’t seen many rice farmers in Bangladesh carrying a smartphone (nor any extension workers for that matter).

Owen abroad: What is the results agenda? – An interesting post about the different meanings of following a ‘results agenda’ for different people, i.e., politicians, aid agency managers, practitioners, and (what I call) ‘complexity dudes’. I’m not very satisfied with Owen’s assessment, though, because I think he does not give enough weight to the argument that results should be used to manage complexity. To manage complexity, we don’t need rigorous impact studies, but much more quality-focused results regarding the change we can achieve in a system and the direction in which our intervention moves the system.

xkcd: Backward in time – an all-time favorite cartoon of mine, here describing how to make long waits pass quickly.

Aid on the Edge: on state fragility as wicked problem and Facebook, social media and the complexity of influence – Ben Ramalingam seems to be back in the blogosphere with two posts on one of my favorite blogs on complexity science and international development. In the first post, he explores the notion of looking at fragile states as so-called ‘wicked problems’, i.e., problems that are ill-defined, highly interdependent and multi-causal, without any clear solution, etc. (see the definition in the blog post). Ben concludes that the way aid agencies work in fragile states needs to undergo fundamental change. He presents some principles on what this change could look like, from a paper he published together with SFI’s Bill Frej last year.

In the second piece, Ben looks into the complex matter of how socioeconomic systems can be influenced, and how this influence can be measured, using the example of Facebook trying to calculate its influence on the European economy and why its calculations are flawed. The basic argument is that a person’s decision to do something is extremely difficult to analyze and even more difficult to trace back to an individual influencer. Our decisions and, indeed, our behavior are complex systems themselves. One of the interesting quotes from the post: “Influentials don’t govern person-to-person communication. We all do. If society is ready to embrace a trend, almost anyone can start one – and if it isn’t, then almost no one can.”

Now, to make the link back to Owen’s post mentioned above on rigorous impact analyses: how can we ever attribute impacts on a large scale to individual development programs or donors if we cannot even measure the influentials’ impact on an individual’s behavior? I rather like to think of a development program as an agent poking the right spots – the spots where the system is ready to embrace a trend that is favorable for us. But to attribute all the change to the program would be preposterous.
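That quote about readiness can be made concrete with a toy model. Below is a minimal sketch – every name and number in it is my own illustrative assumption, not anything from Ben’s or Facebook’s analysis – of a simple threshold cascade on a ring network: each person adopts a trend once a sufficient fraction of their neighbors has adopted it. When thresholds are low (society is ‘ready’), any seed triggers a full cascade; when they are high, no seed spreads at all.

```python
def make_graph(n, k):
    # ring network: each node is linked to its k nearest neighbors on each side
    return {i: [(i + d) % n for d in range(-k, k + 1) if d != 0] for i in range(n)}

def cascade_size(graph, thresholds, seed_node):
    """Spread a 'trend' from seed_node: a node adopts once the fraction
    of its adopting neighbors reaches its personal threshold."""
    adopted = {seed_node}
    changed = True
    while changed:
        changed = False
        for node, neighbors in graph.items():
            if node not in adopted:
                share = sum(nb in adopted for nb in neighbors) / len(neighbors)
                if share >= thresholds[node]:
                    adopted.add(node)
                    changed = True
    return len(adopted)

g = make_graph(60, 2)        # 60 people, 4 neighbors each
ready = [0.25] * 60          # low thresholds: society ready to embrace the trend
reluctant = [0.5] * 60       # high thresholds: society not ready
print(cascade_size(g, ready, 0), cascade_size(g, ready, 37))  # any seed → 60
print(cascade_size(g, reluctant, 0))                          # no seed spreads → 1
```

The point of the sketch is exactly the quote: in the ‘ready’ system the identity of the seed is irrelevant, which is what makes attribution to an individual influencer (or program) so dubious.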

Enough reading for today, even though there are still 86 unread blog posts in my RSS reader, not least 45 from the power bloggers Duncan Green and Chris Blattman. I’ll now go and watch some videos of the new class I recently started on Model Thinking, a free online class by Scott E. Page, Professor of Complex Systems, Political Science, and Economics at the University of Michigan. Check it out: http://www.modelthinking-class.org/
For people with less time, a couple of participants are tweeting using #modelthinkingcourse

Spotting ‘emerging patterns’ to report on changes

In a training on evaluating projects I attended a while ago, a representative of the Swiss charity HEKS presented their results measurement (RM) system. The presentation immediately caught my attention and interest since HEKS is using principles of complexity theory as a basis for its RM framework. Based on this rather experimental framework, the organization published a first ‘effectiveness report’ in March 2011. I want to present some of the interesting features of the RM system, based on the effectiveness report.

When building its RM framework, HEKS acknowledged that development takes place in complex and dynamic systems, with the consequence that the behavior of such systems is largely unpredictable and, thus, the effects of interventions are also hard to predict.

This challenging perspective implies a different understanding of cause and effect. Connected to their environment, living systems do not react to a single chain of command, but to a web of influences.

As a consequence, HEKS does not base its projects on rigid impact logics and impact chains, but is conscious that

HEKS cannot always objectively trace the effects of its actions, but can make its intentions, input and observations transparent.

HEKS’ approach therefore focuses on the changes observed and experienced by the different stakeholders involved at several levels of its projects.

The focus is more on the significance of such changes for the people who experience them than on their quantification. HEKS herewith takes a path different from strict measurement and hard data collection. Its aim is to grasp and understand the changes in the purpose, identity and dynamics that hold and drive the systems it gets involved in – rather than to measure their ever-changing dimensions.

Subsequently, HEKS’ method is to adopt a bird’s-eye view, look for ‘emerging patterns’ and try to interpret them. Qualitative data is collected on three levels, i.e., the individual level, the project level and the programme level, through methods like ‘Most Significant Change’ stories, monthly newsletters and annual reports focusing on the observations of staff at different levels, as well as a two-day workshop for compilation.

Nevertheless, HEKS defined 10 key indicators that are collected for all countries it is active in, for example the number of beneficiaries, income increase, yield increase, etc.

For me, this is a very interesting approach and it resonates very well with the discussion on ‘experiential knowledge and staff observation’ of the GROOVE network that I mentioned in my last post. The staff observations, too, have the implicit goal of grasping emerging patterns of positive change in the system the project tries to influence, in order to amplify this change.

Owen Barder, on whose presentation on evolution and development I wrote in my last post, is asking for more rigorous evaluation of project impacts in order to see what works and what doesn’t. Is the RM framework proposed by HEKS rigorous enough to comply with Owen’s demand? After all, HEKS’ approach does not use results chains at all, although they are one of the mainstays of results measurement – at least according to the DCED Standard on Results Measurement. Are the 10 universal indicators enough? And what about the attribution of the changes and emerging patterns?

When I read through the four patterns described in the HEKS effectiveness report, I see that they are very much focused on the community level – naturally, since this is where the focus of the interventions also lies. Here is an example:

Pattern 1: Sustainable development starts with the new ways in which people look at themselves. Women especially become a driving force in the development of their communities.

Or another one:

Pattern 2: People who are aware of their rights become players in their own development. They launch their initiatives beyond the scope of HEKS’ projects.

The question that immediately pops up in my mind is: What are the consequences of the projects’ actions on the wider system, beyond the community? What are the ripples that the successful projects have throughout the wider system, e.g. in the market system or the policy environment? Or even more fundamentally: Can we achieve changes in the wider system by focusing on the community level? What additional interventions are needed?

There are still many open questions, but for me, HEKS is making a huge and courageous step in the right direction.

Using principles from evolution in development

Recently, I listened to a presentation by Owen Barder titled “What can development policy learn from evolution”. I want to briefly summarize my main insights from the presentation and put down some thoughts.

Here are some insights from his presentation:

  • Experience tells us that a simplistic approach based on pre-canned policy recommendations that were gained through technical analyses and regressions simply doesn’t work. The reality is much more complex.
  • What are called “almost impossible problems” or “wicked problems”, i.e., the problems we face in complex systems, are solved through evolution, not design.
  • For evolution to work, it requires a process of variation and selection.
  • In development work today there is a lot of proliferation without diversity and certainly not enough selection.
  • Especially missing are feedback loops to establish what works and replicate it while scaling down the things that don’t work.
  • One especially important feedback loop is the needs, preferences and experiences of the actual beneficiaries. Because too little effort is spent on rigorous impact evaluation and too much on process and activity evaluations, this feedback loop often doesn’t work. The direct feedback of the citizens themselves should be better taken into account: “People care deeply about whether or not they get the services they should be getting.”
  • The establishment of better and more effective feedback loops is a crucial ingredient to improve program effectiveness: “We have to be better in finding out what is working and what is not working”.
  • In evolutionary words: we should not impose new designs, but rather we should try to make better feedback loops to spur selection and amplification.
  • But as a direct consequence, we also need to acknowledge the things that don’t work, i.e., failures, and adopt and adapt what is working. At the international policy level, the necessary mechanisms to replicate successes or kill off failures are missing.

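The variation–selection–feedback loop described in these bullets can be sketched as a toy simulation. Everything below – the pool of candidate ‘interventions’ represented as numbers, the hidden effectiveness function, the noise level of the feedback – is my own illustrative assumption, not anything from Owen’s talk; the point is only to show that a noisy feedback loop with selection and replication finds what works without anyone designing the answer.

```python
import random

def evolve_portfolio(true_effect, rounds=30, pool_size=20, noise=0.1, rng=None):
    """Evolve a portfolio of candidate interventions (numbers in [0, 1])
    using a noisy feedback loop instead of a pre-designed solution."""
    rng = rng or random.Random(0)
    pool = [rng.random() for _ in range(pool_size)]   # variation: start diverse
    for _ in range(rounds):
        # feedback loop: noisy measurements of what is actually working
        ranked = sorted(pool, key=lambda x: true_effect(x) + rng.gauss(0, noise),
                        reverse=True)
        survivors = ranked[: pool_size // 2]          # selection: kill off failures
        offspring = [min(1.0, max(0.0, s + rng.gauss(0, 0.05)))  # variation again
                     for s in survivors]
        pool = survivors + offspring                  # amplification: replicate successes
    return pool

# hidden 'true' effectiveness, peaking at 0.7 (unknown to the evolver)
effect = lambda x: 1 - abs(x - 0.7)
pool = evolve_portfolio(effect, rng=random.Random(42))
print(sum(effect(x) for x in pool) / len(pool))  # close to the optimum of 1.0
```

A random, unselected pool averages an effectiveness of only about 0.71 with this function; the evolved portfolio ends up far higher, even though each individual measurement is noisy. Weaken the feedback (raise `noise`) or stop killing off the bottom half, and the improvement disappears – which is exactly Owen’s point about missing selection mechanisms.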
These insights remind me a lot of a discussion I was recently involved in with a group of international development organizations working together in a network called the GROOVE. The discussion was about ‘integrating experiential knowledge and staff observations in value chain monitoring and evaluation’. During the webinar in which the discussion was held, two important insights were voiced that correspond with Owen’s points above:

  1. Staff observations can add a lot of value to M&E systems in terms of what works in the field and what doesn’t.
  2. There is a need for a culture of acknowledging and accepting failures in order to focus on successful interventions.

Now, what does this mean if we have, for example, to design a new project? Firstly, I think it is important that the project has an inception period in which a diversity of interventions can be tested. But we also need an effective mechanism to assess what impact these interventions have – if any. There is, however, the problem of time delays: often, the impact of an intervention is delayed and might become apparent too late, i.e., only after the inception period. Especially when we base our M&E on hard impact data, we might not be in a position to say which intervention was successful and which wasn’t. Therefore, we need to rely on staff observations and the perceptions of the target beneficiaries. Again, a very good understanding of the system is necessary in order to judge the changes that happen in it.

As Eric Beinhocker describes in his book “The Origin of Wealth”, evolution is a very powerful force in complex systems. Beinhocker treats the economy as a complex system when he writes: “We may not predict or direct economic evolution but we can design our institutions to be better or worse evolvers”. I think the same goes for our development systems. We cannot predict or direct evolution in developing countries, but we can support the poor to become better evolvers. This also has strong implications for our view on sustainability, but I’m already sliding into the topic of another post.