Yesterday I was at the launch of a fascinating report on how to better fund organisations that aim to achieve change in complex systems. Though the report draws mainly on public sector commissioners and charitable funders in the UK's social sector, it is relevant far beyond that context: many, if not most, of the principles it identifies could, with some tweaking, be applied to funding for international development.
The report sets out to answer the question: “How should organisations which have a desire to help improve people’s lives, and resources to allocate to achieve this goal, manage the distribution of those resources most effectively?” This question is certainly relevant for international development as well, since its goal is equally to improve people’s lives (even though many organisations and initiatives have much narrower aims, which is a problem in itself, but that’s for another post).
Entrepreneurship is the modern-day philosopher’s stone: a mysterious something that supposedly holds the secret to boosting growth and creating jobs.
This is how a recent Schumpeter column in the Economist starts out. The author's argument is basically that the hype around entrepreneurship in both developed and developing countries (although he focuses on the former) is based on a faulty understanding of what an entrepreneur actually is: Continue reading
Before venturing into an outline of a results measurement framework, I want to point out two blog posts by Duncan Green on aid in complex systems that are very well written and to the point. Continue reading
ODI just published a great paper by Richard Hummelbrunner and Harry Jones titled “A guide for planning and strategy development in the face of complexity.” It takes the discussion around harnessing complexity for more effective development to a much more concrete, practicable and practitioner-friendly level.
In the relatively short (12 pages) and easy-to-read paper, Hummelbrunner and Jones introduce complexity, name the biggest challenges it poses, propose three core principles for facing them, and even showcase a number of tools that can be applied in these situations.
I want to share some of my Sunday reading and listening with you.
First, a blog post by Dave Algoso on his blog “Find What Works”: in the article Kuhn, Chambers and the future of international development he transfers the idea of paradigm shifts from science to international development. This is interesting because I, like many around me, am arguing that international development needs a paradigm shift that appreciates the complexities of the environments we work in. Algoso features two posts in which Robert Chambers sketches out what such a new paradigm could look like (direct links here and here).
Secondly, a TED talk by Sugata Mitra about the future of schools and learning. His basic thesis: “schools as we know them are obsolete”. One quote particularly struck me, as the language he uses is very much the language we use when talking about development from a complexity perspective:
… we need to look at learning as the product of educational self-organization. If you allow the educational process to self-organize, then learning emerges. It’s not about making learning happen. It’s about letting it happen. The teacher sets the process in motion and then she stands back in awe and watches as learning happens.
Watch it yourself, it’s about 20 minutes:
I just came across this blog post by Cynthia Kurtz, who co-authored with Dave Snowden the paper “The new dynamics of strategy: Sense-making in a complex and complicated world”. In her post, Cynthia describes how she perceives regularities in complex systems, so-called oscillations.
I like the post because it uses refreshingly simple, jargon-free language to talk about this characteristic of complex systems. Compared with other texts on complex systems, it is fun to read, and it lets you see oscillation, and the unpredictability connected to it, with different eyes.
Here is an example:
Those leaves remind me of a conversation I had once with a person with whom I was discussing the differences between complicated and complex patterns. He said something like, “You say a complicated pattern repeats and a complex one doesn’t, right? But how do you explain the fact that complex patterns sometimes do repeat?” I said, “They repeat until they don’t.” What I meant was, when a leaf is oscillating, it looks like it’s connected to some perfectly engineered device governed by a mechanical timer. But that’s an illusion that bursts when the leaf suddenly stops. Complicated patterns repeat because somebody or something made them repeat. They stop repeating when somebody or something stops them repeating, or when they break down and need to be fixed (after which they repeat again, if somebody or something makes them). Complex patterns repeat because they started repeating, and they stop repeating because they’ve stopped repeating. Keep in mind, of course, that the patterns we see in our world are rarely purely complex or complicated. Even those oscillating leaves I see out of my window have been influenced by the complicated design of the house that separates us.
Thanks to my engagement in the ‘Systemic M&E’ initiative of the SEEP Network (where M&E stands for monitoring and evaluation, though we have really been looking mainly into monitoring), I have been discussing monitoring and results measurement quite a bit with practitioners, and how to make monitoring systems more systemic. For me, this bottom-up perspective is extremely revealing: it shows how conscious these practitioners are of the complexities of the systems they work in, and how they intuitively come up with solutions in line with what we could propose based on complexity theory and systems thinking. Nevertheless, practitioners are often still strongly entangled in the overly formalistic and data-driven mindset of the results agenda. This mindset is based on a mechanistic view of systems with clear cause-and-effect relationships and a bias for objectively obtained data that is stripped of its context and thereby rendered largely meaningless for improving implementation. Continue reading