When I wrote my last post about experimenting with new structures for a complexity-aware Theory of Change (ToC) in Myanmar, I had a few elements in place, but still some open questions. Going further back to an earlier post, I was clear that differentiating between clear causal links for complicated issues and unpredictable causalities for complex ones is critical. I have been thinking about that a lot, and last week I taught a session on monitoring in complex contexts where I think I found the final piece of the puzzle. Continue reading
In this post I want to share with you an idea for a new conceptual framework for monitoring and results measurement (MRM) in market systems development projects. To manage your expectations: I will not present a finished new framework, but a model I have been pondering for a while. The model is still at an early stage, and it would be great to harness your feedback to improve it further. Indeed, what is presented here is based on everything I have learned in recent years from the large number of practitioners who contributed to the discussion on Systemic M&E, from my work in the field, and particularly from the guest contributions here on my blog by Aly Miehlbradt and Daniel Ticehurst and the intense discussions on MaFI. The ideas are strongly based on the learning from the Systemic M&E Initiative and also apply the seven principles of Systemic M&E, although I do not do so explicitly. Continue reading
We now need to start a constructive discussion on what a truly systemic Monitoring and Results Measurement (MRM) framework could look like (as evaluation does not play a big role in the current discussions, I am adopting the expression MRM and avoiding 'M&E'). In this post, I will take up the discussions on MRM and the DCED Standard for Results Measurement from the two guest posts by Aly Miehlbradt and Daniel Ticehurst, and add from a discussion running in parallel on the very active forum of the Market Facilitation Initiative (MaFI). I will also add my own perspective, suggesting that we need a new conceptual model to define causality in complex market systems. Based on that, in my next post, I will try to outline a possible new conceptual model for MRM. Continue reading
After submitting a long comment as a reply to Aly Miehlbradt’s post, I was able to persuade Daniel Ticehurst to write another guest post instead. Daniel’s perspective on the DCED Standard nicely contrasts with the one put forward by Aly, and I invite you all to contribute your own experiences to the discussion. This was not originally planned as a debate with multiple guest posts, but we all adapt to changing circumstances, right?
Dear Marcus and Aly, many thanks for the interesting blog posts on monitoring and results measurement, the DCED Standard, and what it says in relation to the recent Synthesis on Monitoring and Measuring Changes in Market Systems.
This is a guest post by Aly Miehlbradt. Aly is sharing her thoughts and experiences on monitoring and results measurement in market systems development projects. She highlights the Donor Committee for Enterprise Development (DCED) Standard for Results Measurement and its inception as a bottom-up process and draws parallels between the Standard, her own experiences, and the recently published Synthesis Paper of the Systemic M&E Initiative.
In one of Marcus’s recent blog posts, he cites the SEEP Value Initiative paper, “Monitoring and Results Measurement in Value Chain Development: 10 Lessons from Experience” (download the paper here), as a good example of a bottom-up perspective that focuses on making results measurement more meaningful for programme managers and staff. Indeed, the SEEP Value Initiative was a great learning experience, and it is just one example of significant and ongoing work among practitioners and donors aimed at improving monitoring and results measurement (MRM) to make it more useful and meaningful. The DCED Results Measurement Standard draws on and embodies much of this work and also promotes it. In fact, the lessons in MRM that emerged from the SEEP Value Initiative came from applying the principles in the DCED Results Measurement Standard.
I am really happy to announce the publication of the Synthesis Paper of the so-called ‘Systemic M&E’ initiative. The paper is the synthesis of conversations that started on MaFI in June 2010 and a series of online and in-person conversations that took place in the second half of 2012. It captures the voices of practitioners, academics, donors and entrepreneurs who are trying to find better ways to monitor and evaluate the influence of development projects on market systems and to learn more, better and faster from their interventions. Continue reading
In a training on project evaluation that I attended a while ago, a representative of the Swiss charity HEKS presented their results measurement (RM) system. The presentation immediately caught my attention and interest, since HEKS uses principles of complexity theory as the basis for its RM framework. Based on this rather experimental framework, the organization published a first ‘effectiveness report’ in March 2011. I want to present some of the interesting features of the RM system, drawing on that report.
When building its RM framework, HEKS acknowledged that development takes place in complex and dynamic systems, with the consequence that the behavior of such systems is largely unpredictable and, thus, the effects of interventions are also hard to predict.
This challenging perspective implies a different understanding of cause and effect. Connected to their environment, living systems do not react to a single chain of command, but to a web of influences.
As a consequence, HEKS does not base its projects on rigid impact logics and impact chains, but is conscious that
HEKS cannot always objectively trace the effects of its actions, but can make its intentions, input and observations transparent.
As a consequence, HEKS’ particular approach focuses on the changes observed and experienced by different stakeholders involved at several levels of their projects.
The focus is more on the significance of such changes for the people who experience them than on their quantification. HEKS thereby takes a path different from strict measurement and hard data collection. Its aim is to grasp and understand the changes in the purpose, identity and dynamics that hold together and drive the systems it gets involved in, rather than to measure their ever-changing dimensions.
Subsequently, HEKS’ method is to adopt a bird’s-eye view, look for ‘emerging patterns’ and try to interpret them. Qualitative data is collected on three levels, i.e., the individual level, the project level and the programme level, through methods like ‘Most Significant Change’, monthly newsletters, and annual reports focusing on the observations of staff at different levels, as well as a two-day workshop for compilation.
Nevertheless, HEKS defined 10 key indicators that are collected for all countries it is active in. These indicators include, for example, the number of beneficiaries, income increase, and yield increase.
For me, this is a very interesting approach, and it resonates very well with the discussion on ‘experiential knowledge and staff observation’ of the GROOVE network that I mentioned in my last post. The staff observations also have the implicit goal of grasping emerging patterns of positive change in the system the project tries to influence, in order to amplify that change.
Owen Barder, whose presentation on evolution and development I wrote about in my last post, is asking for more rigorous evaluation of project impacts in order to be able to see what works and what doesn’t. Is the RM framework proposed by HEKS rigorous enough to comply with Owen’s demand? After all, HEKS’ approach does not use results chains at all, although they are one of the mainstays of results measurement – at least according to the DCED Standard for Results Measurement. Are the 10 universal indicators enough? And what about the attribution of the changes and emerging patterns?
Reading through the four patterns described in the HEKS effectiveness report, I see that they are very much focused on the community level – naturally, since this is where the focus of the interventions lies. Here is an example:
Pattern 1: Sustainable development starts with the new ways in which people look at themselves. Women especially become a driving force in the development of their communities.
Or another one:
Pattern 2: People who are aware of their rights become players in their own development. They launch their initiatives beyond the scope of HEKS’ projects.
The question that immediately pops up in my mind is: What are the consequences of the projects’ actions for the wider system, beyond the community? What ripples do the successful projects send through the wider system, e.g., the market system or the policy environment? Or, even more fundamentally: Can we achieve changes in the wider system by focusing on the community level? What additional interventions are needed?
There are still many open questions, but for me, HEKS is making a huge and courageous step in the right direction.