Research and Learning
Generate knowledge to make decisions and evaluate your progress
The objective of the Pilot stage is to learn whether, and how, your solution works in the real world. Your pilot must therefore facilitate a process for continuous learning and discovery so that you understand the impact of your innovation. We have developed a Research and Learning framework to support you in these endeavours.
Across the humanitarian sector, innovation stakeholders are concerned that too many trials of new ideas fail, in practice, to produce sufficient evidence of improvement to gain traction for wider uptake – and do not always give full consideration to the risks associated with experimentation in fragile contexts. This section of the guide aims to give you the knowledge to avoid this fate.
Panzi and Make Music Matter’s Healing in Harmony project is pioneering the use of music therapy for survivors of sexual violence in conflict settings. Despite very positive results in humanitarian settings in DRC, using music-making as therapy was often dismissed as a gimmick. It was hard for the innovators to quantify their impact in ways that funders found credible.
To address this obstacle to scaling, it became imperative to find a robust way to quantify the relationship between their form of music therapy and objective medical outcomes. Randomised controlled trials (RCTs) were used, and the results showed that the music therapy delivered better outcomes than any other form of therapy offered in these areas, including cognitive behavioural therapy (CBT).
A stepped-wedge design was put in place to carry out an RCT every three months as each new cohort of women joins the programme, with follow-up trials every three months thereafter. Panzi were able to use this systematic and robust research to evidence the impact of their work and to secure financial support that substantially accelerated their path to scale.
In the development of this framework, we reviewed over 200 published resources on monitoring, evaluation, research and learning. These resources were useful in many different ways, but none of them were directly applicable for our purposes, for a number of reasons:
- Some were intended for assessing innovations, but not in humanitarian contexts
- Some were grounded in humanitarian contexts but intended for conventional solutions and business-as-usual programming
- Some were suitable for early-stage innovation, but not for pilot testing in real-world humanitarian environments
- Some provided data generation techniques and tools (such as interview guides, or storyboarding tools) but did not provide a way for users to make appropriate decisions about which techniques and tools would be most suitable for their purpose
- Some were developed specifically for monitoring and evaluation of humanitarian innovation (eg, Elrha-ALNAP working papers on monitoring and evaluation) but were not intended to provide a complete framework
We therefore developed this Research and Learning framework with the ambition of covering all the areas we feel are necessary for understanding how to design and deliver a robust research and learning agenda for a humanitarian innovation pilot. The current version of the framework provides guidance on what you need to consider, and is oriented around the following three objectives.
Understanding evidentiary requirements
Learning objectives may range from simply learning whether or not the intervention is reaching its target group to demonstrating comparative improvement over existing solutions in order to make the case for scale. The role of evidence in the innovation process is defined by the applications to which it is put, and different applications require different levels of rigour. We call this ‘evidentiary requirements’.
Our aspiration is to enable the appropriate application of advanced social science research methods, monitoring, evaluation and learning practices, and evidence-based design approaches to meet these evidentiary requirements – and to recognise when and how these methods, practices and approaches might be ‘good enough’. As a first step, we seek to equip innovators to understand the range of evidentiary requirements needed to fulfil key learning objectives and achieve core milestones.
Making context-appropriate decisions
There is no single ‘right way’ to measure the impact of innovation, and even if there were, such approaches would be constrained by the complex and insecure environments that characterise humanitarian emergencies.
There are a range of factors that influence the ways in which research and learning activities are carried out, in terms of the workflows and processes employed, the methodologies and techniques used, the granularity and scope of data required, and the ways in which analytical products are designed, disseminated and applied in support of decision-making.
Our aim is to ensure that you can make context-appropriate decisions when assigning tools to tasks by taking stock of the tensions and trade-offs between various options.
Operationalising fit-for-purpose resources
Once you have navigated the tensions and trade-offs in your approach, we provide a number of steps to help you bring together a suitable combination of methods, techniques and tools to reach key Research and Learning milestones.
These are the outputs that any innovation programme or project should seek to produce, regardless of the particular learning objectives, evidentiary requirements, operational context and/or resources on hand.
While there is significant variation in the combinations of methods and techniques you may employ, or the kinds of data you might generate and make use of, there is nevertheless a central sequence of practices that characterise any approach to research and learning used to understand whether, and how, a solution works in the real world.