The purpose of the Research Design is to provide a core document, shared with team members and other stakeholders, that articulates how you will undertake your research, the techniques and methods you will use, the type of data you will collect, and the learning outputs you intend to produce.
The Research Design is a planning document that clearly and concisely lays out key decisions on how information will be generated, analysed and applied to meet your learning objectives, and why you have made these decisions. We recommend inclusion of the following elements:
- A set of research questions that examine hypotheses regarding your solution.
- The selection of an experimental or evaluative approach that will guide your data collection and analysis efforts.
- An explanation of your sample frame, including specific units of analysis, sites and participants you will examine.
- A description of the data collection activities and analytical techniques you will use, along with a discussion of the key variables to pay attention to and how you will measure them.
- A step-by-step procedure, including a roadmap, project timeline and tasks.
- A set of assumptions and parameters for accepting or refuting hypotheses that will ultimately govern the research agenda and hold the project accountable.
In the following sections we provide guidance on how to approach each of these elements, along with some other things you’ll need to consider.
Translate Learning Objectives into Research Questions
All research and learning activities should be guided by a set of simple and clearly defined research questions. Depending on your learning objectives, these questions will be one of the following three types, or a combination of them:
- Descriptive: To describe contexts, people, interactions and other aspects of a particular problem or situation.
- Evaluative: To assess the impact of a particular programme, intervention or service against measurable changes in social conditions or organisational effectiveness.
- Explanatory: To develop and test viable hypotheses about what is causing, or contributing to, phenomena of interest.
Assess Potential Positive and Negative Outcomes
An ethical approach to carrying out your pilot requires that you have a clear understanding of the range of possible outcomes, so that you can explicitly acknowledge risks and mitigate them where possible. Generally, there are two dimensions of potential outcomes: (a) positive vs negative; (b) expected vs unexpected.
Expected outcomes can be identified at the start of the process; unexpected ones, by definition, cannot. For example, a prototype mobile data management system might lead to more timely, precise and cost-efficient food distribution. This would be a positive and expected outcome. Alternatively, your pilot – by bringing together multiple community-based organisations – might improve social cohesion between religious, ethnic and politically-oriented groups. This would be a positive but perhaps unexpected outcome.
But a pilot might also produce negligible impact, meaning that financial, material and human resources have been diverted away from conventional, life-saving food-security efforts. This would be an unintended, but potentially foreseeable, outcome. Worse, the pilot could lead to the accidental disclosure of sensitive data, resulting in protection issues. Understanding this will help your team plan for the best while staying equipped to anticipate, monitor and respond to a range of outcomes, including worst-case scenarios.
As a team, think through the possible impacts of the pilot, both in the Implementation workstream and the Research and Learning workstream. Review your Assumptions Log, and work through a risk assessment and threat modelling exercise to identify and take stock of potentially harmful scenarios. When you’ve completed these activities, map out potential outcomes to orient your research and programme management activities.
If you have negative expected outcomes, you will need to decide whether they are acceptable, and ensure that they are mitigated where possible. Some small negative impacts may be acceptable if counterbalanced by significant positive impacts (eg, side effects from life-saving medicine, such as nausea or headaches); if the negative impacts are not acceptable, you will need to put your pilot on hold. The positive expected impacts should be the same as those described in your value proposition.
These are the aspects that you can anticipate. Unexpected impacts are those you have not anticipated but that become apparent during the implementation of the pilot (and sometimes after). When they are identified, they should be logged and fed into the Periodic Reviews and After-Action Review.
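To make this mapping concrete, it can help to keep a structured log of outcomes along the two dimensions above, so nothing identified during implementation is lost before a review. The sketch below is a minimal illustration in Python; the field names, categories and example entries are our own assumptions, not a prescribed tool:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Outcome:
    description: str
    valence: str       # "positive" or "negative"
    anticipation: str  # "expected" or "unexpected"
    mitigation: str = ""  # planned response, if the outcome is negative

@dataclass
class OutcomeLog:
    outcomes: List[Outcome] = field(default_factory=list)

    def add(self, outcome: Outcome) -> None:
        assert outcome.valence in ("positive", "negative")
        assert outcome.anticipation in ("expected", "unexpected")
        self.outcomes.append(outcome)

    def negatives(self) -> List[Outcome]:
        """Negative outcomes to surface in Periodic and After-Action Reviews."""
        return [o for o in self.outcomes if o.valence == "negative"]

# Logging the example outcomes discussed above.
log = OutcomeLog()
log.add(Outcome("More timely, precise food distribution", "positive", "expected"))
log.add(Outcome("Improved social cohesion between groups", "positive", "unexpected"))
log.add(Outcome("Accidental disclosure of sensitive data", "negative", "unexpected",
                mitigation="Suspend data sharing; review access controls"))
```

A shared spreadsheet with the same columns would serve equally well; what matters is that every identified outcome is captured and fed into the reviews.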
Select the Right Approach
To ensure that you meet your learning objectives, you will need to select the right methods and techniques to generate evidence of the required standard. Refer to your Learning Objectives and Research Feasibility Assessment and finalise which methods and techniques you will use. The key choices you make will be between explanatory, evaluative and descriptive approaches and methods.
We are currently developing a Research Approaches and Methods Table to help you understand the different research techniques that you might apply. We do not expand on each of the individual approaches, as these can be found in other guidance material. The aim of this Research and Learning workstream is to point you in the right direction, so that you can home in on appropriate approaches and methods. If you are not a researcher, we hope to provide an accessible framework that enables you to have more detailed discussions with research experts.
Evaluate Tensions and Trade-offs
Choosing your approach and methods is not as simple as choosing from a menu. There are tensions and trade-offs that must be considered in order to identify the best possible approach.
First, look at the approach and methods you have chosen and cross-reference them with your Research Feasibility Assessment. Do you see any tensions or trade-offs? For example, will access to the project sites be limited, reducing face-to-face access to the target group? Does this affect the technique you are proposing? If so, what trade-offs might you need to make on the technique you use?
Because there is no single ‘right way’ to measure the impact of innovation, our goal is to ensure that you can make context-appropriate choices around the approaches and methods most fit for the job, by taking stock of the tensions and trade-offs that come into play when weighing the options. Key factors to think through include the following (a simple scoring sketch follows the list):
- Context appropriateness
- Time and resource requirements
- Skill levels of your researchers and enumerators
- Risk levels
- Evidentiary standards required
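One lightweight way to take stock of these factors is a weighted scoring matrix comparing candidate methods. The sketch below is purely illustrative; the candidate methods, weights and scores are assumptions made for the example, to be replaced by your team’s own judgements:

```python
# Hypothetical weighted scoring of candidate methods against the factors
# above (scores 1-5, higher is better; weights sum to 1).
FACTOR_WEIGHTS = {
    "context appropriateness": 0.30,
    "time and resources": 0.20,
    "team skills": 0.20,
    "risk": 0.15,
    "evidentiary standard": 0.15,
}

candidates = {
    "household survey": {
        "context appropriateness": 4, "time and resources": 2,
        "team skills": 3, "risk": 4, "evidentiary standard": 4,
    },
    "focus group discussions": {
        "context appropriateness": 5, "time and resources": 4,
        "team skills": 4, "risk": 3, "evidentiary standard": 3,
    },
}

for method, scores in candidates.items():
    total = sum(FACTOR_WEIGHTS[f] * scores[f] for f in FACTOR_WEIGHTS)
    print(f"{method}: {total:.2f}")
```

The numbers matter less than the conversation they force: scoring each method against each factor makes the trade-offs explicit and documents why a method was chosen.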
Select Research Sites and a Research Sample
The importance of site selection and sampling cannot be overstated. The decision about precisely which ‘subjects’ you will research will have a significant impact on the standard of the evidence your research produces.
In contrast to many conventional evaluation approaches, where sampling and site selection are determined as a matter of course (ie, when evaluating existing programmes), innovation projects can sometimes be more proactive in choosing a site that is beneficial for the preferred research and learning activities. However, the pilot must still be carried out in the context for which the solution is designed.
Considerations around site selection and sampling will derive from a number of factors, including:
- Accessibility of potential users and target groups
- The nature and complexity of the solution being researched
- The security conditions and physical infrastructure
- The costs of using a particular methodology in a given context
- The unit of analysis (eg, individual, household, village)
We recommend taking time to discuss the site selection and sampling with a researcher while you are designing your pilot to ensure that it has the best chance of delivering robust evidence without undermining the implementation of the project itself.
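If your design includes a quantitative survey, a common starting point for the sampling discussion is the standard formula for estimating a proportion, with a finite population correction for small sites. The sketch below is a minimal illustration, assuming a 95% confidence level and a 5% margin of error; these parameters, and the example population, are assumptions to adjust with your researcher:

```python
import math
from typing import Optional

def sample_size_for_proportion(p: float = 0.5,
                               margin_of_error: float = 0.05,
                               z: float = 1.96,
                               population: Optional[int] = None) -> int:
    """Minimum sample size needed to estimate a proportion p.

    z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the most
    conservative assumption when the true proportion is unknown.
    """
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is not None:
        # Finite population correction, relevant for small sites
        # (eg, a single village or camp).
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# Example: surveying a site of 800 households at the defaults above.
print(sample_size_for_proportion(population=800))  # 260
```

Note that this covers only simple random sampling of one unit of analysis; cluster or stratified designs, and qualitative sampling, follow a different logic and are worth discussing with your researcher.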
Define Variables and Select Data Collection Tools
Whether you are using explanatory, evaluative or descriptive approaches, you will need to find creative ways to measure the variables and indicators you need. These are the variables and indicators that will show whether you are having the impact and outcomes you are seeking, and whether you have the coverage, reach and functionality you are looking for.
Working through decisions on measurement starts with an understanding of levels of measurement (the unit of measurement, eg, time, number) and whether the data is quantitative (numerical data, or information that can be converted into numbers) or qualitative (non-numerical data). You will then need to think through the best techniques for collecting the data, eg, survey tools, interviews, direct observation, shadowing and focus group discussions, among other techniques.
When deciding what you are measuring, it is critical to find out whether there are standard indicators already used in humanitarian or development programmes that could provide the data you need. This is particularly important for comparative studies, as you will need to show that your solution works as well as, or better than, existing solutions. To do this, you will need to measure the same thing for direct comparison, eg, weight-for-height z-scores for nutritional innovations.
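As a simplified illustration of how such a standard indicator works: a z-score expresses how far an observed value lies from a reference median, in units of the reference standard deviation. In practice, WHO growth standards use the LMS method and published reference tables; the formula and values below are illustrative placeholders only:

```python
def weight_for_height_z(observed_weight_kg: float,
                        reference_median_kg: float,
                        reference_sd_kg: float) -> float:
    """Simplified z-score: how many reference standard deviations the
    observed weight lies from the reference median for that height."""
    return (observed_weight_kg - reference_median_kg) / reference_sd_kg

# Hypothetical example: a child whose reference median weight for their
# height is 14.0 kg, with a reference standard deviation of 1.2 kg.
z = weight_for_height_z(11.6, reference_median_kg=14.0, reference_sd_kg=1.2)
print(round(z, 2))  # -2.0, at the -2 SD threshold commonly used for wasting
```

Because both the innovative and the conventional intervention can be assessed against the same reference standard, the comparison is direct.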
Further inspiration
DG ECHO’s website includes lists of indicators available to download. They are not exhaustive, but they can act as a guide to the types of indicators that are collected, and the results indicators are helpfully broken down by sector.
Draft Your Research Procedure
Once you have finalised your Research Design you will need to translate it into a step-by-step procedure for carrying out the research. We recommend that this includes the following:
- A set of activities and project timeline that translates your Research Design into an actionable plan for your researchers.
- A set of measures and expectations that will govern the research agenda and hold the project accountable (eg, assumptions, parameters for accepting or refuting hypotheses, and discussion around limitations and potential risks).
- Orientation and training materials for researchers, enumerators and other participants.
- Data collection materials, such as interview guides, survey tools or focus group agendas.
- A plan for calibration and testing of measurement approaches and data management systems.
- Baseline data or pre-test assessments.