Monitoring and Evaluation in Emergencies
Session overview
• Introduction to M&E in emergencies
• The project cycle:
– Monitoring
– Evaluation
– LogFrames
– Indicators
• Evaluation of humanitarian action and the
DAC criteria
Learning objectives
By the end of this session, you should be able to:
• Understand the basic concepts of monitoring and evaluation
• Describe key evaluation parameters and the importance of each
• Appreciate the importance of monitoring and evaluation for
nutrition interventions in emergencies
• Be aware of current gaps in practice in the monitoring and
evaluation of interventions in emergencies.
Introduction
• Monitoring and evaluation in emergency
contexts have two functions:
[Diagram: the project cycle — Plan → Design → Implement → Monitor → Evaluate, feeding back into Re-design and Advocacy]
What is Monitoring?
• Monitoring = the routine oversight of the
implementation of an activity / intervention
• Aim = to establish the extent to which an
activity is proceeding according to plan and
allow timely corrective action, as necessary
• PROCESS (PERFORMANCE) MONITORING
• IMPACT (SITUATION) MONITORING
Approaches to evaluation in emergencies
• No evaluation!
• Single-agency post-intervention evaluation
• Increasing move towards:
– Inter-agency evaluations: the objective is to evaluate
responses as a whole and the links between interventions
– Real-time evaluations: carried out 8 to 12 weeks after the
onset of an emergency, with findings processed within one
month of data collection
Good practice
• Describe methods used
• Use a multi-method approach and cross-
check
• Talk to primary stakeholders
• Disaggregate findings
• Ensure a focus on social process and
causality
• Make clear any evaluator bias
• Integrate the DAC criteria!
Challenges to M&E in emergencies
M&E involves identifying:
• strategic elements (inputs, outputs, outcomes and impact)
• their causal relationships
• indicators
• assumptions and risks that may influence success and failure.
[Diagram: results chain — Outputs → Outcome → Impact]

Logframes take the analysis further, to the identification of
indicators and the means of verification (or sources of data) for
those indicators.

Logframes enforce a discipline of identifying indicators for each
component in the logic model.
• Global / standardised: standardised global indicators are
comparable across all settings.
• Locally developed: other indicators tend to be context-specific
and must be developed locally.
Performance indicators
Performance indicators are measures that show results relative to
what was planned at each level of the "results chain" (output,
outcome, impact).