Basic Principles of Monitoring and Evaluation
CONTENT
2. THEORY OF CHANGE
4. PERFORMANCE INDICATORS
4.1 Process (implementation) indicators
4.3 Progression indicators (labour market attachment)
6. MEASURING RESULTS
[Diagram: monitoring and evaluation along the results chain, from implementation to results]
Relevance. Indicators should be relevant to the needs of the user and to the purpose of monitoring. They should be able to clearly
indicate to the user whether progress is being made (or not) in addressing the problems identified.
Disaggregation. Data should be disaggregated according to what is to be measured. For example, for individuals the basic
disaggregation is by sex, age group, level of education and other personal characteristics useful to understanding how the
programme functions. For services and/or programmes the disaggregation is normally done by type of service/programme.
Comprehensibility. Indicators should be easy to use and understand and data for their calculation relatively simple to collect.
Clarity of definition. A vaguely defined indicator will be open to several interpretations, and may be measured in different ways at
different times and places. It is useful in this regard to include the source of data to be used and calculation examples/methods. For
example, the indicator “employment of participants at follow-up” will require: (i) specification of what constitutes employment (work
for at least one hour for pay, profit or in kind in the 10 days prior to the measurement); (ii) a definition of participants (e.g. those who
attended at least 50 per cent of the programme); and (iii) a follow-up timeframe (six months after the completion of the programme).
Care must also be taken in defining the standard or benchmark of comparison. For example, in examining the status of young
people, what constitutes the norm – the situation of youth in a particular region or at national level?
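The definitional choices above can be made explicit in code. The sketch below (illustrative Python; the record fields, names and thresholds are assumptions drawn from the example in the text) computes "employment of participants at follow-up" with each definitional element as an explicit parameter:

```python
from dataclasses import dataclass

@dataclass
class Record:
    """One programme entrant (hypothetical schema for illustration)."""
    attendance_share: float       # fraction of the programme attended (0.0-1.0)
    hours_worked: float           # hours worked for pay, profit or in kind in the reference period
    months_since_completion: int  # months elapsed since programme completion

def employment_at_followup(records, min_attendance=0.5, followup_month=6):
    """Share of participants employed at follow-up, per the definitions above:
    participant = attended at least 50 per cent of the programme;
    employed = worked at least one hour in the reference period;
    follow-up = measured six months after completion."""
    participants = [r for r in records
                    if r.attendance_share >= min_attendance
                    and r.months_since_completion == followup_month]
    if not participants:
        return None
    employed = sum(1 for r in participants if r.hours_worked >= 1)
    return employed / len(participants)
```

Making each element a parameter means the same indicator can be recomputed consistently at different times and places, which is the point of a clear definition.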
The number chosen should be small. There are no hard and fast rules to determine the appropriate number of indicators. However, a rule of thumb is that users should avoid two temptations: information overload (collecting too much data) and over-aggregation (building a composite index whose aggregation and weighting scheme may conceal important information). A
common mistake is to over-engineer a monitoring system (e.g. the collection of data for hundreds of indicators, most of which are
not used). In the field of employment programmes, senior officials tend to make use of high-level strategic indicators such as
outputs and outcomes. Line managers and their staff, conversely, focus on operational indicators that target processes and
services.
Specificity. The selection of indicators should reflect those problems that the youth employment programme intends to address.
For example, a programme aimed at providing work experience to early school leavers needs to incorporate indicators on coverage
(how many among all school leavers participate in the programme), type of enterprises where the work experience takes place and
the occupation, and number of beneficiaries that obtain a job afterwards by individual characteristics (e.g. sex, educational
attainment, household status and so on).
Cost. There is a trade-off between the number of indicators and the cost of collecting the data needed to measure them. If data collection becomes too expensive and time-consuming, the indicator may ultimately lose its relevance.
Technical soundness. Data should be reliable. The user should be informed about how the indicators were constructed and the
sources used. A short discussion should be provided about their meaning, interpretation, and, most importantly, their limitations.
Timeliness. Indicators must be available on a timely basis, especially if they are to provide feedback during programme implementation.
Forward-looking. A well-designed system of indicators must not be restricted to conveying information about current concerns.
Indicators must also measure trends over time.
Adaptability. Indicators should be readily adaptable to use in different regions and circumstances.
Source: adapted from Canadian International Development Agency (CIDA), 1997. Guide to Gender-Sensitive Indicators
(Ottawa, CIDA).
Note: * Entrants are all individuals who start a specific programme. Participants are all individuals who entered and attended the programme for a minimum
period of time (usually determined by the rules of the programme as the minimum period required to produce changes, for example 50 per cent of the
programme duration). Completers are those who completed the whole programme. Dropouts are usually those who left the programme before the minimum period of attendance established by the rules of the programme (i.e. the difference between entrants and participants).
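These categories can be operationalized directly from attendance data. A minimal sketch, assuming attendance is recorded as a share of the programme duration and the participation threshold is the 50 per cent figure used in the note:

```python
def classify(attendance_share, min_share=0.5):
    """Classify an entrant by attendance, per the note above:
    everyone who starts is an entrant; attending at least the minimum
    period (assumed 50% here) makes a participant; attending the whole
    programme makes a completer; less than the minimum, a dropout."""
    if attendance_share >= 1.0:
        return "completer"
    if attendance_share >= min_share:
        return "participant"
    return "dropout"
```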
The indicators in the second and third rows serve to measure the
evolution of the programme’s intake. It is normal, in fact, to see increases
in intake as the programme matures. The time t may be any time interval
(yearly, quarterly or monthly). The indicator in the fourth row serves to
measure the overall coverage of the programme. Depending on its scope,
the denominator can be the total number of youth (in a country, region,
province or town) or only those who have certain characteristics (e.g. only
those who are unemployed, workers in the informal economy, individuals
with a low level of education). The indicator in the fifth row serves to
measure the pace of implementation compared to the initial plan, while
the indicator in the last row is used to calculate overall costs.
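The row-by-row description above amounts to a handful of ratios. A minimal sketch (function and variable names are illustrative, not from the source):

```python
def coverage_rate(participants, target_population):
    """Overall coverage: participants over the reference youth population
    (total youth, or only those with certain characteristics)."""
    return participants / target_population

def implementation_pace(actual_intake, planned_intake):
    """Pace of implementation compared to the initial plan."""
    return actual_intake / planned_intake

def cost_per_participant(total_cost, participants):
    """Overall cost indicator."""
    return total_cost / participants
```

Intake trends are obtained by computing these over the chosen time interval t (yearly, quarterly or monthly) and comparing across periods.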
Total number of participants (including dropouts), disaggregated by:
– programme: training, subsidy, self-employment, public work scheme
– individual characteristics: sex, age group, education level, unemployment duration, type of disadvantage, prior occupation/work experience

Placement rate = number of placements / total number of participants (including dropouts), calculated by programme and by individual characteristics
Vocational training

Completion rate = number of individuals who complete the training programme / number of entrants*

Graduation rate = number of individuals who passed standardized testing at the programme's end / number of entrants

Drop-out rate = number of individuals who left the course in the first (30, 60, 90) days of the programme / number of entrants
* For training programmes, it is necessary to distinguish between those who
entered the course (entrants) and those who attended a minimum period
(participants). In some programmes, the term “completers” is used to denote
those who complete the whole programme.
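With entrants as the common denominator, the three training indicators reduce to simple ratios. A sketch with hypothetical counts (names are illustrative):

```python
def training_rates(entrants, completed, passed_test, left_early):
    """Completion, graduation and drop-out rates for a training
    programme, each computed over the number of entrants."""
    return {
        "completion_rate": completed / entrants,
        "graduation_rate": passed_test / entrants,
        "dropout_rate": left_early / entrants,
    }
```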
Employment subsidy

Average cost per subsidized worker employed at follow-up = total cost of subsidy / number of subsidized workers employed at follow-up

Average cost of subsidy per subsidized worker = total cost of subsidy / number of participants
Self-employment assistance

Average cost of assistance per person still self-employed at follow-up = total cost of assistance / number of self-employed at follow-up

Average added employment generated by assisted self-employed = number of additional jobs created (individuals employed) by self-employed individuals assisted by the programme
EU 27 population (15-74): 337.1 million persons