THE KISUMU NATIONAL POLYTECHNIC
BUSINESS AND ENTREPRENEURSHIP DEPARTMENT
MONITORING & EVALUATION LECTURE NOTES
INTRODUCTION
What is Monitoring and Evaluation?
Monitoring and Evaluation is the continuous gathering and analysis of information in order
to determine whether progress is being made towards pre-specified goals and objectives,
and to highlight any unintended (positive or negative) effects of a
project/programme and its activities.
What is Monitoring?
1. Monitoring is a continuous process of collecting, analyzing, documenting, and reporting
information on progress to achieve set project objectives. It helps identify trends and
patterns, adapt strategies and inform decisions for project or programme management.
2. A management activity that allows a continuous adaptation of the intervention if
problems arise or if changes in the context have an influence on the performance of the
operation.
3. The systematic collection of information on all aspects of the project while it is being
implemented.
4. A continuing function that aims primarily to provide the management and main
stakeholders of an ongoing intervention with early indications of progress, or lack
thereof, in the achievement of results.
What is Evaluation?
1. Evaluation is a periodic assessment, as systematic and objective as possible, of an on-
going or completed project, programme or policy, its design, implementation and results.
It involves gathering, analysing, interpreting and reporting information based on credible
data. The aim is to determine the relevance and fulfilment of objectives, developmental
efficiency, effectiveness, impact and sustainability.
2. A systematic and objective assessment of an ongoing or completed project. It compares
the actual outcomes of the project with those planned.
3. An assessment, as systematic and objective as possible, of an ongoing or completed
project, program or policy, its design, implementation and results. The aim is to
determine the relevance and fulfillment of objectives, developmental efficiency,
effectiveness, impact and sustainability. An evaluation should provide information that is
credible and useful, enabling the incorporation of lessons learned into the decision-
making process of both recipients and donors.
4. The process of determining the worth or significance of a project to determine the
relevance of objectives, the efficacy of design and implementation, the efficiency of
resource use, and the sustainability of results. An evaluation should enable the
incorporation of lessons learned into the decision-making process of both partner and
donor.
The differences between Monitoring & Evaluation
Monitoring is ongoing and tends to focus on what is happening. Monitoring data is
typically used by managers for ongoing project implementation, tracking outputs,
budgets, compliance with procedures, etc.
Evaluation is a process of assessing whether the project has achieved its intended
objectives. By drawing conclusions, evaluation aims to provide recommendations for
improving the future course of the project, as well as lessons learned for other
projects. Many large organizations apply specific criteria when evaluating. Often, the
main criteria assessed are efficiency, effectiveness and impact; relevance and
sustainability are usually included as well.
Purpose/Importance of Monitoring and Evaluation
Timely and reliable M&E provides information to:
1. Support project/programme implementation with accurate, evidence-based reporting that
informs management and decision-making to guide and improve project/programme
performance.
2. Contribute to organizational learning and knowledge sharing by reflecting upon and
sharing experiences and lessons.
3. Uphold accountability and compliance by demonstrating whether or not the work has
been carried out as agreed, in compliance with established standards and with any
other stakeholder requirements.
4. Provide opportunities for stakeholder feedback.
5. Promote and celebrate project/program work by highlighting accomplishments and
achievements, building morale and contributing to resource mobilization.
6. Support strategic management by providing information to inform the setting and
adjustment of objectives and strategies.
7. Build the capacity, self-reliance and confidence of stakeholders, especially beneficiaries,
implementing staff and partners, to effectively initiate and implement development
initiatives.
Key benefits of Monitoring and Evaluation
a. Provide regular feedback on project performance and show any need for ‘mid-course’
corrections
b. Identify problems early and propose solutions
c. Monitor access to project services and outcomes by the target population;
d. Evaluate achievement of project objectives, enabling the tracking of progress towards
achievement of the desired goals
e. Incorporate stakeholder views and promote participation, ownership and accountability
f. Improve project and programme design through feedback provided from baseline, mid-term,
terminal and ex-post evaluations
g. Inform and influence organizations through analysis of the outcomes and impact of
interventions, and the strengths and weaknesses of their implementation, enabling development
of a knowledge base of the types of interventions that are successful (i.e. what works, what does
not, and why).
h. Provide the evidence basis for building consensus between stakeholders
Characteristics of monitoring and evaluation
Monitoring tracks changes in program performance or key outcomes over time. It has the
following characteristics:
– Conducted continuously
– Keeps track and maintains oversight
– Documents and analyzes progress against planned program activities
– Focuses on program inputs, activities and outputs
– Looks at processes of program implementation
– Considers program results at output level
– Considers continued relevance of program activities to resolving the health problem
– Reports on program activities that have been implemented
– Reports on immediate results that have been achieved
Evaluation is a systematic approach to attribute changes in specific outcomes to program
activities. It has the following characteristics:
– Conducted at important program milestones
– Provides in-depth analysis
– Compares planned with actual achievements
– Looks at processes used to achieve results
– Considers results at outcome level and in relation to cost
– Considers overall relevance of program activities for resolving health problems
– References implemented activities
– Reports on how and why results were achieved
– Contributes to building theories and models for change
– Attributes program inputs and outputs to observed changes in program outcomes and/or
impact
Types of Evaluation
Evaluations can be classified in three ways:
– When it is done: ex-ante evaluation; formative evaluation; summative evaluation (end of
project); and ex-post evaluation.
– Who is doing it: external evaluation; internal evaluation or self-assessment.
– What methodology or technique is used: real-time evaluations (RTEs); meta-
evaluations; thematic evaluations; cluster/sector evaluations; impact evaluations.
The details are as follows: -
a) Ex-ante evaluation: Conducted before the implementation of a project as part of the planning.
A needs assessment determines who needs the program, how great the need is, and what might
work to meet the need. An implementation (feasibility) evaluation monitors the fidelity of the
program or technology delivery, and whether or not the program is realistically feasible within
the programmatic constraints.
b) Formative evaluation: Conducted during the implementation of the project. Used to determine
the efficiency and effectiveness of the implementation process, to improve performance and
assess compliance. Provides information to improve processes and learn lessons. Process
evaluation investigates the process of delivering the program or technology, including alternative
delivery procedures. Outcome evaluations investigate whether the program or technology caused
demonstrable effects on specifically defined target outcomes. Cost-effectiveness and cost-benefit
analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs
and values.
c) Midterm evaluations are formative in purpose and occur midway through implementation.
d) Summative evaluation: Conducted at the end of project/programme implementation to assess
the state of implementation and achievements, including effectiveness and impact, and to
collate lessons on content and the implementation process.
e) Ex-post evaluation: Conducted some time after the project is completed, to assess the
long-term impact and sustainability of project effects and to identify factors of success that
can inform other projects.
f) External evaluation: Initiated and controlled by the donor as part of a contractual agreement.
Conducted by independent people who are not involved in implementation, though often guided
by project staff.
g) Internal evaluation or self-assessment: Internally guided reflective processes, initiated and
controlled by the group for its own learning and improvement. Sometimes done by consultants
who are outsiders to the project. Ownership of the information should be clarified before the
review starts.
h) Real-time evaluations (RTEs): are undertaken during project/programme implementation to
provide immediate feedback for modifications to improve on-going implementation.
i) Meta-evaluations: are used to assess the evaluation process itself. Some key uses of meta-
evaluations include: take inventory of evaluations to inform the selection of future evaluations;
combine evaluation results; check compliance with evaluation policy and good practices; assess
how well evaluations are disseminated and utilized for organizational learning and change, etc.
j) Thematic evaluations: focus on one theme, such as gender or environment, typically across a
number of projects, programmes or the whole organization.
k) Cluster/sector evaluations: focus on a set of related activities, projects or programmes,
typically across sites and implemented by multiple organizations.
l) Impact evaluations: Broader in scope, assessing the overall or net effects (intended or
unintended) of the program or technology as a whole. They focus on the effect of a
project/programme rather than on its management and delivery. Therefore, they typically occur
after project/programme completion, during a final evaluation or an ex-post evaluation.
However, impact may be measured during implementation in longer projects/programmes,
when feasible.
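The cost-effectiveness analysis mentioned under formative evaluation (b) standardizes outcomes in terms of their cost. As a minimal sketch only (the delivery options and all figures below are invented for illustration), the ratio can be computed and compared as follows:

```python
# Hypothetical cost-effectiveness comparison of two delivery options.
# All figures are invented for illustration only.

def cost_per_outcome(total_cost, outcomes_achieved):
    """Cost-effectiveness ratio: currency units spent per unit of outcome."""
    return total_cost / outcomes_achieved

# Option A: 50,000 spent, 200 beneficiaries reached.
option_a = cost_per_outcome(total_cost=50_000, outcomes_achieved=200)
# Option B: 80,000 spent, 400 beneficiaries reached.
option_b = cost_per_outcome(total_cost=80_000, outcomes_achieved=400)

best = "A" if option_a < option_b else "B"
print(f"Option A: {option_a:.2f}  Option B: {option_b:.2f}  More cost-effective: {best}")
```

The lower ratio identifies the more cost-effective option; a full cost-benefit analysis would additionally value the outcomes themselves in monetary terms.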
Recommended tools and/or techniques to be used in PME
✓ Logical Framework Analysis (LFA),
✓ Environmental Impact Assessment (EIA),
✓ Social Impact Assessment (SIA) and
✓ Strength, Weakness, Opportunity and Threat (SWOT) analysis.
M&E System and Framework
A project monitoring and evaluation (M&E) system covers all the work carried out during or
after a project to define, select, collect, analyse and use information. It is where everything
comes together, from the initial selection of objectives and indicators through to the final
evaluation of a project.
DESIGN AND IMPLEMENTATION OF M&E SYSTEMS
Section Overview
This section explains what can go wrong with project M&E systems and sets out a framework of
concepts and principles that can aid the design and implementation of effective project M&E. It
provides the core of a guidance manual or handbook for professional work in this field.
Section Learning Outcomes
By the end of this section, students should be able to:
– understand M&E systems and their relation to logical framework analysis
– be familiar with the challenges of M&E and the concepts of results-based management
2.1 M&E systems and common deficiencies
A monitoring and evaluation system is made up of the set of interlinked activities that must be
undertaken in a co-ordinated way to plan for M&E, to collect and analyse data, to report
information, and to support decision-making and the implementation of improvements.
Think to yourself for a few moments about what you think constitutes the main aspects of an
M&E system for a rural development project.
The key parts of an M&E system are succinctly set out in 2.1.1.
2.1.1 The six main components of a project M&E system
– Clear statements of measurable objectives for the project and its components.
– A structured set of indicators covering: inputs, process, outputs, outcomes, impact, and
exogenous factors.
– Data collection mechanisms capable of monitoring progress over time, including baselines and
a means to compare progress and achievements against targets.
– Where applicable, an evaluation framework and methodology, building on the baselines and
data collection, capable of establishing causation (i.e. capable of attributing observed change to
given interventions or other factors).
– Clear mechanisms for reporting and use of M&E results in decision-making.
– Sustainable organisational arrangements for data collection, management, analysis, and
reporting.
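The "compare progress and achievements against targets" component above can be illustrated with a minimal sketch. The indicator names and figures below are invented for the example; a real system would draw them from the project's data collection mechanisms:

```python
# Hypothetical sketch: comparing indicator actuals against targets.
# Indicator names and values are invented for illustration only.

def achievement_rate(target, actual):
    """Return actual as a percentage of target (None if target is zero)."""
    if target == 0:
        return None
    return round(100 * actual / target, 1)

indicators = {
    "farmers_trained": {"target": 500, "actual": 430},
    "wells_rehabilitated": {"target": 20, "actual": 22},
}

for name, v in indicators.items():
    rate = achievement_rate(v["target"], v["actual"])
    status = "on/above target" if rate is not None and rate >= 100 else "below target"
    print(f"{name}: {rate}% of target ({status})")
```

Run periodically against monitoring data, such a comparison gives managers the early indication of progress, or lack thereof, that monitoring is meant to provide.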
The design of an M&E system should start at the same time as the overall project preparation
and design, and be subject to the same economic and financial appraisal, at least to achieve the
least-cost means of securing the desired objectives. Such practice has been followed for projects
in recent years. Problems arose with earlier M&E systems that were set up after the project had
started. Often this was left to management alone, who by that time already had too much to
grapple with and could not provide sufficient time, resources or commitment.
The ‘supply side’ of M&E design should not be overlooked. Skilled and well-trained people are
required for good quality data collection and analysis. They may be a very scarce resource in
developing countries, and should be ‘shadow-priced’ accordingly when appraising alternative
M&E approaches. It is inevitable that the system designed will not be as comprehensive as is
desirable, and will not be able to measure and record all the relevant indicators. It is here that the
project analyst must use the tools of economic appraisal, and judgment based on experience, to
find the best compromise.
Evaluations of existing M&E systems by agencies have shown certain common characteristics,
weaknesses, and recurrent problems which are important causes of divergence between the
theory of M&E and actual practice in the field. These are worth bringing to the attention of both
designers and operators of M&E systems, as problems to be avoided in the future:
– poor system design, in terms of collecting more data than are needed or can be processed
– inadequate staffing of M&E, in terms of both quantity and quality
– missing or delayed baseline studies: strictly, these should be done before the start of project
implementation if they are to facilitate with- and without-project comparisons and evaluation
– delays in processing data, often as a result of inadequate processing facilities and staff
shortages; personal computers can process data easily and quickly, but making the most of
these capabilities requires the correct software and capable staff
– delays in analysis and presentation of results, caused by shortages of senior staff and by
faulty survey designs that produce data that cannot be used; it is disillusioning, yet common,
for reports to be produced months or years after surveys are carried out, when the data have
become obsolete and irrelevant, and even more so when computer printouts or manual
tabulations of results lie in offices and are never analysed and written up
– finally, even where monitoring is effective, the results often remain unused by project staff
Experience from the World Bank-funded agricultural water management projects, reflecting
upon the quality of M&E systems carried out by the projects, is highlighted in 2.1.2, below.
Answer the following questions to see how much you know
about this topic. Go to page 19 to see the correct answers.
1. M&E plans should include:
a. A detailed description of the indicators to be used
b. The data collection plan
c. A plan for the utilization of the information gained
d. All of the above
e. a and b only
2. The purpose of indicators is to:
a. Demonstrate the strength of the information system
b. Serve as benchmarks for demonstrating achievements
c. Provide program accountability
d. Describe the objectives of a project
3. The problem statement and goals and objectives of a
project should be described in the M&E plan.
a. True
b. False
4. The results of M&E activities can be disseminated through:
a. Written reports
b. Press releases
c. The mass media
d. Speaking events
e. All of the above
5. When should the M&E plan be created?
a. During the design phase of a program
b. At the midpoint of the program
c. At the end of the program
d. After all of the data have been collected but before they
are analyzed.