THE EFFECT OF MONITORING AND EVALUATION ON
PROJECT PERFORMANCE: THE CASE OF ADDIS MACHINE
AND SPARE PARTS MANUFACTURING INDUSTRY (AMSMI)
KIZEN IMPLEMENTATION PROJECT.
BY: MISRAK KASSAHUN
ID NO: SGS/0463/2013A
JUNE 2022
ADDIS ABABA
ST. MARY'S UNIVERSITY SCHOOL OF GRADUATE STUDIES
DECLARATION
I, Misrak Kassahun, the undersigned, declare that this thesis is my original work. I have undertaken the research work independently with the guidance and support of the research advisor. This study has not been submitted for any degree or diploma program in this or any other institution, and all sources of materials used for the thesis have been duly acknowledged.
LETTER OF CERTIFICATION
This is to certify that the thesis prepared by Misrak Kassahun, submitted in partial fulfillment of the requirements for the Degree of Master of Arts in Project Management, complies with the regulations of the University and meets the accepted standards with respect to originality and quality.
Table of Contents
Declaration ....................................................................................................................................... i
LETTER OF CERTIFICATION .................................................................................................... ii
Table of Contents ........................................................................................................................... iii
List of Figures ............................................................................................................................... v
List of Tables ................................................................................................................................. vi
Abstract ......................................................................................................................................... vii
CHAPTER ONE ............................................................................................................................. 8
INTRODUCTION .......................................................................................................................... 8
1.1 Background of the Study .......................................................................................................... 8
1.2 Statement of the Problem .......................................................................................................... 9
1.3 Research Questions ............................................................................................................ 12
1.4 Objective of the Study ............................................................................................................ 12
1.4.1 General Objective ............................................................................................................. 12
1.4.2 Specific Objectives .......................................................................................................... 12
1.5 Significance of the Study ........................................................................................................ 13
1.6 Scope of the Study .................................................................................................................. 13
1.7 Limitations of the Study.......................................................................................................... 13
1.8 Organization of the Research .................................................................................................. 14
CHAPTER TWO .......................................................................................................................... 15
RELATED LITERATURE REVIEW .......................................................................................... 15
2.1 Theoretical Review ................................................................................................................. 15
2.1.1 Concept of M&E .............................................................................................................. 15
2.1.2 Challenges of Establishing M and E Systems .................................................................. 18
2.1.3 Factor affecting M&E practice ......................................................................................... 19
2.1.4 Project Performance ......................................................................................................... 19
2.2 Key Performance Indicators of Project Performance ............................................................. 20
2.3 Empirical Literature ................................................................................................................ 21
2.3.1 M&E Planning Process and Project Performance ........................................................... 21
2.3.2 Technical Expertise .......................................................................................................... 22
2.3.3 Stakeholder Involvement.................................................................................................. 23
2.4 Conceptual Framework ........................................................................................................... 26
CHAPTER THREE ...................................................................................................................... 27
RESEARCH METHODOLOGY.................................................................................................. 27
3.1 Research Design...................................................................................................................... 27
3.2 Research Approach ................................................................................................................. 27
3.3 Sampling Design ..................................................................................................................... 28
3.4 Data Sources and Data Collection Tools ................................................................................ 29
3.4.1 Data Source ...................................................................................................................... 29
3.4.2 Data Collection Tools ...................................................................................................... 29
3.4.3 Data Collection Procedures ............................................................................................. 30
3.5 Data Analysis Methods ........................................................................................................... 30
3.6 Reliability and Validity Analysis ............................................................................................ 30
3.7. Ethical Considerations ........................................................................................................... 30
CHAPTER FOUR ......................................................................................................................... 32
DATA PRESENTATION AND ANALYSIS ................................................................................. 32
4.1 Reliability Statistics............................................................................................................. 32
4.2 Demographic Profile of Respondents ................................................................................. 33
4.3 Project Monitoring and Evaluating Practices ...................................................................... 34
4.3.1 Monitoring Projects ...................................................................................................... 34
4.3.2 Evaluating Projects ........................................................................................................ 36
4.4 Organizational Performance of AMSMI ............................................................................. 37
4.5 Effects of Monitoring and Evaluation on performance of AMSMI .................................... 38
4.5.1 Correlation analysis ...................................................................................................... 38
4.5.2 Regression Analysis ......................................................................................................... 40
CHAPTER FIVE .......................................................................................................................... 46
CONCLUSION AND RECOMMENDATION ............................................................................ 46
5.1 Conclusion .............................................................................................................................. 46
5.2 Recommendations ................................................................................................................... 48
APPENDIX I: REFERENCES ..................................................................................................... 49
APPENDIX II: QUESTIONNAIRE ............................................................................................. 53
List of Figures
List of Tables
Abstract
The aim of the study was to analyze the effect of monitoring and evaluation on project performance in the case of AMSMI. The study used explanatory and descriptive research designs and a mixed (qualitative and quantitative) research approach. A purposive sampling technique was used to select the sample; the sample size of the study was 78 employees of the organization. The study used primary sources of data, gathered through self-administered questionnaires. The Statistical Package for the Social Sciences (SPSS) version 20 was used to analyze the data obtained from primary sources. Out of the total questionnaires distributed, 76 were returned, which is about 97% of the total. The study found that evaluation has a significant effect on project performance. Monitoring also has a strong positive and significant effect on project performance. It is recommended that managers/leaders at AMSMI make monitoring practices more feasible in order to achieve better project performance. It is also recommended that managers/leaders at AMSMI practice evaluation. Policy makers at the organizational and/or national level should consider developing the M&E practice of the organization to achieve better project performance.
CHAPTER ONE
INTRODUCTION
This chapter presents a general background of the study, the statement of the problem, and the objectives of the study. The chapter further describes the scope of the study, the significance of the study, the limitations, and the organization of the study.

1.1 Background of the Study
Monitoring and evaluation is a very useful tool for project work activities. It offers a crucial
mechanism for understanding how any project operates, how activities can be measured, and
how it can aid in the accomplishment of project goals that, in the end, result in an organization
performing successfully. M&E is therefore essential to the effective operation of any
organization, whether public, private, non-governmental, or civil society as a whole. Monitoring
and evaluation of the project's interventions are crucial for determining how they affect the lives
of individuals and the community as a whole. Organizations are required to use M&E for three
main purposes: to understand their own processes and outcomes, to support internal planning and
development, and to demonstrate their accountability to the stakeholders (UNDP, 2009).
Critical monitoring functions include gathering feedback from participants, collecting data,
observing the project's implementation, analyzing contextual changes, and providing an early
warning system of potential challenges. Monitoring data must be analyzed to ensure that the
project is being implemented correctly and that the desired outcomes are being achieved.
Midcourse correction should be performed if the project is not moving in the desired direction.
Monitoring is necessary at all levels of a program (from input, process, output and outcome). The
focus is usually on output data, but it's also important to keep track of the goals and objectives.
Monitoring should ideally be a project management team's internal function. As a result,
monitoring is crucial to a project's success. Evaluation aids in the analysis of deviations from
planned objectives and goals. M&E facilitates learning by doing by providing feedback to
project functionaries. As a result, learning organizations must invest in the development and
enhancement of in-house capabilities to anchor M&E functions.
However, many organizations now consider M&E as a donor need rather than a management
tool for assessing progress and detecting and addressing errors in project planning or
implementation (Shapiro, 2001; Alcock, 2009; Armstrong & Baron, 2013). Donors have a right
to know whether their money is being spent wisely, but the primary purpose of M&E should be
for the organization or project to examine how it is operating and learn how to do it better.
According to Naidoo (2011), efficient project monitoring and evaluation strengthens the
foundation for evidence-based project management decisions. There are many misconceptions and myths about monitoring and evaluation, such as that it is difficult, expensive, requires high-level skills, consumes time and resources, comes only at the end of a project, and is someone else's responsibility.
There is frequently a sense of frustration because M&E activity expectations appear to surpass
resources and skill sets. Monitoring and evaluation (M&E), according to Kusek and Rist (2004), is one of the most powerful tools for influencing the performance of a project, program, or policy. According to Shapiro (2004), monitoring and evaluation allow one to assess the quality and impact of a project in comparison to project plans and work plans. Wysocki and McGary (2003) conclude, "If you don't care how well you're doing or what impact you're having, why bother implementing a project at all?" Monitoring performance is the only way to determine how well you are doing (Wysocki & McGary, 2003). M&E may offer accurate data for policy
initiatives to project managers, beneficiaries, government officials, and members of the civil
society. In the end, the process offers chances to build on prior knowledge, enhance service
delivery, effectively plan and re-allocate resources, and show grassroots results as part of a
strong accountability process. The development community's emphasis on outcomes helps to
explain the growing interest in the field of M&E (IFRC, 2001).
Around the world, people are becoming more and more aware of the need for Monitoring and
Evaluation Systems (M&Es). It makes sense that M&E is a fundamental component of projects
that an organization undertakes because it helps to achieve the project's goal. M&E of projects helps to improve the overall efficiency of project planning, management, and implementation. As a result, various initiatives are launched to improve the social, political, and economic well-being of the nation's citizens (Jm, 2000).
According to Ethiopia Country Program Evaluation [ECPE] (2010), most government
organizations in Ethiopia do not use appropriate monitoring and evaluation systems for their
projects. According to a World Bank report on capacity building in Africa (2006), existing assessments of monitoring and evaluation capacity in Ethiopia reveal gaps in both institutional and individual skills development for monitoring and evaluation. The report demonstrates
that project monitoring and evaluation are not carried out properly. Furthermore, two fundamental points support this argument. The first point, raised by government reports over the last five years, is that almost no project has been completed on time and within budget. The second point is that project management capacity is at its lowest level; the work of project administration and leadership, as well as completion according to cost, schedule, and quality, is an acute problem for projects.
Projects generally fail as a result of poor planning, constant changes in the scope and
consequently deadline and budget, as well as the lack of monitoring and controlling practice
(Mir & Pinnington, 2014). Government organization projects must be completed within the planned budget, scheduled time, and required quality. However, some of the projects experienced project delay and cost overrun. Without effective and efficient monitoring and evaluation, it would be difficult to monitor the performance and accomplishment of the projects against the desired requirements. In Ethiopia, some of the projects have not achieved the desired objectives based on the project plans. Cost overruns and time delays are the main problems of the projects in the institution. Having effective monitoring and evaluation would help the organization to monitor and control the progress and success of the project's goals (Wholey, Hatry, & Newcomer, 2010).
According to those studies, the impact of M&E on project performance is insufficiently
established, leading organizations to view M&E as an additional burden with little or no benefit.
Project success is the question of completing a project against its main design parameters set at the start of the project: on time, within budget, in accordance with the set specifications or standards, and with customer satisfaction (Khan, 2001). The successful execution of
projects and keeping them within estimated cost and prescribed schedules depend on a
methodology that requires sound engineering judgment. According to Kusek and Rist (2004)
M&E is an important activity in projects for the reason that it determines project success. In this
process all stakeholders are regularly informed, in good time and accurately, the actual status of
a project at a given time compared to the original objectives. The success of projects depends on
various factors. One of the key factors for project success is having a sound monitoring and
evaluation system and practices to make informed decisions and document lessons learnt for
future programming, design and implementation. Monitoring and evaluation (M&E) is described
as a process that assists project managers to scale up performance and influence the results.
M&E aims at improving the present and future use of outputs, outcomes, and impact. Kusek and Rist
(2004) assert that monitoring provides management and stakeholders with clear indicators of
advances and attainment of forecasted results using the available resources. Previous studies
argue that poor M&E practices can lead to poor project performance, erroneous decisions, inappropriate feedback on important situations, poor quality of outputs, low productivity, cost and time overruns, and poor management of scope changes during variations and modification works. This implies that having an excellent M&E system is critical.
Existing conditions at the firm show that when the firm draws up plans for its projects, these are based on many ideas and events; however, this does not guarantee that the plan will be implemented without any drawbacks. It is a well-known fact that during the project implementation stage, many unexpected circumstances may arise that were not anticipated during the planning phase. Hence, the need to consistently monitor and evaluate the implementation of project plans, until the end, is undisputable. In addition, similar studies state that the information gathered through M&E practices supports the organization by facilitating the achievement of its objectives and enabling informed decisions.
In conclusion, there are many challenges which organizations face when they are looking to
develop their Monitoring and Evaluation processes and activities. One of the major challenges in
effective monitoring and evaluation processes is in finding the time and resource to do it well. In
addition, technical expertise within an organization can be a significant challenge to developing
effective monitoring and evaluation processes and activities. Another challenge to M&E is
making sure that you have a culture within your organization which supports the process.
Monitoring and evaluation is more than any one individual activity or process; it is about having a team that focuses on learning and adopting a growth mindset. This study therefore seeks to establish, specifically, the effect that M&E activities have on project performance. The study analyzed M&E challenges, factors affecting M&E practice, and employees' attitudes toward M&E activities. Accordingly, the aim of the study is to analyze the effect of evaluation and monitoring on project performance in the case of AMSMI.
1.3 Research Questions

1. What does the practice of project monitoring and evaluation look like in the case of AMSMI?
2. What are the effects of using M&E on the success of projects implemented by AMSMI?
As the general objective is mentioned above, the specific objectives of the study are presented below:
1. To assess the project monitoring and project evaluation practices in the case of AMSMI.
2. To analyze and examine the effect of using M&E on the success of projects implemented by AMSMI.
1.5 Significance of the Study
This study helps to acquire knowledge about monitoring and evaluation systems in general and AMSMI's monitoring and evaluation system in particular. The research shows clearly whether there is a link between effective monitoring and evaluation and the success or failure of project goals, identifies monitoring and evaluation weaknesses where necessary, and offers recommendations that lead to alternative solutions. The research also shows whether there is any relationship between effective monitoring and evaluation and the achievement of development project goals. The study adds to existing knowledge in the area of monitoring and evaluation. AMSMI management will get a copy of the research and can use the findings to improve its monitoring and evaluation system to better achieve its project goals.
The research will also be helpful to other researchers in the monitoring and evaluation field, for whom the findings can serve as secondary data. In addition, the findings help the organization to understand the effectiveness and weaknesses of the M&E system in its projects and to allocate its limited resources in the best possible way to achieve recurring successes.
1.7 Limitations of the Study

Limitations are the study's constraints, or aspects of the study that were not covered for various reasons. The time allotted for this study was limited, and uncooperative respondents were expected to be encountered during the research. Respondents' skepticism of being politicized, the confidentiality of some business strategies, and respondents' carelessness and hesitant behavior could leave flaws in the thesis's completeness; in addition, the study is limited to a single government organization.
1.8 Organization of the Research
The study comprises five main chapters. Chapter one is devoted to the general introduction
covering the background of the study, the statement of the problem, the objectives, significance,
scope, limitations, and how the research was organized. Chapter two is mainly concerned with the review of related literature and gives a detailed explanation of the issue. Chapter three provides the methodology that was applied to achieve the research objectives, including the primary data and method of analysis. Chapter four covers the analysis and presentation of data and discusses the results obtained in accordance with the research questions. Finally, Chapter five deals with the summary of findings, conclusions, and recommendations.
CHAPTER TWO
RELATED LITERATURE REVIEW

2.1 Theoretical Review

2.1.1 Concept of M&E

Beyond its definition, monitoring can be classified into different types, and different conceptualizations of monitoring typologies abound in the literature. UNICEF (2003) distinguishes between two types of monitoring: situation and performance monitoring. Situation monitoring assesses changes in a condition or set of conditions, as well as the absence of change, whereas performance monitoring assesses progress toward specific implementation goals. The guidelines on project or program monitoring and evaluation published by the International Federation of Red Cross and Red Crescent Societies (IFRC) in 2011 identify seven types of monitoring: results monitoring, process or activity monitoring, compliance monitoring, situation (context) monitoring, beneficiary monitoring, financial monitoring, and project monitoring.
Different authors have different definitions of evaluation. The idea is tough to define. According
to Rossi, Lipsey, and Freeman (1999), evaluation is the systematic interrogation of the
effectiveness of social intervention programs that are adapted to their political and project
conditions using social research procedures and processes. Evaluation, according to Dinnito and
Due (1987), is the assessment of a program's effectiveness in meeting its objectives, or the
assessment of the relative effectiveness of two or more programs in meeting common objectives.
Evaluation seeks to assess the effectiveness, efficiency, impact, efficacy, relevance, and sustainability of a development intervention. The above are referred to as evaluation criteria by
the United Nations Children's Fund (UNICEF) (2003). External or independent evaluators are
frequently used to conduct evaluations. More objectivity is possible as a result of this. Evaluation
is usually done at the end of or near the end of a developmental intervention. Evaluation is
carried out for a variety of reasons.
One of the most important is that it allows evaluation results to be consolidated and used to
inform decision-makers about ways to improve the project's operation so that the intended
benefits are realized by the beneficiaries. It also demonstrates the project's unintended
consequences, which were not anticipated.
The theory of change was first published by Carol Weiss in 1995, and it is simply and elegantly
defined as a theory of how and why an initiative works. It focuses not only on determining
whether a project is successful, but also on describing how and what methods are used to achieve
success (Cox, 2009). A project's theory of change is a blueprint for how it should be run. To put
it another way, it acts as a road map for the project's final destination. Monitoring and evaluation
put the road map to the test and refine it, while communications facilitate change and help you
get to your destination. The theory of change also serves as a foundation for claiming that the
intervention is successful (Msila & Setlhako, 2013). According to this theory, if project staff and
evaluators understand what the project is trying to achieve, how, and why, they will be able to
monitor and measure the desired results and compare them to the original theory of change
(Alcock, 2009). However, because project success is much more complicated, this theory falls short (Babbie & Mouton, 2006). It takes more than knowing "what works" to achieve success.
Obtaining sufficient knowledge and understanding to predict – with some degree of confidence –
how a project and set of activities might work in a different situation, or how it needs to be
adjusted to achieve similar or better results, thus influencing project performance, is an important
task for monitoring and evaluation (Jones, 2011).
The realistic evaluation theory, on the other hand, was first published in 1997 by Pawson and
focuses on determining what outcomes are produced by project interventions, how they are
produced, and what is significant about the various conditions in which the interventions occur
(Pawson & Tilley, 2004). The question that realistic evaluation addresses is, "What works for whom in what circumstances and in what ways, and how?" (Pawson & Tilley, 2004). The model
allows the evaluator to determine which aspects of an intervention are effective or ineffective, as
well as which contextual factors are needed to replicate the intervention in different settings
(Cohen, Manion, & Morison, 2008). In order to learn more about how interventions work,
realistic evaluation aims to identify the contextual factors that influence their effectiveness
(Fukuda-Parr, Lopes, & Malik, 2002).
In the mid-1980s, the Australian government pioneered the results-based management (RBM)
theory, which gained traction in the 1990s thanks to the Organization for Economic Cooperation and Development (OECD). This theory focuses on outcomes, as the name suggests.
Previous theories such as Public Sector Management in the 1960s, Program Management by
Activity in the 1970s and 1980s, Management by Objectives (MBO) and Logical Framework
Approach in the mid 1970s, New Public Management (NPM), and Total Quality Management
(TQM) in the 1980s influenced the evolution of the results-based theory, according to the Results
Based Management Group (RBMG).
One of the management strategies is RBM. All ground actors who contribute to the achievement
of specific development outcomes, whether directly or indirectly, ensure that their processes,
products, and output contribute to the achievement of long-term outcomes (Crawford and Bryce, 2013). Roles are clearly defined in RBM. It specifies the end results while also requiring
progress toward long-term goals to be monitored and self-evaluated, as well as performance
tracking (UNDP, 2012). Beginning with the fundamentals of detailed planning, which include
defining the vision, mission, and framework tools based on results, RBM is a continuous
approach whose key aspects all intensify M&E elements. After deciding to run a series of results through a program, execution begins, with monitoring becoming a crucial step in achieving long-term results. In order to support lesson learning and process improvement, RBM is a continuous process that requires regular participant feedback (UNDP, 2012). Main plans are adjusted on a
regular basis based on lessons learned during monitoring and evaluation. Plans that have been
used in the past are modified, and new ones are created based on the current lessons. RBM
emphasizes monitoring as a continuous process, with lessons learned from the process being
discussed on a regular basis. They assist in the execution of projects by informing actions and
decisions. Assessments are conducted to help the project improve over time. Changes to current
projects, as well as planned future projects, are implemented.
2.1.2 Challenges of Establishing M&E Systems

Kusek and Rist (2010) warn that establishing M&E systems in developing countries will be difficult. These difficulties should not be overlooked. It is critical to recognize that implementing
an M&E system is a lengthy process rather than something that can be completed in a single day.
Given that all countries, developed and developing, require good information systems, building
an M&E system should not be viewed as "too complicated, too demanding, or too sophisticated"
(Kusek and Rist 2010) for African countries to undertake.
Africa's challenges in designing M&E systems are similar to those faced by developed countries,
though their magnitudes differ. Demand and ownership of such systems are significant
challenges for African states when it comes to the design of their M and E systems. The lack of
demand for M and E capacity-building, particularly in the public sector, is due to a lack of an
evaluative culture (Schacter 2000). Even in the NGO sector, access to M&E systems and related
activities is determined more by donor requirements than by demand.
The majority of African countries' public sector M&E systems are weak, scant, or non-existent. This is due to a scarcity of powerful champions who actively advocate for the implementation of such systems. In countries like Egypt (Minister of Finance), Zambia (Secretary to the Cabinet), and the Kyrgyz Republic (Minister of Health), Kusek and Rist (2010) elaborate on the presence of high-ranking officials championing the establishment of M&E systems despite associated political risks. A national champion can go a long way toward assisting a country in developing and maintaining M&E systems. Some African countries lack strong and effective
governance and administration institutions. As a result, as suggested by Kusek and Rist (2010), they require a variety of civil service reforms, legal reforms, and regulatory frameworks.
Given this conundrum, it is suggested that at the very least, a traditional implementation-focused
M&E system (Kusek and Rist 2010) be established, capable of producing baseline data that
specifically show where developing countries currently stand with respect to a given policy or
program.
The lack of workforce capacity to develop, support, and sustain M&E systems adds to the challenges that developing countries face when it comes to establishing such systems. This is exacerbated by the emigration of highly qualified individuals to other regions, particularly in Zimbabwe, where over 2 million people are estimated to have emigrated during the "Zimbabwean Crisis" (Murisa 2010). Officials should be trained to collect, monitor, and analyze
data, according to Kusek and Rist (2004).
2.1.3 Factors Affecting M&E Practice

There are numerous factors that will influence the design and implementation of an M&E strategy. Some of these come from within the organization, while others come from outside sources. The influencing factors, when combined, will set limits on what can and cannot be accomplished through M&E. The combination of different influences, according to the International NGO Training and Research Centre (INTRAC), makes it pointless to look for magic bullets or off-
the-shelf M&E systems that will meet all of an organization's M&E needs. On a case-by-case
basis, M&E approaches must be carefully tailored to the needs of the relevant project, program,
or organization. Many organizations, in the end, want to walk a fine line between developing
planning, monitoring, and evaluation approaches that meet their own needs while also attempting
to meet the needs of donors.
2.1.4 Project Performance

A project is an endeavor that is undertaken in order to develop a unique product or service that brings about change and benefit (Anandajayasekeram and Gebremedhin, 2009). Projects' finite nature contrasts sharply with processes, or rather operations, which are ongoing.
A project's success is determined by whether or not it has produced a successful product or
service for the company. Project management success, which entails managing projects to the
approved scope, time limit, budget, and quality, is closely related to this. Customer relationships
are maintained, and project teams are not burnt out (Houston, 2008). As a result, project delivery
performance is measured by whether project requirements and outcomes are met positively and
delivered on time, resulting in increased revenue or lower costs.
Shenhar (2011) has identified four performance dimensions. The first dimension includes, among other things, time efficiency, cost and quality, and production efficiency. Organizations should exercise restraint in order to avoid limiting performance measurement to efficiency measures, as these only measure project performance during execution and do not reflect overall project performance. The effect on the client is another dimension to consider. Finally, how does the performance help the organization change and organize in the future?
2.2 Key Performance Indicators of Project Performance

The areas of an organization or a project that are essential to its success are represented by key performance indicators (KPIs). As a result, management must concentrate on these areas to foster high levels of performance. Success must be viewed in terms of time as well. A certain project may seem to have few immediate benefits, but in the long run, its full effects might be much more significant. There are numerous types of data that can be useful to any organization, even though the scope and conditions of an organization's KPIs may vary from project to project.
2.3 Empirical Literature

Several studies on the use of results-based monitoring and evaluation have been carried out. Nyagah (2015) conducted research on the use of the result-based monitoring and evaluation
system by development organizations, finding that management support, budgetary allocation,
staff capacity, and the availability of baseline data are all important factors that make the use of
result-based monitoring and evaluation by development organizations much easier. Turabi et al.
(2011), in a study on a novel performance monitoring framework for health systems, found that
the primary barrier to development organizations adopting the Result Based Monitoring and
Evaluation system is a lack of political will among the organizations' leadership. Managerial
apathy is a barrier to effective implementation of results-based monitoring and evaluation in
organizations. In his study on Monitoring and Evaluation in the Sector: Meeting Accountability
and Learning Needs, Ellis (2009) acknowledges that results-based monitoring and evaluation
takes a lot of time and money, and that if it isn't done well, inaccurate data and incomplete
reporting are to be expected.
2.3.1 M&E Planning Process and Project Performance

According to a study conducted in Washington by Mackay and the World Bank (2007), planning for monitoring and evaluation was critical in improving project performance on
government projects. The study focused on government projects that are primarily funded by the
World Bank. The goal of the study was to figure out how to get better governments by
monitoring and evaluating projects. The findings of this study, which used descriptive statistics,
were that the majority of respondents indicated that there was a lack of monitoring and
evaluation practices in the various projects in which they were involved. Project management, on
the other hand, provides an organization with control tools that advance its capability of
planning, implementing, and controlling project activities, according to a study by Muhammad et
al (2012) on project performance, with the variables Project Planning, Implementation, and
Controlling Processes in Malaysia College of Computer Sciences and Information, Aljouf
University. The goal of the research was to find ways to improve project performance through
the planning, implementation, and monitoring processes. Variable models were used to
determine how each stage aids in the project management process. To accomplish this goal, data
from various projects and models related to project planning, execution, control, and proposal of
project performance were examined; the findings revealed that project-planning processes
contribute to project performance. In addition, a study by Singh, Chandurkar, and Dutt (2017)
found that monitoring and evaluation was the most important factor in development projects. The
goal of this research was to see how monitoring and evaluation affected development projects.
However, according to the findings of this study, management should provide full support and
fully participate in the monitoring and evaluation process, as this will assist them in making
sound and well-informed decisions.
2.3.2 Technical Expertise

According to Vittal (2008), technology awareness is critical in project monitoring and control
because of the increased challenges in today's technology-enabled projects, particularly where
technological tools are used in project management practices. This research contributed to a
better understanding of the fundamental links between technical expertise and project success.
Understanding the supporting role of expertise within the project team in cultivating improved project performance is the next step. According to the findings of this study, project teams with
the appropriate technical skills are linked to project success. The study discovered that it is
difficult to distinguish between project performance and the use of technology, and that the lack
of such a link reduced project performance. A technical expert in project monitoring and evaluation can play an important role in assisting the project team in managing projects effectively and efficiently. Sunindijo (2015) of the Faculty of Built Environment in Australia
conducted research on project manager multi-layered tasks that had a significant impact on
project performance. Other research has identified four skills for effective project managers:
mental, human, stakeholder, and technical skills, in addition to their 16 other skill competencies.
The goal of the study was to see if project technical skills have an impact on project
performance. A questionnaire assessment method was used to collect data from 107 project team
members. The findings of the study revealed that the technical skills of project team leaders have
an impact on project performance. Visioning, sensitivity intelligence, interactive skill, dynamic
leadership, interpersonal influence, integrity, quality management, and document and agreement
administration are all skill components that contribute to excellent project performance. The
outcome can be used by project managers to assign project managers with the "right" skill profile
or to focus their human resource development on skills that are critical to project success.
2.3.3 Stakeholder Involvement

Njuki et al. (2015) investigated the role of stakeholders and their contribution in project
implementation in a study entitled Participatory Monitoring and Evaluation (PM & E) for
Stakeholder Engagement, Project Impact Evaluation, and Institutional and Community Learning
and Change Enabling Rural Innovation in Africa - CIAT-Africa, Uganda. According to the
study, integrating local indicators with project-level indicators is necessary to improve the
delivery of outputs, outcomes, and results. This gave a more comprehensive picture of the
project's advantages. This process also provides indicators for measuring often difficult-to-
measure outcomes, such as empowerment, from the perspectives of the project's communities or
participants. Negotiating with various stakeholders allows for performance measurement from
various project stakeholders' perspectives. Participation of communities in development projects
that benefit them has been shown to be critical to achieving long-term development. The theory
is that participants will be better able to recognize and understand their economic and social
challenges, as well as have a deeper understanding of how to outline initiatives that will benefit
them (Benjamin, 2012). In an ideal world, stakeholders' consented participation in participation
initiatives would allow those who are interested in, or who are affected by, a decision, to have a
say in the final outcome. Stakeholders play an important role and interact on a variety of levels, from local to global, and their role and collaboration have an impact on the effectiveness of a development intervention. A multi-sectoral approach, which includes
delegating some work to stakeholders, improves learning, strengthens ownership, and promotes
transparency among the participants. This is especially true when considering the purpose of
monitoring and evaluation, as well as how the data is used, analyzed, and influences ongoing
project planning (Wayne, 2010).
The project manager must focus on the vision, encourage team members, encourage
teamwork, and manage risk in order to achieve this recognition.
Management involvement contributes to better project insights and improves the evaluation
process's reliability. Increased reliability ensures that the findings are more widely accepted. A
good results-management procedure aims to involve as many relevant stakeholders as possible in
reasoning in a responsive and creative manner. The project's beneficiaries have a clear idea of
what they want to accomplish and are motivated to organize and produce acceptable results. The
managers set up a monitoring and evaluation process to keep track of progress and use the data
to improve performance (Lipsey, 2011). Budget allocation is heavily influenced by management.
Decision-makers must allocate significant resources to the project. They play an important role
in determining priorities, deadlines, exceptional approvals, and resource allocation. They are
required to commit to the implementation of a monitoring and evaluation system, which allows
them to assess the adequacy of budget allocations, provide budget revision advice, and revise
project work plans. The disadvantage of project management support is that some managers
place little or no emphasis on implementing an active monitoring and evaluation system
(Goyder, 2009).
2.4 Conceptual Framework
Based on the objectives of the research and the review of existing literature on the topic, the study has developed the following framework, which is expected to explain the relationship between M&E and project performance in the case of AMSMI. The following figure depicts the relationship between the independent and dependent variables of the study.
This chapter included a review of the literature that demonstrated, among other things, the evolution of M&E and showed that, because of its ability to track project progress, it has a broader impact on project performance. Under the section on forms of M&E, the chapter demonstrated that M&E serves a variety of purposes and uses a variety of approaches to achieve its goal of increasing project performance. The section on project performance showed that M&E is still a strategy and tool for promoting project management, and that its outcomes must be applied through a management hierarchy. This chapter also provided a review of the empirical literature, which consisted of past studies done on the topic of the study. The chapter lastly presented the study's conceptual framework.
CHAPTER THREE
RESEARCH METHODOLOGY
The purpose of this chapter is to express the procedural structure that was used to achieve the survey's stated goal, as well as to clarify the study hypotheses that are assumed. The study plan, type and basis of information, population description, sample size, nature of sampling, sampling methods, explanation of the information-gathering tools, and techniques of data analysis are the main topics discussed in this chapter.
3.1 Research Design

The research design is a blueprint for achieving research objectives and answering research questions (John A., 2007). A research design is simply the framework of the study. Of the different types of research designs, an explanatory design was employed as the main research design for this study to analyze the effect of M&E on project performance in AMSMI. This study uses explanatory and descriptive research designs to explain, understand, predict, and control the relationships between variables. By taking a cross-section of the population, relevant data were collected at one point in time. This study then describes and critically analyzes the effect of monitoring and evaluation on project performance.
3.2 Research Approach

Combining qualitative and quantitative approaches, according to Mark (2011), has the potential to compensate for the weaknesses of one method with the strengths of the other. Thus, the study used an explanatory approach in which the researcher collects quantitative data, analyzes the results, and then uses the results to draw conclusions and recommendations. The study was quantitative, following survey research, since it provides a quantitative or numeric description of trends, attitudes, or opinions of a population by studying a sample of that population; a cross-sectional design using survey questionnaires was employed for data collection, with the intent of generalizing from the sample to the population.
3.3 Sampling Design

Okiro and Ndungu (2013) define a target population as "the entire group of individuals or objects to which researchers wish to generalize their findings." The target population for this study was all AMSMI employees involved in the Monitoring and Evaluation core process, including team leaders, M&E experts/officers, and project implementers. There are 128 employees who directly participate in the projects. These individuals are expected to have knowledge of the M&E system, either as a result of their job structure and training, or as a result of the responsibilities and accountability they have taken on.
With this in mind, the Yamane (1967) formula was used to calculate the sample size for this study:

n = N / (1 + N(e)²)

where n = the sample size, N = the total population, and e = the level of precision (7%).

Therefore: n = 128 / (1 + 128(0.07)²) = 128 / 1.6272, which was taken as a sample size of 78.
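The calculation above is simple arithmetic and can be reproduced directly; the following minimal Python sketch (an illustration added here, not part of the original analysis) encodes the Yamane formula with the study's values of N = 128 and e = 0.07.

def yamane_sample_size(population: int, precision: float) -> float:
    # Taro Yamane (1967) simplified formula: n = N / (1 + N * e^2)
    return population / (1 + population * precision ** 2)

n = yamane_sample_size(128, 0.07)  # N = 128 employees, e = 0.07 (7% precision)
print(round(n, 2))                 # 78.66, taken as a sample size of 78 in this study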
3.4 Data Sources and Data Collection Tools

Primary data were gathered through the use of questionnaires. A questionnaire is defined as a formalized schedule or form that contains a collection of carefully formulated data collection questions (Wong, 1999). Closed- and open-ended questions were used to collect primary data for the study from the selected samples in order to obtain employee opinions on M&E practices. The data used were based on both primary and secondary sources, which provided enough data for the investigation.
The secondary data were collected from reports of AMSMI for the selected areas, which are documented in the office. The questionnaire was prepared to be inclusive of the constructs measured in the study and has two sections. The first section covers the demographic profile of the participants, such as age, sex, educational level, and other background data. The second section is structured on a Likert scale of 1-5 to show the respondents' degree of agreement or disagreement with the statements about the constructs under study.
In this study, a closed-ended questionnaire prepared in the form of a Likert scale was used to collect the required data on the effect of monitoring and evaluation on project performance from sample respondents involved in the Monitoring and Evaluation core process, including team leaders, M&E experts/officers, and project implementers.
3.6 Reliability and Validity Analysis

Validity refers to the ability of the instrument to measure what it is designed to measure. Validity is the strength of our conclusions, implications, or propositions; it is concerned with whether an instrument is on target in measuring what it is expected to measure. To check the validity of the instrument, the researcher worked with the advisor as the expert and agreed on whether the instrument was valid or not. The questionnaire was also developed based on the literature review and frame of reference to ensure the validity of the results.
3.7 Ethical Considerations

It is imperative that ethical issues are considered during the formulation of the evaluation and data collection plan. Considerations include:
Anonymity: Anonymity is a stricter form of privacy than confidentiality, as the identity
of the participant will remain unknown.
This study considered several ethical issues while conducting the research. The participants in this research had the right to choose whether or not to participate. They were also informed of all aspects of the research task. Respondents were also given the right to privacy regarding the information they provided. The participants' names were never mentioned in any of the data presentation and will remain confidential.
CHAPTER FOUR
DATA PRESENTATION AND ANALYSIS
In this chapter, the data collected through the structured questionnaire are summarized and analyzed in order to realize the ultimate objective of the study. The chapter contains the data presentation, analysis, and discussion of the sample population based on the primary data collected. The demographic facts obtained from the respondents were summarized using frequency distributions. Scale-type questionnaire items were analyzed using descriptive statistics, correlation, and, in particular, regression to test the research concepts and answer the research questions. The data were analyzed using SPSS. Of the 78 questionnaires distributed, 76 were returned to the researcher, a response rate of approximately 97%.
4.1 Reliability Statistics

Table 4.1 below shows that the reliability statistic of the data collected is 0.892, which is considered adequate for the scale variables.

Table 4.1: Reliability Statistics
Cronbach's Alpha        N of Items
.892                    14
A reliability test was conducted to ensure the internal consistency of the research instrument, and Cronbach's alpha was used to measure the internal consistency of the measurement items. For this study, 14 items were used to measure the three variables, and the results show that the items are reliable. A reliability coefficient greater than or equal to 0.60 is considered adequate for a questionnaire. A low coefficient alpha indicates that the sample of items performs poorly in capturing the construct motivating the measure; conversely, a large coefficient alpha implies that the items correlate closely with the true scores.
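The reliability coefficient reported above can be reproduced from the raw item scores with the standard Cronbach's alpha formula. The sketch below is a minimal Python illustration; the study itself used SPSS, and the Likert responses generated here are random placeholders rather than the actual data.

import numpy as np

def cronbach_alpha(item_scores) -> float:
    # Cronbach's alpha for a (respondents x items) matrix of Likert scores:
    # alpha = k/(k-1) * (1 - sum(item variances) / variance of the summed scale)
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
placeholder = rng.integers(1, 6, size=(76, 14))  # 76 respondents, 14 items, values 1-5
print(round(cronbach_alpha(placeholder), 3))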
4.2 Demographic Profile of Respondents
The table above shows the descriptive statistical analysis of the respondents' demography. It displays brief descriptive coefficients that summarize a given data set, which can be a representation of either the entire population or a sample of it. Descriptive statistics are broken down into measures of central tendency and measures of variability (spread). This section presents the descriptive statistics of the data concerned.
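As a concrete illustration of these measures, the short Python sketch below computes the kind of per-item means and standard deviations reported in the tables later in this chapter; the Likert responses used here are random placeholders, not the study's data, which were processed in SPSS.

import numpy as np

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(76, 4))  # placeholder 1-5 answers: 76 respondents, 4 items

item_means = responses.mean(axis=0)           # central tendency: the "Mean" column
item_stds = responses.std(axis=0, ddof=1)     # variability: the "Std. Deviation" column
print(item_means.round(2), item_stds.round(3))
print("Overall mean:", item_means.mean().round(2))  # grand mean across the items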
The table above shows that the proportions regarding gender are not balanced. Male respondents constituted the largest share of the gender composition, 65 (85.5%), while 11 (14.5%) of the respondents were female, as shown in Table 4.2.
Regarding age distribution, respondents in the age range of 40-49 amounted to only 10.5% of the total respondents, while the age group of 30-39 years accounted for 35.5%. Respondents between 20 and 29 years made up the largest share, at 53.9% of the total sample population. The age frequencies are presented in the table above. This implies that most of the respondents were between the ages of 20 and 39, constituting about 89% of the total respondents.
Regarding years in the organization, respondents with 0-2 years in the company numbered 29, constituting 38.2% of the total respondents. Those with 2-5 years in the organization amounted to only 17.1% of the total, or 13 respondents. Those with 6-10 years in the organization were 25 respondents, encompassing 32.9% of the total. Only 9 respondents had been in the organization for more than 10 years, constituting 11.8% of the total respondents. This implies that most of the respondents had worked less than 10 years in the company, constituting about 88.2% of the total respondents.
4.3 Project Monitoring and Evaluating Practices

The following results display the descriptive statistics of the responses to the questionnaire items on the independent variables, namely monitoring and evaluating projects.
4.3.1 Monitoring Projects

The following table presents the questionnaire items regarding monitoring projects. As the means of the results show, the majority of respondents are in agreement with the statements asked. Most respondents agreed that there is an organized process of overseeing and checking the activities undertaken in a project, with a mean of 4.53. On the contrary, most respondents disagreed that the monitoring process suggests continuous corrective actions. The following statements interpret the monitoring-related data collected by the researcher.
There is an organized process of overseeing and checking the activities undertaken in a
project
Respondents gave their responses to the following monitoring-related statements in terms of agreement or disagreement; the means of the responses are shown. The detailed data are presented in the table below.
Monitoring Projects                                                     N      Mean      Std. Deviation
Overall mean                                                                   3.77
4.3.2 Evaluating Projects
Table 4.3 shows the data collected through the questionnaire items on evaluating projects. The means indicate that respondents agreed with most of the statements on project evaluation. While nearly all statements were met with agreement, the statement "The evaluation process has always implementation plans" drew mostly neutral responses. The following statements interpret the evaluation-related data collected by the researcher.
Responses to the statement that the evaluation process has always implementation plans were split between agreement and neutrality.
It is strongly agreed that the evaluation process gauges the success of the project.
Respondents rated their agreement or disagreement with the following evaluation-related statements; the means of the responses are shown, and the detailed data are presented in the table below.
Evaluating Projects                                                     N      Mean      Std. Deviation
The evaluation process gauges the success of the project                76     4.47      .629
The evaluation process has always implementation plans                  76     3.47      1.479
There is effective identification of project phases for evaluation      76     4.00      .983
The project evaluation has a program in meeting the objectives          76     4.20      .847
Overall mean                                                                   4.04
4.4 Organizational Performance of AMSMI
The following table presents the responses to the questionnaire items on the organizational performance of AMSMI. As the means show, respondents agreed with most of the statements. The statement that the project performance of AMSMI is effective received the highest agreement, with a mean of 4.43. In contrast, most respondents disagreed that M&E increased AMSMI's revenue growth. The following statements interpret the organizational-performance-related data collected by the researcher.
The purpose of the M&E unit contributed to the success of the project.
Project monitoring and evaluation helped to manage the scope, cost, and time of ongoing projects.
Respondents rated their agreement or disagreement with the following organizational-performance-related statements; the means of the responses are shown, and the detailed data are presented in the table below.
Table 4.5: Organizational Performance
Organizational Performance                                              N      Mean      Std. Deviation
Overall mean                                                                   3.60
Correlation Analysis
Correlation analysis is a statistical method used to evaluate the strength of the relationship between two quantitative variables. A high correlation means that two or more variables have a strong relationship with each other, while a weak correlation means that the variables are hardly related. In other words, it is the process of studying the strength of that relationship with the available statistical data. This technique is closely connected to linear regression analysis, a statistical approach for modeling the association between a dependent variable (the response) and one or more explanatory or independent variables.
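A minimal sketch of such a test is shown below; the two score vectors are hypothetical composite scores standing in for monitoring and project performance, not the study's data.

import numpy as np
from scipy import stats

# Hypothetical composite scores for eight respondents (placeholders only)
monitoring  = np.array([3.8, 4.2, 3.5, 4.6, 4.0, 3.9, 4.4, 3.7])
performance = np.array([3.6, 4.1, 3.4, 4.5, 3.9, 3.8, 4.3, 3.5])

r, p_value = stats.pearsonr(monitoring, performance)
print(f"Pearson r = {r:.3f}, p = {p_value:.3f}")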
Table 4.6: Correlation
As with the demographic data, the scale-type questionnaire data were entered into the SPSS software to carry out the correlation analysis. Based on the completed questionnaires, the following correlation analysis was made. A Pearson correlation test was conducted to determine the degree of relationship between the independent variables and the dependent variable, i.e., project performance. The results of the correlation between these variables are shown in Table 4.6.
Evaluation has a very strong positive and significant relationship with project performance in the case of AMSMI (r = 0.801).
Monitoring has a lower, moderate positive and significant relationship with project performance in the case of AMSMI (r = 0.642).
Before analyzing the data with linear regression, part of the process involves checking that the data can in fact be analyzed with this method, because linear regression gives valid results only if the data "pass" four required assumptions. The assumptions examined here are multicollinearity, linearity, homoscedasticity, and normality, and they were checked using the SPSS software.
Multi-Collinearity
The researcher checked whether a multicollinearity problem exists before running the regression. Multicollinearity refers to the situation in which the independent (predictor) variables are highly correlated; when independent variables are multicollinear, there is "overlap" or sharing of predictive power. Multicollinearity can be checked using tolerance and the variance inflation factor (VIF), the two collinearity diagnostic factors.
Tolerance is an indicator of how much of the variability of a given independent variable is not explained by the other independent variables in the model and is calculated for each variable. A very small value (less than 0.10) indicates that the multiple correlation with the other variables is very high, suggesting possible multicollinearity. In this study, the tolerance value for all independent variables is greater than 0.1, which implies that there is no multicollinearity problem with respect to tolerance. The variance inflation factor (VIF) measures the influence of correlations among the independent variables on the precision of the regression estimates. The VIF should not exceed 10 and should ideally be close to one. As shown in the table above, the VIF value for all independent variables is less than 10 and close to one, which implies that there is no multicollinearity problem.
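A rough equivalent of these diagnostics outside SPSS is sketched below; the two predictor columns are hypothetical stand-ins for the monitoring and evaluation scores, so this is an illustration under that assumption rather than the study's output.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical predictor scores (placeholders for the study's composites)
X = pd.DataFrame({
    "monitoring": [3.8, 4.2, 3.5, 4.6, 4.0, 3.9, 4.4, 3.7],
    "evaluation": [4.0, 4.5, 3.6, 4.7, 4.1, 3.8, 4.6, 3.9],
})
X_const = sm.add_constant(X)

# VIF for each predictor (the constant is skipped); tolerance is simply 1 / VIF
for i, name in enumerate(X_const.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(X_const.values, i)
    print(f"{name}: VIF = {vif:.2f}, tolerance = {1 / vif:.2f}")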
Linearity
The linearity test aims to determine whether the relationship between the independent variables and the dependent variable is linear; it is a requirement of correlation and regression analysis. In a sound regression model there should be a linear relationship between the independent variables and the dependent variable. If the significance value of the deviation from linearity is greater than 0.05, the relationship between the independent and dependent variables is considered linear. A related assumption states that the mean of the errors should be zero; according to Sekaran (2003), if the regression equation includes a constant term, this assumption is never violated. Since the constant term (β0, or α) was included in the regression equation, as shown in the regression results table, this assumption holds for the model.
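SPSS reports the deviation-from-linearity statistic directly; most open-source libraries do not offer it as a single call, so the sketch below uses the rainbow test from statsmodels as a substitute check of the same linearity assumption, on simulated placeholder data rather than the study's observations.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import linear_rainbow

rng = np.random.default_rng(0)
x = rng.uniform(1, 5, size=76)                      # placeholder predictor scores
y = 0.1 + 0.7 * x + rng.normal(scale=0.3, size=76)  # placeholder response

model = sm.OLS(y, sm.add_constant(x)).fit()
f_stat, p_value = linear_rainbow(model)
print(f"Rainbow test: F = {f_stat:.3f}, p = {p_value:.3f}")  # p > 0.05: no evidence against linearity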
Homoscedasticity
A sequence of random variables is homoscedastic if all the random variables have the same finite variance; this is also known as homogeneity of variance. The complementary notion is called heteroscedasticity. Confusing the two results in unbiased but inefficient point estimates and in biased estimates of standard errors, and it may lead to overestimating the goodness of fit as measured by the Pearson correlation coefficient. Heteroscedasticity is a systematic pattern in the errors in which the variances of the errors are not constant. When the variance of the residuals is constant, the data are described as homoscedastic, which is desirable. To test for the absence of heteroscedasticity, a scatter plot test was used: if the plotted points appear diffused and scattered without a clear pattern, it can be concluded that the model does not have a heteroscedasticity problem. As presented below, the points in the scatter plot are diffused and do not form a clear pattern, which leads to the conclusion that the regression model does not have a heteroscedasticity problem.
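The scatter-plot check described above can be reproduced roughly as follows; the observations are simulated placeholders, so the plot only illustrates what a pattern-free residual cloud looks like.

import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(1, 5, size=(76, 2))                            # placeholder predictors
y = 0.1 + 0.6 * x[:, 0] + 0.7 * x[:, 1] + rng.normal(scale=0.3, size=76)

model = sm.OLS(y, sm.add_constant(x)).fit()
std_resid = model.get_influence().resid_studentized_internal  # standardized residuals

plt.scatter(model.fittedvalues, std_resid)
plt.axhline(0, linestyle="--")
plt.xlabel("Regression standardized predicted value")
plt.ylabel("Regression standardized residual")
plt.title("Scatterplot of residuals (illustrative)")
plt.show()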
Normality
An assessment of the normality of the data is a prerequisite for many statistical tests, as normality is an underlying assumption in parametric testing. There are two main methods of assessing normality: graphically and numerically. Statistical tests have the advantage of making objective judgments of normality. Skewness and kurtosis descriptive statistics are among the numerical tests used to check normality: values of asymmetry and kurtosis between -2 and +2 are considered acceptable evidence of a normal distribution (George & Mallery, 2010). Since the skewness and kurtosis statistics of the data fall within the range of -2 to +2, the assumption of normal distribution is met.
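The numerical check is straightforward to reproduce. As a sketch, on simulated residuals rather than the study's, skewness and kurtosis can be computed and compared against the -2 to +2 rule of thumb as follows.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
residuals = rng.normal(size=76)   # placeholder residuals

skew = stats.skew(residuals)
kurt = stats.kurtosis(residuals)  # excess kurtosis (0 for a perfectly normal distribution)
print(f"skewness = {skew:.3f}, kurtosis = {kurt:.3f}")
print("within -2 to +2:", -2 <= skew <= 2 and -2 <= kurt <= 2)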
The normality check is supplemented by the histogram of standardized residuals shown in Figure 2 below, which displays a roughly normal curve. This indicates that the regression assumption that the error terms are normally distributed is met.
Figure 2: Histogram
Source: SPSS Output, 2022
The model summary in the table below reports the strength of the relationship between the independent variables and the dependent variable. R is the Pearson correlation between the predicted and actual values of the dependent variable; its value of 0.904 is very high. R² is the multiple coefficient of determination, which represents the amount of variance in the dependent variable explained by the combination of the independent variables. Conventionally, an R square above 0.6 is considered acceptable. In this study the R square is 0.885, which shows that the model fits the data well.
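For illustration only, the R and R-squared figures of such a model summary can be read from a fitted ordinary least squares model; the two predictors below are simulated placeholders for the monitoring and evaluation scores, not the study's data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
X = rng.uniform(1, 5, size=(76, 2))   # placeholder predictors
y = 0.1 + 0.64 * X[:, 0] + 0.72 * X[:, 1] + rng.normal(scale=0.3, size=76)

model = sm.OLS(y, sm.add_constant(X)).fit()
print("R-squared:", round(model.rsquared, 3))
print("R (multiple correlation):", round(np.sqrt(model.rsquared), 3))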
Regression coefficients are estimates of the unknown population parameters and describe the
relationship between a predictor variable and the response. In linear regression, coefficients are
the values that multiply the predictor values. The following table shows the regression
coefficients of the study.
From the table, the constant α is 0.092, which can be interpreted as meaning that if all the independent variables were zero, the model would predict only 9.2% of project performance. The β values in the table represent the slopes of the regression line, that is, the change in the outcome associated with a unit change in the predictor. The coefficient for monitoring projects is 0.641; therefore, if the monitoring variable increases by one unit, the model predicts an additional 0.641 units (64.1%) of project performance. The same is true for evaluation (0.715, or 71.5%): an increase of one unit in this variable results in the corresponding increase in project performance. This implies that M&E has a substantial impact on project performance, as the coefficients indicate.
Furthermore, the significance column in the table shows the significance level of each independent variable: wherever the p value is above 0.05, the variable is considered to have an insignificant effect on the dependent variable. The two results below summarize this, and an illustrative sketch of reading such a coefficient table follows them.
Monitoring Projects: with a p value of 0.022, which is below the 0.05 threshold, monitoring projects has a moderate yet significant positive effect on project performance.
Evaluating Projects: with a p value of 0.001, which is less than 0.05, evaluation of projects has a strong positive and significant effect on project performance.
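As a final sketch, the coefficient table (constant, slopes, and p-values) can be read from the same kind of fitted model; the column names and data below are assumptions for illustration, not the study's SPSS output.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
data = pd.DataFrame({
    "monitoring": rng.uniform(1, 5, size=76),
    "evaluation": rng.uniform(1, 5, size=76),
})
data["performance"] = (0.1 + 0.64 * data["monitoring"] + 0.72 * data["evaluation"]
                       + rng.normal(scale=0.3, size=76))

X = sm.add_constant(data[["monitoring", "evaluation"]])
model = sm.OLS(data["performance"], X).fit()

# Constant and slopes (unstandardized B) with their p-values; a predictor is
# treated as significant when its p-value is below 0.05.
print(pd.DataFrame({"B": model.params.round(3), "p-value": model.pvalues.round(3)}))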
CHAPTER FIVE
This chapter briefly presents a summary of the objectives, the research methodology, and the key findings of the model, draws conclusions, and suggests useful recommendations.
5.1 Conclusion
The research was undertaken to analyze the effect of monitoring and evaluation on project performance in the case of AMSMI. To address this general objective, the study assessed the project monitoring and project evaluation practices of AMSMI, the effect of project monitoring on project performance, and the effect of project evaluation on project performance in the case of AMSMI.
The target population for this study was all AMSMI employees involved in the monitoring and evaluation core process, including team leaders, M&E experts/officers, and project implementers; 128 employees directly participate in the projects. Employing a proper sample size determination, the study took a sample of 78. The paper adopted a quantitative research strategy and used a self-administered questionnaire to collect data from the employees. Of the distributed questionnaires, 76 were returned, which is about 97% of the total distributed.
The respondents’ profile shows that the largest number of respondents were male (85.5%), while female respondents constituted 14.5% of the total. Considering age and work experience, respondents aged 40 - 49 accounted for only 10.5% of the total, the 30 - 39 age group for 35.5%, and respondents aged 20 - 29 for the largest share of the sample at 53.9%. This implies that most respondents, about 89% of the total, were between the ages of 20 and 39.
Regarding years in the organization, 29 respondents (38.2%) had worked in the company for 0 - 2 years, 13 respondents (17.1%) for 2 - 5 years, and 25 respondents (32.9%) for 6 - 10 years. Only 9 respondents (11.8%) had been with the organization for more than 10 years. This implies that most respondents, about 88.2% of the total, had worked in the company for less than 10 years.
After the descriptive analysis of the data collected to measure the variables, correlation analysis was conducted to determine the degree of relationship between the independent variables and the dependent variable. The results showed that M&E has a very strong positive and significant relationship with project performance. In particular, evaluation has a very strong positive and significant relationship with project performance in the case of AMSMI, while monitoring has a lower, moderate positive and significant relationship with project performance in the case of AMSMI.
Based on the regression analysis, R, the Pearson correlation between the predicted and actual values of the dependent variable, is 0.904, which is very high, while R² represents the amount of variance in the dependent variable explained by the combination of the independent variables. In this study the R square is 0.885, which is acceptable.
In line with the research questions, investigations were made; the conclusions reached and their implications are presented below.
The results of the study showed that evaluation has a significant effect on project performance. Responses to the statement that the evaluation process has always implementation plans were split between agreement and neutrality. It is strongly agreed that the evaluation process gauges the success of the project. Respondents also agreed that the project evaluation has a program for meeting the objectives and that there is effective identification of project phases for evaluation.
Monitoring also has a positive and significant effect on project performance. There is a major controlling system in the company, consistent monitoring of projects with regard to the project goals, and an organized process of overseeing and checking the activities undertaken in a project. However, the monitoring process does not suggest continuous corrective actions.
5.2 Recommendations
As the research indicated, monitoring projects has a moderate significant effect on project performance; it is therefore recommended that AMSMI make its monitoring practice more feasible in order to achieve better project performance.
Evaluating projects has a very strong effect on project performance in the case of AMSMI; the organization should therefore strengthen its evaluation practice to sustain better project performance.
Policy makers at the organizational and/or industry level should consider developing the M&E practice of the organization to achieve better project performance.
Researchers may consider including other independent variables that are dimensions of M&E, and replicating the study in different organizations and industries, which may yield additional insights.
APPENDIX I: REFERENCES
Babbie, E., & Mouton, J. (2006). The Practice of Social Research. UK: Oxford University Press.
Cheung, S. O., Henry, C. H., & Kevin, K. W. (2014). PPMS: a web-based construction project performance monitoring system. Automation in Construction, 13, 361-376.
Cohen, L., Manion, L., & Morison, K. (2008). Research Methods in Education. London:
Routledge Falmer.
Crawford, P., & Bryce, P. (2013). Project monitoring and evaluation: A method of enhancing the efficiency and effectiveness of aid project implementation. International Journal of Project Management, 21(5), 363-373.
Dinnito, D. M., and Due, R. T. 1987. Social Welfare: Politics and Public Policy. New Jersey:
Prentice Hall, Inc.
Ellis, J. (2009). Monitoring and Evaluation in the third sector; meeting accountability and
learning needs.
Fukuda-Parr, S., Lopes, C., & Malik, K. (2002). Capacity for Development: New Solutions to
Old Problems. London, UK: Earth- scan Publications, Ltd.
Goyder, R. (2005). A retrospective look at our evolving understanding of project success. Project
Management Journal, 36(4), 19 – 31.
Houston, D. (2008). Project management in the international development industry: the project
coordinator's perspective. International Journal of Managing Projects in Business 3(1),
61- 93.
International Federation of Red Cross and Red Crescent Societies. 2011. Project/Programme
Monitoring and Evaluation Guide (M and E) guide. Geneva: International Federation of
Red Cross and Red Crescent Societies
Kusek, J., and Rist, R. C. 2004. Ten Steps to a Results Based Monitoring and Evaluation System.
Washington: The World Bank Group.
Lipsey, M. (2011). Multi-country co-operation around shared waters: Role of Monitoring and
Evaluation. Global Environmental Change, 14(1), 5- 14.
Mackay, K. R., & World Bank. (2007). How to build M & E systems to support better
government. Washington, D.C: World Bank.
Saunders, M., Lewis, P., & Thornhill, A. (2011). Research Methods for Business Students (5th ed.). FT Prentice Hall, 78.
Njuki, J., Kaaria, S., Chetsike, C., & Sanginga (2013). Participatory monitoring and evaluation for stakeholder engagement, and institutional and community learning. Journal of Academic Research in Business and Social Sciences.
Nyagah (2015) Application of the Results Based Monitoring and Evaluation System by
Development Organizations in North Rift Region of Kenya
Pawson, R., & Tilley, N. (2004). Realistic Evaluation. London: SAGE Publications.
Rossi, P. H., Lipsey. M. W. and Freeman, H. E.1999. Evaluation: A Systematic Approach. New
Delhi: SAGE Publications, Inc.
Shenhar, A. J. (2011). An empirical analysis of the relationship between project planning and project success. International Journal of Project Management, 21(20), 89-95.
Turabi, A. E., Hallworth, M. T., & Grant, J. (2011). A novel performance monitoring framework for health systems: experiences of the National Institute for Health Research. England.
UNDP (2012). Handbook on Monitoring and Evaluation for Results. New York: UNDP.
UNICEF. 2003.Programme Policy and Procedures Manual: Programme Operations. New York:
UNICEF.
Valadez, J., & Bamberger, M. (2004). Monitoring and Evaluating Social Programs in Developing Countries: A Handbook for Policy Makers, Managers and Researchers. Washington, DC: The World Bank.
World Bank. (2011). Monitoring and Evaluation Capacity Development. Washington, DC: The World Bank.
Zimmerer, T.W. and Yasin, M.M. (1998), A leadership profile of American project managers,
Project Management Journal, Vol. 29, pp. 31-8.
APPENDIX II: QUESTIONNAIRE
Dear respondent, I am a graduate student at St. Mary’s University and I am presently carrying out research for my final thesis on the topic of the effect of monitoring and evaluation. You have been carefully chosen as someone with the capacity to help gather the information that will contribute to the expected results of this research. All the information provided will be treated with the utmost confidentiality it deserves and will be used strictly for academic research.
Misrak Kassahun
NB: The information collected is for academic purposes only, and it is promised that all the information you provide will be kept strictly confidential.
Gender:                        A. Male          B. Female
Age:                           A. 20-29         B. 30-39         C. 40-49         D. 50 and above
Years in the organization:     A. 1-2 years     B. 2-5 years     C. 6-10 years    D. Above 10 years
PART II. CLOSE-ENDED QUESTIONNAIRE
PROJECT MONITORING 1 2 3 4 5
PROJECT EVALUATION
2. PROJECT PERFORMANCE