St. Mary's University School of Graduate Studies: Department of Project Management
June 2021
DECLARATIONS
I, Bezawit Girma, ID No. SGS/0651/2012A, do hereby declare that this thesis is my
original work and that it has not been submitted, partially or in full, by any other
person for an award of a degree in any other university or institution.
Declared by
Bezawit Girma Hailemichael
Signature__________ Date___________
CERTIFICATE
This is to certify that Bezawit Girma has carried out her research work on the topic
"The Practice of Monitoring and Evaluation in Ethiopian Road: The Case of Federal
Road Projects" in partial fulfillment of the requirements for the award of a Master's
Degree in Project Management.
Approved by:
ACKNOWLEDGMENTS
I give special thanks to God for giving me strength to sail through this course and the
opportunity to study, amidst all the hassles of life. My sincere thanks to my supervisor
Dr. Dereje Teklemariam (Associate Professor) for the guidance and encouragement,
your suggestions and corrections gave my research a course that led to it taking a
professional form. I am truly grateful. I would also like to thank ERA staff members
and in particular, the team leaders and project manager for availing information on
monitoring and evaluation.
I am indebted to my family and friends for their support to complete this work.
Table of Contents
DECLARATIONS................................................................................................................................................... iii
CERTIFICATE ....................................................................................................................................................... iv
ACKNOWLEDGMENTS ........................................................................................................................................ v
ACRONYMS .......................................................................................................................................................... ix
ABSTRACT ............................................................................................................................................................. x
CHAPTER ONE ..................................................................................................................................................... 1
1. INTRODUCTION ........................................................................................................................................ 1
1.1. Background of the Study ...................................................................................................................... 1
1.2. Statement of the Problem ..................................................................................................................... 2
1.3. Research Questions .............................................................................................................................. 3
1.4. Objective of the Study ........................................................................................................................... 4
1.4.1. General Objective ................................................................................................................................ 4
1.4.2. Specific Objective ............................................................................................................................... 4
1.5. Significance of the Study ...................................................................................................................... 4
1.6. Scope of the Study ................................................................................................................................ 5
1.7. Limitation of the Study .......................................................................................................................... 5
1.8. Ethical Consideration ............................................................................................................................ 5
1.9. Organization of the Study ..................................................................................................................... 6
CHAPTER TWO .................................................................................................................................................... 7
2. LITERATURE REVIEW ............................................................................................................................. 7
2.1. Theoretical Review ................................................................................................................ 7
2.1.1. Project .................................................................................................................................................. 7
2.1.2. The Project Life Cycle and the Project Cycle Management ...................................................... 8
2.1.3. Project Management ........................................................................................................................ 10
2.1.4. Monitoring and Evaluation ............................................................................................................. 11
2.1.5. Monitoring and Evaluation in Project Management ...................................................................... 13
2.1.6. Role of Monitoring and Evaluation for Project Success ............................................................... 13
2.2. Empirical Review ................................................................................................................................. 14
2.2.1. Major Challenges in Implementing Monitoring and Evaluation ................................................... 15
CHAPTER THREE ............................................................................................................................................... 19
3. RESEARCH METHODOLOGY ............................................................................................................... 19
3.1. Introduction .......................................................................................................................................... 19
3.2. Research Design .................................................................................................................................. 19
3.3. Data Type and Source ......................................................................................................................... 19
3.3.1. Data Type ........................................................................................................................................... 19
3.3.2. Data Source ....................................................................................................................................... 20
3.4. Target Population and Sample ........................................................................................................... 20
3.4.1. Target Population .............................................................................................................................. 20
List of Tables
Table 4.3 Aspect of a project monitored in the federal road projects …………..………………….27
Table 4.8 Tools and methods used in M&E system in federal road projects …………….……….31
Table 4.9 M&E training for Monitoring and Evaluation staff ……………..……………………...…...32
Table 4.11 Influence of management for the success of M&E systems ……………………..……33
Table 4.12 Strength of Monitoring Team and its Influence to the Performance of M&E …….…34
List of Figures
ACRONYMS
ABSTRACT
Project monitoring and evaluation is one of the key components of effective project
management. It assigns responsibilities, promotes transparency toward stakeholders,
and supports organizational learning by recording lessons learned during project
execution and applying them in subsequent project planning and delivery, or by
sharing experiences with other implementing organizations. The Ethiopian Roads
Authority's Monitoring and Evaluation practice is assessed in this study, as the
majority of its projects experience significant time and cost overruns, as well as
quality issues. The
data were obtained using a questionnaire and key informant interviews from the three
stakeholder groups, as well as from various Authority records. The study design was descriptive, and
the data type was both qualitative and quantitative. The target population consists of
150 people who take part in project planning, implementation, monitoring, and
evaluation. Despite consultants and contractors' claims that the Authority's central
M&E unit does not function as it should, the research revealed that the Authority does
have one. In terms of M&E tools, ERA uses a particular guideline and manual, but it
does not regularly use a specific M&E approach. The M&E results are primarily used
to make decisions. However, there is a communication gap between key staff involved
in the M&E process, as well as members of management and stakeholders, indicating
that the M&E results are not being communicated effectively. Finally, the Authority's
identified challenges include a lack of training and of a skilled M&E unit,
communication gaps among stakeholders, difficulty using M&E tools and methods,
capability gaps, gaps in implementing effective M&E programs supported by ICT,
and employee perceptions of M&E tasks and environments. As a result, in order to improve
Ethiopian Roads Authority's M&E practice, this study recommends that the M&E unit
be properly staffed and equipped with the appropriate knowledge and skill.
Furthermore, the Authority's decentralized M&E roles should be harmonized centrally
within the M&E work unit. Mechanisms for data triangulation, approval, and
validation should be structured to ensure data consistency. Furthermore, a clearly
specified M&E approach and an appropriate M&E outcome communication plan
should be implemented to optimize the efforts made and improve the efficiency of the
M&E framework.
CHAPTER ONE
1. INTRODUCTION
1.1. Background of the Study
Monitoring and evaluation is the technique of obtaining and reviewing data over time
to see whether progress is being made toward the goals and objectives set out. It is an
essential part of the project cycle and of good management (Stockbridge & Smith, 2011).
This is because effective M&E practices have a substantial impact on project
implementation and delivery (Kissi et al., 2019).
Monitoring and evaluation are essential to a project's progress; they promote
strategic decision-making and ensure effective project execution by gathering
and evaluating project data in a structured and routine process. Project monitoring and
evaluation refers to the commitment of project monitoring and evaluation teams
(stakeholders) to achieve project goals, as well as problems including project delays,
cost overruns, non-conformity, and environmental concerns (Otieno, 2000).
Monitoring and evaluation is one of the factors contributing to the success of a project
and a key aspect of project management practice, as identified in several studies.
Among other factors, project success appeared to be improved by monitoring and
evaluating the progress of a project on a regular basis. Monitoring, evaluation, and
control are relevant to the management of project scope, time, cost, quality, human
resources, communication, and risk (Kamau & Mohamed, 2015).
Monitoring and evaluation are two things that will help to ensure these.
Unfortunately, many project owners and administrators are unaware of the value and
effectiveness of these two aspects (Otieno, 2000).
Therefore, this research assesses the practice of the monitoring and evaluation system
of Ethiopian federal road projects, since it contributes to the achievement of project
objectives; the paper also sets out the roles of both monitoring and evaluation in the
successful implementation and delivery of projects and how these can be applied.
While enormous resources are given to execute projects and despite the fact that these
projects play a major role in promoting sustainable development in the community,
there are constraints to monitoring and evaluation, and thus the performance of the
monitoring and evaluation system is not carried out satisfactorily and intervention is
necessary. While very significant for improving results, monitoring and assessment
are often very complex, multidisciplinary and skill-intensive processes. There is also a
need to develop guidelines for the implementation of minimum monitoring and
evaluation criteria for projects that can be used to track progress and effectiveness.
The establishment of a results-based M&E system responds to increasing pressure to
demonstrate results; it is also one of the means by which stakeholders verify the
productive use of client funds and the impact and benefits brought by projects (Jha
et al., 2010).
The success of projects is vital to the growth and sustainability of a company. Most
project managers recognize that project monitoring and evaluation are essential to
achieving project goals and success. By providing corrective measures for deviations
from the planned standard, the project monitoring and evaluation exercise adds value
to the overall efficiency of project planning, management, and execution. Project
managers must conduct more thorough monitoring and evaluation of their programs,
as well as establish processes and guidelines for assessing effects (Kahilu, 2010).
For planning, decision-making, and economic policy management, M&E has been a
key performance management method. According to McMillan and Chavis (2001),
most governments around the world are beginning to implement M&E into their
economic governance structures.
The new dynamics of governance, globalization, aid lending, and citizen expectations
necessitate a consultative, cooperative, and consensus-building approach, which
means that stakeholders' voices and perspectives should be actively sought (Kusek &
Rist, 2004).
In the context of Ethiopia, roads are the most important infrastructure providing
access to rural and urban areas of the country. Roads play a crucial role in reducing
transportation costs and supporting economic growth (ERA, 2019).
Despite all these M&E activities, several of the Authority's projects encounter multiple
challenges, including time and cost overruns as well as quality problems (ERA, 2013).
M&E systems face problems that render them inadequate; they therefore do not work
sufficiently and require intervention. This research looks into the existing M&E
systems within the Ethiopian Roads Authority and among the contractors and
consultants who take part in the execution of federal road projects, with regard to
assessing the practice of the Monitoring and Evaluation system.
• What are the main challenges in the implementation of monitoring and evaluating
federal road construction projects?
• To examine the existing monitoring and evaluation practice of the Ethiopian Roads Authority.
• To analyze the standard benchmark and the current practice of monitoring and
evaluation.
• To assess the main challenges in the implementation of monitoring and evaluating
federal road construction projects.
This research is conducted to understand the level and strength of monitoring and
evaluation practices in Ethiopian federal road projects. Hence, it raises awareness of
the monitoring and evaluation process, its practice, and its role in road projects. It also
seeks to identify the underlying challenges faced during the execution of M&E. Based
on the findings, this study will provide suggestions on areas of the M&E practice that
require improvement. As a result, it will contribute positively to the betterment of road
project management in general and of the M&E practice in particular, which in turn
will increase the likelihood of project success and improve the performance of the
federal road sector in contributing to overall economic development. The study will
also contribute to the body of knowledge, since it can serve as reference material for
researchers, and it will identify areas in the M&E field requiring more research, hence
forming a basis for further study.
The research evaluates federal road project monitoring and evaluation practices by
combining the monitoring and evaluation practices of the clients, contractors, and
consultants engaged in federal road projects. The study addresses road projects
managed by the federal government and concentrates on overall M&E practices.
It does not consider road projects run by regional governments and subsequent
administrative hierarchies; likewise, the M&E practices performed on federal road
projects by other government organs, such as the Ministry of Finance and Economic
Cooperation (MoFEC) and the National Plan Commission (NPC), and by donor
organizations are not included in this study.
The most challenging aspect of this study was gathering data from respondents, which
took a long time because the respondents were too busy to fill out the questionnaires
promptly. Due to time constraints, not all of the intended respondents were able to
complete the questionnaire, and it took time for them to assess the information they
were given. COVID-19 was a further challenge, especially for collecting information
from interviewees.
Despite the above limitations, the findings of this study will shed light on potential
areas of improvement in the monitoring and evaluation practice of Ethiopian road
projects and its contribution to project success.
The researcher first obtained data collection authorization from SMU and presented it
to the Ethiopian Roads Authority and the other concerned bodies (respondents) that
are part of this research. The researcher also obtained full consent from the
participants prior to the study by assuring the confidentiality of their information;
they were not asked to include their names or any other form of identification on the
questionnaires. All communication in relation to the research was conducted with
honesty and transparency, starting from a preliminary visit to the office to verbally
explain the purpose and importance of the study and to anticipate some of the
challenges that come with data collection.
This research is organized into five chapters. The first chapter presents the
introduction, where the background of the study, the statement of the problem, the
research questions, the general and specific research objectives, the significance of
the study, its scope and limitations, and the ethical considerations are described. The
second chapter reviews related literature on monitoring and evaluation; previously
conducted studies are examined in order to explore basic concepts and the main
practical activities of monitoring and evaluation, both at the global and local level.
The third chapter presents the research design and methodology employed, stating
the research approach, design, population, sampling, data sources, analysis methods,
and validity and reliability measures. The fourth chapter presents the analysis of the
collected data. The final chapter, Chapter Five, draws conclusions from the analysis
and offers recommendations.
CHAPTER TWO
2. LITERATURE REVIEW
2.1. Theoretical Literature Review
2.1.1. Project
A project, according to the Project Management Institute, is a temporary endeavor
undertaken to create a unique product, service, or result. A project's main aim,
like most corporate activities, is to fulfill a customer's need. A project's characteristics
help distinguish it from the organization's other endeavors beyond this basic
similarity. The following are the main characteristics of a project:
• An established objective.
• An outlined life span with a beginning and an end.
• Several agencies and experts are generally involved.
• Doing an operation that has never been done before.
• Specific time, cost, and performance requirements.
Third, projects typically require the combined efforts of a number of specialists,
unlike much organizational work that is segmented according to functional specialty. Instead of
operating in separate offices under separate supervisors, project members work
closely together under the supervision of a project manager to finish a project,
whether they are engineers, financial analysts, marketing experts, or quality
management professionals.
The fourth attribute of a project is that it is not routine and has certain elements that
are unique. This is not a matter of either/or but a matter of degree.
Finally, projects are bound by clear time, cost, and performance criteria. Projects
are assessed according to accomplishment, cost, and time consumed. These three
constraints impose a greater level of accountability than is usually found in most
occupations. They also illustrate one of project management's primary roles:
managing the trade-offs between time, cost, and performance while ultimately
satisfying the client (Larson & Gray, 2011).
2.1.2. The Project Life Cycle and the Project Cycle Management
A project's life cycle is the sequence of stages that a project goes through from start to
finish. The names and numbers of the phases are determined by the management and
control needs of the company or organizations involved in the project, as well as the
nature of the project and its implementation area. Functional or partial goals,
intermediate outcomes or deliverables, particular achievements within the overall
scope of work, or financial availability may all be used to break down the phases.
Phases normally have a beginning and an end, as well as a control point. A
methodology can be used to document a life cycle. The specific aspects of the
organization, industry, or technology used may define or shape the project life cycle.
Although every project has a clear beginning and end, the exact deliverables and
activities that occur in between can differ depending on the project. Regardless of the
particular work involved, the life cycle provides the fundamental framework for
project management (PMI, 2013).
PMI (2017) states that many of the monitoring and control processes are ongoing from
the start of the project until it is closed out. The Monitoring and Controlling Process
Group monitors and controls the work being done within each knowledge area, process
group, life cycle phase, and the project as a whole. It also calls for continuous
monitoring and evaluation during the project lifecycle's four phases. Each stage of the
project life cycle generally requires a different level of management effort. Similarly,
each stage of the project life cycle involves a particular degree of monitoring and
evaluation effort.
The significance of life cycles is that they illustrate the logic that regulates a project.
They also assist us in developing our project implementation plans. They assist us in
making decisions such as when to allocate resources to the project, how to measure its
performance, and so on. Consider the simplified model of the project life cycle shown
in Figure 1, which divides the life cycle into four distinct phases: conceptualization,
planning, execution, and termination (Pinto, 2016).
Figure 2.1 Project Life Cycle (Larson & Gray, 2011, p.7)
According to Lewis (2011), tools, people, and systems are all part of project
management. Work breakdown structures, PERT scheduling, earned value analysis,
risk analysis, and scheduling software are among the available tools (to name a
few). Many companies that want to adopt project management place great
emphasis on tools. The use of software is a necessary but not sufficient condition for
project management performance. The procedures or methods are often more critical,
because if you do not use the right management processes, the tools can only help you
meticulously track your failures.
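The earned value analysis that Lewis lists among these tools can be illustrated with a short, hypothetical calculation. The sketch below uses the standard EVM quantities (planned value, earned value, actual cost) and invented figures; it is not drawn from any ERA project data:

```python
# Hedged sketch of earned value analysis (EVA) with hypothetical numbers.
# PV: planned value  (budgeted cost of work scheduled)
# EV: earned value   (budgeted cost of work actually performed)
# AC: actual cost    (what the performed work really cost)

def earned_value_metrics(pv: float, ev: float, ac: float) -> dict:
    """Standard EVM variances and indices for one reporting period."""
    return {
        "cost_variance": ev - ac,      # negative => over budget
        "schedule_variance": ev - pv,  # negative => behind schedule
        "cpi": ev / ac,                # cost performance index (< 1 is bad)
        "spi": ev / pv,                # schedule performance index (< 1 is bad)
    }

# Illustrative checkpoint: 4.0 units of work were planned, 3.0 were
# completed, and 3.5 were spent (all figures hypothetical).
print(earned_value_metrics(pv=4.0, ev=3.0, ac=3.5))
```

A CPI of about 0.86 and an SPI of 0.75 here would signal both a cost overrun and schedule slippage, which is exactly the kind of variance a routine M&E review is meant to surface early.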
Monitoring will typically be conducted as part of the project team's daily and
monthly activities. A monitoring review may be undertaken every two to three months
to promote communication between the different partners, and the outcomes conveyed
to stakeholders via reports. Monitoring may also be described as assessing the degree
to which planned activities (what should happen) diverge from what occurs when the
plan is executed (what actually happened), which assists us in deciding what should
occur next (Mani, 2007).
On the other hand, evaluation is usually performed at the completion of a project cycle.
It is a tool for project stakeholders to evaluate the project's success: Was the project
able to achieve its objectives? What were the results? What factors supported or
hindered this process? The information for the evaluation comes from the monitoring
process, stakeholder focus groups, individual interviews and informal discussions
with stakeholders and those impacted, and commentators who specialize in the areas
involved. Putting this information together will assist in deciding if the project is
meeting its objectives. The evaluation compares the trend of the desired outcomes
with the actual results. Depending on the size and nature of the project, this could be
for the entire project or for a single component. Ongoing evaluation is also an
essential element of project management. The purposes of evaluation are to:
• determine the scope of achievement of certain project goals (this helps fulfill the
accountability requirement);
• provide an opportunity to step back and reflect on the conduct of specific projects
and why they are implemented (this fulfills the requirement to judge the state of progress);
• help a project progress by giving the changes indicated in an evaluation a clear and
concrete direction, to improve project delivery;
• significantly increase what can be gained from project implementation experience;
• make all stakeholders, team members, and anyone else with relevance to the project
aware of the information collected during the evaluation process and at the end of
the project;
• identify the costs and benefits to the beneficiaries and those affected;
• determine whether the project generated adequate returns on investment for key
stakeholders, particularly funding entities and governments;
• provide the project team with feedback on the success of the strategies used, the
unforeseen project factors, and the effectiveness of corrective action adopted during
implementation of the project;
• provide guidance on how to plan future activities, and assist other groups in the
same field or those wishing to improve their project designs, by disseminating
evaluation results and making them available to the public.
Another pitfall is a lack of logic or other design flaws, such as unrealistic project
objectives, which will generate meaningless performance indicators for monitoring
and evaluation. The development process of the M&E system should be used to
sharpen and clarify the design itself; if the design is too rigid, this should be made
explicit when the monitoring and evaluation system is established. The project manager
must determine how much flexibility the plans can be given and incorporate that into
the monitoring and evaluation rationale (Mani, 2007).
Monitoring and evaluation can be of great importance to many kinds of projects.
The main advantage of M&E is that project output is assessed and evaluated at
regular intervals, at convenient milestones, or when exceptional conditions occur,
so that variances from the project management plan can be identified and corrected
(PMI, 2017).
The data obtained through M&E offers a better foundation for decision-making by
project managers. Through M&E we can determine whether the project is running as
originally designed and be alerted to the strengths and drawbacks of the project's
implementation. M&E helps us recognize unforeseen and unintended project
outcomes and results, and thereby identify the internal and external factors affecting
the project's success. M&E documents and explains why project activities are
succeeding or failing and how project planning and execution can be strengthened
in the future (Ravallion, 2008; Robbins, 1996).
Monitoring and evaluation makes it easier to identify the most efficient use of
resources and provides the information needed to better manage strategic planning,
design, and implementation (Khan, 2015).
The study by Callistus and Clinton (2018), "The role of monitoring and evaluation in
construction project management," indicates that M&E is a basic management method
for the execution of construction projects. Despite the various challenges faced in
M&E, including limited financial resources, the poor institutional capacity of M&E
departments or teams, and the weak link between project planning and M&E, when
M&E is carried out adequately, projects are completed efficiently, within cost and
schedule, in compliance with health and safety regulations, and to the satisfaction of
stakeholders.
Kissi et al. (2019) published a report in Ghana titled "Effects of monitoring and
assessment activities on construction project performance criteria." Their findings
indicate that, functionally, M&E serves as a mechanism for tracking the project's
start date, its progress over time, and the prerequisites and goals represented by the
strategies for executing the project for the client. The study concluded that there is a
clear association between the various criteria for project performance and M&E
practices. Furthermore, the findings indicate that a detailed review of the M&E
this, Naidoo (2011) emphasized the importance of improving and encouraging M&E
experts in project settings to assist in the strict implementation of M&E practices that
contribute to project success. While successful project completion remains a critical
goal for clients, there is a need to place a premium on the relationship that exists
between these parameters in project execution processes in order to achieve such
results. As a result, practicing M&E on a daily basis helps to ensure that tasks are
completed on schedule and within budget. Since scope management deals directly
with standard procedures and with midterm and end assessments of M&E practices,
it is to be expected that project scope management shows a significant positive
relationship with M&E. This is in line with the results of Papke-Shields et al. (2010),
who found that project scope management is related to M&E activities and remains a
performance criterion for project execution.
Monitoring and evaluation are processes, so synergies with other activities in the
project cycle, such as planning and budgeting, are needed. The ultimate objective of
PM&E will be adversely affected by a weak link between planning and budgeting,
on the one hand, and project monitoring and evaluation, on the other. Identifying any
shortcomings, biases, and risks to the accuracy of the data and analysis is an essential
factor in planning for data collection and analysis. The data management of the M&E
framework, which limits time and resource waste, must also be carefully planned
(Chaplowe, 2008).
The type of measures used to assess the monitoring and evaluation of projects affects
the successful implementation of project monitoring and evaluation. Asiedu (2009)
indicates that an issue with the different models of monitoring and assessment is
that most of them can only report outcomes after they have already occurred.
According to Beatham, Anumba, Thorpe, and Hedges (2004), a conference of leading
members of a group of design and construction companies noted that the main issues
with the Construction Best Practice Programme (CBPP) key performance indicators
(KPIs) were that they did not offer the opportunity for improvement and that they
were structured as KPIs reported only after results.
The lack of an effective digital PM&E database system and the development of non-
measurable PM&E goals, which can therefore not be used to evaluate project
performance and milestones or to communicate project outcomes, are obstacles to the
successful implementation of project monitoring and evaluation (Chaplowe, 2008).
Finally, the establishment of project monitoring and evaluation goals that are not
compatible with the intended beneficiaries' needs and values, as well as project
activities that do not achieve the desired results economically, are further threats to
project monitoring and evaluation (GNDPC, 2010).
Other factors affecting the effectiveness of M&E systems are discussed below.
• Management Role
It is the duty of project management to make decisions and to plan the project strategically. Management also runs the M&E framework by tracking indicators and creating quarterly project reports and annual strategic reports (IFRC, 2011). The project manager ensures that the project staff carry out their jobs effectively (Guijt, 2002). The project staff play the implementation role: they collect monitoring data and present it in weekly and quarterly reports (IFRC, 2011). For an M&E system to function as a management tool, the project management and M&E staff need to identify and act on project improvements. For M&E to be more effective, it should also be coordinated by a unit within the project management in order to facilitate management's quick use of the M&E information (Guijt, 2002). Project management also decides when project evaluation should be done (Welsh, 2005). If project management fails to pay attention to the operations of M&E, it diminishes its importance to the rest of the project staff. The M&E process thus provides useful information for decision-making at all levels of project management (Gaitano, 2011).
CHAPTER THREE
3. RESEARCH METHODOLOGY
3.1. Introduction
This chapter outlines the various approaches to how the research was conducted. It focuses on the research design, data type and source, target population, sampling techniques, data collection methods and tools, and data analysis used in this study.
According to Kombo and Delno (2009), a descriptive design should be used when data are collected through interviews and questionnaires in a research study that raises questions. The same authors quote Orodho (2003), who defines a descriptive survey as a means of gathering data by interviewing a sample of individuals or administering questionnaires. This is precisely what the questions of this thesis require and therefore directed the design choice, because the design is intended for primary data collection. Secondary data were obtained through desk review, that is, from the literature, the internet, magazines, journals, reports, and textbooks.
Primary data are those that are obtained fresh and for the first time, and therefore have a unique appeal. These are the initial data gathered directly from the respondents. Questionnaires and interviews were used to collect primary data for this research. The data collected through primary sources include M&E practices in federal road construction projects, the influence of monitoring teams, the role of management in conducting M&E, challenges in implementing M&E practices, and recommendations to address the challenges facing M&E practices in road construction projects. The researcher requested that local management introduce the M&E staff for interviews and questionnaires and give them enough time to respond.
From the federal road projects, among the three stakeholders, 5 consulting offices and 5 contractor companies that work on federal road projects with ERA are included. Therefore, 150 staff members who are engaged in the execution of federal road projects and are responsible for the monitoring and evaluation of the projects constitute the target population for this study.
The sample size was determined using the formula

n = N / (1 + N(e)²)

where n is the desired sample size, N is the population size and e is the level of precision.

Therefore,

n = 150 / (1 + 150(0.05)²) = 150 / 1.375 = 109.09 ≈ 109
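As a quick check on the arithmetic above, the sample-size formula can be evaluated in a short script (illustrative only; the function name `sample_size` is not part of the study):

```python
def sample_size(population: int, precision: float = 0.05) -> int:
    """Simplified sample-size formula: n = N / (1 + N * e^2)."""
    n = population / (1 + population * precision ** 2)
    return round(n)

# 150 target staff members at 5% precision
print(sample_size(150))  # 109
```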
In terms of data collection from key officials and supervisors, the interview served as a data collection instrument. Interviews were conducted with 6 selected professionals from the three stakeholders: 2 team leaders and the Director of Planning and Program Management from the client (ERA), 1 project manager from the contractor, and 2 supervisors from the consultant. The information gathered through interviews supplements the information gathered through questionnaires. Interviews are a reliable way of gathering information about respondents' reactions, perspectives, and impressions of specific circumstances (Khan, 2014).
The questionnaire consists of items using Likert scales, with responses ranging from strongly agree to strongly disagree on rating scales of 1-4 and 1-5. The questionnaires employed in this study were close-ended. The interview is the most adaptable and efficient way of gathering knowledge from key respondents. In this study, key informant interviews were conducted with individuals who are in charge of the authority's general project/program planning and M&E operation.
The research instruments were piloted in the ERA, consultant and contractor offices. The same questionnaire was administered one week prior to the current study to 18 respondents; this allowed the researcher to check for unclear issues and ambiguities.

The degree to which a research instrument produces consistent results or data after repeated trials is referred to as its reliability. Reliability refers to measurement consistency; the more accurate an instrument is, the more consistent the measure. A pilot study was conducted by randomly administering questionnaires to selected respondents in the ERA, consultant and contractor offices, a field with characteristics similar to the case under study. The questionnaire was further improved by making the required changes based on the pilot analysis. Following that, Cronbach's Alpha was used to perform a reliability analysis. The alpha coefficient has a value ranging from 0 to 1 and was used to characterize the reliability of variables derived from dichotomous (two possible answers) and/or multi-point formatted questionnaires or scales (i.e., rating scale: 1 = poor, 5 = excellent). The higher the value, the more reliable the generated scale. Creswell (2012) suggests that a reliable research instrument should have a composite Cronbach's Alpha, α, of at least 0.7 for all items under study. Thus, a reliability coefficient, α, of 0.7 was considered acceptable.
The researcher conducted a pilot study to pre-test the validity and reliability of the data gathered using the questionnaire prior to the final assessment. The pilot study allowed the research instrument to be pre-tested. Table 4.2 provides the findings on the reliability of the research instrument.

The questionnaire reliability was determined using Cronbach's Alpha, which measures internal consistency by determining whether several items measure the same construct. The results of the pilot study suggest that the data are reliable, since the reliability values surpass the specified 0.7 threshold (Mugenda & Mugenda, 2003).
CHAPTER FOUR
4. DATA ANALYSIS, PRESENTATION AND
INTERPRETATION
The study was carried out with a sample of 109 respondents from the Ethiopian Road Authority (ERA), five contractors and five consultants among the federal road project staff, to whom questionnaires were administered. Of the 109 instruments, 97 questionnaires were completed and six interviews were held, which accounts for 94.5% of the replies. The questionnaire was administered directly by the researcher and therefore, as seen in Table 4.1, the response rate is strong (94.5%).
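The reported figure can be reproduced with a one-line calculation (illustrative; `response_rate` is not a name from the study):

```python
def response_rate(returned: int, distributed: int) -> float:
    """Percentage of the sampled respondents who actually replied."""
    return round(100 * returned / distributed, 1)

# 97 completed questionnaires plus 6 interviews, out of a sample of 109
print(response_rate(97 + 6, 109))  # 94.5
```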
As we can see from Table 4.3, among the 97 responses, 42.3% were from the client, 26.8% were from the contractor group, and 30.9% were from consultants. Only 2 (2.1%) of respondents have a PhD, 33 (34%) have a second degree (Master's), and 62 (63.9%) have a first degree.
Table 4.2 Demographic Information of Respondents

Description                          Frequency   Percent
Type of organization   Client       41          42.3
                       Contractor   26          26.8
                       Consultant   30          30.9
Level of education     PhD          2           2.1
                       MA/MSc       33          34.0
                       BA/BSc       62          63.9
Involvement in M&E     Yes          94          96.9
                       No           3           3.1
The research sought to find out how the respondents were distributed with regard to their participation in M&E of road projects. As presented in Table 4.3, the majority of the respondents, 96.9% (94), stated that they had worked in an area where they were exposed to performing monitoring and evaluation of the projects, while a minority, 3.1%, had not conducted monitoring and evaluation of projects in the federal road projects. These results demonstrate that individuals who worked in federal road projects had broad expertise in project M&E. Among the respondents, 30.9% (30 respondents) have less than 5 years of experience, 53% (52) have 6 to 10 years, 10.3% (10) have 11 to 15 years, and 5.2% (5) have 16+ years of experience in road construction, as shown in Table 4.3.
The costing of the project should be explicit and sufficient for monitoring and evaluating activities. The monitoring and evaluation budget should be clearly delineated within the total project costs in order to give the monitoring and evaluation function adequate weight in the management of the project (Gyorkos, 2003; McCoy, 2005). Pretorius et al. (2012) identified that project management organizations with mature time management practices deliver more successful projects than those with less mature time management practices. The total time of the project is determined as the number of days/weeks from the beginning of the project to its practical completion; project execution speed is the relative time (Chan, 2001). From the secondary data, the researcher observed that another factor that is monitored is environmental issues.
On the other hand, consultants say that while monitoring and evaluation is a main concern for ERA as a client and so has a great role in federal road projects, they do not think it is practiced to the fullest, which is why there are always claims of delays and cost overruns in projects. The contractor interviewees' point of view is the same; the project manager adds that the perception of the federal projects is not based on a performance evaluation method but instead relies on the functionality of the functional department managed by the authority (M&E lies under the same functional department and does not follow the project characteristics). In other words, project goals are set under the contract, as explained by the interviewee. The contract clearly shows the start and end time of the project, and thus also sets out continuous objectives in the work program. However, ERA's annual objectives are defined by top management and depend on the budget available for the year, which can contradict the contract and make monitoring and evaluation difficult. When it comes to a specialized unit for M&E, both stakeholders agree that there is a counterpart unit within ERA that frequently controls the consultants' and contractors' performance on the project.
Regarding the practice, 90.43% of respondents state that continuous monitoring is used, while 63.66% agreed that mid-term evaluation is common. Project completion and impact evaluations are implemented relatively less, with responses of 59.77% and 44.85% respectively, as stated in Table 4.4.
4.3.3. Tools and Techniques Used to Collect, Manage and Analyze Data for M&E Purposes
Table 4.5 Tools and Techniques Used in ERA to Collect, Manage and Analyze Data for M&E Purposes

                       Client         Contractor     Consultant     Average
                       N    Percent   N    Percent   N    Percent   N    Percent
Observation            41   100.00    25   96.15     30   100.00    32   98.72
Community discussion   27   65.85     13   50.00     20   66.67     20   60.84
Document review        25   60.98     22   84.62     29   96.67     25   80.75
The respondents were requested to choose the tools and techniques used to collect, manage and analyze data for M&E purposes. The results show that observation, followed by document review, are the most commonly used data collection and management techniques during M&E of federal road projects, with 98.72% and 80.75% respectively. On the other hand, as depicted in Table 4.5 above, case studies and questionnaires are seldom used in ERA as tools and techniques for data collection, management and analysis during the M&E process.
Most respondents agree that there are guiding principles for the M&E team (mean 2.78); they also believe there is a culture of documentation and information sharing (2.28) and that the lessons learned from M&E are properly incorporated in M&E activities (2.22). "Stakeholders are adequately involved at all levels in M&E activities" was rated 2.09, and "there is a strong culture of institutional learning and knowledge sharing" scored lowest (1.90).
                    Client         Contractor     Consultant     Average
                    N    Percent   N    Percent   N    Percent   N    Percent
Primavera           1    2.44      3    11.54     2    6.67      2    6.88
Excel sheet         25   60.98     19   73.08     25   83.33     23   72.46
Others              3    7.32      -    -         -    -         1    2.44
Logical Framework   7    17.07     6    23.08     8    26.67     7    22.27
From the findings, 75.8% confirmed that performance-indicator approaches were widely used in M&E systems, while 55.9% of respondents indicated that they used the policy and manual. Only 22.27% reported use of the logical framework.
4.3.7. ERA Provide M&E Training for Monitoring and Evaluation Staff
Foresti (2007) notes that this means not merely formal training but a whole series of learning approaches: from secondments to research institutes and opportunities to work elsewhere to improve impact assessments, to time spent by project personnel on evaluation and, similarly, time spent by evaluators in the field. Evaluators also argue that this is not simply training; the assessment should also be independent and relevant.
This study attempted to determine whether ERA provides M&E training to its employees. The results in Table 4.9 show that 50.8% of respondents reported that they received M&E training, 32.8% stated that ERA did not provide them with any M&E training, and the remaining 16.5% stated that while the authority does provide training, it is not on a regular basis.
From the results in Table 4.10, it is evident that 95.9% of respondents said that management influences M&E in the implementation process. 89.7% agreed that management influenced M&E systems during the planning process, while 83.4% claimed management influenced M&E through its modification. Those who thought that management affected M&E processes through design changes in objectives and through the allocation of resources accounted for 80.2% and 80.1% respectively.
4.4.2. The Extent to Which Management Affects the Success of M&E Systems
Table 4.11 Influence of management for the success of M&E systems
The findings in Table 4.11 show that management influence cannot be ignored when assessing M&E systems: 56.1% of the respondents endorsed a large extent and 47.2% a very large extent to which management influenced M&E systems in ERA's projects. A smaller number, 27.1%, thought that management's impact was minimal, while 2.1% of respondents were confident that management did not affect M&E structures. These results are in accordance with the UNDP Handbook on Planning, Monitoring and Evaluating for Development Results, which confirms the influence of management on M&E systems in the development programs of developing countries (UNDP, 2009).
A sign of strong governance is supporting and strengthening the M&E team. Supporting and strengthening the M&E team will also play an important role in ensuring that the team offers value to the organization (Naidoo, 2011). Usually, a motivated team performs highly (Zaccaro et al., 2002). The stronger a team is, the greater the organization's performance and added value. This also applies to project management monitoring and evaluation teams. Interestingly, Pretorius et al. (2012) note that there was no meaningful correlation in project management companies between the maturity of their quality management processes and the success of their projects. Nevertheless, in order to achieve project success, the researchers believe that managers should in effect strive to attain quality in all elements, including the quality of the monitoring team.
Table 4.12 Strength of Monitoring Team and its Influence to the Performance of M&E
The majority of the respondents agreed to a very high extent (mean 3.57) with the various aspects used in assessing the strength of the monitoring team, which is perceived to be one of the factors influencing project success. Respondents also agreed to a very high extent (3.47) that providing support to and strengthening the M&E team is a sign of good governance that influences the performance of monitoring and evaluation of federal road projects. "Providing support and strengthening of the M&E team will also play a key role in ensuring that the M&E team adds value to federal road project performance" and "A motivated team usually achieves high performance" both scored to a high extent, with the same mean of 3.41. "Managers should indeed aspire to achieve quality in all aspects and processes, including a quality monitoring team, so as to achieve project success" scored 3.29, a low extent.
Challenges of implementing M&E          Client       Contractor   Consultant   Average
                                        Mean  SD     Mean  SD     Mean  SD     Mean  SD
Limited awareness on the importance
and implication of M&E                  4.07  .565   4.65  .485   4.40  .814   4.37  .621
Delay in providing information (data)
from different units                    4.12  .510   4.46  .508   4.27  .521   4.28  .513
Lack of coordination and interface
among work units                        4.12  .748   4.27  .667   4.13  .571   4.17  .662
Low quality data from reporting units   4.05  .631   4.42  .578   4.30  .837   4.26  .682
The findings indicate that the knowledge and experience gap of the available experts, with a mean score of 4.41, is the most frequently cited challenge of implementing M&E in ERA. This is followed by limited awareness of the importance and implications of M&E and delay in providing information (data) from different units, with mean scores of 4.37 and 4.28 respectively. Low-quality data from reporting units is another significant challenge, with a mean score of 4.26, and lack of coordination and interface among work units has a mean score of 4.17. Insufficient support from management at different levels, failure to select the correct performance indicators, the tools and methods used, and resource and/or logistics problems are regarded as relatively less significant challenges hampering proper implementation of M&E in ERA.
As per the interviews, the respondents mentioned different forms of challenges, including data/information-related problems, interface/coordination problems, and capacity gaps. There are also gaps in implementing efficient, ICT-supported M&E systems. Due to a lack of skill, the data collected from the project may not be compiled well for the decision-making process, and the recording system has its own problems. It is often recognized that there is a communication gap between the people involved in M&E practice and stakeholders as well, which effectively demonstrates the gap in the use of this practice. The limited use of M&E findings for organizational growth is also recognized as a gap.
M&E findings are used when there is a need for a decision or action to be taken, since they clearly show which areas have issues, such as the use of manpower, equipment, or materials, so that critical decisions can be made. The interviewees believe that monitoring and evaluation is the backbone of project success: it helps to track the schedule and budget of the project and to ensure the project is completed with the quality and scope specified in the project contract.
The study conducted by Abinet Ergando (2018) on the Assessment of Monitoring and Evaluation Practice of Federal Road Projects: The Case of Ethiopian Road Authority shows that the most important purposes of M&E are project improvement, that is, following up on the progress of the project, and accountability.
CHAPTER FIVE
5. SUMMARY, CONCLUSION AND RECOMMENDATION
It should be noted that these results are relevant to this study. They may affirm or contradict the results of related studies in the current literature. The results of this study must be generalized with caution, as different organizations can produce different results. The findings may therefore represent only the studied organization.
The results of the analysis are described as follows according to the objectives set.
5.2. Conclusion
• The vast majority of respondents said that document review, observation and group discussion are the three most important approaches used with regard to the tools and techniques for collecting, managing and analyzing data. M&E is still a labor-intensive exercise; more technologically enabled approaches such as GPS, ArcGIS, Google Earth, video, audio or mobile data collection are rarely employed.
• Although ERA offers training on M&E, almost one third of the respondents do not think training is provided, and some do not feel it is sufficient to conduct M&E effectively.
5.3. Recommendation
• The unit responsible for M&E should have the requisite expertise and skills in place
to improve the operational capacities of the Authority's M&E framework. The M&E
practice that is spread through various work units should be centrally aligned within
the M&E unit.
• Sufficient, precise, reliable, correct and suitable data should be collected for M&E. To ensure this data quality, the data should not come from a single source; mechanisms must be designed for various data sources, such as visits to the project site, data from different project stakeholders, and so on.
• ERA Ms should be utilized appropriately, because this software is well organized and a great way to evaluate the performance of contractors and consultants; training on it should be given on a regular basis.
• Blindly planning a project without any details and key figures results in bankruptcy. Therefore, before moving into the details, managers should examine the records of previous projects and further use those data to help track their project progress with minimal resources.
• The interviewees from the consultants recommend: organized and archived information for the federal roads in all districts; knowledge and experience sharing among all experts (contractors, consultants and ERA); and proper communication between the site officers and the head office coordinators. M&E team training must be taken seriously, and team performance should be evaluated on a more regular basis.
REFERENCES
Alhyari, S., Alazab, M., Venkatraman, S., Alazab, M. and Alazab, A. (2013) "Performance
Al-Tmeemy, S. M. H. M., Abdul-Rahman, H., & Harun, Z. (2011). Future criteria for success
337-348
Andersen, E. S., Birchall, D., Jessen, A. S. and Money, A. H. (2006). Exploring Project
Technology.
Beatham, S., Anumba, C., Thorpe, T., Hedges, I. (2004) KPIs: a critical appraisal of their use
10.1108/14635770041062030
Bordens, K.S., & Abbot, B.B. (2011).Research design and methods: a process
Callistus, T. and Clinton, A. (2018). The role of monitoring and evaluation in construction
10.1007/978-3-319-73888-8_89
https://sites.google.com/a/cpwf.info/m-e-guide/background/theory-of-change
ERA. (2013). Road sector development program: Fifteen years performance assessment. Addis
ERA. (2015). The road sector development program (RSDP). Addis Ababa: Ethiopian Road
Authority.
ERA. (2019). Road sector development program: Twenty one years performance assessment.
Gaitano, S. (2011). The Design of M&E Systems: A Case of East Africa Dairy Development
2011.
Gorgens, M. & Kusek, J. Z. (2009). Making Monitoring and Evaluation Systems Work.
World Bank.
http://planipolis.iiep.unesco.org/upload/Ghana/Ghana_GSGDA_2010_2013_VolI.pdf
Guijt, I., Randwijk and Woodhill, J. (2002). A Guide for project M&E: Managing for Impact
International Fund for Agricultural Development. (2002). A guide for M&E. Retrieved from
http://www.ifad.org/evaluation/guide/toc.htm
Kahilu, D. (2010). Monitoring and Evaluation Report of the Impact of Information and
Kamau, C. G. & Mohamed, H.B. (2015). Efficacy of Monitoring and Evaluation Function in
Khan, N. A. (2015). Monitoring and evaluation (M&E) manual on construction works: Road,
Khang, D. B., & Moe, T. L. (2008). Success criteria and factors for international development
projects: A life cycle based framework. Project Management Journal, 39(1), 72-84.
https://doi.org/10.5539/ijbm.v9n11p224
Kissi, E., Agyekum,K., Baiden. B. K., Tannor, R. A., Asamoah, G.E., & Andam, E. T (2019).
43
20th Century Literature Review. Journal of Management and Sustainability, 1(1), 64-
81.
Larson, E. W., & Gray, C. F. (2011). Project management: The managerial process. New
Lewis, J. P. (2011). Project planning, scheduling & control: The ultimate hands on guide to
bringing projects in on time and on budget (5th ed.). New York: McGraw Hill.
Secretariat.
McMillan, D. & Chavis, W. (2001). Sense of community: A definition and theory. Journal
Müller, R., & Turner, R. (2007). The influence of project managers on project success criteria
and project success by type of project. European Management Journal, 25(4), 298-
309.
Nabris, K. (2002). Monitoring and Evaluation. Palestinian Academic Society for the Study of
Naidoo, I. A. (2011). The role of monitoring and evaluation in promoting good governance in
http://www.interaction.org/sites/default/files/Linking%20Monitoring%20and%20Eval
uation%20to%20Impact%20Evaluation.pdf
Pequegnat, W., Stover, E., & Boyce, C. A. (1995). How to write a successful research grant
application. A guide for social and behavioral scientists. New York, USA: Springer
science
Pinto, J. K., & Slevin, D. P. (1988). 20 Critical Success Factors in Effective Project
Columbus: Pearson.
Pretorius, S., Steyn, H., & Jordaan, J. C. (2012). Project management maturity and project
Tashakkori, A., & Teddlie, C. (2009). Foundations of mixed methods research: Integrating
quantitative and qualitative approaches in the social and behavioral science. United
Stem, C., Margoluis, R., Salafsky, N., & Brown, M. (2005). Monitoring and evaluation in
309.
Stockbridge, M., & Smith, L. (eds.). (2011). Project planning and management. London:
University of London
UNDP. (2009). Handbook on planning, monitoring and evaluating for development results.
UNDP, USA.
https://www.endvawnow.org/en/articles/331-why-is-monitoring-and-evaluation-
important.html
Welsh, N., Schans, M. and Dethrasaving, C. (2005). Monitoring and Evaluation Systems
Woodhill, J. (2000). Planning, monitoring and evaluating programs and projects: Introduction
Zaccaro, S. J., Rittman, A. L., & Marks, M. A. (2002). Team leadership. The Leadership
Appendix A QUESTIONNAIRE
Direction
• No need of writing your name;
• Put “X” mark in the appropriate space the choice you select whenever
necessary on the multiple choice sections;
• If you can't make an appropriate choice among the alternatives given,
write your reply in the space provided for the "Other, specify" area;
• Consider M&E = Monitoring and Evaluation
1. Type of organization
2. Level of education
A. PhD ( ) B. MA/MSc ( ) C. BA/BSc ( ) D. Diploma ( )
F. Others Specify____________________
4. Work Experience
D. Above 15 years ( )
5. Have you been involved in conducting monitoring and evaluation of federal road
projects?
A. Yes ( ) B. No ( )
3. What tools and techniques does ERA use to collect, manage and analyze
data for M&E purposes? (multiple responses allowed)
5. There is a culture of documentation and information sharing
5. What are some of the tools and methods used in Monitoring and evaluation
systems at Ethiopian Federal Road Projects?
A. Performance indicators ( )
B. Logical Framework ( )
C. Policy and Manual ( )
D. Others, Specify ____________________________
6. Do you think there is any difficulty experienced in using the M&E Tools and
Methods used in Ethiopian Federal Road Projects?
A. Yes ( ) B. No ( )
7. If yes, what do you think is contributing to the difficulty?
50
8. Does the organization (ERA) provide M&E training for Monitoring and
Evaluation staff?
Using the 4-point scale, tick accordingly to show the extent to which the strength of the
monitoring team influences the success of M&E implementation in Ethiopian federal
road projects.
Using the 5-point scale, tick accordingly to illustrate the level of challenge in M&E
implementation in Ethiopian federal road projects: 1 strongly disagree, 2 disagree, 3
neutral, 4 agree and 5 strongly agree.
Level of Challenge

No   Possible Challenges                                          Strongly   Agree   Neutral   Disagree   Strongly
                                                                  Agree                                   Disagree
1    Knowledge and Experience gap of the available experts
2    The tools and Methods used
3    Insufficient support of Management at different levels
4    Limited awareness on the importance and implication of M&E
5    Delay in providing information (data) from different units
6    Lack of coordination and interface among work units
7    Low quality data from reporting units
8    Resource and/or logistic problem
9    Failure in selecting the correct performance indicator
1. How is M&E practice perceived in the Ethiopian Federal Road projects? Could
you please describe it?
3. Are M&E findings used in case there is a need for decision making or action to be
taken?
4. What is the contribution of M&E for the success of federal road projects?
5. What are the challenges of designing M& E system at project planning stage?
7. Do the results show the failings as well as the achievements of the project?
8. Is the information emerging from M&E fed back into ongoing project design
and future planning? In what way?
9. Are M&E findings well documented and archived as "lessons learnt" for future
use in other implemented projects?