Chapter 3 Research Design


CHAPTER III

RESEARCH DESIGN

1
Learning Objectives

At the end of this chapter, students will be able to:

• Understand the process of research and the decision-making alternatives in each stage
• Explain the difference between a research project, a research program, and a research design
• Understand the different research design classifications

2
Learning Objectives (continued)

• Identify and apply different concepts relating to research
• Prepare research proposals
• Recognize ethical issues in research by identifying the responsibilities of the respondent, the researcher, and the client

3
3.1. Decision Making in Planning Research
Strategies and Tactics
• Decision making is the process of resolving a
problem or choosing among alternative
opportunities.
• The key to decision making is to recognize the
nature of the problem/opportunity, to identify
how much information is available, and to
recognize what information is needed.
• Every business problem or decision-making
situation can be classified on a continuum
ranging from complete certainty to absolute
ambiguity.
4
continued
Certainty
• The decision maker knows the exact nature of the
business problem or opportunity.
Uncertainty
• Uncertainty means that decision makers grasp the
general nature of the objectives they wish to
achieve, but the information about alternatives is
incomplete.
• Predictions about the forces that will shape future
events are educated guesses.
• Effective decision makers recognize potential value
in spending additional time gathering information to
clarify the nature of the decision.
5
continued

Ambiguity
• Ambiguity means that the nature of the
problem to be solved is unclear.

• The objectives are vague and the alternatives


are difficult to define.

• The most difficult decision situation.


• Under conditions of uncertainty or ambiguity,
business research becomes more attractive to
the decision maker.

6
Important Concepts

 Dependent and independent variables


 Extraneous variable
 Confounded relationship
 Experimental and non-experimental hypothesis-
testing research
 Experimental and control groups
 Causation and Correlation
 Validity
 Representativeness

7
Dependent and Independent variables:

• A variable is a concept that can take on different quantitative values.
• The dependent variable is the effect or outcome, i.e., the variable being affected by the independent variable.
• The independent variable is the cause, i.e., the variable that brings about a change in the dependent variable.
• 'Continuous variables' can take on any quantitative value, including fractional (decimal) values.
• 'Discrete variables' can only take on integer values.

8
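As a minimal illustration (not part of the original slides; the variable names and values are invented), the sketch below pairs a continuous independent variable with a discrete dependent variable:

```python
# Hypothetical illustration of variable types in a simple study design.

# Continuous independent variable: can take fractional values (e.g., birr spent on ads)
ad_spend = [10.5, 20.0, 35.75, 50.25]   # manipulated or measured by the researcher

# Discrete dependent variable: only whole-number counts make sense (units sold)
units_sold = [12, 18, 27, 40]           # the outcome expected to depend on ad_spend

for spend, sold in zip(ad_spend, units_sold):
    print(f"spend={spend:.2f} (continuous) -> sold={sold} (discrete)")
```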
Extraneous variable:
Independent variables that are not related to the purpose of the study but may affect the dependent variable are termed extraneous variables.
Control:
• 'Control' is used when we design the study to minimize the effects of extraneous independent variables
• It is used to minimize the influence or effect of extraneous variable(s)
• The term 'control' also refers to restraining experimental conditions.
Confounded relationship:
• When the dependent variable is not free from the influence of extraneous variable(s), the relationship between the dependent and independent variables is said to be confounded by the extraneous variable(s).
9
Experimental and non-experimental hypothesis-testing research:
• When the purpose of research is to test a research hypothesis, it is termed hypothesis-testing research.
• It can be of the experimental design or of the non-experimental design.
• Research in which the independent variable is manipulated is termed 'experimental hypothesis-testing research', and
• research in which the independent variable is not manipulated is called 'non-experimental hypothesis-testing research'.

10
Experimental and control groups:

• In experimental hypothesis-testing research, the group exposed to the usual conditions is termed the 'control group'.
• The group exposed to some novel or special condition is termed the 'experimental group'.

Representativeness:
• refers to whether the characteristics of a properly drawn sample represent the characteristics of the population from which the sample is selected and about which a conclusion is to be made.

11
Validity
• refers to whether the data collected reflect the true picture of what is being studied.
Reliability:
• refers to the dependability of the research findings, i.e., whether they can be repeated either by the researcher or by other researchers using similar research methods or procedures.

12
Causation and Correlation

• Causation refers to a cause-and-effect relationship between the independent and the dependent variables.

• Causation involves the direction and/or the magnitude of the change that the independent variable causes in the dependent variable.

• Correlation refers to a regular relationship between two or more variables.

13
Causation and Correlation
• Correlation does not necessarily show a cause-and-effect relationship between variables or occurrences.

• Experimentation involves control and experimental groups.

• Experimental groups are those to which the intervening (treatment) variable is applied.
• The control groups are held as they are, without applying the intervening variable.
14
Research design: meaning
The research design constitutes the blueprint for the collection, measurement, and analysis of data. It aids the scientist in the allocation of his limited resources by posing crucial choices: Is the blueprint to include experiments, interviews, observation, the analysis of records, simulation, or some combination of these? Are the methods of data collection and the research situation to be highly structured? Is an intensive study of a small sample more effective than a less intensive study of a large sample? Should the analysis be primarily quantitative or qualitative?

15
Research design: meaning continued
 Research design is the plan and structure of investigation so conceived as to obtain answers to research questions. The plan is the overall scheme or program of the research. It includes an outline of what the investigator will do, from writing hypotheses and their operational implications to the final analysis of data. A structure is the framework, organization, or configuration of the relations among the variables of a study. A research design expresses both the structure of the research problem and the plan of investigation used to obtain empirical evidence on relations of the problem.

16
Basic points in the definitions

 It is a plan for selecting the sources and types of information relevant to the research question.
 It is a framework for specifying the relationships among the study's variables.
 It is a blueprint for outlining all of the procedures from the hypotheses to the analysis of data.
 It provides answers to such questions as:
 What techniques will be used to gather data?
 What kind of sampling will be used?
 How will time and cost constraints be dealt with?

17
[Figure: Stages of the business research process. Problem discovery and definition (selection of exploratory research technique: secondary/historical data, experience survey, pilot study, case study; problem definition, i.e., statement of research objectives) -> research design (selection of basic research method: experiment - laboratory or field; survey - interview or questionnaire; observation; secondary data study) -> sampling (probability or nonprobability) -> data gathering (fieldwork) -> data processing and analysis (editing and coding; data analysis) -> conclusions and report (interpretation of findings; report).]
18
CLASSIFICATION OF RESEARCH DESIGNS
 The degree to which the research problem has been crystallized (exploratory or formal).
 The method of data collection (observational or survey).
 The power of the researcher to affect the variables under study (experimental or ex post facto).
 The purpose of the study (descriptive or causal).
 The time dimension (cross-sectional or longitudinal).
 The topical scope (breadth and depth) of the study (a case or statistical study).
 The research environment (field experimentation, laboratory experimentation, simulation).
19
Basic Questions -
Basic Research Design
• What types of questions need to be answered?
• Are descriptive or causal findings required?
• What is the source of the data?
• Can objective answers be obtained by asking
people?
• How quickly is the information needed?
• How should survey questions be worded?
• How should experimental manipulations be
made?

20
3.5. Basic Research Designs
• 3.5.1. Survey Research
• 3.5.2. Observational Research
• 3.5.3. Experimental Research
• 3.5.4. Case Study Research
• 3.6. Sampling

21
3.5.1. Survey Research

22
3.5.1.1. Census versus Survey

Activity 3.1
Which do you think is more appropriate, a survey or a census? Why?
___________________________________________
___________________________________________
________________________________________

23
Census versus Survey
• A census involves collecting all the necessary data from the whole population under study in order to draw conclusions about that population.
• A census often suffers from the following drawbacks:
• It is cumbersome
• It is time consuming
• It is costly
• It may be less accurate because of administrative problems
24
A survey is a research technique in which information is gathered from a sample of respondents using verbal or written questioning.

'Sample survey' is the more formal term for a survey.

25
Benefits of Surveys
• Quick
• Inexpensive
• Efficient
• Accurate
• Flexible
Limitations of Surveys
• Poor Design
• Improper Execution

26
Classification of Surveys

[Figure: Classifying survey research methods - by method of communication, by structured and disguised questions, and by temporal classification.]
27
Types of Surveys – Temporal Classification
Cross sectional surveys
• Data are collected at one point in time from a
sample selected to describe some larger
population at that time.

Longitudinal Surveys
• Surveys of respondents are made at different points in time
• Allows analysis of continuity and change over time.

28
Longitudinal Survey
• Tracking study - compare trends and identify
changes
– consumer satisfaction

The primary longitudinal survey designs are:
• trend studies,
• cohort studies, and
• panel studies.

29
Trend studies
• A given general population may be sampled and studied at different points in time, while the specific population and the persons studied are different in each survey.

• Each sample represents the same general population at different points in time.

30
Cohort studies
• Collects data from the same specific
population each time data are collected
although the sample respondents studied may
be different.
• Uses the same specific population to measure
changes at another time

Panel studies
• Involves collecting data from the same sample
of individual respondents over time.
31
Limitations of panel studies
• High attrition
• Switching of respondents
• Lack of willingness to respond
• Lack of appropriate memory
• Time consuming and
• Difficult to effectively administer

32
Choosing the appropriate longitudinal
design
Trend studies -- Longer duration
Cohort studies -- Medium duration
Panel Studies -- Short duration

33
• Surveys can be more accurate than census results because a census requires a large number of skilled staff, more time, and more money, which makes it very difficult to ensure accuracy.

• The quality of data collection in a census may be compromised because of difficulties in administering the process.

34
3.5.1.2. Errors in Survey Research

35
[Slides 36-40: figures categorizing survey errors (including deliberate falsification and interviewer bias); these categories are discussed in the slides that follow.]
Random Sampling Error
• A statistical fluctuation that occurs because of chance variation in the elements selected for the sample

• Even with technically proper random probability samples, statistical errors will occur because of chance variation
• Random sampling error can be estimated using the standard error of the estimate, or can be taken into account when determining confidence intervals and testing hypotheses
41
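As a minimal sketch (not from the slides; the sample size and observed proportion are invented), random sampling error for a sample proportion can be quantified with the standard error of the estimate and a 95% confidence interval:

```python
import math

# Hypothetical example: quantifying random sampling error for a sample proportion.
n = 400          # sample size
p_hat = 0.60     # observed sample proportion (e.g., 60% answered "yes")

# Standard error of the estimate for a proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)

# 95% confidence interval (z ~ 1.96 under the normal approximation)
z = 1.96
lower, upper = p_hat - z * se, p_hat + z * se

print(f"standard error = {se:.4f}")
print(f"95% CI = ({lower:.3f}, {upper:.3f})")   # roughly 0.552 to 0.648
```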
Systematic Error
• Results from some imperfect aspect of the research
design or from a mistake in the execution of the
research
• Sample bias - when the results of a sample show a
persistent tendency to deviate in one direction from
the true value of the population parameter
• These errors or biases are also called nonsampling
errors
• Can be classified under two general categories:
respondent error and
administrative error.

42
Respondent Error
• A classification of sample bias resulting from
some respondent action or inaction
• Nonresponse bias
• Response bias

43
Nonresponse Error
• The statistical difference between a survey that includes only those who responded and a survey that also includes those who failed to respond
• Nonrespondents –
– people who refuse to cooperate
– Not-at-homes
– Self-selection bias
• Over-represents extreme positions
• Under-represents indifference

44
Response Bias
• A bias that occurs when respondents tend to answer
questions with a certain slant that consciously or
unconsciously misrepresents the truth
Deliberate Falsification
• occasionally some people deliberately give false
answers.
• A response bias may occur when people
misrepresent answers in order to appear intelligent,
to conceal personal information, to avoid
embarrassment, and so on.

45
Unconscious Misrepresentation
• Even when a respondent is consciously trying to be truthful and cooperative, response bias can arise from the question format, question content, or some other stimulus
• Respondents who misunderstand a question may unconsciously provide a biased answer.
• A bias may also occur when a respondent has not thought about a question "sprung" on him or her by an interviewer.
46
Response bias
• Five types
– acquiescence bias,
– extremity bias,
– interviewer bias,
– auspices bias, and
– social desirability bias.

47
Acquiescence Bias
• The general tendency to agree (or disagree) with all or most questions; this bias is particularly prominent in research on new products, new programs, or ideas previously unfamiliar to the respondents.

48
Extremity Bias

• A category of response bias that results because response styles vary from person to person; some individuals tend to use extremes when responding to questions.

49
Interviewer Bias
• A response bias that occurs because the
presence of the interviewer influences
answers.
• Occurs because of interplay between
interviewer and respondent.

50
Auspices Bias
• Bias in the responses of subjects caused by
the respondents being influenced by the
organization conducting the study.

Social Desirability Bias


• May occur, either consciously or unconsciously,
because the respondent wishes to create a favorable
impression
• People may overestimate their recreational activities
because recreation is perceived as a socially
desirable activity.
51
ADMINISTRATIVE ERROR
• The results of improper administration or
execution of the research task are
administrative errors.
• Inadvertently (or carelessly) caused by
confusion, neglect, omission or some other
blunder
• Improper administration of the research task
– Blunders
• Confusion
• Neglect
• Omission
52
Administrative Error
• Interviewer cheating - filling in fake answers or falsifying interviews
• Data processing error - incorrect data entry, computer programming, or other procedural errors during the analysis stage
• Sample selection error - improper sample design or sampling procedure execution
• Interviewer error - field mistakes due to negligence or lack of capacity

53
3.5.2. OBSERVATION METHODS
When Is Observation Scientific?
• Scientific observation is the systematic process of
recording the behavioral patterns of people, objects,
and occurrences without asking respondents.

• The researcher utilizing the observation method of


data collection witnesses and records information as
events occur or compiles evidence from records of
past events.

54
Observation becomes a tool for scientific
inquiry when it:
• Serves a formulated research purpose
• Is planned systematically
• Is recorded systematically and related to
more general propositions rather than
being represented as reflecting a set of
interesting curiosities
• Is subject to checks or controls on validity
and reliability

55
What Can Be Observed?

• Physical actions, such as work patterns
• Verbal behavior, such as conversations
• Expressive behavior, such as tone of voice or facial expressions
• Spatial relations and locations, such as physical distance
• Temporal patterns, such as the amount of time people spend waiting for service
• Verbal and pictorial records, such as bar codes on product packages
56
What Can Be Observed?

Phenomena: Example

Human behavior or physical action: Clients' movement pattern in a store

Verbal behavior: Statements made by public service clients who wait in line

Expressive behavior: Facial expressions, tone of voice, and other forms of body language
57
What Can Be Observed
Phenomena: Example

Spatial relations and locations: How close visitors at an art museum stand to paintings

Temporal patterns: How long municipal clients wait for their order to be served

Physical objects: Whether offices are arranged conveniently

Verbal and pictorial records: Bar codes on product packages
58
Categories of Observation

• Human versus mechanical

• Visible versus hidden

59
• Human observation is commonly used when the situation or behavior to be recorded is not easily predictable in advance of the research.

• The major advantage of observation studies over surveys is that the data are free of distortions, inaccuracies, or other response biases due to memory error, social desirability, and so on.
• In many situations the purpose of observation is to summarize, systematize, and simplify the activities, meanings, and relationships in a social setting.
• Often, unstructured methods provide the greatest flexibility to the observer.

60
Observation of Human Behavior
Benefits
• Communication with the respondent is not necessary
• Data are free of distortions due to self-report bias (e.g., social desirability)
• No need to rely on respondents' memory
• Nonverbal behavior data may be obtained

61
Observation of Human Behavior
Benefits
• Certain data may be obtained more quickly
• Environmental conditions may be recorded
• May be combined with survey to provide
supplemental evidence

62
Observation of Human Behavior
Limitations
• Cognitive phenomena cannot be observed
• Interpretation of data may be a problem
• Not all activity can be recorded
• Only short periods can be observed
• Observer bias possible
• Possible invasion of privacy

63
3.5.3. Basic Issues of Experimental Design

• Manipulation of the independent variable
• Selection of the dependent variable
• Assignment of subjects (or other test units)
• Control over extraneous variables

64
• The experimenter has some degree of control over the
independent variable.
• The variable is independent because its value can be
manipulated by the experimenter to whatever he or she
wishes it to be.
• one variable (the independent variable) is manipulated
and its effect on another variable (the dependent
variable) is measured, while all other variables that may
confound such a relationship are eliminated or
controlled.
• The experimenter either creates an artificial situation or
deliberately manipulates a situation.

65
Manipulation of Independent
Variable
• Classificatory Vs. continuous variables
• Experimental and control groups
• Treatment levels
• More than one independent variable

66
Dependent Variable
• Its value is expected to be dependent
on the experimenter’s manipulation
• Criterion or standard by which the
results are judged
• Selection
– e.g., sales volume, awareness, recall
• Measurement

67
Experimental Errors

• Sample selection error and random sampling error, as in other forms of research, may occur in experimentation.

• Random sampling error may occur if repetitions of the basic experiment sometimes favor one experimental condition and sometimes the other on a chance basis.

68
Controlling Extraneous Variables
• Elimination of extraneous variables
• Constancy of conditions
• Random assignment

69
Field versus
Laboratory Experiments

70
Field Versus Laboratory Experimentation
• A research experiment can be conducted in a natural
setting (field experiment) or in an artificial setting,
one contrived for a specific purpose (laboratory
experiment).

• In a laboratory experiment the researcher has almost


complete control over the research setting.

71
Laboratory Experiment              Field Experiment

Artificial - low realism           Natural - high realism
Few extraneous variables           Many extraneous variables
High control                       Low control
Low cost                           High cost
Short duration                     Long duration
Subjects aware of participation    Subjects unaware of participation

72
Basic versus Factorial Experimental Designs
Basic experimental designs
• a single independent variable is manipulated
to observe its effect on a single dependent
variable.

Factorial experimental designs


• More sophisticated than basic experimental
designs.
• allow for investigation of the interaction of
two or more independent variables
73
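As an illustration (the factor names and levels are invented, not taken from the slides), a 2x2 factorial layout crossing two independent variables might be sketched as follows:

```python
from itertools import product

# Hypothetical 2x2 factorial design: two independent variables (factors),
# each manipulated at two treatment levels.
price_level = ["low", "high"]            # independent variable 1
ad_message  = ["emotional", "rational"]  # independent variable 2

# Each combination (cell) would be assigned its own group of subjects,
# allowing both main effects and the interaction to be examined.
for cell, (price, message) in enumerate(product(price_level, ad_message), start=1):
    print(f"cell {cell}: price={price}, ad={message}")
```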
3.5.4. Case Study Research

74
3.5.4. Case Study Research
• detailed and intensive analysis of one case
• e.g. a specific person, event, organization or
community
• often involves qualitative research
• case is the focus of interest in its own right -
location/setting just provides a background
• types of case: critical, unique, extreme, revelatory,
exemplifying
• e.g. Holdaway (1982, 1983): ethnography of
occupational culture in a particular police force

75
3.6.1. Sampling Designs & Sampling Procedures

Sampling Terminologies
• Sample
• Population or universe
• Population element
• Census

76
Sample
• Subset of a larger population

Population
• Any complete group
– People
– Sales territories
– Stores

77
Census
• Investigation of all individual elements that
make up a population

78
Stages in the Selection of a Sample

1. Define the target population
2. Select a sampling frame
3. Determine the sample size
4. Determine if a probability or nonprobability sampling method will be chosen
5. Plan the procedure for selecting sampling units
6. Select actual sampling units
7. Conduct fieldwork

79
Target Population
• Relevant population
• Operationally define

Sampling Frame
• A list of elements from which the sample may be
drawn
• Working population
• Mailing lists - data base marketers
• Sampling frame error

80
Sampling Units
• Group selected for the sample
• Primary Sampling Units (PSU)
• Secondary Sampling Units
• Tertiary Sampling Units
Random Sampling Error
• The difference between the sample results and the
result of a census conducted using identical
procedures
• Statistical fluctuation due to chance variations

81
Systematic Errors
• Non-sampling errors
• Unrepresentative sample results
• Not due to chance
• Due to study design or imperfections in
execution

82
Errors Associated with Sampling

• Sampling frame error


• Random sampling error
• Nonresponse error

83
Two Major Categories of Sampling

• Probability sampling
• Known, nonzero probability for every element

• Nonprobability sampling
• Probability of selecting any particular member
is unknown

84
Nonprobability sampling designs
– Convenience sampling
– Quota sampling
– Purposive/Judgment sampling
– Snowball sampling
Probability Sampling Designs
– Simple random sampling
– Stratified random sampling
– Systematic sampling
– Cluster sampling

85
Convenience Sampling

• Also called haphazard or accidental sampling


• The sampling procedure of obtaining the
people or units that are most conveniently
available

86
Judgment Sampling
• Also called purposive sampling
• An experienced individual selects the sample
based on his or her judgment about some
appropriate characteristics required of the
sample member

87
Quota Sampling
• Ensures that the various subgroups in a
population are represented on pertinent
sample characteristics
• To the exact extent that the investigators
desire
• It should not be confused with stratified
sampling.

88
Snowball Sampling
• A variety of procedures
• Initial respondents are selected by probability
methods
• Additional respondents are obtained from
information provided by the initial
respondents

89
Simple Random Sampling
• A sampling procedure that ensures that each
element in the population will have an equal
chance of being included in the sample

90
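A minimal sketch (the sampling frame and sample size are hypothetical) of drawing a simple random sample with Python's standard library:

```python
import random

# Hypothetical sampling frame: a list of population element IDs (invented).
sampling_frame = [f"element_{i:03d}" for i in range(1, 501)]  # N = 500

random.seed(42)  # fixed seed so the illustration is reproducible

# Simple random sampling: every element has an equal chance of selection.
n = 50
simple_random_sample = random.sample(sampling_frame, n)
print(simple_random_sample[:5])
```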
Systematic Sampling
• A simple process
• Every nth name from the list will be drawn

91
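Continuing the same hypothetical frame, systematic sampling draws every k-th element after a random start:

```python
import random

# Hypothetical frame reused from the previous sketch.
sampling_frame = [f"element_{i:03d}" for i in range(1, 501)]  # N = 500
n = 50
k = len(sampling_frame) // n        # sampling interval (every k-th name)

random.seed(7)
start = random.randrange(k)         # random starting point within the first interval
systematic_sample = sampling_frame[start::k]
print(len(systematic_sample), systematic_sample[:5])
```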
Stratified Sampling
• Probability sample
• Subsamples are drawn within different strata
• Each stratum is more or less equal on some
characteristic
• Do not confuse with quota sample

92
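A proportional stratified sample can be sketched as below (the strata names and sizes are invented for illustration):

```python
import random

# Hypothetical strata: population elements grouped by a shared characteristic.
strata = {
    "public":  [f"pub_{i}" for i in range(300)],
    "private": [f"prv_{i}" for i in range(150)],
    "ngo":     [f"ngo_{i}" for i in range(50)],
}
total = sum(len(members) for members in strata.values())  # N = 500
n = 50

random.seed(1)
stratified_sample = []
for name, members in strata.items():
    # Proportional allocation: each stratum contributes in proportion to its size.
    n_stratum = round(n * len(members) / total)
    stratified_sample.extend(random.sample(members, n_stratum))

print(len(stratified_sample))  # approximately n, subject to rounding
```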
Cluster Sampling
• The purpose of cluster sampling is to sample
economically while retaining the
characteristics of a probability sample.
• The primary sampling unit is no longer the
individual element in the population
• The primary sampling unit is a larger cluster of
elements located in proximity to one another

93
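Cluster sampling can be sketched by randomly selecting whole clusters (here, hypothetical kebeles) and then including the elements within them:

```python
import random

# Hypothetical clusters: households grouped by kebele (names invented).
clusters = {f"kebele_{k}": [f"hh_{k}_{i}" for i in range(20)] for k in range(1, 26)}

random.seed(3)
# One-stage cluster sampling: randomly select a few clusters, then include
# every element inside the selected clusters.
selected_kebeles = random.sample(list(clusters), 5)
cluster_sample = [hh for kebele in selected_kebeles for hh in clusters[kebele]]

print(selected_kebeles)
print(len(cluster_sample))  # 5 clusters x 20 households = 100 elements
```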
Examples of Clusters

Population element: Ethiopian adult population
Possible clusters in Ethiopia: states/regions, counties/woredas, kebeles, villages/blocks, households
94
What is the Appropriate Sample Design?

• Degree of accuracy
• Resources
• Time
• Advanced knowledge of the population
• National versus local
• Need for statistical analysis

95
3.6.2. Determining the sample Size
Sample size is influenced by:

the purpose of the study,


population size,
the risk of selecting a "bad" sample, and
the allowable sampling error

96
SAMPLE SIZE CRITERIA
The three criteria to determine the appropriate
sample size:

the level of precision,


the level of confidence or risk, and
the degree of variability in the attributes
being measured (Miaoulis and Michener,
1976).

97
i) The Level of Precision
• Sometimes called sampling error, the level of precision is the range in which the true value of the population is estimated to lie.
• This range is often expressed in percentage points (e.g., ±5 percent).

• For example, a sample estimate of 60% with ±5% precision means the true value is estimated to lie between 55% and 65%.

98
ii) The Confidence Level
• The confidence or risk level is based on ideas
encompassed under the Central Limit Theorem
• when a population is repeatedly sampled, the
average value of the attribute obtained by those
samples is equal to the true population value
• Furthermore, the values obtained by these
samples are distributed normally about the true
value, with some samples having a higher value
and some obtaining a lower score than the true
population value

99
confidence level …..

• A 95% confidence level indicates that 95 out of 100 samples will yield an estimate within about two standard errors of the true population value (e.g., the mean).

• There is always a chance that the sample you obtain does not represent the true population value.

100
confidence level

• This risk is reduced for 99% confidence levels


and increased for 90% (or lower) confidence
levels.

101
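A small simulation (the population parameters are invented) illustrates the idea: repeated samples produce estimates that cluster around the true value, and roughly 95% of the 95% confidence intervals cover it:

```python
import random
import statistics

# Hypothetical population: a true proportion of 0.40, chosen for illustration.
random.seed(0)
true_p, n, repetitions = 0.40, 200, 1000

covered = 0
for _ in range(repetitions):
    sample = [1 if random.random() < true_p else 0 for _ in range(n)]
    p_hat = statistics.mean(sample)
    se = (p_hat * (1 - p_hat) / n) ** 0.5
    # Does the 95% confidence interval around this sample's estimate cover true_p?
    if p_hat - 1.96 * se <= true_p <= p_hat + 1.96 * se:
        covered += 1

print(f"{covered / repetitions:.1%} of the intervals covered the true value")  # ~95%
```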
iii) Degree Of Variability
• The degree of variability in the attributes being measured refers to the distribution of attributes in the population
• The more heterogeneous a population, the larger the sample size required to obtain a given level of precision, and vice versa
• Because a proportion of 0.50 indicates the maximum variability in a population, it is often used to determine a more conservative (larger) sample size.
102
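To see why p = 0.5 is the most conservative choice, note that the variability term p(1 - p) is largest at 0.5 (a short derivation added here for illustration):

```latex
% Variability of a proportion is p(1-p); it peaks at p = 0.5.
\[
  p(1-p) \;=\; 0.25 - \left(p - 0.5\right)^2,
  \qquad\text{so}\qquad
  \max_{0 \le p \le 1} p(1-p) = 0.25 \text{ at } p = 0.5 .
\]
% Examples: p = 0.5 -> 0.25;  p = 0.7 -> 0.21;  p = 0.9 -> 0.09.
```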
Approaches/strategies to determine the sample size

These approaches include:
 using a census for small populations,
 imitating the sample size of similar studies,
 using published tables, and
 applying formulas to calculate a sample size.

103
i) Using A Census For Small Populations
• Using the entire population as the sample, although cost considerations make this impractical for large populations
ii) Using A Sample Size Of A Similar Study
• You may run the risk of repeating errors that were made in determining the sample size for another study.
iii) Using Published Tables
• Show actually obtained data
• Assume normal distribution

104
iv) Using Formulas To Calculate A Sample Size

• Formulas allow the sample size to be calculated for any combination of:
 levels of precision,
 confidence, and
 variability.

Further Reading
Sample Size Determination Formulas

105
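The slides leave the specific formulas as further reading; as one widely used example (an assumption on my part, not necessarily the formula the author intends), Cochran's sample-size formula for proportions with a finite population correction can be sketched as:

```python
import math

def cochran_sample_size(e=0.05, z=1.96, p=0.5, population=None):
    """Cochran's sample-size formula for estimating a proportion.

    e: level of precision (margin of error); z: z-score for the confidence
    level (1.96 ~ 95%); p: estimated variability (0.5 is most conservative).
    If a finite population size is given, apply the finite population correction.
    """
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)          # infinite-population size
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n0)

# Hypothetical use: ±5% precision, 95% confidence, maximum variability.
print(cochran_sample_size())                         # about 385
print(cochran_sample_size(population=2000))          # about 323
```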
Sampling in Practice
• Exercise

106
3.7. Ethical Issues in Business Research
What is Ethics?
• Ethics is concerned with judgments of good and bad, right and wrong
• Ethics vs. rules and regulations
• Rules are formal, whereas ethics reflects societal norms
• Research demands ethical behavior from its participants.
• The goal of ethics in research is to ensure that no one is harmed or suffers adverse consequences from research activities.

107
Research Ethics Continued
• Ethical questions are philosophical questions that are based on
the perceptions of a society.
• Societal norms are codes of behavior adopted by a group,
suggesting what a member of a group ought to do under given
circumstances.
• Ethics bridges the gap between laws and actual practices.
• In research ethical issues are concerns of the three major
stakeholders:
– the researcher or investigator
– the subject or respondent, and
– the sponsor or client or financier
• Therefore ethical issues in business research are explained by the
interaction of the rights and obligations of these three
stakeholders.

108
109
Rights and Obligations of the Respondent
Obligation:
• to provide truthful information.
Rights:
i) Privacy
• Collecting and disclosing respondents' personal information without their knowledge can be a serious violation
• This involves the subject’s freedom to choose
whether or not to comply with the investigator’s
request.
• Field interviewers indicate their legitimacy by
– passing out business cards,
– wearing nametags, or
– identifying the name of their company.

110
Rights continued
ii) Deception/The right not to be deceived
• Deception occurs when the respondent is told only a
portion of the truth or when the truth is fully
compromised.
• Some suggest two reasons that may legitimize deception:
– to prevent biasing the respondents prior to the
survey and
– to protect the confidentiality of a third party (e.g.,
the client).
• However
a) The benefits to be gained by deception should be
balanced against the risks to the respondents.
b) Once the research is completed, the subjects who
were deceived should be “debriefed.”

111
Rights continued

c) Debriefing explains the truth to the participants and


explains why deception was used.
d) Researchers are not expected to create a false
impression by disguising the purpose of the research
iii) The right to be informed
– The right to be informed of all aspects of the research
including: its purpose and sponsorship
– Neither overstate nor understate (do not exaggerate)
– Explain to the respondent that their rights and well-
being will be adequately protected and indicate how
that will be done.
E.g. maintaining confidentiality

112
Rights continued

• Ensure that interviewers obtain informed


consent from the respondents.
• Debriefing Respondents - it is a good practice
to offer them follow-up information.
• Consent must be voluntary and free from
coercion, force, requirements, and so forth.
• Respondents must be adequately informed in
order to make decisions.
• Respondents should know the possible risks or outcomes associated with the research project.

113
Rights and Obligations of the Researcher
• A code of ethics may also be developed by
professional associations.
• Code of ethics is a statement of principles and
operating procedures for ethical practice.
• Points that deserve attention regarding the researcher's ethical responsibilities:
i) The purpose of Research is Research
• The purpose should be explained clearly
• The researcher should not misrepresent
himself/herself for the sake of getting admission or
information.
• Research should not be politicized for any purpose.

114
Research ethics continued
ii) Objectivity
• Researchers must not intentionally try to prove a
particular point for political purposes.
• The researcher should not try to select only those
data that are consistent with his/her personal
intentions or prior hypothesis.
iii) Misrepresentation of Research
• To analyze the data honestly and to report correctly
the actual data collection methods.
iv) Protecting the Right to Confidentiality of both
Subjects and Clients
• The privacy and anonymity of the respondents should be preserved.
• Both parties also expect objective and accurate
report from the researcher.
115
Research ethics continued

v) Dissemination of Faulty Conclusions

• Researchers and clients should refrain from disseminating conclusions from the research project that are inconsistent with or not warranted by the data.
Rights & Obligations of the sponsor (Client/User)
i) Buyer-seller relationship
• The general business ethics expected to exist between a buyer and a seller should be observed
• It is unethical to solicit competitive bids that have no chance of being accepted just to fulfill a corporate purchasing policy stating that a bid must be put out to three competitors.

116
ii) An Open Relationship with Researchers
• The obligation to encourage the researcher to seek out the truth objectively.
• This requires a full and open statement of
– the problem,
– explication of time and money constraints, and
– any other insights that may help the supplier.
iii) An Open Relationship with Interested Parties
• Conclusions should be based on the data.
• Violating this principle by justifying a self-serving or political position that is not warranted by the data poses serious ethical questions.

117
iv) Commitment to Research
• This violation involves requesting research proposals when there is a low probability that the research will actually be conducted.
• Researchers believe that the client has the obligation to be serious about considering a project before soliciting proposals.
v) Pseudo-Pilot Studies
• Telling the researcher that a project is a pilot study and that, if a good job is performed during the pilot stage, there will be an additional major contract down the line, when no such follow-up is intended.

118
vi) Right to Quality Research

• Ethical researchers provide the client with the


type of study he/she needs to solve the
managerial question.
• The design of the project should be suitable
for the problem
• The ethical researcher reports results in ways
that minimize the drawing of false
conclusions.

119
