Reviewer in RDL2

Unit 5: Quantitative Research Design and 

Methodology 

Lesson 1: Quantitative Research Design  

Quantitative Research Design 

Research design is a systematic procedure implemented by a researcher (Kumar 2011).

It aims to answer the research problem precisely and without bias. Specifically, quantitative research designs are empirical and straightforward, and their reliability and validity can be tested. The procedures in quantitative research designs are stringent and rigid so that the findings can be verified through replication studies.

Quantitative research designs vary in terms of the following (Plano Clark and Creswell 2015):  

● the intent (e.g., testing for causality and the effectiveness of an intervention, or describing relationships between variables)

● use of manipulation (e.g., the researcher decides whether conditions should be manipulated or not)  

● procedures used (e.g., how to select participants, if there are assignments in a group, how to collect and analyze
data) 

After you have finalized your research topic, the next step is to decide how you will conduct your study, and this is done by choosing a quantitative research design. Quantitative research studies serve different purposes: they can be used to study cause-and-effect relationships, explain the effects of interventions, explore the relationships between variables, and even describe trends such as population changes. The three most common types of quantitative research designs are experimental, correlational, and descriptive.

Once the research design of the study is identified, it would be easier for the reader to know the argument or claims
of the research study. 

Experimental Research Design  

The primary purpose of an experimental research design is to find out whether an intervention, which serves as the independent variable (e.g., lack of sleep), has an effect on a dependent variable (e.g., academic performance). All
other nuisance variables (e.g., intelligence quotient of the student) are controlled, and the participants are randomly
assigned to different conditions. These variables are also known as extraneous variables, which might interfere with
the interpretation of the findings.      

To be more specific, an experimental design must contain the following elements (Drummond and Murphy-Reyes
2017):  

1. Treatment or intervention. The researcher must manipulate the independent variable to see if it has an effect on
the dependent variable.  

2. Controlling extraneous variables. This ensures that the changes in the dependent  variable are solely because of the
manipulation of the independent variable.  

3. Randomization of participants. The random assignment of the participants by the researchers removes selection bias and balances the different groups across treatment conditions.

Remember:

The independent variable (IV) is the variable that is manipulated by the researcher. On the other hand, the dependent variable (DV) is the variable that is measured to determine whether changes in the IV have an effect on it.

Quasi-Experimental Design 

Similar to experimental designs, a quasi-experimental design requires the independent variable to be manipulated. However, it lacks a key element of an experimental design, which is randomization. These designs make use of intact groups and are used when the artificial creation of treatment conditions is not possible. Their main weakness is that the findings are susceptible to the influence of nuisance variables. For example, in a study of a new reading comprehension curriculum for preschoolers, the researcher may need to use existing classes and assign one class as the experimental group while another serves as the control group. In this case, randomization is not possible because it would interfere with the classroom learning of the students. The limitation of the study is that the experimental and control groups might already have varying levels of reading comprehension in the first place, which might affect the results of the experimental study.

Correlational Research Design  

Correlational research is used to study the association between two variables (Leary 2012). In this design, the researcher is interested in whether the variables are related to one another. It is important to keep in mind that this research design is a non-experimental procedure; it does not tell about the causality between two or more variables; rather, it only describes the degree of association.

In correlational studies, the relationship between variables can either be positive or negative. The table below
summarizes the difference between the two types of relationships:   

Table 1. Differences between positive and negative correlation.  

Type of Correlation: Positive
Definition: Indicates a positive relationship between variables. As one variable increases, so does the other variable. This is also known as a direct relationship.
Example: Higher motivation level of students is related to higher academic performance.

Type of Correlation: Negative
Definition: Indicates a negative relationship between variables. As one variable increases, the other variable decreases. This is also known as an inverse relationship.
Example: The higher stress level of students is related to lower academic performance.
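
As a minimal illustration, the Python sketch below checks the direction of a relationship from paired scores. The data, variable names, and the hand-rolled pearson_r function are hypothetical and for demonstration only:

# Hypothetical paired scores for five students (illustration only).
motivation  = [2, 4, 5, 7, 9]          # higher value = more motivated
stress      = [9, 8, 6, 4, 2]          # higher value = more stressed
performance = [75, 80, 82, 88, 93]     # grade average

def pearson_r(x, y):
    # Pearson correlation coefficient between two equally long lists.
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

print(pearson_r(motivation, performance))  # near +1: positive (direct) relationship
print(pearson_r(stress, performance))      # near -1: negative (inverse) relationship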

Descriptive Research Design  

The purpose of a descriptive research design is to study a naturally occurring   phenomenon or subject of interest
(Mertler 2015). It simply describes a specific   characteristic or behaviour of a target population. Descriptive research
does not require hypotheses from the researcher since it only asks about basic information about the group of
interest. Moreover, it does not involve the manipulation of variables or the assignment of participants to different conditions. The three most common types of descriptive research are survey, demographic, and epidemiological research (Leary 2012).

Survey Research 

Survey research is used to explore the trends in the behaviour, attitudes, characteristics, and opinions of a group of people representing a population through the use of a survey questionnaire. For example, researchers can use survey research to study trends such as opinions of citizens towards a current issue in the country.

Demographic Research

Life events, such as the mortality rate, layoffs, and the number of household members, are described and understood using demographic research. Some researchers might be interested in the underlying processes of these major life events. For example, a sociologist may be interested in demographic variables such as the mortality rate or the number of household members.

Epidemiological Research

Epidemiological research is often used by medical and public health researchers who study different patterns of disease and health. The prevalence of illnesses and death in different groups of people is explained by epidemiological research. For example, a researcher may study how often dengue fever occurs among children in the Philippines.

Lesson 2: Sampling Procedure for  Quantitative Research

Sampling in Quantitative Research 

There are limitations in obtaining participants for a research study. Just like in the example  given earlier, it is not
practical and nearly impossible to recruit the entire Filipino population as your respondents.   

A population can be described as a group of people possessing similar characteristics, whereas a sample pertains to a subgroup or portion of a population. For example, we can consider all the students enrolled in your school as a population, while the students enrolled in the ABM strand are a sample.

The goal of sampling in quantitative research is to ensure that the sample you have selected is representative of the target population. Sampling is defined as obtaining a relatively small number of individuals from a bigger group to explore unknown information about a certain population (Kumar 2011). Researchers use different strategies to select their samples. In the next few sections, we will discuss the different approaches to selecting the sample for your research study.

Probability Sampling 

In probability sampling, all individuals from the target population have an equal chance of being selected for the sample. The researchers use a random process to recruit their participants. Probability sampling is commonly used if the researchers have a specific population in mind to study, or want to infer a particular behaviour or characteristic of that population (Howitt and Cramer 2014).

For example, if your target population is the entire student body in your school, the names of the students enrolled in your school will be listed, and then the researchers will randomly pick out names from the list. This is an example of simple random sampling, which is considered the most common type of probability sampling. The other more specific methods are systematic random sampling and stratified random sampling.

Simple Random Sampling 

As discussed earlier, simple random sampling is when the researcher randomly selects participants from a list of all the individuals in the target population. In this method, each individual has an equal chance of being selected. For example, if 50 participants are required from an entire batch of 200 students, then we should list all the students’ names and randomly pick 50 individuals from the list. There are several options for doing this technique (Coolican 2014):

1. Computer selection - there are computer programs that can generate a random set of names for your sample once you encode the names of the entire population (a short code sketch of this option follows the list).

2. Random number tables - assign a number to each individual in the population, then move through the random number table and use the numbers you land on to select individuals until you reach your desired sample size.

3. Manual selection - this is also known as the fishbowl method. It is convenient to use when the target population is small. The names of the members of the population are listed on slips of paper and put in a box or container. The slips are picked out one by one, and the container is reshuffled every time a slip of paper is drawn out.
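
As a minimal sketch of the computer-selection option, the Python snippet below draws a simple random sample from a hypothetical list of names; the sampling frame, the sample size of 50, and the seed are placeholders only:

import random

# Hypothetical sampling frame: the complete list of 200 students' names.
population = ["Student " + str(i) for i in range(1, 201)]

random.seed(5)                          # optional; only makes the draw repeatable
sample = random.sample(population, 50)  # 50 names drawn without replacement,
                                        # so each student has an equal chance

print(sample[:5])                       # preview the first five selected names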

Systematic Random Sampling

In systematic random sampling, the researcher randomly decides on a starting point in the list of the members of the target population and chooses every nth case from the population, where n is a number decided by the researcher (e.g., every 10th person). For example, a researcher secures a list of the names of all the students in your school and randomly selects a starting point in the list. Then, every 10th name will be recruited as a participant.
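
A minimal sketch of this procedure in Python, assuming a list of names is already available; the sampling frame and the choice of n = 10 are illustrative only:

import random

# Hypothetical sampling frame of 200 students.
population = ["Student " + str(i) for i in range(1, 201)]

n = 10                         # sampling interval: take every 10th person
start = random.randrange(n)    # random starting point among the first n names
sample = population[start::n]  # every nth case from the starting point onward

print(len(sample), sample[:3])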

Stratified Random Sampling 

Stratified random sampling pertains to dividing the target population into subgroups and randomly selecting participants from each subgroup. This ensures that the sample will contain a proportionate number of members from each subgroup of the target population. As an example, a researcher obtains a list of all senior high school students enrolled in your school and selects participants from the subgroups formed by the academic tracks (i.e., ABM, GAS, HUMSS, STEM).

Non-probability Sampling 

In most cases, conducting probability sampling is not feasible for researchers due to several limitations, such as financial and time constraints. An alternative is for the researchers to conduct non-probability sampling. This method is more practical and convenient for the researchers since the sample is chosen by the researcher from the target population rather than being randomly selected. The most common types of non-probability sampling are convenience, quota, purposive, and snowball sampling.

Convenience Sampling 

In convenience sampling, the researcher recruits participants who are readily available and accessible to participate in the research study. An example of convenience sampling is a college instructor who recruits his own students to participate in his study.

Quota Sampling 

Similar to stratified sampling, quota sampling involves selecting people from different subgroups of the target population. However, the difference is that random methods are not employed, and the selection from each subgroup is based solely on the researcher’s decision. For example, the researchers would stop interviewing a group of students from each academic track in your school when the quota for that subgroup has been reached.

Purposive Sampling 

In purposive sampling, the researcher chooses their participants intentionally because they are considered the most suitable to provide information for the research study. The participants who will be selected are most likely to have appropriate expertise and experience on the topic. For example, a researcher may intentionally recruit athletes from different sports for their research on factors affecting the motivation of athletes.

Snowball Sampling 

Snowball sampling is a technique used when the characteristics of the participants are uncommon. The researcher contacts a few potential participants and asks them if they can refer more participants who have similar characteristics. This technique is appropriate for research studies whose target participants are difficult to reach. For example, if the research topic is about the effect of international exchange scholarships, the researcher may ask the participants if they know someone who has been an exchange student abroad.

Sample Size 

The sample size is the actual number of individuals who participated in the research study and contributed significant data. The general rule in quantitative research is that the larger the sample size, the better. However, limitations such as time and cost often get in the way of obtaining a large sample size. There are some considerations in deciding on your sample size, such as the design of your research study. Plano Clark and Creswell (2015) recommended the following sample sizes for specific research designs:

1. at least 15 participants per group in a true experiment or quasi-experiment; 

2. approximately 30 participants for a correlational study relating variables; and, 

3. a minimum of 350 individuals for a survey study, although this varies depending on other considerations such as the size of the target population.

Lesson 3: Research Instruments for  Quantitative Research

Research Instruments in Quantitative Research 

In a broader definition, an instrument can be defined as a tool, such as a questionnaire or a survey, that measures specific items to gather quantitative data. Researchers use instruments to measure abstract concepts such as achievement, the ability of individuals, or personality. Instruments also allow researchers to observe behavior and interview individuals (Plano Clark and Creswell 2015).

Types of Research Instruments 

According to Plano Clark and Creswell (2015), quantitative studies mostly use five general types of research instruments: demographic forms, performance measures, attitudinal measures, behavioral observation checklists, and factual information documents.

Demographic Forms 

Demographic forms are used by the researchers to collect basic information about the participants. Details such as age, gender, ethnicity, and annual income are some of the information asked for in a demographic form.

Performance Measures 

Performance measures are used to assess or rate an individual’s ability, such as achievement, intelligence, aptitude, or interests. Some examples of this type of measure are the National Achievement Test administered by the Department of Education and the college admission tests conducted by the different universities in the country.

Attitudinal Measures 

Attitudinal measures are instruments used to measure an individual’s attitudes and opinions about a subject. These instruments assess the respondent’s level of agreement with the statements, often requiring them to choose from responses ranging from strongly agree to strongly disagree. The questionnaire in the “Explore” part is an example of an attitudinal measure since it requires the participants to indicate to what extent they agree or disagree with a given statement.

Behavioral Observation Checklist

Behavioral observation checklists are used to record individuals’ behaviour and are mostly   used when researchers
want to measure an individual’s actual behaviour instead of simply recording a person’s views or perceptions. 

Factual Information Documents 

Factual information documents, such as available public records, are accessed to obtain information about the participants. Examples of these documents are school records, attendance records, medical records, or even census information.

Constructing Research Instruments for Quantitative Research 

It is necessary for the researcher to first establish the objectives or research questions that they aim to answer. This is a prerequisite for constructing a good-quality research instrument. Kumar (2011) suggests the following procedure for beginners in constructing a research instrument for quantitative research:

1. State your research objectives. To begin with, all specific objectives, research questions, or hypotheses that you aim to explore must be clearly stated and defined.

2. Ask questions about your objectives. Construct related questions for each objective, research question, or hypothesis that you aim to explore in your study.

3. Gather the required information. All questions constructed must be taken into consideration to identify how these can be answered.

4. Formulate questions. Finalize all possible questions that you will ask your participants to obtain relevant information about the study.

The figure below demonstrates an example of the guidelines discussed above in constructing a research instrument. The sample study is a research project that aims to establish a health and wellness program in a company.

Assessing the Quality of an Instrument 

Now that you have learned about the different types of instruments used in quantitative research, it is also important to ensure that the instruments you will use are of good quality, since this will determine the quality of the data you can collect for your research study. The researcher can either construct their own instrument or use a well-developed instrument created by another researcher.

Reliability

The reliability of a measure can be simply defined as the stability and consistency of an instrument under different circumstances or at different points in time. This is true for all types of reliability, although each type differs in the kind of consistency it measures. Reliability can tell about the instrument’s internal consistency, stability over time, and alternate forms.

Internal Consistency

Internal consistency means that any group of items taken from a specific instrument will likely bring about the same results as when the entire instrument is administered. It tells how consistently the items of a research instrument measure a specific concept. The internal consistency of a measure can be obtained through the following techniques (Howitt 2014):

● Split-half reliability. The score from one half of the items on the instrument is correlated with the score on the other half of the instrument.

● Odd-even reliability. The score on the even-numbered items (e.g., items 2, 4, 6, 8, and so on) is correlated with the score on the odd-numbered items (e.g., items 1, 3, 5, 7, and so on) of the same instrument.

● Cronbach’s alpha. Also called alpha reliability. This is obtained by getting the mean of the correlations between every possible half of the items and the corresponding other half. In other words, Cronbach’s alpha averages the results over all possible ways of splitting the items into two equal sets (a short computational sketch follows this list).
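
In practice, alpha is usually computed from the item and total-score variances rather than from every split. Below is a minimal sketch in Python, assuming each row holds one respondent’s answers to the same set of items; the scores are purely illustrative:

# Hypothetical item scores: 4 respondents x 4 items on a 1-5 agreement scale.
scores = [
    [4, 5, 4, 5],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
]

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def cronbach_alpha(rows):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    k = len(rows[0])                              # number of items
    item_columns = list(zip(*rows))               # one tuple of scores per item
    item_var = sum(variance(col) for col in item_columns)
    total_var = variance([sum(r) for r in rows])  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_var / total_var)

print(round(cronbach_alpha(scores), 2))  # values near 1 indicate high internal consistency
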
Stability Over Time 

An instrument’s stability over time is also known as test-retest reliability. This is simply the correlation between the scores of the participants on an instrument at one point in time and their scores on the same instrument given at a later point in time.

Alternate Forms 

To cancel out the effects of participants remembering the items, which is a limitation of the test-retest method discussed above, another way to measure reliability is by using alternate forms. Another term for this type of reliability is parallel forms reliability. This requires the researcher to use equivalent versions of the test, wherein the participant’s scores on both tests are correlated. For example, a teacher may use alternate versions of a math test (e.g., Set A and Set B) for their students, but both versions measure the same scope or content (e.g., quadratic equations).

Validity 

A general definition of validity is the instrument’s capacity to measure what it is supposed to measure. This means that the instrument is an accurate measure of the variable being measured. There are three types of validity: face and content validity, criterion validity, and construct validity (Kumar 2011).

Face and Content Validity 

The face validity and content validity of an instrument both speak of its ability to measure the intended construct, assessed by evaluating the questions in the instrument in relation to the research question.

Face validity is the extent to which an instrument appears to measure what it is supposed to measure. In this type of validity, the items are evaluated to see whether they have a logical relationship with the research objectives. This is also considered the weakest form of validity since it merely requires the researcher to look at the instrument and evaluate whether the items measure the intended construct based on their appearance.

Content validity is the ability of the test items to include important characteristics of the     concept that is intended
to be measured. For example, you may take your first periodical    test for the school year and judge immediately
whether the scope of the test is in line with   the lessons your teacher has taught you for the entire first quarter.  

Criterion Validity 

Criterion validity tells whether a certain research instrument gives the same result as other similar instruments. There are two types of criterion validity: concurrent and predictive validity (Langdridge and Hagger-Johnson 2013).

Concurrent validity can be obtained by correlating the scores on two research instruments taken at the same time. This type of validity is similar to alternate forms reliability, wherein two similar instruments are used to evaluate the quality of the instrument.

Predictive validity refers to the ability of an instrument to predict another variable, which is called a criterion. The criterion should be different from the construct originally being measured. For example, a college entrance exam composed of different subtests, such as reasoning, numerical, and verbal ability, has predictive validity for a student’s likelihood of succeeding in the university they are applying to.

Construct Validity 

Construct validity can be assessed by examining whether a specific instrument relates to other measures. This type of validity is the most sophisticated among all the types of validity. The process of obtaining construct validity involves correlating the scores on the instrument being evaluated with the scores on other instruments. Construct validity can be classified into two types: convergent validity and discriminant validity (Leary 2011).

Convergent validity is obtained when an instrument correlates with other similar instruments that it is expected to correlate with. For example, a scale about self-esteem can be correlated with other instruments measuring related constructs such as self-confidence.

Discriminant validity, on the other hand, is obtained if an instrument does not correlate with other instruments that it should not correlate with. For example, a scale about self-esteem should show little or no correlation with instruments measuring unrelated constructs, such as intelligence.
