PR2 Handout2


Sampling Plans, Designs and Techniques

Sampling is the process of getting information from a proper subset of a
population. The fundamental purpose of all sampling plans is to describe
the population characteristics through the values obtained from a sample
as accurately as possible.

A sampling plan is a detailed outline of which measurements will be taken,
at what times, on which material, in what manner, and by whom, in support
of the purpose of an analysis.

Sampling plans should be designed in such a way that the resulting data
will contain a representative sample of the parameters of interest and
allow all questions stated in the research objectives to be answered.

The following are the steps involved in developing a sampling plan:

1. Identify the parameters to be measured, the range of possible
values, and the required resolution.
2. Design a sampling scheme that details how and when samples will be
taken.
3. Select sample sizes.
4. Design data storage formats.
5. Assign roles and responsibilities.

For a quantitative analysis, the sample’s composition must accurately
represent the target population, a requirement that necessitates a careful
sampling plan. Among the issues to consider are these five questions:
1. From where within the target population should we collect samples?
2. What type of samples should we collect?
3. What is the minimum amount of sample for each analysis?
4. How many samples should we analyze?
5. How can we minimize the overall variance for the analysis?

Sampling Techniques

Probability Sampling
Probability sampling refers to a sampling technique in which samples are
obtained using some objective chance mechanism, thus involving
randomization.

The use of probability sampling enables the investigator to specify the
sample size needed to attain a given degree of certainty that the sample
findings do not differ by more than a specified amount from those that a
study of the whole population would yield.

PRACTICAL RESEARCH 2
Chapter 4: Understanding the Data and Ways to Systematically Collect Data
There are five commonly used probability sampling techniques:
1) simple random sampling, 2) systematic sampling, 3) stratified sampling,
4) cluster sampling, and 5) multi-stage sampling.

• Simple random sampling is the basic probability sampling design, in
which the sample is selected by a process that not only gives each
element in the population a chance of being included in the sample
but also makes the selection of every possible combination of the
desired number of cases equally likely. The sample is selected in
one of two ways: by means of a table of random numbers or by using
the lottery technique.
• Systematic random sampling is effected by drawing units at regular
intervals from a list. The starting point, or the first unit to be
taken, is a random choice. It differs from simple random sampling in
that each member of the population is not chosen independently: once
the first member has been selected, all the other members of the
random sample are automatically determined. The population list in
systematic sampling must be in random order.
• Stratified random sampling is selecting sub-samples proportionate in
size to the significant characteristics of the total population.
Different strata in the population are defined and each member of
each stratum is listed. Simple random sampling is applied to each
stratum. The number of units drawn from each stratum depends on the
ratio of the desired sample size to the population size (n/N).
• Cluster sampling is a technique in which the unit of sampling is not
the individual but the naturally occurring group of individuals. The
technique is used when it is more convenient to select individuals
from a defined population. It considers a universe divided into N
mutually exclusive sub-groups called clusters. It has simpler frame
requirements. A random sample of n clusters is selected and their
elements are completely enumerated. It is administratively
convenient to implement and its main advantage is saving time and
money.
• Multi-stage sampling refers to a procedure which, as in cluster
sampling, moves through a series of stages from more inclusive to
less inclusive sampling units until arriving at the population
elements that constitute the desired sample.
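The techniques above can be sketched in code. The following is a minimal illustration, assuming an invented population of 100 numbered units; the strata and cluster boundaries are made up for the example and are not prescribed by the text.

```python
import random

population = list(range(1, 101))  # invented population of 100 units
n = 10                            # desired sample size

random.seed(42)  # fixed seed so the sketch is reproducible

# Simple random sampling: every combination of n units is equally likely.
simple = random.sample(population, n)

# Systematic sampling: a random starting point, then every k-th unit.
k = len(population) // n          # sampling interval
start = random.randrange(k)       # random choice of the first unit
systematic = population[start::k][:n]

# Stratified sampling: draw from each stratum in proportion to its
# share of the population (the n/N ratio mentioned above).
strata = {"A": population[:60], "B": population[60:]}  # invented strata
stratified = []
for members in strata.values():
    share = round(n * len(members) / len(population))  # proportional allocation
    stratified.extend(random.sample(members, share))

# Cluster sampling: randomly choose whole clusters, then completely
# enumerate the elements of the chosen clusters.
clusters = [population[i:i + 10] for i in range(0, 100, 10)]
cluster_sample = [unit for c in random.sample(clusters, 2) for unit in c]
```

Multi-stage sampling would repeat the cluster step: instead of enumerating each chosen cluster completely, it samples again within it, stage by stage, until individual elements are reached.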

Non-probability sampling
This is a technique used when there is no way of estimating the probability
that each element has of being included in the sample and no assurance
that every element has a chance of being included. The major forms of
non-probability sampling are accidental, purposive, and quota sampling.

• An accidental sample is one in which the investigator simply reaches
out and takes the cases that are at hand, continuing the process
until the sample reaches a designated size. This is one of the most
common techniques of sampling. It is also known as the "man on the
street" interview conducted frequently by television news programs
to get a quick reading of public opinion. The problem here is the
lack of evidence that such respondents are representative of the
population you are interested in generalizing to.

• Purposive sampling or judgment sampling is used when you sample with
a purpose in mind, usually seeking one or more specific predefined
groups. An example of this is when people in a mall or on the street
carry a clipboard, stop various passersby, and ask if they could
interview them. Most likely they are conducting a purposive sample,
perhaps looking for Filipino females with long hair between 17 and
25 years old. They size up the people passing by, and anyone who
appears to fit the category is stopped and asked to participate. One
of the first things they do is verify that the respondent meets the
criteria for being in the sample.

• Quota sampling is a technique with a provision to guarantee the
inclusion in the sample of diverse elements in the population and to
make sure that these diverse elements are taken into account in the
proportion in which they occur in the population. In quota sampling,
you select people non-randomly according to some fixed quota. There
are two types of quota sampling: proportional and non-proportional.

In proportional quota sampling you represent the major characteristics
of the population by sampling a proportional amount of each.

For example, if you know the population has 70% women and 30% men,
and you want a total sample size of 100, you will continue sampling
until you get those percentages and then you will stop. So, if you
already have the 70 women for your sample but not the 30 men, you
will continue to sample men; even if legitimate women respondents
come along, you will not sample them because you have already "met
your quota." The problem here is that you have to decide the
specific characteristics on which you will base the quota. Will it
be by gender, age, education, race, religion, etc.?

Non-proportional quota sampling is a bit less restrictive. In this
technique, you specify the minimum number of sampled units you want
in each category. You will not be concerned with having numbers that
match the proportions in the population. Instead, you simply want to
have enough to assure that you will be able to talk about even small
groups in the population. This technique is the non-probabilistic
analogue of stratified random sampling. It is usually used to assure
that smaller groups are adequately represented in your sample.
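The two quota variants differ only in how the quota values are set. Below is a minimal sketch of the proportional 70/30 example above, using an invented stream of respondents; the stream and its gender mix are made up for illustration.

```python
# Invented respondent stream: (id, gender) tuples, roughly 70% "F", 30% "M".
stream = [(i, "F" if i % 10 < 7 else "M") for i in range(1000)]

quotas = {"F": 70, "M": 30}   # proportional quotas for a sample of n = 100
counts = {"F": 0, "M": 0}
sample = []

for respondent_id, gender in stream:
    if counts[gender] < quotas[gender]:      # quota for this group not yet met
        sample.append((respondent_id, gender))
        counts[gender] += 1
    if len(sample) == sum(quotas.values()):  # every quota met: stop sampling
        break

# Non-proportional quota sampling uses the same loop; only the quota
# values change, e.g. quotas = {"F": 20, "M": 20} as minimums per group.
```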

Instrumentation
An important part of a research study is the instrument for gathering
data, because the quality of research output depends to a large extent
on the quality of the research instruments used.
An instrument is the generic term that researchers use for a measurement
device such as a survey, test, or questionnaire.
To help distinguish between instrument and instrumentation, consider
that the instrument is the device and instrumentation is the course of
action: the process of developing, testing, and using the device.

Researchers can choose the type of instrument to use based on their
research questions or objectives. There are two broad categories of
instruments, namely: 1) researcher-completed instruments and 2) subject-
completed instruments. Examples are shown in the lists below:

• Researcher-completed Instruments
Rating scales
Interview schedules/guides
Tally sheets
Flowcharts
Performance checklists
Time-and-motion logs
Observation forms
• Subject-completed Instruments
Questionnaires
Self-checklists
Attitude scales
Personality inventories
Achievement/aptitude tests
Projective devices
Sociometric devices

Validity
Validity refers to the extent to which the instrument measures what it
intends to measure and performs as it is designed to perform. It is
unusual, and nearly impossible, for an instrument to be 100% valid,
which is why validity is generally measured in degrees.

As a process, validation involves collecting and analyzing data to
assess the accuracy of an instrument. There are numerous statistical
tests and measures to assess the validity of quantitative instruments,
and these generally involve pilot testing.

There are three major types of validity. These are content validity,
construct validity and criterion validity which are presented below.
• Content validity looks at whether the instrument adequately
covers all the content that it should with respect to the
variable. In other words, it refers to the appropriateness of the
content of an instrument.

It answers the questions "Do the measures (questions, observation
logs, etc.) accurately assess what you want to know?" or "Does
the instrument cover the entire domain related to the variable
or construct it was designed to measure?"

Example: In an undergraduate nursing course with instruction
about public health, an examination with content validity would
cover all the content in the course with greater emphasis on the
topics that had received greater coverage or more depth.

A subset of content validity is face validity, where experts are
asked their opinion about whether an instrument measures the
concept intended.

• Construct validity refers to whether you can draw inferences
about test scores related to the concept being studied.

Example: if a person has a high score on a survey that measures
anxiety, does this person truly have a high degree of anxiety?

Another example is a test of knowledge of medications that
requires dosage calculations, which may instead be testing
mathematics knowledge or skills.

There are three types of evidence that can be used to demonstrate
that a research instrument has construct validity:

1. Homogeneity - this means that the instrument measures one
construct.
2. Convergence - this occurs when the instrument measures
concepts similar to those of other instruments. If there are
no similar instruments available, however, this will not be
possible to do.
3. Theory evidence - this is evident when behavior is similar
to theoretical propositions of the construct measured in
the instrument.
Example: when an instrument measures anxiety, one would
expect to see that participants who score high on the
instrument for anxiety also demonstrate symptoms of anxiety
in their day-to-day lives.

• Criterion validity. A criterion is any other instrument that
measures the same variable. Correlations can be conducted to
determine the extent to which the different instruments measure
the same variable.

Criterion validity is measured in three ways:

1. Convergent validity - shows that an instrument is highly
correlated with instruments measuring similar variables.

Example: a measure of geriatric suicide correlated
significantly and positively with depression, loneliness,
and hopelessness.

2. Divergent validity - shows that an instrument is poorly
correlated with instruments that measure different variables.

Example: there should be a low correlation between an
instrument that measures motivation and one that measures
self-efficacy.

3. Predictive validity - means that the instrument should have
high correlations with future criteria.

Example: a high score on self-efficacy related to
performing a task should predict the likelihood of a
participant completing that task.

Reliability
Reliability relates to the extent to which the instrument is consistent.
The instrument should be able to obtain approximately the same response
when applied to respondents who are similarly situated. Likewise, when the
instrument is applied at two different points in time, the responses must
highly correlate with one another. Hence, reliability can be measured by
correlating the responses of subjects exposed to the instrument at two
different time periods or by correlating the responses of the subjects who
are similarly situated.

For example, a participant completing an instrument meant to measure
motivation should give approximately the same responses each time the
test is completed. Although it is not possible to give an exact
calculation of reliability, an estimate of reliability can be achieved
through different measures. The three attributes of reliability are
discussed below.

• Internal consistency or homogeneity is when an instrument
measures a specific concept. The concept is measured through
questions or indicators, and each question must correlate highly
with the total for this dimension.
For example, teaching effectiveness is measured in terms of seven
questions. The scores for each question must correlate highly
with the total for teaching effectiveness.

There are three ways to check the internal consistency or
homogeneity of the index.

1) Split-half correlation. We could split the index of “exposure
to televised news” in half so that there are two groups of two
questions, and see if the two sub-scales are highly correlated.
That is, do people who score high on the first half also score
high on the second half?
2) Average inter-item correlation. We can also determine the
internal consistency for each question on the index. If the
index is homogeneous, each question should be highly
correlated with the other three questions.
3) Average item-total correlation. We can correlate each question
with the total score of the TV news exposure index to examine
the internal consistency of items. This gives us an idea of
the contribution of each item to the reliability of the index.

• Stability or test-retest correlation. This is an aspect of
reliability where many researchers report that a highly reliable
test indicates that the test is stable over time. Test-retest
correlation provides an indication of stability over time.

Example: we ask the respondents in our sample the four
questions once in September and again in December, and then
examine whether the two waves of the same measures yield
similar results.

• Equivalence. Equivalence reliability is measured by the
correlation of scores between different versions of the same
instrument, or between instruments that measure the same or
similar constructs, such that one instrument can be reproduced by
the other. It tells us the extent to which different
investigators using the same instrument to measure the same
individuals at the same time yield consistent results.

Equivalence may also be estimated by measuring the same concepts
with different instruments, for example a survey questionnaire and
official records applied to the same sample; this is known as
multiple-forms reliability.

When you gather data, consider the readability of the instrument.
Readability refers to the level of difficulty of the instrument
relative to the intended users. Thus, an instrument in English
applied to a set of respondents with no education will be useless
and unreadable.

A student who intends to use an instrument from an earlier
investigation is well advised to review the contents of the
instrument. If possible, conduct a second run of validation to make
sure that the instrument you are using possesses the criteria
mentioned above.
