Reliability and Validity
Classification of Research Designs

Research Design
• Exploratory Research Design
• Conclusive Research Design
  • Descriptive Research
    • Cross-Sectional Design
      • Single Cross-Sectional Design
      • Multiple Cross-Sectional Design
    • Longitudinal Design
  • Causal Research
Types of research designs
• Exploratory research designs: the simplest, most flexible, and most loosely structured designs. As the name suggests, the basic objective of the study is to explore the problem situation and obtain clarity on it.
Variations
• Single/multiple cross-sectional designs
• Cohort (group) analysis
Example
• A researcher wants to understand the relationship between jogging and cholesterol levels. He/she might choose two age groups of daily joggers, one aged above 20 but below 30 and the other above 30 but below 40, and compare their cholesterol levels with those of non-joggers in the same age categories.
Descriptive research designs
Longitudinal studies:
A longitudinal study is also an observational study, in which data are gathered from the same sample repeatedly over an extended period of time. It can last from a few years to decades, depending on what kind of information needs to be obtained.
Three criteria distinguish the two designs:
• Duration: Cross-sectional studies are quick to conduct, whereas longitudinal studies may run from a few years to decades.
• Timing: A cross-sectional study is conducted at a single point in time; a longitudinal study requires the researcher to revisit participants at regular intervals.
• Variables: Multiple variables can be studied at a single point in time in a cross-sectional study, whereas only one variable is considered over the course of a longitudinal study.
• https://www.cyberralegalservices.com/casestudies.php
Reliability and Validity
• Validity is the extent to which a concept, conclusion, or measurement is well-founded and likely corresponds accurately to the real world.
• Test-retest reliability measures the consistency of results when you repeat the same
test on the same sample at a different point in time. You use it when you are
measuring something that you expect to stay constant in your sample.
• A test of colour blindness for trainee pilot applicants should have high test-retest
reliability, because colour blindness is a trait that does not change over time.
• Why it’s important
• Many factors can influence your results at different points in time: for example,
respondents might experience different moods, or external conditions might affect
their ability to respond accurately.
• Test-retest reliability can be used to assess how well a method resists these factors
over time. The smaller the difference between the two sets of results, the higher the
test-retest reliability.
• How to measure it
• To measure test-retest reliability, you conduct the same test on the same group of
people at two different points in time. Then you calculate the correlation between
the two sets of results.
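The correlation step above can be sketched in Python. This is a minimal illustration with hypothetical scores for eight participants tested twice; np.corrcoef returns the Pearson correlation, a common choice for this purpose.

```python
import numpy as np

# Hypothetical scores for the same 8 participants, tested twice two months apart.
test_1 = np.array([110, 95, 102, 130, 88, 115, 99, 120])
test_2 = np.array([112, 93, 100, 128, 90, 117, 101, 118])

# Test-retest reliability: correlation between the two administrations.
# Values close to 1 indicate high consistency over time.
r = np.corrcoef(test_1, test_2)[0, 1]
print(round(r, 3))
```

Because the paired scores here differ only slightly, the correlation is high, which would indicate good test-retest reliability for this (hypothetical) instrument.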
• Test-retest reliability example
• You devise a questionnaire to measure the IQ of a group of participants (a
property that is unlikely to change significantly over time).You administer the test
two months apart to the same group of people, but the results are significantly
different, so the test-retest reliability of the IQ questionnaire is low.
• Improving test-retest reliability
• When designing tests or questionnaires, try to formulate questions,
statements and tasks in a way that won’t be influenced by the mood or
concentration of participants.
• When planning your methods of data collection, try to minimize the
influence of external factors, and make sure all samples are tested
under the same conditions.
• Remember that changes can be expected to occur in the participants
over time, and take these into account.
Interrater reliability
• Interrater reliability (also called interobserver reliability) measures the degree of agreement
between different people observing or assessing the same thing. You use it when data is collected
by researchers assigning ratings, scores or categories to one or more variables.
• In an observational study where a team of researchers collect data on classroom behavior,
interrater reliability is important: all the researchers should agree on how to categorize or rate
different types of behavior.
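One widely used statistic for the agreement between two raters is Cohen's kappa, which corrects the observed agreement for agreement expected by chance. A minimal sketch with hypothetical classroom-behavior labels (the categories and ratings below are invented for illustration):

```python
from collections import Counter

# Hypothetical categories assigned by two observers to 10 classroom events.
rater_a = ["on-task", "off-task", "on-task", "disruptive", "on-task",
           "off-task", "on-task", "on-task", "disruptive", "off-task"]
rater_b = ["on-task", "off-task", "on-task", "on-task", "on-task",
           "off-task", "on-task", "on-task", "disruptive", "off-task"]

n = len(rater_a)
# Observed agreement: proportion of events both raters labelled identically.
p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: derived from each rater's marginal category frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2

# Cohen's kappa: agreement beyond chance, scaled to a maximum of 1.
kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 3))
```

A kappa near 1 indicates strong interrater reliability; values near 0 mean the raters agree no more often than chance would predict.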
• Internal consistency assesses the correlation between multiple items in a test that are
intended to measure the same construct.
• You can calculate internal consistency without repeating the test or involving other
researchers, so it’s a good way of assessing reliability when you only have one data set.
• Why it’s important
• When you devise a set of questions or ratings that will be combined into an overall
score, you have to make sure that all of the items really do reflect the same thing. If
responses to different items contradict one another, the test might be unreliable.
• To measure customer satisfaction with an online store, you could create a questionnaire
with a set of statements that respondents must agree or disagree with. Internal
consistency tells you whether the statements are all reliable indicators of customer
satisfaction.
• How to measure it
• Two common methods are used to measure internal consistency.
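One statistic frequently used for internal consistency is Cronbach's alpha, which compares the variance of the individual items with the variance of the summed scale. The sketch below uses hypothetical 1-5 agreement ratings from six respondents on four customer-satisfaction statements:

```python
import numpy as np

# Hypothetical 1-5 agreement ratings: 6 respondents x 4 satisfaction statements.
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])

k = scores.shape[1]                         # number of items
item_vars = scores.var(axis=0, ddof=1)      # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale

# Cronbach's alpha: high values mean the items move together,
# i.e. they appear to measure the same underlying construct.
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 3))
```

In this invented data set the items rise and fall together across respondents, so alpha comes out high; if responses to different items contradicted one another, alpha would drop toward zero.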