Unit-2 - Research Methods IPR
Measurement:
Measurement is the process of observing and recording the observations that are
collected as part of a research effort. There are two major issues that will be considered
here.
First, we need to understand the fundamental ideas involved in measuring. Here we consider two major measurement concepts. Under Levels of Measurement, we explain the meaning of the four major levels of measurement: nominal, ordinal, interval and ratio. Then we move on to the reliability of measurement, including consideration of true score theory and a variety of reliability estimators.
Second, we need to understand the different types of measures that you might use in social research. We consider four broad categories of measurements. Survey research includes the design and implementation of interviews and questionnaires. Scaling involves consideration of the major methods of developing and implementing a scale. Qualitative research provides an overview of the broad range of non-numerical measurement approaches. And unobtrusive measures present a variety of measurement methods that don't intrude on or interfere with the context of the research.
LEVELS OF MEASUREMENT
There are different levels of measurement. These levels differ as to how closely they
approach the structure of the number system we use. It is important to understand the
level of measurement of variables in research, because the level of measurement
determines the type of statistical analysis that can be conducted, and, therefore, the type
of conclusions that can be drawn from the research.
Nominal Level
A nominal level of measurement uses symbols to classify observations into categories
that must be both mutually exclusive and exhaustive. Exhaustive means that there must
be enough categories that all the observations will fall into some category. Mutually
exclusive means that the categories must be distinct enough that no observations will fall
into more than one category. This is the most basic level of measurement; it is essentially
labeling. It can only establish whether two observations are alike or different, for
example, sorting a deck of cards into two piles: red cards and black cards.
In a survey of boaters, one variable of interest was place of residence. It was measured
by a question on a questionnaire asking for the zip code of the boater's principal place of
residence. The observations were divided into zip code categories. These categories are
mutually exclusive and exhaustive. Every respondent lives in some zip code category (exhaustive), and no boater lives in more than one zip code category (mutually exclusive).
Similarly, the sex of the boater was determined by a question on the
questionnaire. Observations were sorted into two mutually exclusive and exhaustive
categories, male and female. Observations could be labeled with the letters M and F, or
the numerals 0 and 1.
The variable of marital status may be measured by two categories, married and
unmarried. But these must each be defined so that all possible observations will fit into
one category but no more than one: legally married, common-law marriage, religious
marriage, civil marriage, living together, never married, divorced, informally separated,
legally separated, widowed, abandoned, annulled, etc.
In nominal measurement, all observations in one category are alike on some property,
and they differ from the objects in the other category (or categories) on that property
(e.g., zip code, sex). There is no ordering of categories (no category is better or worse, or
more or less than another).
Ordinal Level
An ordinal level of measurement uses symbols to classify observations into categories
that are not only mutually exclusive and exhaustive; in addition, the categories have some
explicit relationship among them.
For example, observations may be classified into categories such as taller and shorter,
greater and lesser, faster and slower, harder and easier, and so forth. However, each
observation must still fall into one of the categories (the categories are exhaustive) but
no more than one (the categories are mutually exclusive). Beef, for example, is graded as select, choice, or prime; the military uses ranks to distinguish categories of soldiers.
Most of the commonly used questions which ask about job satisfaction use the ordinal
level of measurement. For example, asking whether one is very satisfied, satisfied,
neutral, dissatisfied, or very dissatisfied with one's job is using an ordinal scale of
measurement.
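Such ordinal responses are often coded with ordered integers. A minimal Python sketch follows; the particular coding scheme and the responses are illustrative assumptions:

```python
# Ordinal coding of a job-satisfaction item. The integer codes preserve
# order (1 < 2 < ... < 5), but the distances between codes are not
# meaningful at the ordinal level.
SATISFACTION_CODES = {
    "very dissatisfied": 1,
    "dissatisfied": 2,
    "neutral": 3,
    "satisfied": 4,
    "very satisfied": 5,
}

responses = ["satisfied", "neutral", "very satisfied"]  # hypothetical answers
codes = [SATISFACTION_CODES[r] for r in responses]
print(codes)
```

Sorting the codes ranks respondents from least to most satisfied, but subtracting one code from another has no substantive meaning at this level.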
Interval Level
An interval level of measurement classifies observations into categories that are not only
mutually exclusive and exhaustive, and have some explicit relationship among them, but
the relationship between the categories is known and exact. This is the first quantitative
application of numbers.
In the interval level, a common and constant unit of measurement has been established
between the categories. For example, the commonly used measures of temperature are
interval level scales. We know that a temperature of 75 degrees is one degree warmer
than a temperature of 74 degrees, just as a temperature of 42 degrees is one degree
warmer than a temperature of 41 degrees.
Numbers may be assigned to the observations because the relationship between the
categories is assumed to be the same as the relationship between numbers in the number
system. For example, 74+1=75 and 41+1=42.
The intervals between categories are equal, but they originate from some arbitrary origin; that is, there is no meaningful zero point on an interval scale. Zero degrees Celsius, for instance, does not mean "no temperature."
Ratio Level
The ratio level of measurement is the same as the interval level, with the addition of a
meaningful zero point. There is a meaningful and non-arbitrary zero point from which
the equal intervals between categories originate.
For example, weight, area, speed, and velocity are measured on a ratio level scale. In
public policy and administration, budgets and the number of program participants are
measured on ratio scales.
In many cases, interval and ratio scales are treated alike in terms of the statistical tests
that are applied.
Variables measured at a higher level can always be converted to a lower level, but not
vice versa. For example, observations of actual age (ratio scale) can be converted to
categories of older and younger (ordinal scale), but age measured as simply older or
younger cannot be converted to measures of actual age.
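This one-way conversion can be sketched in Python; the cutoff age of 40 is an arbitrary assumption chosen only for illustration:

```python
# Ratio-level ages can always be collapsed into ordinal categories,
# but the ordinal labels cannot be converted back into exact ages.
def to_ordinal(ages, cutoff=40):
    """Collapse ratio-level ages into two ordered categories."""
    return ["older" if age >= cutoff else "younger" for age in ages]

ages = [22, 35, 41, 58, 19]      # ratio scale: true zero, equal intervals
print(to_ordinal(ages))          # ordinal scale: order only, detail lost
```

Notice that once the ages are reduced to "older"/"younger", the original values are unrecoverable, which is exactly why the reverse conversion is impossible.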
Questionnaires & Instruments:
A questionnaire is a research tool featuring a series of questions used to collect useful
information from respondents. These instruments may consist of written or oral questions, sometimes in an interview-style format. Questionnaires may be qualitative or quantitative and can be conducted online, by phone, on paper, or face-to-face, and the questions don't necessarily have to be administered with a researcher present.
Questionnaires feature either open or closed questions and sometimes employ a mixture
of both. Open-ended questions enable respondents to answer in their own words in as
much or as little detail as they desire. Closed questions provide respondents with a series
of predetermined responses they can choose from.
Advantages of Questionnaires
Some of the many benefits of using questionnaires as a research tool include:
Practicality: Questionnaires enable researchers to strategically manage their
target audience, questions and format while gathering large data quantities on any
subject.
Cost-efficiency: You don’t need to hire surveyors to deliver your survey questions
— instead, you can place them on your website or email them to respondents at
little to no cost.
Speed: You can gather survey results quickly and effortlessly using mobile tools,
obtaining responses and insights in 24 hours or less.
Comparability: Researchers can use the same questionnaire yearly and compare
and contrast research results to gain valuable insights and minimize translation
errors.
Scalability: Questionnaires are highly scalable, allowing researchers to distribute
them to demographics anywhere across the globe.
Standardization: You can standardize your questionnaire with as many
questions as you want about any topic.
Respondent comfort: When taking a questionnaire, respondents are completely
anonymous and not subject to stressful time constraints, helping them feel relaxed
and encouraging them to provide truthful responses.
Easy analysis: Questionnaires often have built-in tools that automate analyses,
making it fast and easy to interpret your results.
Disadvantages of Questionnaires
Questionnaires also have their disadvantages, such as:
Answer dishonesty: Respondents may not always be completely truthful with
their answers — some may have hidden agendas, while others may answer how
they think society would deem most acceptable.
Question skipping: Make sure to require answers for all your survey questions.
Otherwise, you may run the risk of respondents leaving questions unanswered.
Interpretation difficulties: If a question isn’t straightforward enough,
respondents may struggle to interpret it accurately. That’s why it’s important to
state questions clearly and concisely, with explanations when necessary.
Survey fatigue: Respondents may experience survey fatigue if they receive too
many surveys or a questionnaire is too long.
Analysis challenges: Though closed questions are easy to analyze, open
questions require a human to review and interpret them. Try limiting open-ended
questions in your survey to gain more quantifiable data you can evaluate and
utilize more quickly.
Unconscientious responses: If respondents don’t read your questions
thoroughly or completely, they may offer inaccurate answers that can impact data
validity. You can minimize this risk by making questions as short and simple as
possible.
Types of Questionnaires in Research
There are various types of questionnaires in survey research, including:
Postal: Postal questionnaires are paper surveys that participants receive through
the mail. Once respondents complete the survey, they mail it back to the organization that sent it.
In-house: In this type of questionnaire, researchers visit respondents in their
homes or workplaces and administer the survey in person.
Telephone: With telephone surveys, researchers call respondents and conduct
the questionnaire over the phone.
Electronic: Perhaps the most common type of questionnaire, electronic surveys
are presented via email or through a different online medium.
A research instrument is a tool used to obtain, measure, and analyze data from subjects
around the research topic.
Decide which instrument to use based on the type of study you are conducting: quantitative, qualitative, or mixed-method. For instance, for a quantitative study you may decide to use a questionnaire or a rating scale, while for a qualitative study you may choose an interview or observation guide.
While it helps to use an established instrument, as its efficacy is already established, you may if needed adapt an existing instrument or even create your own.
What is sampling?
Sampling is a technique of selecting individual members or a subset of the population to
make statistical inferences from them and estimate characteristics of the whole
population. Different sampling methods are widely used by researchers in market
research so that they do not need to research the entire population to collect actionable
insights.
It is also a time-convenient and cost-effective method, and hence forms the basis of any research design. Sampling techniques can be built into research survey software so that the sample is drawn efficiently.
For example, if a drug manufacturer would like to research the adverse side effects of a
drug on the country’s population, it is almost impossible to conduct a research study that
involves everyone. In this case, the researcher selects a sample of people from each demographic and studies them, which gives indicative feedback on the drug's behavior.
Below, we discuss the various probability and non-probability sampling methods that you can implement in any market research study.
Probability sampling is a technique in which every member of the population has a known, non-zero chance of being selected. For example, in a population of 1000 members, every member will have a 1/1000 chance of being selected to be a part of a sample. Probability sampling eliminates bias in the selection and gives all members a fair chance to be included in the sample.
Simple random sampling: One of the best probability sampling techniques, which helps in saving time and resources, is the simple random sampling method. It is a reliable method of obtaining information where every single member of a population is chosen randomly, merely by chance. Each individual has the same probability of being chosen to be a part of a sample.
For example, in an organization of 500 employees, if the HR team decides on
conducting team building activities, it is highly likely that they would prefer
picking chits out of a bowl. In this case, each of the 500 employees has an equal
opportunity of being selected.
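The chit-drawing example can be sketched with Python's random.sample, which draws without replacement so each member has an equal chance of selection; the employee IDs are hypothetical:

```python
import random

# Simple random sampling: each of 500 employees has an equal
# probability of being picked, like drawing chits from a bowl.
employees = list(range(1, 501))         # hypothetical employee IDs 1..500

random.seed(7)                          # fixed seed keeps the sketch reproducible
team = random.sample(employees, k=25)   # draw 25 members without replacement

print(len(team), len(set(team)))        # 25 distinct employees
```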
Cluster sampling: Cluster sampling is a method where the researchers divide the entire population into sections or clusters, and then randomly select whole clusters to survey.
For example, if the United States government wishes to evaluate the number of
immigrants living in the Mainland US, they can divide it into clusters based on
states such as California, Texas, Florida, Massachusetts, Colorado, Hawaii, etc.
This way of conducting a survey will be more effective as the results will be
organized into states and provide insightful immigration data.
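The two stages can be sketched as follows; the state clusters and respondent names are hypothetical and deliberately tiny:

```python
import random

# Cluster sampling: randomly select whole clusters (states here),
# then survey every unit inside the chosen clusters.
clusters = {
    "California": ["CA_resident_1", "CA_resident_2", "CA_resident_3"],
    "Texas":      ["TX_resident_1", "TX_resident_2"],
    "Florida":    ["FL_resident_1", "FL_resident_2"],
    "Colorado":   ["CO_resident_1", "CO_resident_2"],
}

random.seed(1)
chosen = random.sample(sorted(clusters), k=2)                  # stage 1: pick clusters
sample = [person for state in chosen for person in clusters[state]]  # stage 2: survey all

print(chosen, sample)
```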
Stratified random sampling: In this method, the researcher divides the population into smaller groups (strata) that don't overlap but together represent the entire population. While sampling, these groups can be organized, and a sample is then drawn from each group separately.
For example, a researcher looking to analyze the characteristics of people
belonging to different annual income divisions will create strata (groups)
according to the annual family income, e.g., less than $20,000, $21,000 to $30,000, $31,000 to $40,000, $41,000 to $50,000, etc. By doing this, the
researcher concludes the characteristics of people belonging to different
income groups. Marketers can analyze which income groups to target and which
ones to eliminate to create a roadmap that would bear fruitful results.
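Stratified sampling over income strata like those above can be sketched as follows; the stratum labels, member names, and per-stratum sample size are illustrative assumptions:

```python
import random

# Stratified sampling: partition the population into non-overlapping
# income strata, then draw a separate random sample from each stratum.
strata = {
    "under $20,000":     [f"low_{i}" for i in range(10)],
    "$21,000-$30,000":   [f"mid_{i}" for i in range(10)],
    "$31,000-$40,000":   [f"high_{i}" for i in range(10)],
}

random.seed(3)
# Draw 2 members from every stratum, so each income group is represented.
sample = {name: random.sample(group, k=2) for name, group in strata.items()}

for name, picks in sample.items():
    print(name, picks)
```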
Uses of probability sampling
There are multiple uses of probability sampling:
Reduce Sample Bias: Using the probability sampling method, the bias in the sample derived from a population is negligible to non-existent.
Create an Accurate Sample: Probability sampling helps the researchers plan and create an accurate sample, which helps to obtain well-defined data.
Four types of non-probability sampling (convenience, judgmental or purposive, snowball, and quota sampling) explain the purpose of this sampling method in a better manner; two are described here:
Convenience sampling: This method is dependent on the ease of access to subjects. Researchers use it when there are budget and time constraints and some preliminary data must be collected. Since the survey design is not rigid, it is easier to pick respondents at random and have them take the survey or questionnaire.
Snowball sampling: Researchers apply this method when the subjects are difficult to trace. For example, it will be extremely challenging to survey shelterless people or illegal immigrants. In such cases, using the snowball theory, researchers can track a few categories to interview and derive results. Researchers also implement this sampling method in situations where the topic is highly sensitive and not openly discussed, for example, surveys to gather information about HIV/AIDS. Not many victims will readily respond to the questions; still, researchers can contact people they might know, or volunteers associated with the cause, to get in touch with the victims and collect information.
To choose between these approaches:
Identify the effective sampling techniques that might potentially achieve the research goals.
Test each of these methods and examine whether they help in achieving your goal.
Hypothesis: In probability sampling, there is an underlying hypothesis before the study begins, and the objective of this method is to prove the hypothesis. In non-probability sampling, the hypothesis is derived only after conducting the research study.
Data exploration techniques include both manual analysis and automated data
exploration software solutions that visually explore and identify relationships between
different data variables, the structure of the dataset, the presence of outliers, and the
distribution of data values in order to reveal patterns and points of interest, enabling data
analysts to gain greater insight into the raw data.
Data is often gathered in large, unstructured volumes from various sources and data
analysts must first understand and develop a comprehensive view of the data before
extracting relevant data for further analysis, such as univariate, bivariate, multivariate,
and principal components analysis.
Data Exploration Tools
Manual data exploration methods entail either writing scripts to analyze raw data or
manually filtering data into spreadsheets. Automated data exploration tools, such as data
visualization software, help data scientists easily monitor data sources and perform big
data exploration on otherwise overwhelmingly large datasets. Graphical displays of data,
such as bar charts and scatter plots, are valuable tools in visual data exploration.
A popular tool for manual data exploration is Microsoft Excel, which can be used to create basic charts for data exploration, to view raw data, and to identify the correlation between variables. To identify the correlation between two continuous variables in Excel, use the function CORREL() to return the correlation coefficient. To identify the
correlation between two categorical variables in Excel, the two-way table method, the
stacked column chart method, and the chi-square test are effective.
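Excel's CORREL() returns the Pearson correlation coefficient; the same statistic can be computed by hand in Python. The paired observations below are hypothetical:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, equivalent to Excel's CORREL()."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hours_studied = [1, 2, 3, 4, 5]
exam_score = [52, 55, 61, 70, 74]   # hypothetical paired observations
print(round(pearson(hours_studied, exam_score), 3))  # close to +1
```

Values near +1 or -1 indicate a strong linear relationship; values near 0 indicate little linear relationship.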
Humans process visual data better than numerical data; it is therefore extremely challenging for data scientists and data analysts to assign meaning to thousands of rows and columns of data points, and to communicate that meaning, without any visual components.
Data visualization in data exploration leverages familiar visual cues such as shapes,
dimensions, colors, lines, points, and angles so that data analysts can effectively visualize
and define the metadata, and then perform data cleansing. Performing the initial step of
data exploration enables data analysts to better understand and visually identify
anomalies and relationships that might otherwise go undetected.
Data preparation is often a lengthy undertaking for data professionals or business users,
but it is essential as a prerequisite to put data in context in order to turn it into insights
and eliminate bias resulting from poor data quality.
For example, the data preparation process usually includes standardizing data formats,
enriching source data, and/or removing outliers.
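One of those steps, removing outliers, can be sketched with a simple z-score filter; the 2-sigma threshold and the readings are illustrative assumptions, not a fixed rule:

```python
from statistics import mean, stdev

def remove_outliers(values, threshold=2.0):
    """Drop values more than `threshold` standard deviations from the mean."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) <= threshold * s]

readings = [10, 12, 11, 13, 12, 11, 250]   # 250 is an obvious outlier
print(remove_outliers(readings))
```

In practice the threshold (and whether a z-score, IQR, or domain rule is used) depends on the data and the analysis goal.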
Additionally, as data and data processes move to the cloud, data preparation moves with
it for even greater benefits, such as:
Superior scalability — Cloud data preparation can grow at the pace of the business. Enterprises don't have to worry about the underlying infrastructure or try to anticipate its evolution.
Accelerated data usage and collaboration — Doing data prep in the cloud
means it is always on, doesn’t require any technical installation, and lets
teams collaborate on the work for faster results.
You Will Know Your Target Customers Better: Data analysis tracks how well
your products and campaigns are performing within your target demographic.
Through data analysis, your business can get a better idea of your target
audience’s spending habits, disposable income, and most likely areas of
interest. This data helps businesses set prices, determine the length of ad campaigns, and even project the quantity of goods needed.
Reduce Operational Costs: Data analysis shows you which areas in your
business need more resources and money, and which areas are not producing
and thus should be scaled back or eliminated outright.
You Get More Accurate Data: If you want to make informed decisions, you need
data, but there’s more to it. The data in question must be accurate. Data analysis
helps businesses acquire relevant, accurate information, suitable for
developing future marketing strategies, business plans, and realigning the
company’s vision or mission.
Data Requirement Gathering: Ask yourself why you’re doing this analysis, what
type of data analysis you want to use, and what data you are planning on
analyzing.
Data Cleaning: Not all of the data you collect will be useful, so it’s time to clean
it up. This process is where you remove white spaces, duplicate records, and
basic errors. Data cleaning is mandatory before sending the information on for
analysis.
Data Analysis: Here is where you use data analysis software and other tools to
help you interpret and understand the data and arrive at conclusions. Data
analysis tools include Excel, Python, R, Looker, Rapid Miner, Chartio, Metabase,
Redash, and Microsoft Power BI.
Data Interpretation: Now that you have your results, you need to interpret
them and come up with the best courses of action, based on your findings.
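The data-cleaning step above (removing white spaces and duplicate records) can be sketched with the standard library; the records and field names are hypothetical:

```python
# Minimal data-cleaning sketch: strip stray whitespace and drop exact
# duplicate records before analysis.
raw = [
    {"name": "  Asha ", "city": "Pune "},
    {"name": "Asha",    "city": "Pune"},    # duplicate once trimmed
    {"name": "Ravi",    "city": " Delhi"},
]

def clean(records):
    seen, out = set(), []
    for rec in records:
        trimmed = {k: v.strip() for k, v in rec.items()}  # remove white space
        key = tuple(sorted(trimmed.items()))
        if key not in seen:                               # drop duplicate records
            seen.add(key)
            out.append(trimmed)
    return out

print(clean(raw))
```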
Data analysis, therefore, plays a key role in distilling this information into a more accurate and relevant form, making it easier for researchers to do their job.
Data analysis also provides researchers with a vast selection of different tools, such as
descriptive statistics, inferential analysis, and quantitative analysis.
So, to sum it up, data analysis offers researchers better data and better ways to analyze
and study said data.
Prescriptive Analysis: Mix all the insights gained from the other data analysis
types, and you have prescriptive analysis. Sometimes, an issue can’t be solved
solely with one analysis type, and instead requires multiple insights.
Text Analysis: Also called "text mining," text analysis uses databases and data mining tools to discover patterns residing in large datasets. It transforms raw data into useful business information. Text analysis is arguably the most straightforward and the most direct method of data analysis.
Displaying data in research is the last step of the research process. It is important to display
data accurately because it helps in presenting the findings of the research effectively to the
reader. The purpose of displaying data in research is to make the findings more visible and
make comparisons easy. When the researcher presents the research before the research committee, the committee will easily understand the findings from the displayed data. The readers of the research will also be able to understand it better. Without displayed data, the data looks too scattered and the reader cannot make inferences.
There are basically two ways to display data: tables and graphs. Tabulated data and graphical representations should both be used to give a more accurate picture of the research. In quantitative research it is very necessary to display data; in qualitative research, on the other hand, the researcher decides whether there is a need to display data or not. The researcher can use appropriate software to help tabulate and display the data in the form of graphs. Microsoft Excel is one such example: it is a user-friendly program that can help display the data.
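Even without graphing software, tabulation and a simple display can be sketched in a few lines of Python; the survey responses are hypothetical:

```python
from collections import Counter

# Tabulate hypothetical survey responses into a frequency table and
# display each count as a simple text bar.
responses = ["satisfied", "neutral", "satisfied", "dissatisfied",
             "satisfied", "neutral"]

counts = Counter(responses)
for category, n in counts.most_common():
    print(f"{category:<14} {n:>3}  {'*' * n}")
```

The same frequency table, exported to Excel, becomes the source for a bar chart or pie chart.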