

HANDBOOK
OF
MEDIA MANAGEMENT
AND ECONOMICS

Editor

Alan B. Albarran
University of North Texas

Co-Editors

Sylvia M. Chan-Olmsted
University of Florida
Michael O. Wirth
University of Denver

LAWRENCE ERLBAUM ASSOCIATES, PUBLISHERS
2006 Mahwah, New Jersey  London
CHAPTER 23

Quantitative Methods in Media Management and Economics

Randal A. Beam
Indiana University

What factors motivate consumers to subscribe to pay-per-view television? What is the relationship between a newspaper's subscription price and its circulation? What are the characteristics of radio stations that rely on satellite-delivered programming? Have horizontal mergers led to concentration in the book-publishing industry? What factors influence the commercial success of films?
During the past 15 years, scholars have used quantitative research methods to answer these and many other research questions about media management and media economics (Borrell, 1997; Greco, 1999; LaRose & Atkin, 1991; Lewis, 1995; Litman & Kohl, 1989). Indeed, quantitative methods appear to be the most common approach used for research in these fields. Almost 60% of the articles published in the Journal of Media Economics (JME) and the International Journal on Media Management (JMM) have been based in whole or in part on quantitative research.¹ The approaches taken in these articles varied considerably. Some researchers collected the data themselves through experiments, content analyses, or surveys. Others relied on commercial or institutional information, such as Nielsen television ratings or government economic statistics. Still others used both. Research questions focused on television, newspapers, books, movies, radio, telecommunications, the Internet, media concentration, economic theory, advertising, and dozens of other topics. For some studies, results were presented as simple tables of averages or percentages. For others, the findings were the product of sophisticated economic models or complicated regression analyses.

¹An analysis conducted for this chapter and the chapter about qualitative methods found that about 46% of the articles used primarily quantitative methods and another 12% used a mixture of quantitative and qualitative methods.


These differences, although considerable, should not mask important similarities among this research. In turning to quantitative methods, all the authors of these studies were embracing a particular philosophy about how to understand the world. This philosophy assumes that researchers can systematically observe and measure social phenomena, and that what they discover can be replicated by others following similar procedures. In other words, these authors shared a similar philosophy about social science.
This chapter is an introduction to quantitative approaches to research on media management and media economics. It will provide an overview of the quantitative techniques used most widely in these fields, will present the kinds of research questions for which these techniques are appropriate, will define key concepts and principles associated with these techniques, and will offer examples that demonstrate how these techniques have been applied in research. The chapter begins by discussing briefly the basic assumptions that underlie quantitative research methods. It then proceeds to an overview of concepts, principles, and data-collection methods used in quantitative research. It concludes with a report on research trends based on the analysis of more than 300 articles published since 1988 in JME and JMM, the two leading journals on media management and media economics.² The goals of the chapter are to provide a general understanding of quantitative approaches to research and to demonstrate how those approaches have been applied in recent research on media management and media economics.

EXAMINING ASSUMPTIONS ABOUT QUANTITATIVE RESEARCH

In an introduction to Sociological Paradigms and Organisational Analysis, Burrell and Morgan (1979) point out that all social scientists either explicitly or implicitly embrace certain assumptions about the nature of the social world and the means by which it can be investigated. These assumptions, which guide research, are related to the issues of ontology, epistemology, human nature, and methodology.
Assumptions about ontology come first. They speak to beliefs about the essence of the phenomena under investigation.³ At one extreme, ontologically speaking, is the nominalist. Nominalists assume that the social world is inherently the creation of individuals' cognitions and envision a social world "made up of nothing more than names, concepts and labels which are used to structure reality" (Burrell & Morgan, 1979, p. 4). Pure nominalists believe that social reality is constructed by individuals and does not exist outside of individual consciousness. Alternately, realists subscribe to a belief in a "real" social world that is as concrete as the natural world. For realists, the social world existed long before they were born and will continue to exist long after they are gone.
Epistemology is concerned with assumptions about ways in which social scientists acquire knowledge. Positivist epistemologies emulate the approaches taken in the natural sciences.

²Although other journals of economics, management, and mass communication publish studies on media management and media economics, the Journal of Media Economics and the International Journal on Media Management are the two oldest journals devoted exclusively to research on these topics.
³This discussion is based on Burrell and Morgan (1979, pp. 1-37).
Positivists embrace the belief that knowledge is real and objective; is capable of being acquired and exchanged with others; and is built gradually through a long, cumulative process of inquiry. Positivists subscribe to the role of a neutral observer, and they believe that they create knowledge by offering and testing hypotheses about the social world in a search for underlying regularities or causal relationships. Antipositivists think it's foolish to engage in a search for "objective" social knowledge because objective knowledge doesn't exist. To antipositivists, knowledge, by its very nature, is subjective. They dismiss the idea that a social scientist can ever be a neutral observer and assert that the search for knowledge is fundamentally an individualistic pursuit. Antipositivists "understand from the inside rather than outside" (Burrell & Morgan, 1979, p. 5).
Assumptions about human nature speak to the presumed relationship between humans and their environment. At one extreme are those who adopt a deterministic perspective. Pure determinism assumes that humans are the products of their environment and that their actions are dictated by the social circumstances in which they find themselves. At the other extreme, a voluntaristic view of human nature argues that humans are creatures of free will whose activities are largely unaffected - at least they are not determined - by the social world in which they exist.
In describing key assumptions about ontology, epistemology, and human nature, Burrell and Morgan have staked out the ends of three continua that are associated with the subjectivist and objectivist approaches to social science (see Fig. 23.1). Taken together, these assumptions about ontology, epistemology, and human nature influence the choices that scientists make for gathering information about the social world - that is, they influence their methodology. Social scientists who embrace a subjectivist approach would be inclined toward ideographic methods. Those methods emphasize obtaining firsthand knowledge of a subject through close, detailed, comprehensive investigation. Social scientists who embrace an objectivist approach would be inclined toward nomothetic methods, which emulate the research processes followed in the natural sciences. It is the nomothetic method that is associated with quantitative techniques for collecting and analyzing data.

    Subjectivist approach                      Objectivist approach
    Nominalism       <--  Ontology       -->  Realism
    Anti-positivism  <--  Epistemology   -->  Positivism
    Voluntarism      <--  Human nature   -->  Determinism
    Ideographic      <--  Methodology    -->  Nomothetic

FIG. 23.1. A scheme for analyzing assumptions about social science. Adapted from Burrell and Morgan (1979, p. 3).

In choosing to use quantitative methods, then, researchers are embracing - either knowingly or naively - a set of fundamental assumptions about the social world and
the ways one learns about it. Researchers using quantitative methods are asserting that there's an enduring, tangible social world out there that can be studied effectively by following systematic processes for gathering and analyzing information. Further, the researchers' purpose typically is to look for regularities or causal relationships within that social world, with the ultimate goal being to predict or explain social phenomena.

Quantitative Versus Qualitative Research: Two Examples

Though quantitative research methods comprise a powerful arsenal of data collection and analysis techniques, they aren't appropriate for every scholar or for every research question. The decision about whether to use quantitative methods begins at a philosophical level with the ontological and epistemological assumptions that researchers are willing to accept. Beyond that, the researchers confront other choices that will help determine which method will be most useful in examining social phenomena.
One important choice relates to whether the researchers want to generalize their findings beyond the entities - the firms, the nations, the people, the articles, the advertisements - that they examine. Typically, researchers are forced to decide whether it's more critical to gain a rich, detailed, textured understanding about a few things or to draw broader, more generalizable conclusions about a relatively large number of things. A comparison of two recent research projects about the television industry illustrates this trade-off between depth and breadth.
Both projects had as a goal trying to better understand strategic competition within the cable television industry. Shrikhande's (2001) study of CNNI and BBC World examined how these two all-news channels sought to establish a presence in Asia. She conducted a qualitative case study in which she collected data by examining news articles and industry studies about those channels. She also interviewed key officials of both organizations, obtaining information that no other researcher had collected before. Ultimately, Shrikhande was able to offer a detailed, textured account of how these channels developed the competitive strategies that they used to start telecasts in Asia. Presumably Shrikhande hoped her study would offer insights into the competitive strategies of TV companies seeking to do business outside their home country. But it's impossible to know how idiosyncratic the experiences of CNNI and BBC World might have been. If Shrikhande could have examined 50 companies using the same method - a daunting task given the amount of detailed information that she collected - it might have enhanced her ability to draw generalizations about strategies that companies follow in situations like this. But the time and cost of such an undertaking would have been prohibitive, and the complexity of the findings would have been almost too overwhelming to present. Shrikhande chose instead to learn a lot about two cable channels. She chose depth over breadth.
Chan-Olmsted and Li (2002) also conducted a study of strategic competition within the cable industry, though they used quantitative techniques. They wanted to know how the different strategies of video programmers related to performance. Their data were obtained primarily from an analyst's report on the cable industry. That report included numerical measurements of a dozen or so characteristics - variables - that Chan-Olmsted and Li believed would be relevant to their research questions. Those quantitative variables included such things as organizational size, product-pricing practices, and operating
efficiency. The analyst's report supplied this information for 59 cable channels, though the absence of complete data for 14 of those channels ultimately caused them to be removed from the study. The Chan-Olmsted and Li findings were based on cluster analysis and analysis of variance, two techniques used to analyze quantitative data. Their findings were, in effect, statistics that let them detect patterns within the cable television industry and evaluate relationships among their variables. These statistical findings didn't provide the detail and nuance that Shrikhande offered in her qualitative analysis. But the Chan-Olmsted and Li results were based on information from 45 cable channels, not two. Though they didn't have extensive information about the channels in their study, the size of their data set enhanced their ability to draw generalizations about the relationships among the variables that they studied. They chose breadth over depth.
These examples illustrate appropriate uses of qualitative and quantitative research methods. Qualitative methods were an appropriate choice for Shrikhande because the goal of her research was to see what could be learned from a detailed, comprehensive examination of the strategies of two organizations. Quantitative methods worked well for Chan-Olmsted and Li because their goal was to search for relationships between strategic choices and performance. Their intent was to be able to make statements about relationships among key variables related to strategy and performance and to be able to argue that those relationships applied widely to organizations like the ones that they had studied.

CONCEPTS AND PRINCIPLES IN QUANTITATIVE RESEARCH

In conducting their study on strategic choice and performance, Chan-Olmsted and Li followed a widely accepted process for collecting and analyzing quantitative data. Understanding quantitative research is very much about understanding the concepts and principles that guide that research process. This section gives an overview of those concepts and principles and provides examples to illustrate them.

Concepts and Variables

The building blocks of quantitative research are concepts and variables. Concepts and variables are the things that researchers study. Though scholars differ on the precise meanings of those terms, they generally concur that concepts are abstract and variables are concrete. Chaffee (1991, p. 1) calls concepts words or labels that represent things that people observe or imagine. They are abstractions formed by generalizing from particulars (Kerlinger, 1986, p. 26). "Job satisfaction" and "personal income" are examples of concepts found in research on media management and economics. One cannot see, feel, or touch "job satisfaction" or, absent a pile of cash in a sack, "personal income." But researchers have found ways to define and measure both of those concepts by creating variables that are concrete indicators of them.
The process of moving from a label (i.e., job satisfaction) to a clear definition of a concept and then to a system for measuring that concept is called explication.⁴

⁴For a discussion of the explication process, see Chaffee (1991).
Concept explication begins with the preliminary identification of or labeling of a state or process that researchers believe might be useful in their work. It then moves toward the statement of a conceptual definition that provides a precise and meaningful verbal description of the concept. The process ends with the creation and evaluation of an empirical definition, which specifies the way that the concept will be measured in a research study.
"Job satisfaction" is an abstract label that has an intuitive meaning to those who are interested in the management of media organizations. But what exactly is it, and how can it be measured in an employee? Those are questions that concept explication seeks to answer. Those who have done research on job satisfaction have puzzled over exactly how to define the concept, and they haven't always agreed on the best way to do so. But deciding on a useful conceptual definition is essential to conducting quantitative research. One of those who tackled this problem was Kalleberg, who defined job satisfaction conceptually as "an overall affective orientation on the part of individuals toward work roles which they are presently occupying" (1977). Kalleberg's definition is still abstract. It is not yet a system for measuring the level of job satisfaction in a worker, but it is more specific than the broad label that he began with, and it provides the foundation for developing one or more concrete indicators of the concept.
Concepts are not always inherently variable. For example, "capitalism" and "democracy" are concepts, but they are not variable. But to be useful in quantitative research, concepts must be transformed into variables. Variables are concrete representations of concepts to which numerical values can be attached. Those numerical values vary across the entities being studied, hence the name variable. In quantitative research, concepts are measured via one or more variables. For example, to assess an individual's level of job satisfaction, a researcher might ask a question like this: "Overall, how much pleasure do you get from the work that you currently do? Would you say you get a lot of pleasure, some pleasure, a little pleasure, or no pleasure at all?" Numerical values can be attached to each possible response, with a "4" indicating that an individual gets a lot of pleasure from work and a "1" indicating that the individual gets no pleasure at all. Presumably the response to the "how much pleasure" question varies from individual to individual because some get more satisfaction from their work than others. The variable, then, is the set of responses that all individuals have given when asked the question, "Overall, how much pleasure do you get from the work that you currently do?" It is the concrete indicator for the abstract concept of job satisfaction.
The rules of the research game don't require that a single variable be used to measure a single concept. In fact, for a multifaceted concept such as job satisfaction it's better to use a group of related variables that can tap into the different aspects of the concept. Researchers studying job satisfaction might ask a series of questions about attitudes toward pay, fringe benefits, working conditions, feedback from supervisors, and so forth. Together, the entire set of responses to those questions - the set of variables - is used to assess the degree of satisfaction that an individual has with a job.
The operational definition of job satisfaction will spell out exactly how those individual variables are to be used, collectively, to measure job satisfaction. Two common approaches are to create a composite scale or an index. A composite scale is a measure composed of several variables that, taken as a group, have a logical structure (Babbie, 1992, p. G7). Those variables produce an ordinal measurement of the concept. A Guttman scale
is one type of composite scale (Babbie, 1992, pp. 183-186). A Guttman scale to assess the profit orientation of media companies might include these four true/false statements:

1. At Company X, the management seeks to operate profitably over a period of 3 fiscal years.
2. At Company X, the management seeks to operate profitably over a period of 1 fiscal year.
3. At Company X, the management seeks to operate profitably during every fiscal quarter.
4. At Company X, the management seeks to operate profitably during every monthly accounting period.

In creating a Guttman scale, the assumption would be that if Statement 4 were a truthful description of the profit orientation of Company X, Statements 1 through 3 would be true as well. Further, it would be assumed that Company X would have a stronger profit orientation than Company Y, for which only Statements 1 and 2 were true.
The simple summated index is another kind of composite measure that also combines two or more variables in an effort to provide a better indicator of a concept. In a simple additive index, however, the variables have no logical structure to them and all are of equal importance as indicators of a concept (Babbie, 1992, p. G4). For example, a researcher wanting to measure the profit orientation of daily newspaper firms might ask managers to respond to these four true/false items:

1. Your newspaper firm has been profitable in each of the last four fiscal quarters.
2. For the last fiscal year, the profit margin at your newspaper firm was higher than the average margin for newspapers owned by publicly held corporations.
3. Within the last fiscal year, your newspaper firm has reduced staff size in an effort to meet its profit goals.
4. At your newspaper firm, the profit goal for this fiscal year is higher than the profit goal for last fiscal year.

In creating a simple additive index, the researcher might assign a value of "1" for each "true" answer and a value of "0" for each false answer. The values would be added, and the total for each newspaper firm would become its value on the index - a new composite variable created from the four original variables. The index value would be the indicator of the strength of a newspaper firm's profit orientation. The Guttman scale and a simple summated index are two popular options for creating composite variables, but there are others. For a more complete discussion on scales or indices, consult a text on research methods or on scale and index construction (Babbie, 1992; Wimmer & Dominick, 2003).
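To make the mechanics of a simple additive index concrete, the short Python sketch below scores the four true/false newspaper items into a composite profit-orientation index. It is a minimal illustration under invented assumptions; the firms, responses, and item names are hypothetical and are not drawn from any study cited in this chapter.

    import pandas as pd

    # Hypothetical true/false responses from three newspaper firms to the four items above.
    responses = pd.DataFrame(
        {
            "item1_profitable_quarters": [True, True, False],
            "item2_margin_above_average": [True, False, False],
            "item3_cut_staff_for_profit": [False, True, False],
            "item4_higher_profit_goal": [True, True, False],
        },
        index=["Firm A", "Firm B", "Firm C"],
    )

    # Assign 1 to each "true" answer and 0 to each "false" answer, then sum across items.
    # The row total (0-4) becomes the firm's value on the composite profit-orientation index.
    profit_orientation_index = responses.astype(int).sum(axis=1)
    print(profit_orientation_index)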
In everyday discussion, researchers tend to move back and forth loosely between the terms concept and variable. Often, they don't bother distinguishing between the terms if the concept is relatively concrete. The concept of personal income, for example, can be defined as "the financial compensation received by persons from participation in production" (U.S. Bureau of Economic Analysis, 2004). In the United States, personal income is almost certainly measured as the number of dollars that someone is paid over
a specific period of time for his or her work. Though "personal income" is the concept and the "dollars earned per year for work" is the variable, it's likely that researchers will simply refer to personal income as a variable because it is a relatively concrete concept.

Unit of Analysis, Level of Measurement

The final stages of explication have as their goal producing an appropriate, reliable, and valid way of measuring a concept for use in quantitative data analysis. The composite measures discussed in the previous section are examples of two approaches to creating variables that become measures or indicators of a concept. On the path toward that goal, researchers must confront other important issues related to measurement. One of the first is to determine the unit of analysis for a study. The unit of analysis is the thing - the individual, the collectivity, the object, the event - being studied and about which data are being collected (Babbie, 1992, p. G8). Ultimately, many if not all of the variables in a data set will be characteristics of the unit of analysis. In research on media management and economics, typical units of analysis are the individual, the firm, the market, the industry, the nation, the household, the article, the television program, or the film. Occasionally, an event such as a merger, a transaction, or a complaint is used as a unit of analysis.
Table 23.1 shows how often different units of analysis were used in the primarily quantitative studies in The Journal of Media Economics and The International Journal on

TABLE 23.1
Percentages of Units of Analysis

Unit of Analysis                     Percent
Firm                                 37
Market                               18
Individual                           12
Industry                             9
TV program                           7
Household                            3
Print article                        2
Print publication                    2
Nation                               1
Organizational collectivity, pair    1
Year                                 1
Movie, film                          1
Other, not categorized               4

Note: Figures apply to articles that used primarily quantitative methods. Figures do not add to 100 because of rounding. (N = 150.)
Media Management. The firm was the most common unit of analysis. Typical variables used to characterize firms include such things as the number of individuals the firm employs, the firm's ownership structure, the number of years it has been in business, the firm's revenues, and the firm's profit margins. A study by Demers (1996) on ownership structure and profit goals is one example of research that used the firm as a unit of analysis. In that study, Demers collected information on the characteristics of 223 daily newspapers. Those characteristics included the degree to which the newspaper firm exhibited traits of a corporate form of organization and the degree to which the newspaper firm emphasized profits as its most important organizational value. Demers found that the more a newspaper firm exhibited traits of corporate organization, the less it emphasized profits as its central goal. His variables tracked the characteristics of firms, and his findings drew generalizations about firms - about his unit of analysis. It's worth noting the unit of analysis is not always the same as the entity from which data are collected. Demers gathered information from people - from 409 employees at the newspapers in his study. At some newspapers, he obtained data from more than one person. But he wanted to generalize about the 223 newspapers in his study, not about the 409 individuals working at those papers. In cases where he obtained information from more than one person at a paper, he combined their responses so that on any given variable (degree of corporate organization, emphasis on profits, organization size, and so forth) he had only one value for each unit of analysis - for each newspaper firm.
A second important consideration in measurement is the level at which a variable is measured. Four levels of measurement exist - the nominal level, ordinal level, interval level, and ratio level. In some cases, the level of measurement is determined by an inherent characteristic of the concept being studied (Wimmer & Dominick, 2003, pp. 50-52). In other cases, the level of measurement is determined by choices that the researcher makes during concept explication.
The lowest level of measurement is the nominal level. At the nominal level, the conditions or responses for a variable have no inherent ordinal ranking. Gender is a nominal variable, with the conditions of the variable being "male" and "female." In most quantitative research studies neither gender condition is considered inherently higher than the other. In research on media management and media economics, common nominal variables would be type of media firm (newspaper, television, magazine), type of television program (news, entertainment, sports), or type of ownership structure (public, quasi-public, private).
The other three levels of measurement are associated with an ordinal ranking - with responses or conditions that can be ordered from low to high. The most rudimentary of those levels of measurement is the ordinal level. For ordinal-level variables, responses or conditions of a variable can be ranked from low to high, but the differences between those responses or conditions are not uniform. "Job satisfaction" is an example of a variable measured at the ordinal level. To assess job satisfaction among U.S. journalists, Weaver, Beam, Brownlee, Voakes, and Wilhoit (2003) surveyed 1,500 news workers, asking this question: "Overall, how satisfied are you with your current job? Would you say you are very satisfied, fairly satisfied, somewhat dissatisfied, or very dissatisfied?" Clearly, the journalists who replied "fairly satisfied" rated their job more favorably than the journalists who replied "somewhat dissatisfied," and the journalists who said "somewhat dissatisfied"
rated their job more favorably than the journalists who said "very dissatisfied." But nothing can be known about the distance between each of those responses. It's not known whether the individual who was "very satisfied" was three times, five times, or 20 times happier than the individual who was "fairly" satisfied. All that's known is that some responses represented a higher condition of satisfaction than others. Variables measured at the nominal and ordinal levels are examples of discrete variables. Discrete variables are those that take on a finite set of values that cannot be meaningfully broken into smaller, equal categories (Wimmer & Dominick, 2003, p. 461).
Variables measured at the interval level solve the distance problem that plagues ordinal variables. At the interval level, the distances between response categories are equal. Research methods texts invariably cite temperature as the classic interval-level variable. The distance between 32 degrees Fahrenheit and 33 degrees is the same as between 55 and 56 degrees or between 100 and 101 degrees. In research on media management and media economics, many variables meet the criterion of equidistance between response categories. The Dow Jones Industrial Average for stocks is an interval-level variable. At any given time, any 100-point difference in the Dow average is equivalent to any other 100-point difference.
The highest level of measurement is the ratio level. The properties of interval-level and ratio-level measures are the same with this exception: Ratio-level measures have a true zero point. Again, many ratio-level variables are used in research on media management and economics. Revenues, profits, marginal costs, and sales are all examples of variables for which a true zero point could (and sometimes does) exist. Variables measured at the interval and ratio levels are examples of continuous variables (Kerlinger, 1986, pp. 35-36). For a continuous variable, response values can be broken into increasingly smaller, equal categories and still have meaning. Time is a continuous variable. Hours can be broken into minutes, minutes into seconds, seconds into fractions of seconds.
The level of measurement of variables has both substantive and practical implications for researchers. Variables must be both reliable and valid indicators of a concept to be useful in quantitative research. Attempting to measure a variable at an inappropriate level threatens both validity and reliability, two terms discussed more fully later. In most instances, for example, it would be inappropriate to try to treat race, gender, or nation as an ordinal-, interval-, or ratio-level variable. That would be inconsistent with the inherent nature of the concept. Similarly, researchers should use caution in imposing interval- or ratio-level measures on concepts that don't naturally lend themselves to that kind of treatment. It might be possible, for example, to ask an individual to rate his level of job satisfaction on a scale of zero to 100, with zero indicating "no satisfaction" and 100 indicating "maximum satisfaction." Though that produces a ratio-like set of responses, the researcher still doesn't know whether the distance between 10 and 20 is truly equivalent to the distance between 70 and 80 on this "yardstick" for assessing job satisfaction. Still, there are practical benefits to using the highest level of measurement possible. Many of the most powerful statistical techniques assume that data have been collected at the interval or ratio levels. Often researchers fudge on these assumptions and treat ordinal-level data as if it were interval-level data. Statisticians disagree about whether this is a serious deviation from rigorous research practice (Wimmer & Dominick, 2003, p. 52).
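To show how level of measurement plays out in practice, the Python sketch below stores a nominal variable as an unordered category and an ordinal variable as an ordered category, reserving arithmetic for a ratio-level measure. It is a minimal illustration with invented data; the variable names are hypothetical and are not taken from any study discussed in this chapter.

    import pandas as pd

    stations = pd.DataFrame({
        # Nominal: categories with no inherent order; only counts and percentages are meaningful.
        "firm_type": pd.Categorical(["newspaper", "television", "radio", "television"]),
        # Ordinal: ordered categories; ranking is meaningful, but distances between levels are not.
        "satisfaction": pd.Categorical(
            ["very dissatisfied", "fairly satisfied", "very satisfied", "fairly satisfied"],
            categories=["very dissatisfied", "somewhat dissatisfied",
                        "fairly satisfied", "very satisfied"],
            ordered=True,
        ),
        # Ratio: a true zero point, so means, ratios, and other arithmetic are meaningful.
        "annual_revenue": [12.5, 80.0, 3.2, 55.4],   # millions of dollars
    })

    print(stations["firm_type"].value_counts())   # appropriate summary for nominal data
    print(stations["satisfaction"].min())         # ordering is defined for ordinal data
    print(stations["annual_revenue"].mean())      # arithmetic is defined for ratio data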
Reliability and Validity

Discussions about reliability and validity center around the adequacy of a system used to measure a concept (Babbie, 1992, pp. 127-135; Bryman, 2001; Stamm, 2003, pp. 134-140; Wimmer & Dominick, 2003, pp. 56-60). Reliability is perhaps the easier term to understand. It refers to the ability of a system of measurement to consistently produce the same result for the same phenomenon every time the system is used. For example, most households have meters that measure water use. If a meter is reliable, it will register the same usage each time an identical quantity of water passes through it. If the meter said that filling a bathtub took 30 gallons of water this morning, a reliable meter would also say that it took 30 gallons to fill the tub tomorrow morning and 30 gallons the morning after. An unreliable meter would be less consistent. This morning it might report 30 gallons, tomorrow 28 gallons and perhaps 33 gallons the day after.
A reliable water meter is not necessarily an accurate water meter, however. Suppose that by laboriously carting water in a certified 5-gallon can, a homeowner determined that it actually took 35 gallons to fill the bathtub. Though the meter has proved to be a reliable system for measuring water - day in and day out, it registered 30 gallons - it has not provided a valid measure of water use. Validity speaks to the "truthfulness" of the measuring system. Is what's being measured actually what the researcher thinks is being measured? Recall one of the systems - one of the questions - used to measure job satisfaction: "Overall, how much pleasure do you get from the work that you currently do? Would you say you get a lot of pleasure, some pleasure, a little pleasure, or no pleasure at all?" When confronting the issue of validity, the researcher must ask himself: Does this question truly measure job satisfaction? Or is it tapping into something else?
Quantitative research provides statistical tools for helping assess the reliability of systems of measurement. Validity is another story. Though research methods books outline several strategies for assessing validity, whether a system of measurement is a valid indicator of a concept is a judgment call. The researcher must be able to make a case for the validity of her approach to measuring a concept.
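One statistical tool often used for this purpose - offered here as an illustration, since the chapter does not single out any particular statistic - is an internal-consistency estimate such as Cronbach's alpha, which asks how consistently a set of related items measures the same underlying concept. The Python sketch below computes alpha by hand for a small, invented battery of job-satisfaction items; treat it as a sketch under those assumptions, not as the procedure used in the studies cited in this chapter.

    import pandas as pd

    # Hypothetical responses from six employees to four job-satisfaction items (1-4 scale).
    items = pd.DataFrame({
        "pay":        [3, 4, 2, 4, 3, 1],
        "benefits":   [3, 4, 2, 3, 3, 2],
        "conditions": [2, 4, 1, 4, 3, 1],
        "feedback":   [3, 3, 2, 4, 2, 1],
    })

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the summed scale).
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
    print(round(alpha, 2))  # values near 1 suggest the items hang together consistently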
Description, Prediction, and Explanation

The fundamental goals of most quantitative research in media management and media economics can be characterized as efforts to describe, to predict, or to explain. Though description is sometimes considered the most rudimentary goal of research, providing data that accurately describe something can be of enormous value.⁵ Each day, for example, Nielsen Media Research describes the size and composition of audiences for television programs. Nielsen's descriptions strongly influence advertising rates in the multi-billion-dollar commercial television business.

⁵Descriptive research should not be confused with descriptive statistics, though the product of quantitative descriptive research typically is presented using descriptive statistics. Descriptive statistics characterize or summarize observations from a sample, either for a single variable or for a relationship involving two or more variables. Descriptive statistics can be contrasted with inferential statistics, which are used to draw inferences about a population based on a sample. For a fuller explanation, see Babbie (1992, pp. 432 and G3).
Descriptive studies often provide the foundation for research that seeks to predict or explain. Whereas descriptive research focuses on determining the characteristics of an entity under study, research that has prediction or explanation as its goal examines relationships among concepts or variables. In the early 1990s, the U.S. Federal Communications Commission changed the way that it regulated telecommunication services. Uri (2003) wanted to know whether this change was associated with differences in service quality within the telecommunications industry. A study was conducted to examine the relationship between the two concepts. One of the concepts was the "regulation system" imposed on the telecommunication industry, which as a variable had two conditions - a rate-of-return system and an incentive system. The second concept was "service quality." Four variables were designated as indicators of service quality. Uri proposed that the relationship between the system of regulation and service quality was causal - that is, that a change in the system of regulation from rate-of-return to incentive caused a change in the indicators of service quality. In causal relationships, variables that determine or influence a phenomenon are called independent variables or predictor variables. Variables that are affected or influenced by an independent or predictor variable are called dependent variables. Uri's study demonstrated that there was, indeed, a relationship between the system of regulation and service quality. When the system changed from rate-of-return to incentive, service quality declined. Uri also concluded that this relationship was causal - that the change in the regulatory system was responsible for the decline in service quality, not just associated with it. Uri's research could be considered an attempt to predict or explain in this sense: When the FCC changes its system of regulation from rate-of-return to incentive, it's possible to "predict" what will happen to service quality, other things being equal. Another way to think about that relationship is that under certain conditions, declines in service quality can be "explained" by changes in FCC regulations.

Univariate, Bivariate, and Multivariate Statistics

In quantitative research, conclusions such as those that Uri reached are dependent on statistical analyses of data. Dozens of statistical techniques are available to help researchers make sense of quantitative data. In choosing from among these techniques, researchers should consider the characteristics of the data collected, the goals of the research, and their understanding of the techniques. The last consideration is sometimes overlooked. If research is done with the goal of improving the understanding of a phenomenon, it's important that researchers work with tools that they understand how to use properly.
The techniques used for analysis of quantitative data can be grouped into three broad categories - univariate analysis, bivariate analysis, and multivariate analysis (Babbie, 1992, pp. 389-408). Univariate analysis focuses on the examination of the distribution of answers for a single variable. Descriptive research relies heavily on univariate analysis. Results often are presented in the form of frequencies or percentages. Table 23.1 showed the results of a univariate analysis in which the findings were presented as percentages. In addition to frequencies and percentages, univariate analysis also produces measures of central tendency (mean, median, and mode), measures of dispersion (range and standard deviation),
and measures of distribution (skewness, kurtosis). For continuous variables such as age or income, measures of central tendency, dispersion, and distribution often are the most meaningful descriptive statistics. For discrete variables, frequencies and percentages typically are most appropriate.
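As a small illustration of univariate analysis, the Python sketch below produces a percentage distribution for a discrete variable and measures of central tendency and dispersion for a continuous one. The data are invented for the example.

    import pandas as pd

    journalists = pd.DataFrame({
        "satisfaction": ["very satisfied", "fairly satisfied", "fairly satisfied",
                         "somewhat dissatisfied", "very satisfied", "fairly satisfied"],
        "age": [29, 41, 35, 52, 47, 38],
    })

    # Discrete variable: frequencies and percentages are the most meaningful summaries.
    print(journalists["satisfaction"].value_counts(normalize=True) * 100)

    # Continuous variable: central tendency and dispersion summarize the distribution.
    print(journalists["age"].mean(), journalists["age"].median(), journalists["age"].std())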
In bivariate analyses, researchers examine the relationship between two variables. Contingency tables - sometimes called crosstabs - are a classic example of bivariate analysis. Table 23.2 is a contingency table from Weaver et al.'s (2003) study of U.S. journalists. It expresses the level of job satisfaction by gender, showing the percentages of men and of women for each condition of job satisfaction. The percentages sum down the columns, to 100. Organizing the table this way helps the researcher see the relationship between job satisfaction and gender. Casual inspection of the table suggests that although the majority of both men and women are satisfied with their jobs, satisfaction tends to be somewhat higher among men than women.

TABLE 23.2
Job Satisfaction by Gender

Job Satisfaction         Men (N = 768)    Women (N = 379)
Very satisfied           34.1%            31.7%
Fairly satisfied         52.5             47.0
Somewhat dissatisfied    11.8             19.5
Very dissatisfied        1.6              1.8

Note: From Weaver, Beam, Brownlee, Voakes, and Wilhoit (2003).

Another common bivariate analysis is to compute a correlation coefficient, which assesses the strength of the association between two variables. Correlation coefficients can be computed between variables at all levels of measurement, with different coefficients appropriate for variables at different levels of measurement. One of the most commonly used correlation coefficients, the Pearson product-moment coefficient, is appropriate for two variables measured at the interval or ratio levels. This coefficient ranges from -1.0 to +1.0. The sign of the coefficient (plus or minus) describes the nature of the relationship between two variables. A positive value for the coefficient indicates a positive association between the variables - as the value of one variable rises, so does the value of the other variable. A negative value indicates an inverse relationship - as the value of one variable increases, the value of the other variable declines. The magnitude of the coefficient indicates the strength of the relationship between two variables, with values of +1 and -1 indicating the strongest magnitudes. In their national study of U.S. journalists (Weaver et al., 2003), a Pearson coefficient was computed for the ages of the journalists and the number of years of professional experience that they had. The coefficient was .86, indicating a strong positive association between those variables. As the age of the journalist increased, the number of years of professional experience increased, too, just as one might expect.
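The Python sketch below shows both kinds of bivariate analysis: a column-percentage crosstab in the style of Table 23.2 and a Pearson product-moment coefficient. The small data set is invented and does not reproduce the Weaver et al. figures; scipy is assumed to be available.

    import pandas as pd
    from scipy.stats import pearsonr

    journalists = pd.DataFrame({
        "gender": ["male", "male", "female", "female", "male", "female"],
        "satisfaction": ["very satisfied", "fairly satisfied", "fairly satisfied",
                         "somewhat dissatisfied", "fairly satisfied", "very satisfied"],
        "age": [29, 41, 35, 52, 47, 38],
        "years_experience": [5, 18, 10, 30, 24, 14],
    })

    # Contingency table with percentages that sum down the columns, as in Table 23.2.
    crosstab = pd.crosstab(journalists["satisfaction"], journalists["gender"],
                           normalize="columns") * 100
    print(crosstab.round(1))

    # Pearson product-moment correlation between two ratio-level variables.
    r, p_value = pearsonr(journalists["age"], journalists["years_experience"])
    print(round(r, 2))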
As was the case with univariate analyses, the frequencies and percentages in contingency tables and the Pearson correlation coefficient are examples of descriptive statistics. In these examples, however, the statistics are used to describe a relationship between two variables rather than a distribution of cases for a single variable. Under some circumstances, inferential statistics also can be used in bivariate analysis. Inferential statistics allow a researcher to draw conclusions about a population of interest based on the characteristics of a sample (Babbie, 1992, pp. 447-456). For example, inferential statistics can help determine how likely it is that the difference found in the sample between men's and women's job satisfaction will also be found in the entire population of U.S. journalists. The contingency-table and correlational analyses are only two of several inferential techniques appropriate for bivariate analysis. Other frequently used techniques include tests of differences between means, one-way analysis of variance, and simple (two-variable) regression analysis.
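To illustrate the inferential step, the sketch below applies a chi-square test of independence to a gender-by-satisfaction table; the test, chosen here for illustration rather than named in the chapter, asks how likely a sample difference of this size would be if men and women did not differ in the population. The counts are rough reconstructions from the percentages in Table 23.2, rounded for the example only.

    from scipy.stats import chi2_contingency

    # Approximate counts of journalists in each satisfaction-by-gender cell (men, women).
    observed = [
        [262, 120],  # very satisfied
        [403, 178],  # fairly satisfied
        [ 91,  74],  # somewhat dissatisfied
        [ 12,   7],  # very dissatisfied
    ]

    chi2, p_value, dof, expected = chi2_contingency(observed)
    # A small p-value suggests the sample difference between men and women is unlikely
    # to be the result of sampling error alone.
    print(round(chi2, 2), round(p_value, 4))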
The most powerful statistical techniques used in quantitative research on media management and economics involve the analysis, simultaneously, of more than two variables. Many of the most popular multivariate techniques for data analysis are extensions of those statistical tools used in bivariate analyses. Multivariate analysis is appropriate for the following circumstances:

• Situations in which it's necessary to control for one or more variables to get a true sense of the relationship between variables of interest. Controlling for a variable means removing its effect on a relationship of primary interest. For example, researchers studying gender and job satisfaction among U.S. journalists also might believe salary to be associated both with gender and with job satisfaction. To accurately understand the relationship between gender and job satisfaction it would be necessary to remove or to hold constant the influence of salary on this relationship of primary interest. Multivariate techniques allow the influence of a variable such as salary to be controlled.
• Situations in which the researcher wants to look at the impact of two or more independent variables on a dependent variable. Salary and gender are not the only factors that might influence a journalist's job satisfaction. Other influences might include age, the type of assignments the journalist is given, the degree of autonomy the journalist has in her work, and so forth. Multiple-regression analysis, analysis of variance, and analysis of covariance are multivariate techniques that allow a researcher to better understand how a group of independent variables affects a dependent variable. (A short regression sketch illustrating this situation appears after this list.)
• Situations in which the researcher wants to look at changes across time. A cousin of multiple regression, time-series analysis is appropriate for analyzing data collected at multiple time periods for the same variable. A study involving changes in advertising expenditures over many years would be a candidate for time-series analysis.
• Situations in which a researcher would like to use a set of variables to predict membership in a group. Employee turnover is a concern of media managers. Discriminant analysis, logistic regression, and cluster analysis are techniques that could be used to understand the most important factors associated with an employee's membership in one of two groups, those who stay in a job and those who leave a job.
• Situations in which a large group of variables believed to be indicators of a concept or concepts need to be reduced to a more manageable number. Factor analysis and
multidimensional scaling are multivariate techniques used for data reduction - for creating a relatively small number of composite variables from a relatively large number of individual measures. Those techniques look for statistical associations among individual indicators. Perhaps a researcher interested in job satisfaction has developed 50 measures to tap into different aspects of that concept. Factor analysis could be used to group those indicators into a smaller number of composite items reflecting different dimensions of job satisfaction.
• Situations in which the researcher wants to examine systems or models in which variables may act simultaneously both as independent and dependent variables. LISREL, an acronym for linear structural relations, and path analysis can be used to test such models. A researcher testing a complicated model of consumer behavior might turn to LISREL or techniques like it.
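As promised above, here is a brief regression sketch covering the first two situations: a dependent variable (job satisfaction) modeled as a function of two or more independent variables, with the gender coefficient estimated while salary is held constant. It is a minimal Python illustration using the statsmodels package (assumed to be available); the data and variable names are invented and do not reproduce any analysis from the studies cited in this chapter.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: satisfaction on a 1-4 scale, salary in thousands of dollars.
    journalists = pd.DataFrame({
        "satisfaction": [3, 4, 2, 3, 4, 2, 3, 1, 4, 3],
        "gender": ["male", "female", "female", "male", "male",
                   "female", "male", "female", "male", "female"],
        "salary": [48, 61, 39, 55, 72, 35, 58, 30, 66, 44],
    })

    # Multiple regression: the coefficient on gender is estimated with salary held constant.
    model = smf.ols("satisfaction ~ C(gender) + salary", data=journalists).fit()
    print(model.params)     # intercept, gender effect, salary effect
    print(model.rsquared)   # share of variance in satisfaction explained by the predictors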

Other statistical techniques are available for the multivariate analysis of quantitative data. Most research methods and statistics texts describe the statistical techniques for analysis of quantitative data in more detail than is possible here (Babbie, 1992; Cohen & Cohen, 1983; Jaeger, 1990; Marascuilo & Serlin, 1988; McClendon, 1994; Wonnacott & Wonnacott, 1969). Other useful sources of information about quantitative data analysis are manuals, guidebooks, and online support for the computer software used in data analysis. SAS/STAT and SPSS are two comprehensive statistical-analysis software packages used widely in the social sciences. The basic SAS/STAT and SPSS programs can compute descriptive and inferential statistics for data sets of the size typically used in academic research. In addition, the companies sell data-analysis software for specialized purposes. Versions of SPSS and SAS/STAT are available for personal computers. The companies sell manuals that describe how to use their programs. Commercial publishers also offer guides to the software that often are cheaper than the manuals. The Web sites for SAS/STAT (www.sas.com) and SPSS (www.spss.com) provide information about buying and using their statistical software packages.
OBTAINING QUANTITATIVE DATA

Those who use quantitative research methods to study media management and media economics most often rely on one or more of five approaches to obtain their data - they conduct a survey, they do a lab experiment, they execute a content analysis, they undertake a case study, or they use data obtained from institutional sources or other research studies. This section provides brief discussions about the value of each approach to data collection.

Institutional and Secondary Sources

Almost 60% of the primarily quantitative articles in JME and JMM were based on analyses of data not collected specifically for the research project for which they were being used (Table 23.3). Occasionally those data came from other scholars who provided access to data that they had collected for previous studies. More often, those analyses were of
data collected by commercial organizations, trade organizations, and governmental or quasi-governmental agencies. In those instances, the data for these secondary analyses⁶ were made available free or for a fee. In either situation, researchers have not collected or directed the collection of the data that they are using in their study. That is what distinguishes secondary analysis from primary analysis, which is analysis of original data collected by researchers for a specific research purpose.

TABLE 23.3
Frequencies of Data-Collection Methods

Data-Collection Method             Percent
Secondary data                     57
Survey                             24
Content analysis                   13
Model specification, simulation    3
Experiment                         2
Not categorized                    2

Note: Figures add to more than 100 percent because some studies use multiple methods.

Secondary analyses are popular because the data are comparatively inexpensive, they can be obtained quickly, and they frequently are of high quality. The expense of collecting the data either has been borne by someone other than the researcher or is shared among a large number of individuals and organizations, lowering the cost to any single user. Economic data from the U.S. decennial census or from Eurostat are examples of information from government or quasi-governmental organizations that are made available for free or for a nominal fee. Ratings for television programs or circulation figures for newspapers are examples of data that have been collected by commercial organizations and sold to users. The appendix to this chapter lists some sources for secondary data used in research on media management and media economics. Although cost, ease of access, and quality are important benefits of conducting secondary analyses of data, this approach also has drawbacks. Perhaps the most important is the inability of researchers to control precisely what information is collected and from whom. For example, researchers who want to use data from the U.S. Bureau of Labor Statistics to study the income of reporters and editors are "stuck with" the bureau's definition of that occupation group. The researchers must make do with what's available, even if it's not ideally suited for the research questions they are trying to answer.

⁶Definitions of secondary analysis vary. Heaton (2004) defines secondary analysis as research that uses existing data collected for a prior study. Becker (2003) defines it as the reuse of social data after they have been put aside by the original researcher. The definition used in this chapter is broader. It encompasses not only data collected by researchers for previous studies, but also data collected by governmental, quasi-governmental, and commercial organizations that are made available for general use in research, free or for a fee.
23. QUANTITATIVE METHODS IN MEDIA MANAGEMENT 539

Typical examples of secondary analysis are Borrell's (1997) research on satellite-delivered programs at U.S. radio stations and Jayakar and Waterman's (2000) study of the film industry, both of which were published in JME. Borrell's interest was in determining what kinds of U.S. radio stations used satellite-delivered programming. Information about satellite-delivered programming was originally collected by a commercial organization and published in the M Street Radio Directory. Borrell sampled the directory's list of 12,600 stations and harvested data for about 500 stations. Those data became the basis for his study, which identified station characteristics associated with use of satellite-delivered programming. Though Borrell used a single source of secondary data in his study, it's common to draw information from several sources. That was the case with the Jayakar and Waterman research on international trade in theatrical films. Their study married national-level economic data with information on box-office receipts collected by a private publication. Their analyses tested economic models that helped explain why U.S.-produced films were popular in foreign markets. In that study, the secondary analyses were done to assist with theory development. That is often a hallmark of research involving secondary analysis. Generally, the interest is less in the data as a way to describe a population than in their value in developing and testing a theory.
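A minimal sketch of how two secondary sources might be combined for such an analysis appears below. It assumes Python with the pandas library, and the country figures, column names (us_film_share, gdp_per_capita), and values are invented for illustration; they are not the data used by Jayakar and Waterman (2000).

```python
import pandas as pd  # assumes pandas is available

# Two hypothetical secondary sources, shown as small inline tables; in practice each
# would be read from a published file (e.g., with pd.read_csv). All values are made up.
box_office = pd.DataFrame({
    "country": ["Germany", "Japan", "Brazil"],
    "year": [1998, 1998, 1998],
    "us_film_share": [0.72, 0.61, 0.84],   # share of box office earned by U.S. films
})
economy = pd.DataFrame({
    "country": ["Germany", "Japan", "Brazil"],
    "year": [1998, 1998, 1998],
    "gdp_per_capita": [26_000, 31_000, 5_000],
})

# Combine the two secondary sources into one analysis file keyed on country and year.
merged = box_office.merge(economy, on=["country", "year"], how="inner")

# A simple descriptive check before any model estimation.
print(merged)
print("correlation:", merged["gdp_per_capita"].corr(merged["us_film_share"]))
```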
Experiments
Experiments are a relatively uncommon way of collecting data for research on media management and media economics. Of the roughly 150 studies in JME and JMM that used primarily quantitative methods, only about 2% relied on data from experiments. One reason that experiments are rare is that the unit of analysis is often the individual (or some other animate object). Research questions about media management and media economics tend to require units of analysis above the individual level, such as workgroups, organizations, markets, or industries. Indeed, individuals were the unit of analysis in only about 12% of the JME and JMM quantitative studies.

Experiments are usually conducted in a controlled setting (a laboratory) rather than in the natural environment. (Field experiments seek to combine the strength of experimental design with a naturalistic setting; for a description of field experiments, see Hair, Babin, Money, & Samouel, 2003, pp. 65-67.) The researcher is typically interested in the impact of one or more manipulated factors on participants in the experiment, who are called subjects. The designs of experiments can be quite complicated, so a reader interested in experimental research should consult the classic treatment of that subject by Campbell and Stanley (1963) or a text on experimental design. The simplest true experiment consists of two groups of subjects, preferably with identical numbers of subjects assigned randomly to each group. Random assignment of subjects renders each group equal, for the purpose of the research, as the experiment begins. A pretest is conducted of the two groups, and then a stimulus (a manipulated factor) is applied to one group while the other group is left alone. Those receiving the stimulus constitute the experimental group or treatment group, and those left alone constitute the control group. A posttest is administered to both groups, and the results of the posttests are analyzed to determine if the stimulus appeared to have any effect. If it did, that effect should be evident in the experimental group but not the control group. Under this design, the pretest also allows the researcher to check the assumption of equality made as a result of random assignment: if the groups are equal (within reasonable limits) in terms of the variables of interest for the experiment, that can be confirmed by comparing pretest results for the experimental and control groups.
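To make the logic of this design concrete, the sketch below simulates a small two-group experiment: subjects are assigned to groups at random, and posttest scores are compared with an independent-samples t-test. The group sizes, the scores, and the use of Python with SciPy are illustrative assumptions, not a reconstruction of any study cited in this chapter.

```python
import random
from scipy import stats  # SciPy's independent-samples t-test

random.seed(42)
subjects = list(range(20))
random.shuffle(subjects)                 # random assignment renders groups equal in expectation
treatment_ids = set(subjects[:10])       # these subjects receive the stimulus
control_ids = set(subjects[10:])         # these subjects are left alone

# Simulated posttest scores (e.g., recall of an advertising message, 0-100 scale).
posttest = {i: random.gauss(70 if i in treatment_ids else 62, 8) for i in range(20)}

treated = [posttest[i] for i in treatment_ids]
control = [posttest[i] for i in control_ids]

# Welch's t-test: did the stimulus appear to have an effect on posttest scores?
t, p = stats.ttest_ind(treated, control, equal_var=False)
print(f"treatment mean = {sum(treated)/len(treated):.1f}, "
      f"control mean = {sum(control)/len(control):.1f}, t = {t:.2f}, p = {p:.3f}")
```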
Though rarely used in the study of media management or economics, experiments have an important advantage over other quantitative methods when it comes to establishing causal relationships. Because researchers administer the stimulus themselves, they are able to establish that, without question, a potential cause of a phenomenon occurred prior to the presumed effect of that phenomenon. That is critical in distinguishing between a causal relationship and a simple association between two variables. Though researchers can take advantage of many different kinds of experimental and quasi-experimental designs, all have in common an attempt to assess the effect of one or more manipulated variables on groups of subjects. That was the case in a study in which Maxwell (2003) used a quasi-experimental design to determine how price differences affected the likelihood that students would buy textbooks from an online vendor versus a traditional campus bookstore. Her experimental subjects were 72 undergraduate students to whom she presented different purchase scenarios. In those purchase scenarios, both pricing levels and the kind of bookseller (online vs. traditional bookstore) were manipulated. Those constituted the key independent variables in her experiment. The dependent variables were likelihood that the students would buy books from a particular bookseller and attitudes that the students had toward those booksellers. From the experiment, she concluded that price differences influenced purchase intent but not necessarily other attitudes toward the booksellers.

Survey Research
Surveys are the most common way that researchers obtain data for quantitative studies on media management and media economics. About a quarter of the primarily quantitative JME and JMM studies used primary or original survey data (data collected by the author with a specific research purpose in mind). In addition, many other quantitative studies relied on secondary analysis of survey data from governmental, quasi-governmental, or commercial organizations.

Surveys seek to obtain information from relatively large numbers of individuals or collectivities through interviews, written questionnaires, or direct monitoring of behavior. Interviews of respondents (individuals or collectivities that complete survey questionnaires) can be conducted by telephone or in person. Respondents also can be asked to complete written questionnaires by themselves. These self-administered questionnaires are handed out, mailed, or distributed via the Internet. Sometimes they are published in a newspaper or magazine with a request that readers fill out the questionnaire and send it back to the sponsor. Direct monitoring of respondents is relatively rare in survey research and usually is done as part of a broader data-collection process that also includes an interview or self-administered questionnaire. For example, some Nielsen television ratings are based on a system that electronically monitors the programs that individuals in Nielsen households watch. Those viewing data are then matched to
information collected previously about the individuals living in those households. That permits Nielsen to provide clients with age, gender, income, and other information about viewers of particular TV programs.

Surveys are useful ways to obtain both qualitative and quantitative data. However, the analysis of qualitative survey data can be cumbersome if the number of respondents is large. Weaver et al.'s (2003) survey of 1,500 U.S. journalists produced more than 1,200 pages of responses to just 10 open-ended questions (questions for which there were no predetermined response options). Open-ended questions provide more flexibility to the respondent, of course, but typically lengthen the data-analysis portion of a research project. Open-ended answers usually must go through an intermediate step of being read and categorized so that they can be converted into numeric form and analyzed using statistical software. Or, only a small sample of the responses is analyzed, which means that a substantial amount of the information collected is discarded.

On the other hand, if the researcher chooses to use fixed-response questions (sometimes called closed-ended questions) that produce quantitative data, information from thousands of survey respondents can be easily and quickly analyzed using statistical software such as Excel, SPSS, or SAS/STAT. Fixed-response questions can seek specific quantitative information, such as the respondent's age at her last birthday, or they can ask the respondent to choose from a relatively small number of potential answers that the researcher has chosen in advance. Those answers can be assigned numeric values. For example, the response options to a question about job satisfaction might be coded "4" for "very satisfied," "3" for "fairly satisfied," "2" for "somewhat dissatisfied," and "1" for "very dissatisfied." Many survey research centers enter information directly into computers as respondents are being interviewed, or they scan responses to self-administered questionnaires directly into a computer data file. This allows almost instantaneous data analysis, assuming that the responses have been rendered in quantitative form. That is a powerful incentive to ask fixed-response questions if the researcher is looking for quick results.
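Because each fixed-response answer maps to a numeric code chosen in advance, tabulation is mechanical. The following sketch uses the hypothetical job-satisfaction codes above and made-up responses to show the conversion and a simple frequency summary in standard-library Python.

```python
from collections import Counter

# Hypothetical fixed-response item: job satisfaction, with codes decided in advance.
CODES = {"very satisfied": 4, "fairly satisfied": 3,
         "somewhat dissatisfied": 2, "very dissatisfied": 1}

# Illustrative raw answers as they might arrive from interviewers or a Web form.
responses = ["very satisfied", "fairly satisfied", "fairly satisfied",
             "somewhat dissatisfied", "very satisfied", "very dissatisfied"]

coded = [CODES[r] for r in responses]   # convert answers to numeric values
counts = Counter(coded)
n = len(coded)

for label, value in CODES.items():
    print(f"{value} ({label}): {100 * counts[value] / n:.0f}%")
print(f"mean satisfaction score: {sum(coded) / n:.2f}")
```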
y quantitative Rigorous, standardized procedures are critical in survey research, particularly when it
by the author comes to creating and administering items in questionnaires. (Questionnaires are used
itative studies in other forms of data collection, too, but are the primary tool for collecting information
~rnmental, or in surveys of individuals and collectivities.) The most successful researchers work hard
on concept explication so that the variables that they create measure their key concepts
)f individuals as preCisely and usefully as possible. If the tool for data collection is a questionnaire for
nonitoring of individuals, every effort is made to make sure that respondents can easily understand
nplete survey the items in it. If an item is ambiguous, there's no certainty that Respondent A and
ts also can be Respondent B will agree on the information that the item is seeking. Survey interviewers
inistered ques­ are trained to present the items precisely as written so that all respondents are being
imes they ;lr~ told or asked exactly the same thing. Items are worded carefully so that they do not
the question­ predispose respondents to choose a particular answer or leave respondents unable to find
s is relatively appropriate answers from among those that are offered. All this care is taken so that
:ction process answers from different respondents to the same item can be compared. Careless wording
~ample, some or inconsistency across interviewers or inappropriate response options can undermine
the programs
n matched to 9These are sometimes called closed-ended questions_
542 BEAM
,
,:~
,

that goal, creating problems with both the validity and reliability ofinformation obtained
through the survey.
Using survey research to search for evidence of causal relationships <;an be mOre
challenging than in a tightly controlled laboratory setting. Whereas it is relatively easy
to use survey data to determine if two variables are correlated, it is often more difficult
to meet other conditions for establishing that a relationship between those variables is
causal-that Variable X caused Variable Y. That is particularly true if the survey data
are cross-sectionaL-that is, collected at a single point in time. If the survey data are
longitudinal-collected at two or more points in time-it's sometimes possible to mimic
the administration of a manipulated variable, as in an experiment. That can help establish
evidence of a causal relationship.
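One common way of approximating an experiment with longitudinal data, offered here only as an illustration rather than as the approach any particular study used, is a difference-in-differences comparison: the change over time among cases exposed to a naturally occurring "manipulation" is compared with the change among cases that were not exposed. The sketch below uses made-up two-wave figures and standard-library Python.

```python
from statistics import mean

# Hypothetical two-wave survey data: circulation per household for newspapers that
# raised subscription prices between waves ("exposed") and papers that did not
# ("comparison"). All numbers are invented for illustration.
wave1 = {"exposed": [0.52, 0.48, 0.55, 0.50], "comparison": [0.51, 0.49, 0.53, 0.47]}
wave2 = {"exposed": [0.45, 0.42, 0.49, 0.44], "comparison": [0.50, 0.48, 0.52, 0.46]}

# Change over time within each group.
change_exposed = mean(wave2["exposed"]) - mean(wave1["exposed"])
change_comparison = mean(wave2["comparison"]) - mean(wave1["comparison"])

# The extra change among exposed cases, beyond the change that occurred anyway,
# is treated as the estimated effect of the "manipulation."
did = change_exposed - change_comparison
print(f"exposed change = {change_exposed:.3f}, comparison change = {change_comparison:.3f}, "
      f"difference-in-differences = {did:.3f}")
```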
A key consideration in survey research is how to select the respondents to be surveyed. One approach is to conduct a census, which is an attempt to gather information about everyone or everything in a population of interest. An example is the decennial U.S. census in which the government tries to survey every household in the country. Conducting a census would be unusual in fields such as political science, public opinion, or sociology, where the populations being studied are typically large and fluid, and therefore costly to contact. But a census often is not as daunting in research on media management and media economics. (Kish argues that a census can be considered one kind of sample; see Kish, 1995, pp. 17-18.) A researcher undertaking a study of a nation's daily newspapers or commercial broadcast stations could readily obtain a list of all those firms and, with relatively little money, collect data about each one of them. Van Kranenburg (2002) did just that in his study of market structure in the Dutch daily newspaper market. He used annual circulation data that had been collected from all editorially independent newspapers in The Netherlands after 1950.

Most researchers, however, are satisfied with a sample from a population, rather than a census. Sampling is usually cheaper and easier than conducting a census. Samples fall into two broad categories: probability and nonprobability samples. Nonprobability samples are also called informal samples, convenience samples, or model samples, which Kish defines as "a sampling based on broad assumptions about the distribution of survey variables in a population" (Kish, 1995, p. 18). Less technically, nonprobability samples rely on something other than probability theory to determine which potential respondents, called sample elements (the entities from which information is obtained; see Babbie, 1992, p. 232), to include. Sometimes it's serendipity, as in the case of the TV call-in poll that asks who should be the next coach of the local football team. Other times it might be the researcher's hunch that the women in the church auxiliary would be good people to survey about the effectiveness of a new laundry soap or floor cleaner. One common nonprobability sample is the quota sample. In a quota sample, the researcher decides in advance what percentage of respondents should have particular characteristics: men and women; professionals and nonprofessionals; Republicans, Democrats, and independents. Respondents are recruited or chosen until those quotas are filled. The quota sample is but one type of nonprobability sample. Readers who want to learn about other kinds should consult a basic social-science research methods text such as Bryman (2001, pp. 83-104).
A drawback of a nonprobability sample is that it's not possible to estimate how representative the sample is of a population of interest. That's a significant limitation if the goal of research is to draw generalizations about social phenomena in a population. Still, management research often lends itself to nonprobability sampling because it can be impractical to use more complicated, and often more expensive, probability sampling techniques. In his book When MBAs Rule the Newsroom, Underwood (1993) used a nonprobability sample of 12 West Coast newspapers to assess whether market-oriented journalism was changing traditional values within the newspaper industry. Underwood and a colleague had managed to elicit the cooperation of those 12 dailies, which were relatively close to their homes in Seattle, so they conducted their research using surveys of more than 400 journalists at those newspapers. Were the journalists at those 12 organizations representative of all U.S. daily newspaper journalists? It's impossible to say. With nonprobability sampling, the degree to which the sample is representative of the population of interest can't be estimated. But a probability sample of either journalists or newspapers would likely have included respondents scattered across the country or would have involved newspapers reluctant to cooperate with this research. The 400-plus journalists in Underwood's study probably gave a reasonable sense of journalists' attitudes toward market-driven journalism. Even so, it was technically not possible to estimate how representative that sample might have been of the population of all U.S. journalists.

Though more difficult to execute, probability samples often are worth the extra effort because they allow researchers to make stronger claims about the representativeness of the sample. A probability sample is one where any single element of a population potentially has a known, nonzero probability of being included (Kish, 1995, p. 20). A simple random sample, for example, is one well-known type of probability sample. In a simple random sample, each element in a population has an equal chance of being included in the sample. If the population of interest is 10,000 television programs, a simple random sample of 500 shows means that any single show, chosen at random, has a 1-in-20 chance of being included. With that information in hand, researchers would be able to estimate how likely it is that the findings based on the sample of 500 reflect the real values that exist in the population of 10,000. The estimates are based on probability theory, which can be employed as a tool in data analysis if the researcher has used a probability sample. If the probability sample is drawn properly, it's highly likely to produce extremely accurate estimates of the characteristics that the researcher wants to measure in the population of interest. In other words, the sample is likely to be "representative" of that population. (More precisely, the researcher can estimate how often a sample such as the one that was drawn is likely to produce estimates of the population that are accurate within specified limits.)
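The sketch below illustrates that logic with a hypothetical frame of 10,000 programs: a simple random sample of 500 is drawn, and the familiar approximation for the margin of error of a proportion is computed (the finite-population correction is ignored for simplicity). The 38% figure is invented for the example.

```python
import math
import random

# Hypothetical sampling frame: identifiers for 10,000 television programs.
population = [f"program_{i:05d}" for i in range(10_000)]

# A simple random sample of 500: every program has the same 1-in-20 chance of selection.
random.seed(1)
sample = random.sample(population, 500)

# Suppose 38% of the sampled programs carry satellite-delivered content (made-up figure).
p, n = 0.38, len(sample)

# Approximate 95% margin of error for a proportion estimated from a simple random sample.
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"sample estimate: {p:.0%} plus or minus {margin:.1%} (95% confidence)")
```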
Probability samples are excellent ways to gather accurate information about large populations (firms, programs, advertisements, individuals) without having to talk to or examine every element in the population. Chyi and Lasorsa (2002) used a probability sample of Austin, Texas, residents to estimate how much overlap existed between readership of online and print editions of newspapers and to determine which of those two formats was preferred. Because the 818 participants in their telephone survey were selected using probability sampling, they also were able to use inferential statistics to analyze the data that they collected from the survey.

Content Analysis
More so than experimental research, survey research, and secondary analyses of existing data, content analysis is a technique used widely in both qualitative and quantitative research. Content analysis is a formal, systematic effort to discern patterns or relationships within a set of symbols, that is, within oral, written, or visual content (Riffe, Lacy, & Fico, 1998, pp. 18-32). Qualitative content analyses emphasize a search for underlying meanings within the content, and they pay particular attention to the context in which the content is produced (Altheide, 1996). Quantitative analyses focus more on manifest content and devote substantial effort to the reliable measurement of the symbols. Many definitions of quantitative content analysis describe it as an "objective" process, by which it's meant that the researcher can directly observe the symbols and that the symbols' meanings are shared widely by those within a particular community or culture. The presence of these symbols is recorded in numeric form and then subjected to statistical analysis to detect patterns or relationships among the content variables that have been measured.

As with survey research, sampling is an issue in content analysis. Here again, researchers have three broad choices for selecting the symbols to be analyzed. They can conduct a census or draw a probability or nonprobability sample. As with sampling in survey research, a probability sample brings with it greater assurance that the content is representative of the population from which it came and the opportunity to use inferential statistics to test hypotheses about relationships among variables.

In quantitative content analysis, perhaps the key methodological challenge is achieving consistency in the coding of the content. Coding is the process of assigning numerical values to the content variables that the researcher considers relevant, such as the topic of a newspaper article or the length of a television story. Achieving consistency across several coders is an issue of reliability in measurement. High reliability implies that identical content is coded in identical ways, regardless of who or what is doing the coding. Sometimes coding of symbols is straightforward, which makes high reliability easy to achieve. If the research calls for coding the number of words in an article or the number of references to a specific company, there's little chance for disagreement among coders. Indeed, if the content is in digital form, computer programs can accurately produce estimates for variables such as those, so achieving consistency is seldom a problem. On the other hand, if the goal is to code the number of unfavorable representations of a firm on television news shows, it's easier to imagine how disagreements among coders might arise. If a report characterizes a company's marketing practices as "aggressive," is that a compliment or a criticism?

Researchers conducting content analyses follow procedures to minimize differences among coders. But generally speaking, the more complex the coding scheme, the lower the reliability across coders. Because intercoder agreement is a challenge for those working with quantitative content data, special statistics have been developed to assess reliability. Riffe, Lacy, and Fico (1998, pp. 104-134) provide a comprehensive discussion of reliability in content analysis as well as a summary of common statistical techniques used to estimate reliability. When it comes to other analyses of quantitative content data, most of the techniques used in survey and experimental research are appropriate for content analysis, too. Contingency-table analysis appears to be the most popular technique.

Even so, it's not unusual to use bivariate correlational analysis, multiple regression, and analysis of variance, among other techniques.
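As a flavor of how such reliability checks work, the sketch below computes simple percent agreement and one widely used chance-corrected statistic, Cohen's kappa, for two hypothetical coders; the codes are made up, and Riffe, Lacy, and Fico (1998) should be consulted for fuller treatments of reliability statistics.

```python
from collections import Counter

# Hypothetical codes assigned by two coders to the same ten news items
# (1 = unfavorable portrayal of the firm, 0 = not unfavorable).
coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
coder_b = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0]
n = len(coder_a)

# Simple percent agreement: share of items coded identically.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Agreement expected by chance, based on each coder's marginal distribution of codes.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))

# Cohen's kappa corrects observed agreement for chance agreement.
kappa = (observed - expected) / (1 - expected)
print(f"percent agreement = {observed:.0%}, kappa = {kappa:.2f}")
```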
Content analyses accounted for roughly 15% of the primarily quantitative studies in JME and JMM. In some cases, the goal of the research was to describe characteristics of the content: television shows, magazine articles, advertisements. In other cases, content was examined to provide insight about the organizations, the market conditions, or the policy decisions that shaped it. A typical example of the latter is the longitudinal study by Li and Chiang (2001) in which they found that as the Taiwan TV market became more competitive, programming diversity declined. Media content was the dependent variable in that study, and that is often the case for content analyses in media management or media economics research.

Case Studies
Case studies are research projects that examine a single case or, perhaps, make a comparison of a handful of cases. The latter are called comparative case studies. A case might be a group, an organization, a nation, a publication, a situation, an event, or even an individual. Most quantitative research studies include many cases, none of which draw particular attention from the researcher. Rather, the researcher is focused more on understanding the concepts and variables that characterize each of the cases. In case studies, the reverse is true. The case itself becomes the center of attention.

Case studies tend to be associated with qualitative research because qualitative data-collection methods are common in case studies. It wouldn't be unusual to find an organizational case study in which data collection included unstructured interviews with managers, focus groups with employees or customers, historical analyses of the firm's successes and failures, and a qualitative content analysis of articles in trade publications about the organization or its products. But case studies can also make use of quantitative data, either as part of a multimethod research project that combines both qualitative and quantitative data collection or as a single-method project that relies solely on quantitative data.

What distinguishes a quantitative case study from other kinds of quantitative research is its limited potential for producing generalizations. Indeed, a study, quantitative or qualitative, that confines itself to characteristics of a single case shouldn't be used to draw generalizations about other similar cases. There's no way to know how representative the case under examination is of the other cases (the other firms, the other nations, the other individuals) in the population. Despite their limited capacity to produce generalizations about a population, quantitative case studies have made invaluable contributions to the fields of media management and media economics. One of the best-known management research projects was a case study done at the Western Electric Co. Hawthorne Works plant in Chicago in the 1920s and 1930s. Researchers conducted a field experiment (an experiment conducted in a natural setting) in which they varied the level of light in the manufacturing plant (Roethlisberger & Dickson, 1939). They discovered, through the collection of quantitative data, that an increase in lighting was associated with greater productivity at the plant. But so was a decrease in lighting! This finding became known as the "Hawthorne effect," which suggests that behavior can be influenced by individuals' awareness that they are being studied (Frey, Botan, & Kreps, 2000, p. 121).

CHARACTERISTICS OF QUANTITATIVE RESEARCH

The previous sections have provided overviews of the key assumptions that underlie quantitative research and of the ways data are typically obtained and analyzed in quantitative studies. This final section summarizes some characteristics of quantitative research published in JME and JMM during the past 15 years.

Quantitative research was the dominant form of inquiry during that period, but that is increasingly less the case today. The examination of 309 articles in JME and JMM found that about 46% were quantitative studies, about 24% were qualitative studies, about 12% used mixed methods, and the remaining 18% were essays, bibliographies, or conceptual discussions in which few or no data were collected. The distribution of quantitative and qualitative studies has changed substantially since 1999 with the launch of JMM, a journal edited and published in Europe. JME, which was founded in 1988 and is edited in the United States, has tended to publish quantitative research, whereas JMM has been more inclined toward qualitative studies. Over the years, about 60% of JME articles have relied primarily on quantitative techniques and 24% primarily on qualitative techniques, with another 16% being conceptual articles or essays. That distribution has varied widely year to year (Fig. 23.2). In JMM, about 48% of the articles have been based primarily on qualitative research and about 25% primarily on quantitative research. The remaining 27% were conceptual articles or essays. Only in its first year of publication, when just five articles were printed, was quantitative research predominant in JMM (Fig. 23.3).

FIG. 23.2. Articles using primarily quantitative, primarily qualitative, and other methods in Journal of Media Economics, 1988-2003 (N = 216). Figures are percentages.

FIG. 23.3. Articles using primarily quantitative, primarily qualitative, and other methods in International Journal on Media Management, 1999-2003 (N = 93). Figures are percentages.

Scholars publishing quantitative research in JME and JMM have been more inclined toward economic research than management research. This finding reflects, in part, that JME has been publishing about a decade longer than JMM and is focused more directly on economics. But even in JMM, there's slightly more quantitative research on economics than on management. Overall, about 60% of the studies in these two journals have focused on economics. Of those, about half have dealt with market issues, primarily the structure-conduct-performance model or industrial-organization economics. Studies relying on management theory, communication theory, or no theory at all represent much smaller percentages of the body of quantitative research. Because economics research often looks at change across time and because many institutional sources collect data about the same way year in and year out, a substantial percentage of this quantitative research (close to 46%) is longitudinal. As might be expected given the subject matter, virtually all of this research was conducted above the individual level of analysis. In terms of industry focus, two segments, the television and newspaper industries, received the lion's share of attention. About 37% of the primarily quantitative research has been on television and about 21% on newspapers. Cross-industry or multiple-industry studies constituted the next-highest category, at only 7%, followed by the film and the new media categories, each at 6%. A smattering of studies on other industries (advertising, books, magazines, radio, telecommunications, broadband) made up the rest. None of those categories accounted for more than 5% of the total.

As mentioned previously, for quantitative research in media management and media economics, the most common practice has been for researchers to analyze data collected
by someone else, with about 60% of the quantitative studies making use of data from secondary sources. About half of the studies rely on secondary data exclusively, and another 10% use both primary data and secondary data.

That is a broad description of the quantitative research that has been published in JME and JMM during the past 15 years. The description suggests several fruitful directions for future research:

• The management of media organizations needs more attention. Management research comprised only 15% of the quantitative studies in these fields. The decline of traditional mass media such as daily newspapers and broadcast television; the growth of niche and ethnic media; the convergence of staffs for print, video, and Web platforms under one organizational roof; and the growing commercial pressures facing organizations offer ample challenges to those who manage media companies. Those trends similarly offer ample opportunity for scholarship that seeks to understand how they may affect the ways that media organizations are managed and what that may imply for the news, entertainment, and services that they deliver to the communities they serve.
• Content deserves more attention, particularly in studies about management. Within JME and JMM, only one content analysis examined a question about the impact of management on media content. It's the case, of course, that studies examining the link between management and content have been published in other venues. Still, understanding more about how organizational change, organizational culture, or different management styles may influence content seems to be a fertile area for further research.
• It is time to broaden the scholarly focus beyond the newspaper and television industries. Almost 60% of the quantitative studies in JME and JMM examined those industries, with the majority of the studies about newspapers concentrating on the daily sector. Although these remain large, crucial segments of the media industry, other important vehicles for delivering news and information to society (radio, magazines, and the Internet) have been neglected.
• More comparative research is needed. A book such as this, focused as it is on media management and media economics, frames the media as "special businesses" worthy of study in their own right. Certainly that is true, but it is difficult to adequately understand the ways in which media organizations differ from other kinds of commercial enterprises without comparing them to other kinds of organizations.

APPENDIX

Secondary data sources used in recent research on media management and media economics:

Bacon's Information: Bacon's (http://www.bacons.com) publishes directories of newspapers, magazines, and newsletters for North American and some Central American markets. Commercial source.

BIA Financial Network: BIAfn (http://www.bia.com) supplies a wide range of information about the newspaper, radio, and television industries to investors and others interested in the media, communications, and related industries. Commercial source.
Editor & Publisher: E&P (http://www.editorandpublisher.com), a division of the VNU Media Group, publishes an annual directory of North American daily and weekly newspapers, as well as a guide to city, county, MSA, and non-MSA markets. The directory includes partial staff listings, advertising rates, circulation data, and production requirements. Commercial source.
Eurostat: Eurostat (http://europa.eu.int/comm/eurostat/) collects economic data about countries in the European Union. Quasi-governmental source.
Federal Communications Commission: The FCC maintains databases on broadcast licensees available through the commission's Media Bureau section on its Web site (http://www.fcc.gov/mb). Government source.
International Telecommunication Union: The World Telecommunications Indicators database contains information about telecommunications systems from about 200 countries (http://www.itu.int/home). Quasi-governmental source.
Moody's Investors Service: Moody's (http://www.moodys.com) rates bonds and provides current and historical data on yields from bonds. Commercial source.
National Center for Education Statistics: The NCES (http://nces.ed.gov), a federal agency, gathers information related to education, including data on telecommunications and Internet access in schools. Government source.
Newspaper Association of America: A trade organization for the U.S. newspaper industry, the NAA (http://www.naa.org) collects circulation and other information about weekly and daily newspapers. Trade association.
Securities and Exchange Commission: The SEC's Edgar database (http://www.sec.gov) has information about publicly held U.S. companies. SEC filings contain financial and managerial data, including owners, directors, and executive compensation.
SRDS Media Solutions: SRDS (http://www.srds.com) databases or publications have information about advertising rates, circulation, share, and lifestyle information for print and broadcast media outlets. Commercial source.
State newspaper directories: Many state press associations publish directories of newspapers. Trade association.
Thomson Corp.: Thomson (http://www.thomson.com), an international information services company, maintains databases about business activity worldwide, including mergers and acquisitions and joint ventures. Thomson also publishes the Dealmaker's Journal, which covers mergers and acquisitions. Commercial source.
Tribune Media Services: TMS (http://tms.tribune.com) produces English and Spanish data related to television schedules and movie schedules for North America. It also provides channel lineups for U.S., Canadian, and UK markets. Commercial source.
U.S. Bureau of Labor Statistics: The BLS has information on inflation, consumer spending, wages, earnings, benefits, and other data related to employment, including some international data. Much of the information is available at the BLS Web site (http://www.bls.gov). Government source.

U.S. Census Bureau: The Census Bureau collects information about individuals, households, and businesses in the United States. Much of the information is available at the bureau's Web site (http://www.census.gov). Government source.
Value Line: Value Line is an information service that produces print and electronic products about publicly held corporations, including commentaries from analysts and data about corporate ownership. Commercial source.

Other Places to Find Information

Many research libraries subscribe to a wide variety of print and electronic products on business and economics. Library home pages often contain special lists of such sources. Often access to these databases is restricted to individuals affiliated with the university. The Media Management and Economics Division of the Association for Education in Journalism and Mass Communication maintains a list of resources for research on media management and media economics at its Web site (http://www.miami.edu/mme/resources.htm).

ACKNOWLEDGMENTS

The author would like to acknowledge the contributions of his collaborators on the content analyses of the Journal of Media Economics and the International Journal on Media Management: Dr. C. Ann Hollifield and Amy Jo Coffee of the University of Georgia, and Bozena Izabella Mierzejewska of the University of St. Gallen.

REFERENCES

Altheide, D. L. (1996). Qualitative media analysis. Newbury Park, CA: Sage.
Babbie, E. (1992). The practice of social research (6th ed.). Belmont, CA: Wadsworth.
Becker, L. B. (2003). Secondary analysis. In G. H. Stempel III, D. H. Weaver, & G. C. Wilhoit (Eds.), Mass communication research and theory (pp. 252-266). Boston: Allyn & Bacon.
Borrell, A. J. (1997). Radio station characteristics and the adoption of satellite-delivered radio programming. Journal of Media Economics, 10(1), 17-28.
Bryman, A. (2001). Social research methods. Oxford: Oxford University Press.
Burrell, G., & Morgan, G. (1979). Part I: In search of a framework. In Sociological paradigms and organisational analysis (pp. 1-37). London: Heinemann.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston: Houghton-Mifflin.
Chaffee, S. H. (1991). Explication. Newbury Park, CA: Sage.
Chan-Olmsted, S. M., & Li, J. C. C. (2002). Strategic competition in the multi-channel video programming market: An intraindustry strategic group study of cable programming networks. Journal of Media Economics, 15(3), 153-174.
Chyi, H. I., & Lasorsa, D. L. (2002). An explorative study on the market relation between online and print newspapers. Journal of Media Economics, 15(2), 91-106.
Cohen, J., & Cohen, P. (1983). Applied multiple regression/correlation analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

viduals, hbuse­ Demers, 0. P. (1996). Corporate newspaper structure, profits and organizational goals. journal of Media
available at the Economks, 9(2), 1-23.
Frey, L. R, Botan, C. H., & Kreps, G. L. (2000). Investigating communication: An introduction to research methods
(2nd ed.). Boston: Allyn and Bacon.
and electronic Greco, A. N. (1999). The impact of horizontal mergers and acqUisitions on corporate concentration in the
from analysts U.S. book publishing industry, 1989-1994.}ournal of Media Economics, 12(3), 165-180.
Hair,]. F. Jr., Babin, B., Money, A. H & Samouel, P. (2003). Essentials ofbusiness research methods. Indianapolis,
IN: Wiley.
Heaton,]. (1998). Secondary analysis of qualitative data (Social Research Update, 22). Retrieved August 29,
2004, from the University of Surrey Web site, http://www.soc.surrey.ac.ukl sru I SRU22.html
Jaeger, R. M. (1990). Statistics: A spectator sport (2nd ed.). Newbury Park, CA: Sage.
ronic products Jayakar, K. P, & Waterman, D. (2000). The economics of American theatrical movie exports: An empirical
11 lists of such analysis. journal ofMedia Economics, 13(3), 153~169.
iated with the Kalleberg, A. (1977). Work values and job rewards: A theory ofjob satisfaction. American Sociological Review,
42(1), 124-143.
~ssociation for
Kerlinger, F. N. (1986). Foundations ofbehavioral research (3rd ed.). Fort Worth: Holt, Rinehart and Winston.
C resources for
Kish, L. (1995). Survey sampling. New York: Wiley.
(http://www. LaRose, R., & Atkin, 0. (1991). Attributes of movie distribution channels and consumer choice. Journal of
Media Economics, 4(1), 3-17.
Lewis, R. (1995). Relation between newspaper subscription price and circulation, 1971-1992.journal of Media
Economics, 8(1), 25-41.
Li, S. S., & Chiang, C. C. (2001). Market competition and programming diversity: A study on the TV market
in Taiwan. Journal of Media Economics, 14(2), 105-119.
Litman, B., & Kohl, L. S. (1989). Predicting financial success of motion pictures: The '80s experience.Journal
)fators on the of Media Economics, 2(2), 35-50.
urnal on Media Marascuilo, L. A., & Serlin, R. C. (1988). Statistical methods for the social and behavioral sciences. New York:
W H. Freeman.
f Georgia, and
Maxwell, S. (2003). The effects of differential textbook pricing: Online versus in store. Journal of Media
Economics, 16(2), 87-95.
McClendon, M. J. (1994). Multiple regression and causal analysis. Itasca, lL: F. E. Peacock.
Riffe, D., Lacy. S., & Fico, F. G. (1998). Defining content analysis as a social science tool. In Analyzing
media messages: Using quantitative content analysis in research (pp. 18-32). Mahwah, Nj: Lawrence Erlbaum
Associates.
Roethlisberger, F. J., & Dickson, W J (1939). Management and The Worker: An Account ofa Research Program
Conducted by Western Electric Company, Hawthorne Works, Illinois. Cambridge, MA: Harvard UniverSity Press.
Shrikhande, S. (2001). Competitive strategies in the internationalization of television: CNNI and BBC World
lhoit (Eds.). Mass in Asia.journal ofMedia Economics, 14(3),147-168.
Stamm, K. R. (2003). Measurement decisions. In G. H Stempel III, D. H Weaver. & G. C. Wilhoit (Eds.),
io programming. Mass communication research and theory (pp. 129-146) Boston: Allyn & Bacon.
Underwood, 0. (1993). When MBAs Rule the Newsroom. New York: Columbia University Press.
Uri, N. D. (2003). The impact ofincentive regulation on service quality in telecommunications in the United
and organisational States. Journal ofMedia Economics, 16(4), 265-280.
U.S. Bureau of Economic Analysis (2004). Personal income and per capita personal income. Retrieved August
research. Boston: 11, 2004, on the World Wide Web from the Fedstats database.
Van Kranenburg, H. (2002). Mobility and market structure in the Dutch daily newspaper market segments.
Journal ofMedia Economics, 15(2), 107-123.
eo programming Weaver, 0. H, Beam, R., Brownlee, B., Voakes, P, & Wilhoit, G. C. (2003, August). The Americanjournalist in
,Media Economics, the 21st century. Mini-plenary for the Association for Education in Journalism and Mass Communication
annual convention, Kansas City, MO.
ween online and Wimmer, R. D., & Dominick, J. R. (2003). Mass media research: An introduction (7th ed.). Belmont, CA:
Thomson Wadsworth.
sciences (2nd ed.). Wonnacott, T. H, & Wonnacott, R. J. (1969). Introductory statistics. New York: Wiley.
