
A MOVING TARGET: THE EVOLUTION OF HCI


Jonathan Grudin
Microsoft Corporation, USA

Preamble: History in a Time of Rapid Obsolescence
Definitions: HCI, CHI, Human Factors, Ergonomics, IS, IT
Human-Tool Interaction at the Dawn of Computing
Three Roles in Early Computing
1945–1955: Managing Vacuum Tubes
Grace Hopper, Liberating Computer Users
1955–1965: Transistors, New Vistas
Supporting Operators: The First Systematic HCI Research
Visions and Demonstrations
Vannevar Bush and the MEMEX
J.C.R. Licklider at BBN and ARPA
John McCarthy, Christopher Strachey, Wesley Clark
Ivan Sutherland and Computer Graphics
Douglas Engelbart, Augmenting Human Intellect
Ted Nelson's Vision of Interconnectedness
Conclusion: Visions, Demos, and Widespread Use
1965–1980: HCI Before Personal Computing
Human Factors and Ergonomics Embraces Computers
Information Systems
Programming: Subject of Study, Source of Change
Computer Science: A New Discipline
Computer Graphics: realism and interaction
Artificial Intelligence: winter follows summer
1980–1985: Discretionary Use Comes Into Focus
Discretion in Computer Use
Minicomputers and Office Automation
The Formation of ACM SIGCHI
CHI and Human Factors Diverge
Workstations and Another AI Summer
1985–1995: Graphical User Interfaces Succeed
CHI Embraces Computer Science
Human Factors and Ergonomics Maintains a Nondiscretionary Use Focus
IS Extends Its Range
Collaboration Support: OIS Gives Way to CSCW
1995–2005: The Internet Era Arrives
The Formation of AIS SIGHCI
Human Factors and Ergonomics Embraces Cognitive Approaches
CHI Evolves, Embraces Design
Looking Back: Cultures and Bridges
Effects of Varying Discretion
Academic, Linguistic, and Generational Cultures
Looking Ahead: Trajectories
Discretion – Now You See It, Now You Don't
Ubiquitous Computing, Invisible HCI?
Human Factors and Ergonomics
Information Systems
Computer-Human Interaction
Information Science
Conclusion: The Next Generation
Appendix: A Few Personal Observations
1970: A Change in Plans
1973: Three Computing Professions, Only One of Them Hands-on
1975: Joining the First Profession of Discretionary Hands-on Users
1983: A Chilly Reception for an Early Paper on Discretion in Use
1984: First Encounters with IS, Human Factors, and Design
1985: The GUI Shock
1986: Beyond "The User": Group and Organizational Issues
1989: Discovering Contexts of Development, a Major CHI-IS Differentiator
1990: Just Words: Terminology Can Matter
2006: Reflecting on Bridging Efforts
References

PREAMBLE: HISTORY IN A TIME OF RAPID OBSOLESCENCE

"What is a typewriter?" my six-year-old daughter asked.
I hesitated. "Well, it's like a computer," I began.

A paper widely read 15 years ago concluded with the advice to design a word processor by analogy to something familiar to everyone: a typewriter. Even then, one of my Danish students questioned this reading assignment, noting, "The typewriter is a species on its last legs."
Why study the history of human-computer interaction (HCI)?
For most of the computing era, interaction involved 80-column
punch cards, paper tape, line editors, 1920-character displays,
1-megabyte diskettes, or other extinct species. Are the interaction issues of those times relevant today? No.
Of course, aspects of the human side of HCI change more
slowly if at all. Much of what was learned about perceptual, cognitive, social, and emotional processes in interacting with older
technologies applies to emerging technologies. However, with
human behavior, what was learned is often more important
than how it came to be learned. Other chapters of the Handbook lay out this knowledge.
Nevertheless, in an apparent paradox, the rapid pace of
change could make knowledge of the field's history particularly
useful for several reasons:
1. Several disciplines are engaged in HCI research and application, but most people are exposed to only one. By seeing
how others evolved, we can identify possible benefits and
challenges in expanding our focus.
2. The recognition of past visionaries and innovators is part of
building a community and inspiring future contributors, even
when specific past achievements are difficult to appreciate today.
3. Some visions and prototypes were quickly converted to
widespread application, some took decades, and some remain unrealized. By understanding the reasons for different
outcomes, we might assess today's visions more realistically.
4. Crystal balls are notoriously unreliable, but anyone planning
or managing a career in a rapidly changing field must consider the future, for one thing is certain: It won't resemble
the present. Our best chance is to find trajectories that extend from the past through the present.
Accordingly, this account emphasizes disciplinary issues, early
visions, and when technologies and practices became widely used.
The latter makes this more of a social history than an engineering
history that emphasizes firsts. I point out possible trends and trajectories that you might download into your crystal balls.

A historical account is a perspective. It emphasizes some things and de-emphasizes or omits others. A history can be wrong in details, but it can never be right in any final sense. Your questions and your interests will determine whether a perspective is useful to you. More information on some topics can be
found in prior accounts.
The blueprint for intellectual histories of HCI was established by Ron Baecker in the opening chapters of the 1987 and
1995 editions of Readings in Human-Computer Interaction,
followed by Richard Pew's wonderfully written chapter in the 2003 version of this Handbook. Brian Shackel's (1997) account
of European contributions and specialized essays by Brad Myers
(1998) on HCI engineering history and Alan Blackwell (2006) on
the history of metaphor in design include additional references.
Recent years have seen a wave of popular books covering the
history of personal computing (e.g. Hiltzik, 1999; Hertzfeld,
2005; Markoff, 2005). This chapter expands on Grudin (2005;
2006). Additional information on some topics can be found in
the Timelines column in ACM Interactions from March 2006.
These writers and I are not trained historians. We lived
through much of the computing era as participants and witnesses. Our objectivity can be called into question. I have no
personal contributions to defend (self-citations are only to papers that include historical or demographic data), and I conducted many interviews, but we have biases.
Personal experiences can illustrate points and enliven an account by conveying human consequences of changes that otherwise appear abstract or distant. Some readers welcome them,
but others find them irritating. I have attempted to satisfy both
groups by moving several personal examples to a short Appendix, akin to deleted scenes on a DVD.

Definitions: HCI, CHI, Human Factors, Ergonomics, IS, IT


Exploring the HCI literature is complicated by differences in
the ways that dozens of simple terms are used. Some of these
differences are identified later in this chapter. Here I explain
how I use several key terms. HCI is used very broadly to cover
work in several disciplines. CHI (computer-human interaction) refers to one narrower focus, associated mainly with
computer science, an Association for Computing Machinery
Special Interest Group (ACM SIGCHI) and its annual CHI conference. I use human factors (HF), ergonomics (E), and HF&E
interchangeablysome writers attach more specialized
meanings to ergonomics, which is a term favored in Europe,
but there is little agreement among writers and no clear benefit in differentiating them. The Human Factors Society (HFS)
became the Human Factors and Ergonomics Society (HFES)
in 1992. I use IS (information systems) to refer to the management discipline that has also been labeled data processing (DP) and management information systems (MIS). The
latter was more widely used until recently. I follow common
parlance in referring to organizational information systems
specialists as IT professionals (IT pros). Information systems is to be differentiated from information science, an old
field with a new digital incarnation arising in transformed library schools.

HUMAN-TOOL INTERACTION
AT THE DAWN OF COMPUTING
A century ago, Frederick Taylor (1911) employed new technologies and methods (moving pictures and statistical analysis) to improve work practices. Time-and-motion studies were
most successful with assembly-line manufacturing and other
manual tasks. Despite some uneasiness with Taylorism, as reflected in Charlie Chaplin's popular satire Modern Times, science and engineering would remain committed to the pursuit
of efficiency.
The World Wars accelerated efforts, focused on matching
people to jobs, training them, and then designing equipment
and jobs to be more easily mastered. Simple flaws in the designs
of World War II aircraft controls (Roscoe, 1997) and escape
hatches (Dyson, 1979) led to aircraft losses and thousands of casualties. Engineering psychology was born during the war; afterwards, American aviation psychologists created the HFS. Two
legacies of the conflict were a greater awareness of the potential
of computers and an enduring interest in behavioral requirements for design. For more on this period, see Roscoe (1997)
and Meister (1999, 2005).
Early tool use was not discretionary, whether by an assembly-line worker or a pilot. If training was necessary, workers were
trained. One research goal was to reduce training time, but
much more important was to increase the speed of reliable
skilled performance.

Three Roles in Early Computing


Early computer projects employed people in three roles: (a) management, (b) programming, and (c) operation. Managers oversaw design, development, and operation, specifying the programs to be written and distributing the output. A small army of operators was needed. ENIAC, arguably the first general-purpose electronic computer in 1946, was 10 feet tall, covered 1,000 square feet, and consumed as much energy as a small town. Once a program was written, several people loaded it by setting switches, dials, and cable connections. Despite a design innovation that boosted vacuum-tube reliability by operating them at 25% normal power, 50 spent tubes had to be found and replaced on an average day. Large computers failed every few minutes; keeping them running meant people continuously wheeling carts of replacement vacuum tubes down the aisles.

Computer operation, management, and programming each eventually became a major focus of HCI research. Despite the evolution of computers and our interaction with them, the research spectrum today reflects aspects of this early division of labor.

1945–1955: MANAGING VACUUM TUBES


Reducing operator burden was a key focus of early innovation.
Major strides were reducing the time spent replacing or resetting vacuum tubes and the invention of stored-program computers, which could be loaded from tape rather than manually
with cables and switches. These endeavors were consistent with
the post-war "knobs and dials" human factors or ergonomics approaches. Before long, one computer operator could do work
that previously required a team.

Grace Hopper, Liberating Computer Users


As computers became more reliable and capable, programming became a central activity. Improving programmers' interfaces to computers meant developing languages, compilers, and constructs such as subroutines. Hopper, a pioneer in these areas, described her goal as "freeing mathematicians to do mathematics" (Hopper, 1952; see also Sammet, 1992). These words are echoed in today's usability goal of freeing users to do their work.

1955–1965: TRANSISTORS, NEW VISTAS


Early forecasts that the world would need few computers reflected the limitations of vacuum tubes. More powerful and reliable solid-state computers, first available commercially in 1958,
changed everything. As they were acquired to support scientific
and engineering tasks outside research settings, less computer-savvy operators needed better interfaces, and people envisioned uses of computers that had been unimaginable for barn-sized machines of limited capability.


Supporting Operators: The First Systematic HCI Research


In the beginning, the computer was so costly that
it had to be kept gainfully occupied for every
second; people were almost slaves to feed it.
Brian Shackel (1997)

Almost all computer use of this period involved programs and data that were read in from cards or tape. A program then ran without interruption, produced printed, punched, or tape output, and terminated. This batch processing restricted HCI to basic operation, programming, and use of the output, of which only operation involved hands-on computer use.
Low-paid computer operators set switches, pushed buttons, read lights, loaded and burst printer paper, loaded and unloaded cards, magnetic tapes, and paper tapes, and so on. Teletypes supported direct interaction: commands typed by the operator were interleaved with computer responses and status messages. Eventually, the paper that scrolled up one line at a time yielded to "glass ttys" (glass teletypes), called visual display units (VDUs), video display terminals (VDTs), or CRTs (cathode-ray tubes), which also scrolled operator commands and computer-generated messages one line at a time. A monochrome terminal restricted to displaying alphanumeric characters cost $50,000 in today's dollars: expensive, but a small fraction of the cost of a
computer. A large computer might have one such console, used
only by the operator.
Improving the design of buttons, switches, and displays was a
natural extension of human factors/ergonomics. Experts in this
field authored the first HCI papers. In 1959, British researcher
Brian Shackel published the article "Ergonomics for a Computer," followed in 1962 by "Ergonomics in the Design of a Large Digital Computer Console." These described the redesign of the consoles
for the EMIac and EMIdec 2400 analog and digital computers. The
latter was the largest computer at the time (Shackel, 1997).
In the United States, the HFS formed in 1957 and focused on
improving the efficiency of skilled performance, reducing errors
in skilled performance, and training people to achieve skilled
performance. Sid Smith's (1963) "Man-Computer Information Transfer" marked the start of a career in the human factors of
computing.

Visions and Demonstrations


As transistors replaced vacuum tubes, a wave of imaginative
writing, conceptual innovation, and prototype building swept
through the research community. Although some of the language now seems dated, notably the use of male generics, many
of their key concepts resonate today.
Vannevar Bush and the MEMEX. In one way or another, Vannevar Bush set much of this in motion. He advised
U.S. Presidents Franklin Roosevelt and Harry Truman and
served as Director of the Office of Scientific Research and Development. His influential 1945 essay "As We May Think" outlined a multimedia device that he called the MEMEX. Although
Bush initially envisioned it as an improbable mechanical device
for information storage and retrieval on microfilm, he anticipated many computer capabilities. Bush emphasized the linking
of information, which he called "associative indexing,"
the basic idea of which is a provision whereby any item may be caused
at will to select immediately and automatically another. This is the essential feature of the MEMEX. . . . Any item can be joined into numerous trails. . . . New forms of encyclopedias will appear, ready-made with
a mesh of associative trails, [which a user could extend].
The lawyer has at his touch the associated opinions and decisions of
his whole experience and of the experience of friends and authorities.
The patent attorney has on call the millions of issued patents, with familiar trails to every point of his client's interest. The physician, puzzled by a patient's reactions, strikes the trail established in studying an
earlier similar case and runs rapidly through analogous case histories,
with side references to the classics for the pertinent anatomy and histology. The chemist, struggling with the synthesis of an organic compound, has all the chemical literature before him in his laboratory, with
trails following the analogies of compounds and side trails to their physical and chemical behavior.

The historian, with a vast chronological account of a people, parallels it with a skip trail which stops only on the salient items, and can follow at any time contemporary trails which lead him all over civilization at a particular epoch. There is a new profession of trailblazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record.
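Bush's "associative indexing" describes, in modern terms, a simple linked data structure: items joined at will, with a trail recorded as a reusable path through them. A minimal sketch of the idea (illustrative Python; the names Item, join, and trail are hypothetical, not anything Bush specified):

    # A toy rendering of Bush's associative trails: any item may be caused
    # "to select immediately and automatically another."
    class Item:
        def __init__(self, content):
            self.content = content
            self.links = []          # items this one selects

        def join(self, other):
            self.links.append(other)
            return other

    # Build a trail the way Bush's physician might:
    case = Item("patient's puzzling reactions")
    earlier = case.join(Item("analogous earlier case"))
    anatomy = earlier.join(Item("classic text on the pertinent anatomy"))

    trail = [case, earlier, anatomy]  # a trail that can be shared and extended
    for item in trail:
        print(item.content)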

Sixty years later, professionals make discretionary hands-on use of computers in just these ways. However, 20 years after Bush wrote his essay, hands-on computer use was largely restricted to operators and data-entry personnel who were, in Shackel's words, "almost slaves" tending costly machines.
J.C.R. Licklider at BBN and ARPA. Between 1960 and
1965, there was an outpouring of ideas and systems tied to the
newly realized potential of computers. Licklider, a psychologist,
played a dual role similar to that of Bush. He wrote influential
essays and backed some of the most important research in
computer-science history as a manager at Bolt Beranek and
Newman (BBN) from 1957–1962 and director of the Information Processing Techniques Office (IPTO) of the Department of Defense Advanced Research Projects Agency (called ARPA and DARPA at different times) from 1962–1964.
BBN conducted extensive computer-related work funded
by the government and employed dozens of influential researchers, including John Seely Brown, Richard Pew, and
many who also worked at MIT (e.g., John McCarthy, Marvin Minsky, Licklider himself). IPTO funding was crucial in launching
computer-science departments and establishing artificial intelligence as a field in the 1960s, in addition to its best-known
accomplishment, giving birth to the Internet.
In 1960, Licklider described "man-machine symbiosis," noting, "there are many man-machine systems." He continued, "At present, however, there are no man-computer symbioses . . . answers are needed." Licklider saw the computer as a fast information-retrieval and data-processing machine, but he stressed its capability to play a larger role: "One of the main aims of man-computer symbiosis is to bring the computing machine effectively into the formulative parts of technical problems" (pp. 4–5).
This would require more rapid real-time interaction than
batch systems of the time supported. In 1962, Licklider and
Wes Clark outlined the requirements of a system for "on-line man-computer communication." They identified capabilities
that were ripe for development: timesharing of a computer
among many users; electronic input-output surfaces for the display and communication of symbolic and pictorial information;
interactive, real-time support for programming and information processing; large-scale information storage and retrieval
systems; and facilitation of human cooperation. They were also
perceptive in recognizing that other desirable technologies,
such as speech recognition and natural language understanding, would be very difficult to achieve.
In 1963, Licklider identified his ARPA colleagues as "the members and affiliates of the Intergalactic Computer Network," anticipating the Internet that ARPA would be instrumental in developing
(Pew, 2003). His 1965 book Libraries of the Future summarized
and expanded this vision. Lickliders role in rapidly advancing
computer science and HCI is detailed in Waldrop (2001).


John McCarthy, Christopher Strachey, Wesley Clark.


McCarthy and Strachey worked out details of timesharing, a crucial step in enabling interactive computing (Fano & Corbato,
1966). Apart from a small number of researchers who could use
computers built without concern for cost through military funding, no one could use a computer interactively as long as that required exclusive access. Timesharing allowed several (and later dozens of) simultaneous users at terminals. Languages were developed to facilitate online control and programming of timesharing systems (e.g., JOSS in 1964).
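The mechanism behind timesharing is easy to sketch: the processor is handed out in short slices, round-robin, so each terminal sees a responsive machine. The lines below illustrate the scheduling idea only (a hypothetical Python sketch; no historical system worked exactly this way):

    from collections import deque

    # Round-robin time slicing: each user's job runs for one quantum,
    # then goes to the back of the queue until its work is done.
    def timeshare(jobs, quantum=1):
        ready = deque(jobs)                  # entries: (user, remaining_work)
        while ready:
            user, work = ready.popleft()
            work -= quantum                  # run this job for one slice
            print(user, "runs for one quantum")
            if work > 0:
                ready.append((user, work))   # not finished; wait for next turn

    timeshare([("alice", 2), ("bob", 3), ("carol", 1)])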
Clark was instrumental in building the TX-0 and TX-2 at MIT's Lincoln Labs to demonstrate timesharing and other innovative concepts. These machines, which cost on the order of $10 million, were factors in establishing the Boston area as a center for computer research (all prices are in 2007 U.S. dollars). A CHI'05 panel discussion of this period that includes Clark and Ivan Sutherland can be viewed online (ePresence, 2006). The TX-2 was the most powerful and capable computer in the world at the time, but to put it in perspective, it was less powerful than a smartphone is today.
Ivan Sutherland and Computer Graphics. Sutherland's 1963 PhD thesis, describing the Sketchpad system built
on the TX-2 with the intention of making computers more approachable, is arguably the most impressive and influential
document in the history of HCI. A nice version edited by Alan
Blackwell and Kerry Rodden is available at http://www.cl.cam.ac
.uk/TechReports/UCAM-CL-TR-574.pdf. Sketchpad launched
computer graphics, setting into motion a field of research that
would have a decisive impact on HCI 20 years later.
Sutherland demonstrated iconic representations of constraints, copying, moving, and deleting of hierarchically organized objects, object-oriented programming concepts, interaction techniques, and approaches to animation. He supported
picture construction using a light pen and facilitated visualization by separating the coordinate system used to define a picture from the one used to display it. Sutherland noted the potential for animated graphics in the movie industry. His frank description of what did not work – what he called "a big flop," when engineers found Sketchpad too limited for computer-assisted design (CAD) – enabled others to make rapid progress.
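The coordinate separation Sutherland introduced survives in every modern graphics pipeline as the model-to-screen (viewing) transform: the picture is defined once in its own coordinate system, and a separate transform determines how it appears on the display. A minimal sketch of the idea (illustrative Python, not Sketchpad's actual mechanism):

    # A picture is defined once in model coordinates; pan and zoom are a
    # separate viewing transform applied only at display time.
    def to_screen(point, pan, zoom):
        x, y = point
        px, py = pan
        return ((x - px) * zoom, (y - py) * zoom)

    square = [(0, 0), (10, 0), (10, 10), (0, 10)]   # model space, defined once

    # The same picture displayed at two magnifications, without redefining it:
    print([to_screen(p, pan=(0, 0), zoom=1.0) for p in square])
    print([to_screen(p, pan=(5, 5), zoom=4.0) for p in square])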
After completing his PhD, Sutherland succeeded Licklider
as the director of IPTO in 1964. Among those whose work he
funded in this capacity was Douglas Engelbart at the Stanford
Research Institute.
Douglas Engelbart, augmenting human intellect. In 1963, Engelbart published a conceptual framework for "the augmentation of man's intellect," and over the next several years he
built systems that made great strides toward realizing this vision.
He also supported and inspired engineers and programmers
who subsequently made major independent contributions.
Echoing Bush and Licklider, Engelbart saw the potential for
computers to become "congenial tools" that people would
choose to use interactively. He wrote:
By augmenting man's intellect we mean increasing the capability of a man to approach a complex problem situation, gain comprehension to suit his particular needs, and to derive solutions to the problems. . . . By complex situations we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers. . . . We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human feel for a situation usefully coexist with powerful concepts, streamlined terminology and notation, sophisticated methods, and highly powered electronic aids.

Engelbart used his ARPA funding to develop and integrate an extraordinary set of prototype applications into his NLS system
in the span of a few years. In doing so, Engelbart conceptualized
and implemented the foundations of word processing, invented
or refined input devices including the mouse and multikey control box, and made use of multidisplay environments that integrated text, graphics, and video in windows. In 1968, these
would be demonstrated in a sensational 90-minute event at the
Fall Joint Computer Conference in San Francisco (http://sloan
.stanford.edu/MouseSite/1968Demo.html). The focal point for
interactive systems research in the United States appeared to
have moved from the East Coast to the West Coast.
Engelbart, an engineer by training, believed in careful human
factors testing of systems. He focused on improving efficiency
and reducing errors in skilled use, including concern with effects of fatigue and stress. Engelbart also considered training to
be a key component, feeling that people should be willing to
tackle a difficult interface if it delivered greater power once mastered. The relative importance of optimizing for skilled vs. initial
use later became a source of contention and still surfaces in discussions of interaction.
Ted Nelson's vision of interconnectedness. In 1960, while a graduate student in sociology, the inventor of the term "hypertext" founded Project Xanadu to create an easily used computer network. In 1965, he published a paper titled "A file structure for the complex, the changing, and the indeterminate." Nelson continued to produce works (often self-published) with
stirring calls for systems to democratize computing through a
highly interconnected, extensible network of digital objects (e.g.,
Nelson, 1973). Xanadu was never fully realized. Nelson did not
see the early World Wide Web as an adequate realization of his
vision, but features of lightweight technologies such as weblogs,
wikis, unstructured collaborative tagging, and search enable
many of the activities Nelson envisioned.
Nelson also foresaw the significance of intellectual property
issues that would arise in digital domains. Although his solutions were again not fully implemented, they drew attention to
the issues.

Conclusion: Visions, Demos, and Widespread Use


Progress in HCI can be understood in terms of the inspiring
visions, conceptual advances needed to realize and demo aspects of the visions in working prototypes, and the evolution
of widespread practices. Tying these together and enabling the
visions to be realized in practice are the remarkable, relentless
hardware advances that led to the production of modern personal computers today, which are many millions of times more
powerful than the expensive systems used by pioneers.
At the conceptual level, much of the basic foundation for today's graphical user interfaces was in place by 1965, although it would be a mistake to underestimate the technical invention and detailed work that followed. However, most computer use of that
time bore no resemblance to what could be accomplished by personal use of a $10 million custom-built machine. Pew (2003) described the breakthrough 1960 Digital Equipment Corporation (DEC) PDP-1 as "truly a computer with which an individual could interact." The PDP-1 came with CRT display, keyboard, light pen, and paper-tape reader. It cost about $1 million and had the capacity of a Radio Shack TRS-80 of 20 years later. It required considerable technical and programming support. It was a machine
used by a few fortunate computer-savvy researchers.
Licklider's "man-computer symbiosis," Engelbart's "augmenting human intellect," and Nelson's conceptual framework for "man-machine everything" described a world that did not exist, in which attorneys, doctors, chemists, designers, and other professionals were hands-on users of computers out of choice. The
reality was that for some time to come, most hands-on use would
continue to be routine, nondiscretionary operation. As for the visions, today some of the capabilities are taken for granted, some
are just being realized, and others remain elusive.

1965–1980: HCI BEFORE PERSONAL COMPUTING

Control Data Corp. launched the transistor-based 6000 series
computers in 1964. In 1965, the first commercial computers
based on integrated circuits arrived with the IBM System/360.
These powerful systems, later called mainframes to distinguish
them from minicomputers, brought computing into the business realm. Each of the three roles in computing (operation, management, programming) became a significant profession.
Operators interacted directly with computers to perform
routine maintenance, load and run programs, handle printouts,
and so on. As timesharing spread, this hands-on category expanded to include data entry and other repetitive tasks.
Managers oversaw hardware acquisition, software development, operation, routing, and the use of output. They were usually not hands-on users, but people who relied on printed output and reports considered themselves computer users.
Apart from those working in research settings, programmers
were rarely direct users until late in this period. Many flowcharted
and wrote programs on paper forms. Keypunch operators then
punched the program instructions onto cards. These were sent to
computer centers for computer operators to run. Printouts and
other output were picked up later. Many programmers would use
computers directly when they could, but the cost of computer
use generally dictated an efficient division of labor.

Human Factors and Ergonomics Embraces Computers


In 1970, Brian Shackel founded the Human Sciences and Advanced Technology (HUSAT) center at Loughborough University in the UK, which was devoted to ergonomics research emphasizing HCI. Sid Smith and other human factors engineers
examined a range of input and output issues, notably the representation of information on displays (e.g., S.L. Smith, Farquhar, & Thomas, 1965) but also computer-generated speech
(e.g., S.L. Smith & Goodwin, 1970). In 1972, the Computer Systems Technical Group (CSTG) of the HFS formed; soon it was
the largest technical group in the society.
Leading publications were the general journal Human Factors and, starting in 1969, the computer-focused International
Journal of Man-Machine Studies (IJMMS).
The first widely read HCI book was James Martin's 1973 Design of Man-Computer Dialogues. A comprehensive survey of interfaces for operation and data entry, it began with an arresting opening chapter that described a world in transition. Extrapolating from declining hardware prices, Martin wrote,

The terminal or console operator, instead of being a peripheral consideration, will become the tail that wags the whole dog. . . . The computer industry will be forced to become increasingly concerned with the usage of people, rather than with the computer's intestines. (pp. 3–4)

In the mid-1970s, U.S. government agencies responsible for agriculture and social security initiated some large-scale data-processing system development efforts, described in Pew (2003).
Although these efforts did not succeed, they led to methodological innovation in the use of style guides, usability labs, prototyping, and task analysis.
In 1980, three significant HF&E books were published: two
on VDT design (Cakir, Hart, & Stewart, 1980; Grandjean &
Vigliani, 1980) and one on general guidelines (Damodaran, Simpson, & Wilson, 1980). German work on VDT standards, first published in 1981, provided an economic incentive to design for
human capabilities by threatening to ban noncompliant products. Later in 1981 a corresponding ANSI standards group
formed for office and text systems.

Information Systems
Beginning in 1967, the journal Management Science published
a column titled "Information Systems in Management Science."
Early definitions of IS included "an integrated man/machine system for providing information to support the operation, management, and decision-making functions in an organization" and "the effective design, delivery and use of information systems in organizations" (G. B. Davis, 1974; Keen, 1980; cited in Zhang,
Nah, & Preece, 2004). A historical survey of IS research (Banker &
Kaufmann, 2004) identified HCI as one of five major research
streams. This stream began with a paper on challenges in dealing with computer-generated information (Ackoff, 1967).
Companies acquired expensive business computers to address major organizational concerns. At times, the principal
concern was to appear modern (Greenbaum, 1979), but when
computers were used, managers could be chained to them
almost as tightly as operator and data-entry slaves. That said,
operator or end-user resistance to using a system was a major management concern. The sociotechnical approach to system design was one response; it educated representative
workers in technology possibilities and involved them in design in part to increase acceptance of the resulting system
(Mumford, 1971).


Cognitive style, a major topic of early IS research, focused on difficulties that managers had communicating with people knowledgeable about computers. IS researchers published HCI articles
in management journals and also in the human factors-oriented
IJMMS. The latter, which became International Journal of Human-Computer Studies in 1994, was the 23rd most influential IS
journal according to Mylonopoulos and Theoharakis (2001, with
their calculation amended in Grudin, 2005, p. 60 note 17).

Programming: Subject of Study, Source of Change


In the 1960s and 1970s, more than 1,000 research papers on
variables affecting programming performance were published
(Baecker & Buxton, 1987). Most viewed programming in isolation, independent of organizational context. Gerald Weinberg's
landmark The Psychology of Computer Programming appeared
in 1971. In 1980, Ben Shneiderman published Software Psychology, and in 1981 Beau Sheil reviewed studies of programming notation (conditionals, control flow, data types), practices
(flowcharting, indenting, variable naming, commenting), and
tasks (learning, coding, debugging).
Programmers changed their own field through invention. In
1970, Xerox Palo Alto Research Center (PARC) was founded to
advance computer technology by developing new hardware,
programming languages, and programming environments. It
drew researchers and system builders from the labs of Engelbart and Sutherland. In 1971, Allen Newell of Carnegie Mellon
University proposed a project to PARC, launched three years
later: "Central to the activities of computing (programming, debugging, etc.) are tasks that appear to be within the scope of this emerging theory (a psychology of cognitive behavior)"
(quoted in Card & Moran, 1986).
Like HUSAT, also launched in 1970, PARC had a broad research charter. HUSAT focused on ergonomics, anchored in
the tradition of nondiscretionary use, one component of which
was the human factors of computing. PARC focused on computing, anchored in visions of discretionary use, one component of which was also the human factors of computing. Researchers at PARC and a few other places extended the
primarily perceptual-motor focus of human factors to higherlevel cognition. HUSAT, influenced by sociotechnical design,
extended human factors by considering organizational factors.

Computer Science: A New Discipline


Computer science departments emerged in the mid-1960s.
Some arose out of engineering. Computer graphics was one
such specialization of particular relevance to HCI; software engineering came later. Other computer science departments
originated as applied mathematics; this background was shared
by many early artificial intelligence researchers.
Early machines capable of interesting work in these areas
were expensive. They were uniformly funded without regard
to cost by branches of the military, for which technical success
was the sole criterion (Norberg & O'Neill, 1996). ARPA funding
under the direction of Licklider, Sutherland, and their successors played a major role. Reliance on massive funding meant

that researchers were concentrated at a few centers. These environments bore little resemblance to the batch and timeshared
environments of business computing. Hands-on users in research settings were technically savvy. They had less need for
low-level interface enhancements.
The computer graphics and AI perspective that developed
in these centers differed from that of the HCI researchers of
the time, who focused on less expensive, less capable systems
that could be studied in many more settings. To HCI, hardware advances meant greater computing capability at a relatively fixed low price. Computer graphics and AI required
high levels of computation: hardware advances meant declining cost for a relatively fixed level of computation. Only
later would widely available machines be able to support
graphical interfaces and AI programming. Nevertheless, between 1965 and 1980 some computer science researchers focused on interaction, which had been part of Ivan Sutherland's initial vision.
Computer Graphics: realism and interaction. In
1968, Sutherland joined David Evans to establish a hugely influential computer graphics lab at the University of Utah. The
computer science department there was founded in 1965, one of the first wave that emerged from mathematics and electrical
engineering. The western migration continued as students
from the lab, including Alan Kay and William Newman (and
later Jim Blinn and Jim Clark), went to California. Most graphics
systems were built on the DEC PDP-1 and PDP-7. These expensive machines (the list price of a high-resolution display alone was over $100,000 in today's dollars) were capable of multitasking, but graphics programs generally required all of the processing for one task.
In 1973, the Xerox Alto arrived. It was a powerful step toward realizing Alan Kay's vision of computation as a medium for personal computing (Kay & Goldberg, 1977). It was too expensive to be widely used (the Alto never became a product) and not powerful enough to support high-end graphics
research. However, it was a machine produced in volume that
supported graphical user interfaces of the kind Engelbart had
prototyped.
William Newman expressed the result this way: "Everything changed: the computer graphics community got interested in realism, I remained interested in interaction, and I eventually found myself doing HCI" (personal communication). Ron Baecker and Jim Foley were other graphics researchers whose focus shifted to broader interaction issues. Foley and Wallace (1974) identified requirements for designing interactive graphics systems whose aim is "good symbiosis between man and machine," and 18 papers in the first SIGGRAPH conference the same year had "interactive" or "interaction" in their titles.
At Xerox, Larry Tesler and Tim Mott took another step, recognizing that the Alto could support a graphical interface accessible to untrained people. By early 1974, they had developed
the GYPSY text editor, which along with Xerox's Bravo editor developed by Charles Simonyi preceded and influenced Microsoft
Word (Hiltzik, 1999).
The distinct focus on interaction was given a voice in 1976,
when SIGGRAPH sponsored a two-day workshop in Pittsburgh titled "User Oriented Design of Interactive Graphics Systems."


Participants who were later active in CHI included Jim Foley,
William Newman, Ron Baecker, John Bennett, Phyllis Reisner,
and Tom Moran.
J.C.R. Licklider and Nicholas Negroponte presented vision
papers. UODIGS'76 can be thought of as ending the visionary
period, embodying an idea whose time had not quite yet come.
Licklider (1976, p. 89) saw it clearly:
Interactive computer graphics appears likely to be one of the main
forces that will bring computers directly into the lives of very large numbers of people during the next two or three decades. Truly user-oriented
graphics of sufficient power to be useful to large numbers of people
has not been widely affordable, but it will soon become so, and, when
it does, the appropriateness and quality of the products offered will to
a large extent determine the future of computers as intellectual aids and
partners of people.

UODIGS was not repeated and the 150-page proceedings were not cited. Not until 1981 was another such user-oriented
design conference held, after which such conferences were
held every year. Application was not quite at hand; most HCI
research remained focused on interaction driven by commands,
forms, and full-page menus.
Artificial Intelligence: winter follows summer. In the
late 1960s and early 1970s, AI burst onto the scene, promising to
transform HCI. It did not go as expected.
Logically, AI and HCI are closely related. What are intelligent
machines for if not to interact with people? AI research has influenced HCI: speech recognition and natural language are
perennial HCI topics; expert, knowledge-based, adaptive, and
mixed-initiative systems have been tried, as have applications
of production systems, neural nets, and fuzzy logic; and recently
human-robot interaction has attracted attention.
Although some AI features make their way into systems and
applications, frequent predictions that more powerful machines
would soon bring major AI technologies into widespread use
were not borne out. Thus, AI did not come into focus in HCI,
and AI researchers have shown limited interest in HCI.
To piece this together requires a brief review of the early history. The term "artificial intelligence" first appeared in a 1955 call by John McCarthy for a meeting on machine intelligence that was held at Dartmouth the next year. Also in 1956, Alan Turing's prescient essay "Computing Machinery and Intelligence" was reprinted in The World of Mathematics, where it attracted attention. (It was first published in 1950, as were Claude Shannon's "Programming a Computer for Playing Chess" and Isaac Asimov's three laws of robotics.) Newell and Simon's logic-theory machine appeared in 1956, after which they focused on developing a general problem solver. McCarthy invented LISP in 1958.
AI pioneers were trained primarily in mathematics and logic,
where much can be built from a few axioms and a small set of
rules. Mathematics is considered a high form of intelligence, even
by nonmathematicians. It is not surprising that AI researchers anticipated that machines that operate logically and tirelessly would
make great strides. Nor is it surprising that mathematicians would
overlook the complexity and illogic that mark human users and
social constructs. Early work on AI focused heavily on theorem
proving and on games and problems with a strong logical focus,

such as chess and go. In 1988 McCarthy, who espoused predicate calculus as a foundation for AI, summed it up as follows:
As suggested by the term "artificial intelligence," we weren't considering human behavior except as a clue to possible effective ways of doing tasks. The only participants who studied human behavior were Newell and Simon. (The goal) was to get away from studying human behavior and consider the computer as a tool for solving certain classes of problems. Thus, AI was created as a branch of computer science and not as a branch of psychology.

Strong claims date back to the predawn of AI, when in the summer of 1949 Alan Turing was quoted in the London Times:
I do not see why [the computer] should not enter any one of the fields
normally covered by the human intellect, and eventually compete on
equal terms. I do not think you can even draw the line about sonnets,
though the comparison is perhaps a little bit unfair because a sonnet
written by a machine will be better appreciated by another machine.

Licklider, who identified speech understanding as important, realized its difficulty. He predicted that intelligent machines would appear in 10 to 500 years (Pew, 2003). As director of ARPA's Information Processing Techniques Office from 1962–1964, he
initiated extensive support for computer science in general and
AI in particular. MIT's Project MAC, founded in 1963 by Marvin Minsky and others, initially received $13 million per year, rising to $24 million in 1969. ARPA also sponsored the Artificial Intelligence Laboratory at Stanford, AI research at SRI and CMU, and Nicholas Negroponte's Machine Architecture
Group at MIT. An early dramatic result, SRI's Shakey the Robot,
was featured in 1970 articles in Life (Darrach, 1970) and National
Geographic. Given a simple but nontrivial task, Shakey went to
the desired location, scanned and reasoned about the surroundings, and moved objects as needed to accomplish the goal (for
Shakey at work: http://www.ai.sri.com/shakey/).
In 1970, Negroponte outlined the case for machine
intelligence:
People generally distrust the concept of machines that approach (and thus why not pass?) our own human intelligence . . . Why ask a machine to learn, to understand, to associate courses with goals, to be self-improving, to be ethical – in short, to be intelligent? . . . Because any design procedure, set of rules, or truism is tenuous, if not subversive, when used out of context or regardless of context. (p. 1)

Negroponte followed this insightful diagnosis with a false inference:
It follows that a mechanism must recognize and understand the context
before carrying out an operation. (p. 1)

Not when the mechanism is guided by a human who is cognizant of the context. However, Negroponte used this to build
a case for an ambitious research program:
Therefore, a machine must be able to discern changes in meaning brought about by changes in context, hence, be intelligent. And to do this, it must have a sophisticated set of sensors, effectors, and processors to view the real world directly and indirectly. . . . A paradigm for fruitful conversations must be machines that can speak and respond to a natural language. . . . But, the tête-à-tête (sic) must be even more direct and fluid; it is gestures, smiles, and frowns that turn a conversation into a dialogue. . . . Hand-waving often carries as much meaning as text. Manner carries cultural information: the Arabs use their noses, the Japanese nod their heads. . . (pp. 12–13)
Imagine a machine that can follow your design methodology, and at the same time discern and assimilate your conversational idiosyncrasies. This same machine, after observing your behavior, could build a predictive model of your conversational performance. Such a machine could then reinforce the dialogue by using the predictive model to respond to you in a manner that is in rhythm with your personal behavior and conversational idiosyncrasies . . . The dialogue would be so intimate – even exclusive – that only mutual persuasion and compromise would bring about ideas, ideas unrealizable by either conversant alone. No doubt, in such a symbiosis it would not be solely the human designer who would decide when the machine is relevant.

The same year, Negroponte's MIT colleague Minsky took the next step, as reported in Life:
In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able
to read Shakespeare, grease a car, play office politics, tell a joke, and
have a fight. At that point, the machine will begin to educate itself with
fantastic speed. In a few months, it will be at genius level and a few
months after that its powers will be incalculable. (Darrach, 1970, p. 60)

Other AI researchers told Darrach that Minsky's timetable was somewhat ambitious:
"Give us 15 years" was a common remark – but all agreed that there would be such a machine and that it would precipitate the third Industrial Revolution, wipe out war and poverty and roll up centuries of growth in science, education and the arts. (p. 60)

Responding to such calls, ARPA initiated major funding of speech recognition and natural language understanding in 1971. Five years later, disappointed with the progress, ARPA discontinued support for speech and language – for a while.
In Europe, a similar story unfolded. Through the 1960s, AI
research expanded in Great Britain, a principal proponent being
Turing's former colleague Donald Michie. Then in 1973, the
Lighthill Report, commissioned by the Science and Engineering Research Council, reached generally negative conclusions
about the prospects for AI systems to scale up to address real-world problems, and almost all government funding was cut off. The next decade was described as an "AI winter."

1980–1985: DISCRETIONARY USE COMES INTO FOCUS

In 1980, most people in HF&E and IS were focused on the
down-to-earth business of making efficient use of expensive
mainframes. Almost unnoticed was the foreshadowing of a
major shift. Less expensive and more capable minicomputers
based on LSI technology enabled Digital Equipment Corporation, Wang Laboratories, and Data General to make inroads into
the mainframe market. At the low end, home computers gained
capability. Growing numbers of student and hobbyist programmers were drawn to these minis and micros, creating a population of hands-on discretionary users.
Then, between 1981 and 1984, the Xerox Star, IBM PC, Apple
Lisa, Symbolics and LMI Lisp machines, Sun Microsystems and Silicon Graphics workstations, and the Apple Macintosh were released.
Another major event came on January 1, 1984, when AT&T was broken up into competing companies. AT&T had had the most employees and the most customers of any U.S. company. Neither customers nor employees had much discretion in technology use, so AT&T and its Bell Laboratories division had focused on improving training and efficiency through human factors. In addition to competing in telephony, AT&T entered the PC market in 1985 with the ill-fated Unix PC. AT&T and the regional operating companies faced customers who had choices, and their
HCI focus broadened accordingly (see Israelski & Lund, 2003).
In general, less-expensive computers created markets for
shrinkwrap software, and for the first time, computer and software companies targeted significant numbers of nontechnical
hands-on users who would get little or no formal training.
After 20 years, the visions were being realized. Nonprogrammers were choosing to use computers to do their work. The psychology of discretionary users was of particular interest to two
groups: (a) psychologists who liked to use computers and (b)
technology companies planning to sell to discretionary users.
There was a match: Computer and telecommunication companies
hired many experimental psychologists. However, before describing this, I will discuss discretionary use and its contrast: the hands-on use required to keep expensive systems running efficiently.

Discretion in Computer Use


Our lives are distributed along a continuum between the
assembly-line nightmare of Modern Times and utopian visions
of completely empowered individuals. To use a technology or
not to use it: Sometimes we have a choice, other times we don't.
On the phone, I may have to wrestle with speech recognition
and routing systems. In contrast, my home computer use is
largely discretionary. The workplace often lies in between: Technologies are recommended or prescribed, but we can ignore
some injunctions, obtain exceptions, use some features but not
others, and join with colleagues to press for changes in policy or
availability.
For early computer builders, work was more a calling than a
job, but operation required a staff to carry out essential but less
interesting tasks. For the first half of the computing era, most
hands-on use was by people hired with a mandate. Hardware innovation, more versatile software, and steady progress in understanding the psychology of users and tasks (and transferring that understanding to software developers) led to hands-on
users with more choice in what they did and how they did it.
Rising expectations played a role: people learned that software
is flexible and expected it to be more congenial. Competition
among vendors produced alternatives. Today there is more emphasis on marketing to consumers and user friendliness.
Discretion is not all or none. No one must use a computer,
but many jobs and pastimes require it. True, people can resist,
sabotage, use some features but not others, or quit the job. However, a clerk or systems administrator is in a different situation
than someone using technology for leisure activity. For an airline-reservation operator, computer use is mandatory. For someone
booking his or her own flight, computer use is discretionary. This
chapter explores implications of these differences.


Several observers noted the shift toward greater discretion. A quarter century ago, John Bennett (1979) predicted that discretionary use would lead to more concern for usability. A
decade later, Liam Bannon (1991) noted broader implications of
a shift "from human factors to human actors." However, the trajectory is not always toward choice. Discretion can be curtailed; for example, a word processor is now required for many
jobs and is no longer simply an alternative to a typewriter. Even
in an era of specialization, customization, and competition, the
exercise of choice varies over time and across contexts.
Discretion is one factor of many, but an analysis of its role
casts light on the relationships among diverse HCI efforts: why
they differ and why they have remained distinct.

Minicomputers and Office Automation


Cabinet-sized computers that could support several people were
available from the mid-1960s. Starting with the VAX 11/780, superminis in the late 1970s were capable enough to affect mainframe sales and support integrated suites of productivity tools. In
1980, Digital Equipment Corporation, Data General, and Wang
Laboratories were growth companies near Boston. Digital became the second largest computer company in the world.
A minicomputer could handle a database of moderate size or personal productivity tools used from terminals. With a dumb terminal, the central processor handled each keystroke; other minicomputers came with terminals whose own processor let a person fill out a full screen of information that was then sent to the central processor as a block. A mini could support a small
group (or office) with file sharing, applications such as word
processing, spreadsheets, and e-mail, and output devices. They
were marketed as office systems, office automation systems,
or office information systems.
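The two terminal styles implied very different interaction loops, as the sketch below caricatures: a dumb terminal sent every keystroke to the shared processor, while a block-mode terminal accumulated a filled-out form locally and transmitted it all at once. This is a toy illustration, not any vendor's actual protocol.

```python
# Toy contrast of the two terminal styles described above (illustrative only).

def dumb_terminal(keystrokes, central_processor):
    # Every keystroke is sent to, and handled by, the shared central processor.
    for key in keystrokes:
        central_processor(key)

def block_mode_terminal(keystrokes, central_processor):
    # The terminal's own processor buffers a full screen of input locally,
    # then ships the completed form to the central processor in one block.
    form = "".join(keystrokes)
    central_processor(form)

load = []
dumb_terminal("name: ada", load.append)        # nine interrupts at the mini
print(len(load))  # 9

load = []
block_mode_terminal("name: ada", load.append)  # one transmission
print(len(load))  # 1
```

The difference mattered in practice: a mini serving dozens of dumb terminals spent cycles on every keystroke, which is one reason block-mode terminals were attractive for form-filling work.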
In 1980, the Stanford International Symposium on Office
Automation launched a research field that was influential for
a decade. Douglas Engelbart contributed two papers to the
proceedings (Landau, Bair, & Siegman, 1982). The same year,
the American Federation of Information Processing Societies
(AFIPS, parent organization of ACM and IEEE at the time) held
the first of seven annual Office Automation conferences with
an associated product exhibition. Also in 1980, ACM formed a
Special Interest Group on Office Automation (SIGOA), which
two years later launched the biennial Office Information System
Conference. In 1983, ACM Transactions on Office Information
Systems (TOOIS) emerged, one year after the independent journal Office: Technology and People.
Office Information Systems, which focused on the uses of
minicomputers at the time, was positioned alongside Management Information Systems (IS), which focused on mainframes.
The scope is reflected in the charter of TOOIS: database theory,
artificial intelligence, behavioral studies, organizational theory,
and communications. Minis were accessible database research
tools; Digital's PDP series was a favorite of AI researchers; minis
were familiar to behavioral researchers who used them to run
and analyze experiments, and they became interactive computers of choice for many organizations. Computer-mediated communication was an intriguing new capability that easily supported users at different terminals of the same computer.

The researchers were discretionary computer users, but most office workers did not choose their tools. The term automation, challenging and exciting to the researchers, conjured up different images for some office workers, some of whom preferred Engelbart's focus on augmentation.
Papers in the SIGOA newsletter, COIS, and TOOIS included
technical work on database theory, a moderate number of AI papers (the AI winter had not yet ended), decision-support and
computer-mediated communication papers from the IS community, and behavioral studies by researchers who later became
active in CHI. IS papers were prevalent in the newsletter and
technical papers in the journal. The journal was also a major
outlet for behavioral studies before the journal HCI started in 1985.
Although OA/OIS research was eventually absorbed by other
fields, it led the way in addressing a range of important emerging issues, including hypertext, computer-mediated communication, and collaboration support more generally.

The Formation of ACM SIGCHI


Major threads of HCI research are illustrated in Fig. 1: Human Factors, Information Systems, and the research focused on discretionary hands-on use that emerged in the 1980s. In 1980, Human
Interaction with Computers, by Harold Smith and Thomas
Green, perched on a cusp. It briefly addressed the human as a
systems component (the nondiscretionary perspective, p. ix).
One third covered research on programming. The remainder addressed "nonspecialist people," discretionary users who were not computer specialists. Smith and Green wrote, "It's not enough just to establish what people can and cannot do; we need to spend just as much effort establishing what people can and want to do" (italics in the original; p. viii).
That year, as IBM prepared to launch the PC, a groundswell
of attention to computer user behavior was building. IBM had
recently added software to hardware as a product focus. Several cognitive psychologists joined an IBM research group that
included John Gould, who had engaged in human factors research since the late 1960s. They initiated empirical studies of
programming and software design and use. Other psychologists
leading recently formed HCI groups included Phil Barnard at
the Medical Research Council Applied Psychology Unit (APU)
in Cambridge, England; Tom Landauer at Bell Labs; Donald Norman at the University of California, San Diego; and John Whiteside at Digital Equipment Corp.
Xerox PARC and its CMU collaborators were particularly active,
continuing work in several areas that proved to have singular influence. The 1981 Star, with its carefully designed graphical user
interface, was not a commercial success (nor were a flurry of GUIs
that followed, including the Apple Lisa), but it influenced researchers and developers, and of course the Macintosh.
Communications of the ACM created a Human Aspects of
Computing department in 1980. The next year, Tom Moran
edited a special issue of Computing Surveys on The Psychology of the Computer User. Also in 1981, the ACM Special Interest Group on Social and Behavioral Science Computing
(SIGSOC) extended its workshop to cover interactive software
design and use. In 1982, a conference in Gaithersburg, Maryland, on Human Factors in Computing Systems was unexpectedly well attended. Shortly afterwards, SIGSOC shifted its focus to computer-human interaction and its name to SIGCHI (Borman, 1996).

FIGURE 1. HCI events and topics discussed in this chapter. Expansion of acronyms, significance of people and books, and reasons for their placement are in the text.
In 1983, the first CHI conference drew more than 1,000 people. Half of the 58 papers were from the seven research labs
just mentioned. Cognitive psychologists in industry dominated
the program, although the HFS cosponsored the conference
and contributed the program chair Richard Pew, committee
members Sid Smith, H. Rudy Ramsay, and Paul Green, and several presenters. Brian Shackel and society president Robert
Williges gave tutorials the first day.
Computer programmers were the first professionals to become discretionary hands-on users, as paper coding sheets were discarded in favor of text editing at interactive terminals, PCs, and small minicomputers. Therefore, many early CHI papers, by Ruven
Brooks, Bill Curtis, Thomas Green, Ben Shneiderman, and others,
continued the psychology-of-programming research thread. IBM
Watson researchers also contributed, as noted by John Thomas:
One of the main themes of the early work was basically that we in IBM were afraid that the market for computing would be limited by the number of people who could program complex systems, so we wanted to find ways for nonprogrammers to be able, essentially, to program. (personal communication, October 2003)

Psychologists and studies of editing were so prevalent that in 1984 Thomas Green remarked that "text editors are the white rats of HCI." As personal computing spread and the same methods were applied to studying other discretionary use, studies
of programming gradually disappeared.
CHI focused on novice use. Initial experience is particularly
important for discretionary users and the vendors developing
software for them. Novice users are also a natural focus when
studying new technologies, and a critical focus when more people take up computing each year than did the year before.
Routine, experienced use was still widespread. Computer
databases were extensively used by airlines, banks, government
agencies, and other organizations. This hands-on activity was
rarely discretionary. Managers mainly oversaw development
and analyzed data, leaving data entry and information retrieval
to people hired for those jobs. Improving skilled data handling
was a human factors undertaking. CHI studies of database use were few; I count three over a decade, all focused on novice
or casual use.


Fewer European companies produced mass-market software. European research favored in-house development and
use. At Loughborough University, HUSAT focused on job design
(the division of labor between people and systems) and collaborated with the Institute for Consumer Ergonomics, particularly
on product safety. In 1984, Loughborough initiated an HCI graduate program drawing on human factors, industrial engineering,
and computer science. The International Conference on HCI (INTERACT), first held in London in 1984 and chaired by Shackel, drew HF&E and CHI researchers.
In his perceptive essay just cited, Bannon urged that more attention be paid to discretionary use while criticizing CHI's heavy emphasis on initial experiences; this may have been a reflection of Bannon's European perspective.
The visionaries were not familiar to many of the CHI researchers who helped realize some of their visions. The 633 references in the 58 papers presented at CHI '83 included many authored by well-known cognitive scientists, but Bush, Sutherland,
and Engelbart were not cited at all. Many computer scientists
familiar with the early work entered CHI a few years later, and
the CHI psychologists eventually discovered and identified with
these pioneers who shared their concern for discretionary use,
provided conceptual continuity, and bestowed legitimacy on a
young enterprise seeking to establish itself academically and
professionally.

CHI and Human Factors Diverge


Between 1980 and 1985, researchers at Xerox PARC and CMU
introduced another influential research program. Card, Moran, and Newell (1980a, 1980b) described a keystroke-level model for user performance time with interactive systems, with cognitive components in the goals, operators, methods, selection rules (GOMS) model; these were the basis for their landmark 1983 book, The Psychology of Human-Computer Interaction.
This work was highly respected within CHI although it did
not address discretionary, novice use. On the contrary, it focused on the repetitive expert use studied in human factors. In
fact, it was explicitly positioned in opposition to the stimulus-response bias of human-factors research:
Human-factors specialists, ergonomists, and human engineers will find
that we have synthesized ideas from modern cognitive psychology and
artificial intelligence with the old methods of task analysis. . . . The user
is not an operator. He does not operate the computer, he communicates with it. . . . (p. viii)

Newell and Card (1985) noted that human factors had a role
in design, but continued,
Classical human factors . . . has all the earmarks of second-class status. (Our
approach) avoids continuation of the classical human-factors role (by
transforming) the psychology of the interface into a hard science. (p. 221)

Card wrote:
Human factors was the discipline we were trying to improve. . . . I personally changed the (CHI conference) call in 1986, so as to emphasize
computer science and reduce the emphasis on cognitive science, because I was afraid that it would just become human factors again. (E-mail, June 2004)

"Hard science, in the form of engineering, drives out soft science, in the form of human factors," wrote Newell and Card (1985, p. 212).

Ultimately, human-performance modeling drew a modest but fervent CHI following. Key goals of the modelers differed from those of practitioners and other researchers. "The central idea behind the model is that the time for an expert to do a task on an interactive system is determined by the time it takes to do the keystrokes," wrote Card, Moran, and Newell (1980b). Modeling was extended to a range of cognitive processes, but remained most useful in helping to design for nondiscretionary
users, such as telephone operators engaged in repetitive tasks
(e.g., Gray, John, Stuart, Lawrence, & Atwood, 1990). Its role in
augmenting human intellect was unclear.
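The keystroke-level model reduces to simple arithmetic: assign each physical and mental operator a duration, then sum them over the method an expert would use. As a rough illustration, here is a minimal sketch in Python using the commonly cited operator estimates associated with the model; the task breakdown in the example is hypothetical.

```python
# Minimal keystroke-level model (KLM) sketch. The operator durations
# (in seconds) are the commonly cited estimates associated with Card,
# Moran, and Newell's model; the example task breakdown is hypothetical.
OPERATOR_SECONDS = {
    "K": 0.2,   # keystroke or button press, average skilled typist
    "P": 1.1,   # point with a mouse to a target on the display
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation before a subunit of the task
}

def klm_estimate(sequence: str) -> float:
    """Sum operator durations for a sequence such as 'MPKKHK'."""
    return sum(OPERATOR_SECONDS[op] for op in sequence)

# Hypothetical task: delete a word by double-clicking it and pressing
# Delete -- prepare mentally, point, click twice, home to keyboard, keystroke.
print(f"Estimated expert time: {klm_estimate('MPKKHK'):.2f} s")  # 3.45 s
```

Such estimates matter most where the same operation is repeated thousands of times a day, as in the telephone-operator studies cited above.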
CHI and human factors moved apart, although Human Factors in Computing Systems remains the CHI conference subtitle. They were never closely integrated. Most of the cognitive
psychologists had turned to HCI after earning their degrees and
were unfamiliar with the human-factors research literature. The
HFS did not again cosponsor CHI, and its researchers disappeared from the CHI program committee. Most CHI researchers
who had published in the annual human-factors conference and
Human Factors journal shifted to CHI, Communications of the
ACM, and the journal HCI established in 1985 by Moran and published by Erlbaum, a publisher of psychology books and journals.
The shift was reflected at IBM T.J. Watson Research Center.
John Gould and Clayton Lewis authored a CHI '83 paper that
nicely defined the CHI focus on user-centered, iterative design
based on prototyping. Watson cognitive scientists helped shape
CHI, but Gould's principal focus remained on human factors; he served as HFS president four years later. Symbolically, in 1984, Watson's Human Factors Group faded away and a User Interface
Institute emerged.
CHI researchers wanted to be seen as engaged in hard science or engineering. The terms "cognitive engineering" and "usability engineering" were adopted. In the first paper presented at CHI '83, "Design Principles for Human-Computer Interfaces," Donald Norman applied engineering techniques to discretionary use, creating user-satisfaction functions based on technical parameters. Only years later did CHI loosen its identification with engineering.

Workstations and Another AI Summer


High-end workstations from Apollo, Sun, and Silicon Graphics
appeared between 1981 and 1984. Graphics researchers no
longer had to congregate in heavily financed labs (notably MIT
and Utah in the 1960s; MIT, NYIT, and PARC in the 1970s). Because these workstations did not reach a mass market, graphics research that focused on photorealism and animation did
not directly influence HCI more broadly.
The Xerox Star (actually called an Office Workstation), Apple Lisa, and other commercial GUIs appeared, but when the
first CHI conference was held in December, 1983, none was succeeding. They were priced too high or ran on processors that
were too weak to exploit graphics effectively.


In 1981, Symbolics and LMI introduced workstations optimized to run the Lisp programming language. The timing could
not have been more fortuitous. In October of that year, a conference on Next Generation Technology was held in the National
Chamber of Commerce auditorium in Tokyo, and, in 1982, the
Japanese government announced the establishment of the Institute for New Generation Computer Technology (ICOT) and its
10-year Fifth-Generation project focused on AI. AI researchers
in Europe and the United States sounded the alarm. Donald
Michie of Edinburgh saw it as a threat to western computer technology, and Ed Feigenbaum of Stanford wrote,
The Japanese are planning the miracle product. . . . They're going to give the world the next generation, the Fifth Generation, of computers, and those machines are going to be intelligent. . . . We stand, however, before a singularity, an event so unprecedented that predictions are almost silly. . . . Who can say how universal access to machine intelligence (faster, deeper, better than human intelligence) will change science, economics, and warfare, and the whole intellectual and sociological development of mankind? (Feigenbaum & McCorduck, 1983)

At the same time, parallel distributed processing, or neural net models, seized the attention of researchers and media. Used
to model signal detection, motor control, semantic processing,
and a wide range of phenomena, neural nets represented conceptual and technical advances over earlier AI work on perceptrons, but were of particular interest because the new generation of minicomputers and workstations supported simulation
experiments. Production systems, a computer-intensive AI modeling approach with a psychological foundation developed at
CMU, gained wider use in research.
These developments triggered an artificial intelligence
gold rush. As with actual gold rushes, most of the money was
made by those who outfitted and provisioned the prospectors, although generous government funding again flowed to AI
researchers. The European ESPRIT and UK Alvey programs
invested over $200 million per year starting in 1984 (Oakley,
1990). In the United States, funding for the DARPA Strategic
Computing AI program alone, begun in 1983, rose to almost
$400 million in 1988 (Norberg & O'Neill, 1996). Investment in AI
by 150 U.S. companies was estimated at about $2 billion in 1985
(Kao, 1998).
The unfulfilled promises of the past led to changes this time
around. General problem solving was emphasized less; domain-specific problem solving was emphasized more. Terms such as
intelligent knowledge-based systems, knowledge engineering,
expert systems, machine learning, language understanding, image understanding, neural nets, and robotics were often favored
over AI.
In 1983, Raj Reddy of CMU and Victor Zue of MIT criticized
the mid-1970s abandonment of funding for speech-processing
research, and soon funds were plentiful for that and other AI
efforts (Norberg & O'Neill, 1996, p. 238). Johnson (1985) estimated that 800 corporate employees and 400 academics were working on natural-language-processing research in 1985. Commercial NLU interfaces to databases, such as AI Corporation's Intellect and Microrim's Clout, appeared.
AI optimism is illustrated by two meticulously researched
Ovum reports on speech and language processing (Johnson, 1985; Engelien & McBride, 2001). In 1985, speech-and-language
product revenue was $75 million, comprised mostly of income from grants and investor capital. Ovum projected that
sales would reach $750 million by 1990 and $2.75 billion by
1995. In 1991, sales were under $90 million, but projections
were optimistically pegged at $490 million for 1995 and $3.6 billion for 2000.
U.S. corporations banded together to counter the Japanese
Fifth-Generation project, jointly funding the Microelectronics
and Computer Technology Corporation (MCC). (U.S. antitrust
laws were relaxed to allow them to cooperate in this way.) MCC
embraced AI, reportedly becoming the leading customer for
both Symbolics and LMI. MCC projects included two parallel
NLU efforts, work on intelligent advising, and CYC, Douglas Lenat's ambitious project to build an encyclopedic commonsense knowledge base for other programs to consult. In 1984, Lenat predicted that by 1994 CYC would be intelligent enough to educate itself from online texts. Five years later, CYC was reported to be on schedule to "spark a vastly greater renaissance in (machine learning)" (Lenat, 1989).
Knowledge engineering was another facet of AI that involved
human interaction, bringing it closer to HCI, at least in theory.
The difficulty of eliciting knowledge from experts frustrated researchers whose primary interest was in representing and reasoning about knowledge. This created opportunities for researchers
in the HF&E community. European funding directives particularly encouraged work spanning technical and behavioral concerns. International Journal of Man-Machine Studies became a
major outlet for both HF&E and AI research in the 1980s.
AI interaction with CHI was limited. CHI '83 and CHI '85 sessions covered speech and language, cognitive modeling, knowledge-based help, and knowledge elicitation, but AI technologies
did not succeed in the marketplace and were often directed at
nondiscretionary users. AI Corporation sold the database interface Intellect primarily to the government before it disappeared.
Nor were many AI researchers and developers interested in interaction details: They focused on the power of tools such as
EMACS and UNIX, quickly forgetting the painful weeks spent
learning needlessly arbitrary commands.

1985–1995: GRAPHICAL USER INTERFACES SUCCEED

"There will never be a mouse at the Ford Motor Company."
(High-level acquisition manager, 1985)

Graphical user interfaces were a disruptive revolution in interaction when they finally succeeded commercially, as were
earlier shifts to stored programs and to interaction based on
commands, full-screen forms and full-screen menus. Some sectors were affected well before others.
GUIs were particularly attractive to new users. Their success
immediately affected the CHI field. However, not until Windows
3.0 succeeded in 1990 did GUIs have much influence among
government agencies and business organizations that were the
focus of the other HCI researchers. By then the technology was
better understood and less disruptive. The early 1990s also saw


the maturation of local area networks and the Internet. This foundation for computer-mediated communication and information sharing was also transformational.

CHI Embraces Computer Science


Apple launched the Macintosh with a 1984 Super Bowl ad pitched
at office work, but sales did not follow, and in mid-1985 Apple was
in trouble. Then Macs appeared with four times as much RAM,
sufficient to manage Aldus PageMaker, Adobe PostScript, and the Apple LaserWriter as they were released, along with Microsoft's Excel and Word for Macintosh. The more powerful Mac Plus arrived in January 1986. The Mac succeeded where the many commercial GUIs before it had not; it was popular with consumers
and the platform for desktop publishing.
Even within CHI, GUIs were initially controversial. They had
disadvantages: an extra level of interface code increased development complexity and created reliability challenges; they consumed processor cycles and distanced users from the underlying system, which, many believed, experienced users would
have to learn eventually. Carroll and Mazur (1986) showed that
GUIs confused and created problems for people familiar with
existing interfaces. An influential essay on direct manipulation
interfaces, Hutchins, Hollan, and Norman (1986), concluded that it was too early to tell how GUIs would fare. GUIs could well prove useful for novices, they wrote, but "we would not be surprised if experts are slower with Direct Manipulation systems than with command language systems." Most HCI research had focused
on expert use, so this valid insight seemed significant. However,
in a rapidly expanding consumer market, first-time use is critical.
Hardware and software advances eliminated other difficulties,
and GUIs were here to stay.
The effects within CHI were dramatic. Active topics of research, including command naming, text editing, and the psychology of programming, were quickly abandoned; more technical topics such as user-interface management systems became
significant. At a higher level, psychology gave way to computer
science as the driving force in interaction design.
Researchers had been engaged in establishing a comprehensive psychological theoretical framework based on formal experiments (Newell & Card, 1985; Carroll & Campbell, 1986;
Long, 1989; Barnard, 1991). This was conceivable for constrained
command- and form-based interaction, but could not be scaled
up to design spaces that included color, sound, animation, and
an endless variety of icons, menu designs, window arrangements,
and so on. The urgent need was to identify the most pressing
problems and find satisfactory rather than optimal solutions. Rigorous experimentation, the principal skill of cognitive psychologists, gave way to faster and less precise assessment methods.
Also, to explore the dynamically evolving, unconstrained design
space required software engineering expertise.
As a result, the late 1980s saw an influx of computer scientists
into the CHI community. The topic HCI became part of the curriculum of many computer-science departments. Computer scientists working on interactive graphics saw CHI as a natural
home, as did software engineers interested in interaction, and
some AI researchers working on speech recognition, language
understanding, and expert systems. Reflecting this shift, in 1994 ACM launched Transactions on Computer-Human Interaction.
Of course, computer-science researchers were more familiar
with early pioneering work than were many of the cognitive scientists who preceded them.
Early PCs and Macs were not easily networked, but as local
area networks spread, CHIs focus expanded to include collaboration support. This brought it into contact with efforts in MIS
and Office Automation research, which are discussed below.

Human Factors and Ergonomics Maintains a Nondiscretionary Use Focus
HF&E research continued to respond to the needs of government agencies, the military, aviation, and telecommunications.
Census, tax, social security, health and welfare, power-plant operation, air-traffic control, ground control for space missions, military logistics, and the processing of text and voice data for intelligence all contribute to the government being the largest consumer of computing.
Most users in these settings are assigned technology. The focus is on skilled use. Small efficiency gains in individual transactions can yield large benefits over time. For routine data entry and other tasks, improvements that may not influence
discretionary users can make a difference. After CHI formed, the
HFS undertook a study to see how it would affect membership
in its Computer Systems Technical Group and found unexpectedly little overlap (Richard Pew, personal communication).
Research funding in HF&E responded to governmental concerns and initiatives. Government also promoted the development of ergonomic standards, in part to help with the problem
of defining system requirements for competitive bidding while
remaining at arm's length from the potential developers who
better understand the technical possibilities. Compliance with
standards was specified in a contract.
In 1986, Sid Smith and Jane Mosier published the last in a
series of government-sponsored interface guidelines. Nine hundred and forty-four specific design guidelines were organized
into sections titled Data Entry, Data Display, Data Transmission,
Data Protection, Sequence Control, and User Guidance. They
recognized that GUIs would expand the design space beyond
the reach of such an already cumbersome, comprehensive set
of guidelines that did not cover icons, pull-down or pop-up
menus, mouse-button assignments, sound, animation, and so on.
Requirements definition shifted to specify predefined interface
styles and design processes rather than to identify specific features that would be built from scratch.
DARPA's heavily funded Strategic Computing program set out to develop an Autonomous Land Vehicle, a Pilot's Associate, and a Battle Management system. All raised human-factors
research issues. These systems would require interactive technologies such as speech recognition, language understanding,
and heads-up displays. These might not be used by people who
have a choice, but pilots, people guiding autonomous vehicles,
and officers under stressful conditions may have no better alternative. Such technologies could also prove useful for professional translators and intelligence analysts, or when a phone system provides no alternative, a disability limits keyboard use, or
hands are otherwise occupied.


IS Extends Its Range


Although graphical user interfaces were not quickly adopted by
organizations, business graphics was important in a research
field focused on managerial use. Remus (1984) contrasted tabular and graphic presentations, Benbasat and Dexter (1985)
added color as another factor, and many studies followed. The
concept of cognitive fit between task and tool was introduced in
this context to explain apparently contradictory results in the literature (Vessey & Galletta, 1991). Studies considered online and
paper presentation. In practice, color displays were rare in the
1980s; most managers dealt with printed reports.
Involvement of internal end users in the development
process was actively discussed (Friedman, 1989). A Scandinavian
group set out to empower workers in this way (Bjerknes, Ehn, &
Kyng, 1987), but more often, the focus was on increasing user
acceptance of the resulting system.
Hands-on managerial use was atypical, but it was central to group decision support systems research, which emerged from decision support systems and evolved into group support systems. Computer-supported meeting facility research was conducted in the mid-1980s in several laboratories (e.g., Begeman
et al., 1986; DeSanctis & Gallupe, 1987; Dennis, George, Jessup,
Nunamaker, & Vogel, 1988). Given the expense of these systems
and the decision-maker focus, key research was in schools of management and not in computer-science departments or software
companies. Jay Nunamaker's group at the University of Arizona
explored approaches to brainstorming, idea organizing, online
voting, and other meeting activities. This became a significant IS
contribution to research in Computer Supported Cooperative
Work (CSCW), discussed next, and also led to the formation of a
company to market the technology (Nunamaker, Briggs, Mittleman, Vogel, & Balthazard, 1997).
An influential IS research thread is based on the Technology
Acceptance Model (TAM) introduced in F. D. Davis (1989). It focuses on perceived usefulness and perceived usability to improve white-collar performance that is "often obstructed by users' unwillingness to accept and use available systems" (p. 319). "An element of uncertainty exists in the minds of decision makers with respect to the successful adoption," wrote Bagozzi, Davis, and Warshaw (1992, p. 664). This managerial view of individual behavior was influenced by Davis's exposure to some early CHI usability research.
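TAM studies typically operationalize these constructs with multi-item questionnaire scales whose averages are then related statistically to intention to use. The sketch below illustrates the general shape of such scoring; the items, weights, and scale are invented for illustration and are not Davis's instrument.

```python
# Hypothetical TAM-style scoring sketch. Perceived usefulness (PU) and
# perceived ease of use (PEOU) are averaged from 7-point Likert items;
# the regression weights relating them to intention are made up.
def construct_score(items: list[int]) -> float:
    return sum(items) / len(items)

pu_items = [6, 5, 6, 7]    # e.g., "Using the system improves my performance"
peou_items = [4, 5, 3, 4]  # e.g., "I find the system easy to use"

pu = construct_score(pu_items)
peou = construct_score(peou_items)

# In TAM, PU and PEOU jointly predict intention to use (and PEOU also
# influences PU); the 0.6/0.3 weights here are purely illustrative.
intention = 0.6 * pu + 0.3 * peou
print(f"PU={pu:.2f}  PEOU={peou:.2f}  predicted intention={intention:.2f}")
```

The asymmetry of the illustrative weights echoes the point below that TAM researchers considered usefulness more important than ease of use.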
High interest in TAM showed that the MIS focus, in which hands-on use was primarily nondiscretionary (operation, data entry, and data retrieval), was shifting as hands-on use
spread to white-collar workers who could refuse to play. Contrast IS with CHI: Consumers choose technologies that they
perceive to be useful, so CHI assumes perceived utility and
rarely considers utility at all. TAM researchers considered utility more important than usability. CHI focused on usability a
decade before TAM, albeit more on measures of actual usability than measures of perceived usability. Perception was a secondary user satisfaction measure to CHI researchers, who
believed (not entirely correctly) that measurable reduction
in time, errors, questions, and training would, over time,
translate into positive perceptions. "Acceptance" is not in the CHI vocabulary: a discretionary user "chooses" or "adopts" rather than "accepts."


Harvard Business Review published "Usability: The new dimension of product design" (March, 1994). In concluding that user-centered design is "still in its infancy," it made no mention of CHI. The communities remained largely isolated.

Collaboration Support: OIS Gives Way to CSCW


In the late 1980s, three research communities focused on small-group communication and information sharing. The Office Automation/Office Information System field was already there. Declining costs of computing attracted MIS researchers focused on organizational decision making to group decision making more generally, as previously noted. The proliferation of LANs encouraged some CHI researchers to move from individual productivity to a quest for "killer apps" that would appeal to groups.
Although the OA/OIS field had led the way, it declined and
largely disappeared in this period. The minicomputer platform
for much of the work did not survive the competition from
PCs and workstations. The concept of office or group was
problematic: organizations and individuals are relatively
sharply defined persistent entities with goals and needs, but
small groups often have ambiguous memberships and
processes that shift when one member joins or departs. People who work together often fall under different budgets, making technology acquisition complicated when it is not organization wide. Conference series and journals dropped the
identification with offices.
First, automation lost favor as a term; ACM SIGOA shifted to SIGOIS (Office Information Systems) in 1986, the same year the annual AFIPS OA conferences ended. By 1991, the term office itself began to disappear: Transactions on Office Information Systems became Transactions on Information Systems; Office: Technology and People became Information Technology and People; Conference on Office Information Systems became Conference on Organizational Computing Systems.
The AI summer that was one component of the OA/OIS effort ended as AI failed to meet expectations: massive funding did not deliver a Pilot's Associate, Autonomous Land Vehicle, or Battle Management system for the military, or automated offices for enterprises. CHI conference sessions on language processing diminished early, but sessions on modeling, adaptive
interfaces, advising systems, and other uses of intelligence in interfaces increased through the late 1980s before declining in the
1990s. AI research did not disappear, but funding became scarce,
employment opportunities dried up, and conference participation dropped off.
Building on a 1984 workshop (Greif, 1985), the 1986 Computer Supported Cooperative Work conference brought together researchers from diverse disciplines interested in issues
of communication, information sharing, and coordination. Participants came primarily from IS, OIS, CHI, distributed AI, and
anthropology. Four of 13 CSCW program committee members
and many papers were from schools of management, with similar numbers from OIS.
A field seemed to coalesce in 1988, with the publication of
Computer-Supported Cooperative Work, edited by Irene Greif,
and SIGCHI's management of the biennial CSCW conferences


held in North America. However, representation from IS and OIS faded. Scandinavian cooperative or participatory design researchers remained active longer. Although they shared the IS
focus on organizational development and use rather than commercial software development, they also shared CHIs focus
on discretionary use, albeit with a different slant: empowering
workers to control the design of their workplaces. Ethnographers, whose ethic is also to act to benefit their informants, in
this case technology users rather than technology managers,
rose in influence in CSCW.
Some IS researchers shifted participation to a series of less
selective annual Groupware conferences that started in 1992.
They also created a newsletter, Groupware Report, which listed many
relevant conferences, but not CSCW. Eventually, many in IS settled on the Collaboration Technology track of the annual HICSS
conference as a prejournal publication arena, with some participating in the Organizational Computing Systems (1991–1995) and GROUP (1997–present) descendants of the Conference on
Office Information Systems.
The cultural issues that pulled apart the groups initially active in CSCW are addressed in the Discussion section. However, CSCW remains a strong research area. It has attracted a
broad swath of HCI researchers long enough to foster mutual
understanding, ranging from highly technical work on group
undo to thick ethnographies of workplace activity, from studies
of IM dyads to scientific collaboratories involving hundreds of
people over several continents and many years. For details see
the Handbook chapter "Groupware and Computer Supported Cooperative Work" by prominent CSCW researchers Gary and
Judy Olson.

1995–2005: THE INTERNET ERA ARRIVES


How did the spread of the Internet and emergence of the Web
affect the different HCI research threads? CHI researchers were
relatively Internet savvy. Although excited by the prospects, they
took this change in stride. Over time, though, the nature of research, development, and use were affected. The change was
not disruptive to human factors and ergonomics, either. The
Web was initially a return to a relatively familiar form-driven interface style, and it was not a locus of routine work for many
people. However, these developments had a seismic impact on
IS, so that is where this section begins.

The Formation of AIS SIGHCI


Computer users in organizations were no longer almost slaves devoted to maximizing computer use; screen savers vied with solitaire to be the main consumer of processor cycles. Embrace
of the Internet created more porous organizational boundaries.
Employees download free software such as instant-messaging
clients, music players, and weblog tools inside the firewall despite IT concerns about productivity and security. These are not
the high-overhead applications of the past. Another change over
time is that home use of software has reduced employee patience with poor interactive software at work. In addition, managers who were hands-off users in the 1980s became late
adopters in the 1990s and are now hands-on early adopters of
technologies that benefit them.
These changes affect use, but the Web had a more dramatic
effect on IS research. Corporate IT departments had focused on
internal operations, but suddenly organizations were creating
Web interfaces to vendors and customers. The Internet bubble
revealed how little was understood in these areas, but neither
online presence and services nor business-to-business systems
disappeared when the bubble burst. Portals proliferated: the
Web became an essential business tool. IT professionals tasked
with providing interfaces to highly discretionary external customers found themselves in much the same place CHI had been
20 years earlier, whether they realized it or (most often) not.
In 2001, the Association for Information Systems (AIS) established the Special Interest Group in Human-Computer Interaction (SIGHCI). The founders defined HCI by citing 12 works by CHI researchers (Zhang et al., 2004, p. 148) and made it a priority to bridge to CHI and the Information Science community (Zhang, 2004, p. 2). SIGHCI's broad charter includes a
range of organizational issues, but published work focuses on
interface design for e-commerce, online shopping, online behavior especially in the Internet era, and effects of Web-based
interfaces on attitudes and perceptions. Eight of the first 10 papers in SIGHCI-sponsored journal issues cover Internet and
Web behavior.

Human Factors and Ergonomics Embraces Cognitive Approaches
In 1996, the Human Factors and Ergonomics Society formed a
new technical group, Cognitive Engineering and Decision Making. It became the largest technical group in the society. A
decade earlier, this would have seemed an unlikely development: Senior human factors researchers disliked cognitive approaches, and it was in the CHI field that cognitive engineering
was being used in this sense (Norman, 1982; 1986).
Even more astonishing would have been the prediction that in 2005, human performance modeling would be a new, thriving
technical group in HFES. The technical group was started by
Wayne Gray and Dick Pew, both of whom participated in CHI in
the 1980s. Human performance modeling was the Card, Moran,
and Newell (1983) effort to reform the discipline of human factors from the outside. Work had continued, focused on expert
performance (e.g., a special issue of HCI, Vol. 12, Number 4,
1997). Today the reform effort is appropriately positioned
within human factors, focused largely on nondiscretionary use.
See Michael Byrne's chapter on cognitive architectures for more
on this topic.
HF&E has to a large degree shaped government funding. The U.S. National Science Foundation Interactive Systems program (subsequently renamed Human-Computer Interaction) was described as follows:
The Interactive Systems Program considers scientific and engineering
research oriented toward the enhancement of human-computer communications and interactions in all modalities. These modalities include
speech/language, sound, images and, in general, any single or multiple,
sequential, or concurrent, human-computer input, output, or action.
(National Science Foundation, 1993)

One NSF HCI program manager reported that his proudest accomplishment was doubling the already ample funding for
natural language understanding. NSF established a separate
Human Language and Communication Program in 2003, but
speech-and-language research continued to draw heavy funding
support in the HCI and Accessibility programs, and lighter support in AI and other NSF programs.
Two subsequent NSF HCI program managers chose to
emphasize direct-brain interfaces or brain-computer interaction, using brainwaves, implants, or other means. These program managers rarely attended CHI conferences, where one finds
little work on speech, language, or direct-brain interaction.
Whether or not these technologies prove useful, they will not appear soon in many homes or offices. A review committee in 2003
noted that a random sample of NSF HCI grants included none by
prominent CHI researchers (National Science Foundation, 2003).
Human-factors research on computer use in general has dispersed. Within HFES, the Computer Systems Technical Group
has declined in membership, but HCI issues now appear in
most branches of human factors, from telecommunications to
medical systems.

CHI Evolves, Embraces Design


The steady flow of new hardware, software features, applications, and systems ensures that initial and early use of digital
technology is always present, is important to technology producers, and raises new research issues. CHI has tracked this
flow of innovation, generally picking up an innovation at the
point it attracts a wide audience.
As an application matures, use may become routine. Many
people now have little choice about using e-mail and word processing, for example. Such technologies get less attention as
CHI directs its gaze at discretionary use of the moment: instant
messaging, weblogs, collaboration technology, Web design,
ubiquitous computing, mobile computing, social computing, and so on. New technologies have raised new issues, such
as privacy, and encouraged new methods, such as ethnography.
At a more abstract level, there is continuity at CHI: there is continued exploration of input devices, communication channels,
information-visualization techniques, and design methods.
The growing participation in an Internet that seemed to become more reliable and support higher bandwidth every month
through the mid-1990s increased the focus on the use of real-time communication technologies and quasi-real-time technologies such as e-mail. The Web temporarily stopped much of this, however, by shifting attention to less direct interaction via static sites.
The Web is like a new land mass, a new continent. First came
the explorers, posting flags here and there. Then came the first
attempts at settlement, in the form of virtual worlds research
and development. Few of these first pioneers survived. There was little to do in virtual worlds, except for multiplayer games
and simulations. However, in recent years, people, especially
young people, are shifting major portions of their work and play
online, including reliance on online reference, digital photos,
social software, preferences for digital documents and online
shopping, and multiplayer games. This is reflected in CHI research topics, although game design has not yet attracted the attention it merits.
The Web has curtailed some research focused on self-contained personal productivity tools. Despite high development
and maintenance costs, representing knowledge in application
software was appealing when external information resources
were limited. Now that so much information is available online,
including the ability to locate and access knowledgeable people,
static knowledge representation is less useful. In contrast, adaptive systems that build and maintain local knowledge can play a
greater role. Steady progress in machine learning is influencing
productivity tools, although hyperbolic forecasts have not
disappeared.
The psychologists and computer scientists who formed CHI
considered interface design to be a scientific and engineering
undertaking. Their focus on performance assumed that people
eventually choose the most efficient alternatives. Because human
discretion involves aesthetic preferences and invites marketing
and nonrational persuasion, as computing costs came down this
view could not be sustained. However, it held on longer in CHI
than in SIGGRAPH, where aesthetic appeal motivated much of
the research. Even now, the study of enjoyment in CHI is labeled
"funology" (Blythe, Monk, Overbeeke, & Wright, 2003) lest someone think that the researchers are too relaxed about it.
Visual designers participated in graphical interface research
from early on. Aaron Marcus began working full-time on computer graphics in the late 1960s. William Bowman's 1968 book Graphic Communication was a strong influence on the Xerox Star, for which the designer Norm Cox's icons were chosen in an evaluation described in Bewley et al. (1983). However,
graphic designers were usually seen as secondary (Evenson,
2005). In 1995, building on working-group meetings at recent
conferences, SIGCHI initiated Designing Interactive Systems
(DIS), a biennial conference drawing some visual designers and
many systems designers. In 2003, SIGCHI, SIGGRAPH, and the
American Institute of Graphic Arts (AIGA) initiated the Designing for User Experience (DUX) conference series that fully embraces visual and commercial design.
The evolution of CHI is reflected in the influential contributions of Donald Norman. A cognitive scientist who introduced the term "cognitive engineering," he presented the first CHI '83 paper. It defined "User Satisfaction Functions" based on speed
of use, ease of learning, required knowledge, and errors. His
influential 1988 book Psychology of Everyday Things (POET) focused on pragmatic usability. Its 1990 reissue as Design of
Everyday Things reflected a field refocusing on invention. Fourteen years later, he published Emotional Design: Why We Love
(or Hate) Everyday Things, stressing the role of aesthetics in
our response to objects.
Design's first cousin, marketing, has been poorly regarded in
the CHI community (see Marcus, 2004). However, website design forces the issue. Site owners often wish to keep users on a


site, whereas users may prefer to escape quickly. Consider supermarkets, where items that most shoppers want are positioned far apart, forcing people to traverse aisles so other products can beckon. CHI professionals usually align themselves
with end users, but when designing for a site owner, they face
a stakeholder conflict. This was not true in the past: Designers
of individual productivity tools had negligible conflict of interest
with prospective customers. Marketing is likely to find a place in
CHI, perhaps as "brandology."

LOOKING BACK: CULTURES AND BRIDGES

Despite a significant common focus and a dynamic environment with shifting alliances, the three major threads of HCI
research, human factors and ergonomics (HF&E), Information Systems (IS), and Computer-Human Interaction (CHI), have
not merged. They have not even interacted as much as one
might expect, and not for lack of trying. The HFS was a 50%
cosponsor of the first CHI conference. CSCW set out to bring together CHI and IS, and AIS SIGHCI has engaged in another such
effort. Understanding the obstacles to interaction provides insight into the nature of these disciplines.

Effects of Varying Discretion


A principal distinction is that HF&E and IS arose before discretionary hands-on use was widespread, whereas CHI made that
its focus. Researchers in the first two examined organizational as well as technical issues; the latter came only slowly to consider organizational context. Researchers in HF&E and IS
shared journals; for example, Benbasat and Dexter (1985) was
published in Management Science and cited five human factors articles.
The difference in user motivation affected methods. The psychologists who shaped HF&E and CHI were trained to test hypotheses about behavior in laboratory experiments. Experimental subjects agree to follow instructions for an extrinsic
reward. This is a good model for nondiscretionary use, but not
for discretionary use. CHI researchers relabeled them "participants," which sounds volitional, but discovered that lab findings
require confirmation in real-world settings more often than is
true for ergonomics studies.
Traditional ergonomic goals apply (fewer errors, faster performance, quicker learning, greater memorability, and being enjoyable), but the emphasis differs. For power-plant operation,
error reduction is key and performance enhancement is good.
Other goals are less critical. In other settings, a formal experiment is necessary to show that a new interface will shave a few
seconds from a repetitive operation.
In contrast, consumers often respond to visceral appeal and
initial experience at the expense of long-term usability and utility.
In assessing designs for mass markets where catching obvious
problems is more significant than striving for an optimal solution, less-rigorous studies (e.g., discount usability; Nielsen, 1989) are adequate, and time-consuming qualitative approaches (e.g., personas; Pruitt & Adlin, 2006) can provide a deeper understanding of users that can be drawn upon as new possibilities arise. Unlike HF&E, CHI slowly abandoned its roots in scientific theory and engineering. This did not impress rigorously experimental HF&E or theory-oriented IS researchers. The controversial psychological method of verbal reports, developed by Newell and Simon (1972), was applied to design as the thinking-aloud method by Clayton Lewis (1983; Lewis & Mack, 1982). Perhaps the most widely used CHI method, it led some researchers in the other areas to characterize CHI people as wanting to talk about their experiences rather than doing research.

Academic, Linguistic, and Generational Cultures

HF&E and IS also share the traditional academic culture of the sciences: Conferences are venues for work in progress, and
journals are repositories for polished work. In contrast, for CHI
and other U.S. computer-science disciplines, conference proceedings are the final destination for most work. Journals are
secondary. Outside the United States, computer science retains
more of a journal focus, perhaps due to the absence of professional societies that archive proceedings. This circumstance impedes communication across disciplines and continents. Researchers in journal cultures chafe at CHIs rejection rates; CHI
researchers are dismayed by the relatively unpolished work at
other conferences.
Accepting only around 20% of submissions, CHI conferences are selective. With few exceptions, HF&E and IS conferences have acceptance rates of 50% or greater. On the other
hand, CHI journals receive fewer submissions and have higher
acceptance rates. (See Grudin, 2005.) Many CHI researchers report that journals are not relevant, and I estimate that as little
as 10% of work in CHI-sponsored conferences reaches journal
publication. In contrast, an IS track organizer for Hawaii International Conference on System Sciences 2004 estimated that 80% of research there progressed to a journal (Jay Nunamaker, HICSS-38 presentation, January 2004).
A linguistic divide also set CHI apart. HF&E and IS used the term "operator"; in IS, a "user" could be a manager who used printed computer output, not a hands-on end user. Within CHI, "operator" was demeaning, "user" was always hands-on, and "end user" seemed a superfluous affectation. In HF&E and IS, "task analysis" generally refers to an organizational decomposition of work or, especially more recently, a broader analysis that includes external factors; in CHI it was a cognitive decomposition, such as breaking a text-editing move operation into select, cut, select, paste. In IS, "implementation" meant deployment of a system in an organization; in CHI it was a synonym for development. "System," "application," and "evaluation" also had markedly
different connotations or denotations. Significant misunderstandings and rejections resulted from failure to recognize
these distinctions.
Different perspectives and priorities were reflected in attitudes toward standards. Many HF&E researchers contributed to
the development of standards, believing that standards contribute to efficiency and innovation. A widespread CHI view was
that standards inhibit innovation. There are elements of truth in
both views, and positions may have converged as Internet and


Web standards were tackled. However, the attitudes reflected
the different demands of government contracting and commercial software development. Specifying adherence to standards is
a useful tool for those preparing requests for proposals, but
compliance with standards can make it more difficult for a product to differentiate itself.
A generational divide also existed. Many CHI researchers
grew up in the 1960s and 1970s, and did not appreciate the
HF&E orientation toward military and government systems, or the inability (still occasionally observed) of HF&E and IS man-machine interaction researchers to adopt gender-neutral terminology. It added to the difficulty of generating enthusiasm for
building bridges and exploring literatures.

LOOKING AHEAD: TRAJECTORIES


The future of HCI will be tremendously varied, dynamic, and full
of surprises. The intent in this chapter is to provide a perspective or framework with which to view other chapters or topics of
interest and assess how they might develop. In this section, I
extrapolate from some observations about the past and present
state of the field.

Discretion: Now You See It, Now You Don't


We exercise choice more at home than at work, a lot when buying online, none when confronted by a telephone answering
system, considerable when young and healthy, less when constrained by injury or aging. Software that was discretionary yesterday is indispensable today. The need to collaborate forces us
to adopt common systems and conventions.
Consider a hypothetical team. In 1987, one member still
used a typewriter, and others chose different word processors.
All exchanged printed documents. One emphasized by underlining, another by italicizing, a third by bolding. In 1997,
group members wanted to share documents digitally, so they
had to adopt the same word processor and conventions.
Choice was curtailed; it had to be exercised collectively. Today,
it suffices to share documents in PDF format, so in 2007 the
team is using different word processors again. Tomorrow perhaps I could personalize my view to see in italics what you see
as bold.
Shackel (1997) noted this progression under the heading "From Systems Design to Interface Usability and Back Again."
Early designers focused at the system level; operators had to
cope. When the PC merged the roles of operator, output user,
and program provider, the focus shifted to the human interface
and choice. Then individual users again became components
in fully networked organizational systems. When a technology
becomes mission-critical, as e-mail did for many in the 1990s,
discretion is gone.
The converse also occurs. Discretion increases when employees download free software and demand capabilities they
have at home. Managers are less likely to mandate the use of
a technology that they use and find burdensome. Even in the
military, where language-processing systems appealed to military officers, the situation changed when they became hands-on users:
Our military users . . . generally flatly refuse to use any system that requires speech recognition. . . . Over and over and over again, we were told "If we have to use speech, we will not take it. I don't even want to waste my time talking to you if it requires speech." . . . I have seen generals come out of using, trying to use one of the speech-enabled systems looking really whipped. One really sad puppy, he said "OK, what's your system like, do I have to use speech?" He looked at me plaintively. And when I said "No," his face lit up, and he got so happy. (Forbus, 2003; see also Forbus, Usher, & Chapman, 2003)

As familiar applications become essential and security concerns curtail openness, one might expect discretion to recede.
However, Moore's law, competition, and the phenomenal ease of sharing bits seem to guarantee that a steady flow of unproven technologies will find its way to us.

Ubiquitous Computing, Invisible HCI?


Norman (1988) wrote of the invisible computer of the future.
Like motors, he speculated, computers would be present everywhere and visible nowhere. We interact with clocks, refrigerators, and cars. Each has a motor, but there is no human-motor
interaction specialization. A decade later, at the height of the
Y2K crisis and the Internet bubble, computers were more visible
than ever. We may always want a multipurpose display or two,
but part of Norman's vision is materializing. However, with computers embedded everywhere, concern with interaction is also
everywhere.
Perhaps HCI, too, will become invisible through omnipresence. Interaction with digital technology is becoming part of
everyone's research, and the three major HCI fields are losing
participation.
Human Factors and Ergonomics. David Meister, author of The History of Human Factors and Ergonomics, stresses
the continuity of HF&E in the face of technology change:
Outside of a few significant events, like the organization of HFS in 1957
or the publication of Proceedings of the annual meetings in 1972, there
are no seminal occurrences . . . no sharp discontinuities that are memorable. A scientific discipline like HF has only an intellectual history; one
would hope to find major paradigm changes in orientation toward our
human performance phenomena, but there is none, largely because the
emergence of HF did not involve major changes from pre-World War II
applied psychology. In an intellectual history, one has to look for major changes in thinking, and I have not been able to discover any in
HF. (e-mail to author, 7 September 2004)

Membership in the Computer Systems Technical Group of HFES has declined sharply, but technology use is stressed in many technical groups, such as Cognitive Engineering and Decision Making, Communication, Human Performance Modeling, Internet, System Development, and Virtual Environment. Nor can Aging, Medical Systems, or other technical groups avoid invisible computers.


Information Systems. As IS thrived during the Y2K crisis and Internet bubble years, other management-school disciplines (finance, marketing, operations research, organizational behavior) became more technically savvy. When the bubble burst and enrollments declined, IS was left with a less well-defined niche. IS research issues, including HCI, remain significant, but this cuts two ways. With the standardization and outsourcing of IT functions, Web portals and business-to-business
ties get more attention. These bring in economic and marketing
considerations, making it easier to outsource HCI functions to
the traditional management disciplines.


Computer-Human Interaction. This nomadic group started in psychology and obtained a place at the table in computer science, in some departments. Lacking a well-defined academic niche, CHI's identity is tied to its conference, where participation peaked in 2001. Conferences that are more specialized thrive. As technologies appear and attract a critical mass at an ever-increasing pace, researchers may start new conferences or just blog their findings. Soon after the Web came into view, annual WWW conferences drew papers on HCI issues. Now we see conferences on ubiquitous and pervasive computing, agents, design, emerging technologies, and so on. HCI is invisibly present in each. High computer science conference rejection rates and a new generational divide could accelerate this dispersion of effort.

Information Science

In 2005, the first annual i-Conference drew deans and faculty from information schools to State College, Pennsylvania. Nineteen universities with information programs are listed on the Web site http://iconference.ist.psu.edu/content/view/23/37/. These schools vary considerably: Some are transformed library schools, some have closer ties to information systems, and others formed relatively recently, focusing on digital libraries and other issues. Together they represent an interesting development, in that they have drawn HCI researchers from HF&E, IS, and CHI. Information science also has conceptual and human links to the office information systems thread, whose information retrieval and language-processing elements have carried over into digital library research.
Within these schools, issues of academic culture are being addressed almost daily. It is too early to say how this will evolve, but information science will be a significant player in HCI. Design and information are two foci active in 2007: Design is a compensation for past neglect, and information is a new world opening before us.

CONCLUSION: THE NEXT GENERATION

Moore's law ensures that landscapes will continue to shift, providing new forms of interaction to explore and new practices to improve. The first generation of computer researchers, designers, and users grew up without computers. Then came a generation who began using computers as students; they entered workplaces and changed the way technology was used. Now a generation has grown up with game consoles and cell phones, absorbing an aesthetic of technology design, communicating with IM and text messaging, developing skills at searching, browsing, tagging, and synthesizing, and acquiring multimedia authoring talent via digital cameras and blogs. They are entering workplaces, and they will change everything once again.
However it comes to be defined and wherever it is studied, HCI is still in its early days.

APPENDIX: A FEW PERSONAL OBSERVATIONS

My career from 1973 to 1993 followed a common path: computer programmer, then cognitive psychology student, then industry HCI researcher and developer, and finally academic. My experiences were not special; I was one of many in a position to see and feel events and changes. I describe some of them here to add texture and a sense of human impact. My interest in history arose from the feeling of being disrupted and swept along by unexpected forces. My first try at understanding was called "The Computer Reaches Out" (Grudin, 1990): I saw computers slowly reaching into the world and changing us in ways we did not foresee.

1970: A Change in Plans

I am a student, reading and believing the Life magazine article describing superhuman, intelligent computers that would arrive within several years. I conclude that if we make it through the next few years, we can count on machines to do all useful work. Humans should focus on what they enjoy. I shift from physics to mathematics, and from politics to literature.

1973: Three Computing Professions, Only One of Them Hands-On

Looking for my first job in 1973, I found three computer job categories in the Boston Globe classifieds: operators, programmers, and systems analysts. Not qualified to be a highly paid analyst, I interviewed for some low-paid, hands-on operator jobs, and landed a programming job with Wang Laboratories, which was then a small electronics company. In two years at that job, I never saw the computer my programs ran on. We flowcharted on paper and coded on coding sheets, which a secretary sent to be punched and verified. A van carried the stack of cards 20 miles to a computer center, and later that day or the next I got the printout (e.g., "Error in Line 20").


1975: Joining the First Profession of Discretionary Hands-On Users

In 1975, the company acquired a few teletype terminals with access to the WYLBUR line editor, developed at the Stanford Linear Accelerator Center. Some programmers chose to abandon paper and became hands-on computer users.



1983: A Chilly Reception for an Early Paper on Discretion in Use

My first HCI publication showed that people may choose a slower interface for aesthetic or other reasons, even when familiar with a more efficient alternative (Grudin & MacLean, 1984). A senior colleague asked us not to publish it. He worked on improving expert performance efficiency through cognitive human performance modeling. He felt that a demonstration that greater efficiency could be undesirable would be a distraction, saying, "Sometimes the larger enterprise is more important than a small study."

1984: First Encounters with IS, Human Factors, and Design

I returned to Wang Laboratories, by then a large minicomputer company. I was influenced there by another cognitive psychologist, Susan Ehrlich. She was in a marketing research group and later managed the human factors group. She introduced me to the IS literature, which I found difficult to understand. I attended Boston-area chapter meetings of both the HFS and SIGCHI. I noted the cultural differences but felt CHI could learn from human factors. I decided to label myself a "human factors engineer," a conscious-if-futile gesture to counter CHI antipathy toward human factors. I drove to Cambridge to see the newly released Macintosh. Few software engineers had the visual design skills that I realized would become important, so at work I sought out the industrial designers of our hardware boxes, who could be attracted to software interface design.

1985: The GUI Shock

In the early 1980s, I was one of many cognitive psychologists working on command naming. This was an important application in the era of command-line interfaces, but our greater ambition was to develop a comprehensive theoretical foundation for HCI. The success of the Mac in 1985 curtailed interest in command names. For most of us, it also dashed the hope of developing a comprehensive psychology of HCI. No one would build on our past work, a depressing thought. We had to choose: Am I a cognitive psychologist or a computer professional?

1986: Beyond "The User": Group and Organizational Issues

I joined MCC, an industry research consortium. Between jobs I worked on two papers, each addressing a different major challenge encountered in product development. From 1984 to 1986, I had worked on several products or features intended to support groups rather than individual users. These had not done well. Why was group support so challenging? In addition, it was painfully evident that organizational structures and development processes were badly suited to interactive software development. What could be done about it? These two issues formed the basis of much of my research for the next decade and made CSCW my natural home.

1989: Discovering Contexts of Development, a Major CHI-IS Differentiator

I was invited to spend two years at Aarhus University in Denmark. Within weeks of arriving in a country with little commercial software development, I recognized the significance of the differences in the conditions that govern product, in-house, and contract development of interactive software, differences that shape CHI, IS, and software engineering perspectives. Sorting this out led to my first library research into historical information (Grudin, 1991). Consulting journals and magazines long untouched in dusty library corridors felt like wandering through an archaeological site.

1990: Just Words: Terminology Can Matter

I felt a premonition in 1987 when my IS-oriented colleague Susan Ehrlich titled a paper "Successful Implementation of Office Communication Systems." By "implementation," she meant introduction into organizations. To me, implementation was a synonym for coding or development. Sure enough, the ACM editor asked her to change "implementation" to "adoption" (Ehrlich, 1987). What she called systems, I called applications. It left me uneasy. Would my dear friend Language get in the way?
In 1990, I set out to teach HCI at Aarhus, describing the focus as "user-interface evaluation." My new colleagues seemed embarrassed. Weeks later, a book written by one of them was published (Bødker, 1990). Its first sentence was a quotation: "Design is where the action is, not evaluation." Now I was embarrassed. In their in-house development world, with its dogma of getting the design right up front, development projects often took 10 years or more. Evaluation occurred at the end, when only cosmetic changes were possible; thus the negative stigma. In commercial product development, evaluation of the previous version, competitive products, and (ideally) prototypes was integral to design. Evaluation is central to iterative design. It was also the skill experimental psychologists brought to the table. We considered it a good thing.
Later in 1990, I participated in a panel on task analysis at a European conference. To my dismay, this IS-oriented group defined task analysis differently than I did. To them, it meant an organizational task analysis: tasks as components in a broad work process. In CHI, it meant a cognitive task analysis: breaking a simple task into components; for example, is "move text" better thought of as select-delete-paste or select-move-place?
Also in 1990, en route to a job talk at UC Irvine, I gave my first lecture to an IS audience, at the UCLA Anderson School of Management. It ended badly when the department head asked a question. It seemed meaningless, so I replied cautiously. He rephrased the question. I rephrased my response. He started again, then stopped and shrugged as if to say, "This fellow is hopeless."
When I saw him a few months later, he was visibly astonished
to learn that his Irvine friends were hiring me. Later, I understood the basis of our failure to communicate: We attached different meanings to the word "users." In CHI, it means hands-on computer users. IS users often never used a keyboard; they identified software requirements, managed development, read printed output and reports, and so on. His question had focused on users who were not hands-on. To me, all use was hands-on, so the question made no sense.
A book could be written about the word "user." From a CHI perspective, the IS user was called "customer." Consultants used "client." In IS, the hands-on user was the "end user." In CHI's parlance, "end user" and "user" were one and the same (a person who both entered data and used the output), so "end user" seemed a superfluous or odd affectation. Human factors used "operator," which CHI considered demeaning. In software engineering, "user" typically denoted a tool user, namely a software engineer.
I usually consider words a necessary but uninteresting
medium for conveying meaning, but these experiences led to an
essay on unintended consequences of language (Grudin, 1993).

2007: Reflecting on Bridging Efforts


I have been a minor participant in efforts to find synergies between CHI and human factors, office information systems, information systems (in both CSCW and AIS SIGHCI), and Design. A sixties person, I experienced generational and cultural divides. Some of us avoided publishing in "man-machine" conferences and journals, and many of my MCC colleagues joined the consortium to avoid "Star Wars" military projects. We lived
through disputes between cognitive psychologists and radical
behaviorist or strictly perceptual-motor psychologists. Many
CHI researchers shifted from journals to conferences as preferred publication venues, and from hypothesis-driven research
to build-and-assess research.
Some differences fade over time, but terminology continues
to impede communication. Conference reviewers are often irritated by acronyms used by authors from other fields. Writing
a chapter for an IS-oriented book, my coauthor and I wrangled
at great length with the editor over terminology (Palen &
Grudin, 2002).
A final example: In researching this chapter, I reviewed the literature on TAM, the model of white-collar employee perceptions of technology that is heavily cited in IS but never in CHI. I repeatedly had difficulty using a search engine to return to TAM references. On the third such occasion, I saw why: TAM stands for Technology Acceptance Model, but I always typed in "Technology Adoption Model." Nondiscretionary acceptance versus discretionary adoption: different biases led to different terminology, and to error and confusion.

References
Ackoff, R. L. (1967). Management misinformation systems. Management Science, 14, B147–B156.
Asimov, I. (1950). I, robot. New York: Gnome Press.
Baecker, R., & Buxton, W. (1987). A historical and intellectual perspective. In R. Baecker & W. Buxton (Eds.), Readings in HCI: A multidisciplinary approach (pp. 41–54). San Francisco: Morgan Kaufmann.
Baecker, R., Grudin, J., Buxton, W., & Greenberg, S. (1995). A historical and intellectual perspective. In R. Baecker, J. Grudin, W. Buxton, & S. Greenberg (Eds.), Readings in HCI: Toward the year 2000 (pp. 35–47). San Francisco: Morgan Kaufmann.
Bagozzi, R. P., Davis, F. D., & Warshaw, P. R. (1992). Development and test of a theory of technological learning and usage. Human Relations, 45(7), 660–686.
Banker, R. D., & Kaufmann, R. J. (2004). The evolution of research on information systems: A fiftieth-year survey of the literature in Management Science. Management Science, 50(3), 281–298.
Bannon, L. (1991). From human factors to human actors: The role of psychology and HCI studies in system design. In J. Greenbaum & M. Kyng (Eds.), Design at work (pp. 25–44). Hillsdale, NJ: Erlbaum.
Barnard, P. (1991). Bridging between basic theories and the artifacts of HCI. In J. M. Carroll (Ed.), Designing interaction: Psychology at the human-computer interface (pp. 103–127). Cambridge: Cambridge University Press.
Begeman, M., Cook, P., Ellis, C., Graf, M., Rein, G., & Smith, T. (1986). Project Nick: Meetings augmentation and analysis. Proceedings of Computer-Supported Cooperative Work 1986, 1–6.
Benbasat, I., & Dexter, A. S. (1985). An experimental evaluation of graphical and color-enhanced information presentation. Management Science, 31(11), 1348–1364.
Bennett, J. L. (1979). The commercial impact of usability in interactive systems. In B. Shackel (Ed.), Man-computer communication (Vol. 2). Maidenhead: Infotech State-of-the-Art, Pergamon-Infotech.
Bewley, W. L., Roberts, T. L., Schroit, D., & Verplank, W. L. (1983). Human factors testing in the design of Xerox's 8010 Star office workstation. Proceedings of CHI '83 (pp. 72–77). New York: ACM.
Bjerknes, G., Ehn, P., & Kyng, M. (Eds.). (1987). Computers and democracy: A Scandinavian challenge. Aldershot, UK: Avebury.
Blackwell, A. (2006). The reification of metaphor as a design tool. ACM Transactions on Computer-Human Interaction, 13(4), 490–530.
Blythe, M. A., Monk, A. F., Overbeeke, K., & Wright, P. C. (Eds.). (2003). Funology: From usability to user enjoyment. New York: Kluwer.
Borman, L. (1996). SIGCHI: The early years. SIGCHI Bulletin, 28(1), 1–33.
Bowman, W. J. (1968). Graphic communication. New York: John Wiley.
Bush, V. (1945). As we may think. The Atlantic Monthly, 176, 101–108.
Bødker, S. (1990). Through the interface: A human activity approach to user interface design. Mahwah, NJ: Lawrence Erlbaum Associates.
Cakir, A., Hart, D. J., & Stewart, T. F. M. (1980). Visual display terminals. New York: Wiley.
Card, S. K., & Moran, T. P. (1986). User technology: From pointing to pondering. Proceedings of the Conference on the History of Personal Workstations (pp. 183–198). New York: ACM.
Card, S. K., Moran, T. P., & Newell, A. (1980a). Computer text-editing: An information-processing analysis of a routine cognitive skill. Cognitive Psychology, 12, 32–74.
Card, S. K., Moran, T. P., & Newell, A. (1980b). Keystroke-level model for user performance time with interactive systems. Communications of the ACM, 23(7), 396–410.


Card, S., Moran, T. P., & Newell, A. (1983). The psychology of HCI. Mahwah, NJ: Lawrence Erlbaum Associates.
Carroll, J. M., & Campbell, R. L. (1986). Softening up hard science: Response to Newell and Card. HCI, 2(3), 227–249.
Carroll, J. M., & Mazur, S. A. (1986). Lisa learning. IEEE Computer, 19(11), 35–49.
Damodaran, L., Simpson, A., & Wilson, P. (1980). Designing systems for people. Manchester, UK: NCC Publications.
Darrach, B. (1970, November 20). Meet Shaky: The first electronic person. Life Magazine, 69(21), 58B–68.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–339.
Davis, G. B. (1974). Management information systems: Conceptual foundations, structure, and development. New York: McGraw-Hill.
Dennis, A., George, J., Jessup, L., Nunamaker, J., & Vogel, D. (1988). Information technology to support electronic meetings. MIS Quarterly, 12(4), 591–624.
DeSanctis, G., & Gallupe, R. B. (1987). A foundation for the study of group decision support systems. Management Science, 33, 589–610.
Dyson, F. (1979). Disturbing the universe. New York: Harper & Row.
Ehrlich, S. F. (1987). Strategies for encouraging successful adoption of office communication systems. ACM Transactions on Office Information Systems, 5(4), 340–357.
Engelbart, D. (1963). A conceptual framework for the augmentation of man's intellect. In P. Howerton & D. Weeks (Eds.), Vistas in information handling (Vol. 1, pp. 1–29). Washington, DC: Spartan Books.
Engelien, B., & McBryde, R. (1991). Natural language markets: Commercial strategies. London: Ovum Ltd.
ePresence. (2006). Early interactive graphics at MIT Lincoln Labs. Retrieved March 13, 2007, from http://epresence.tv/archives/ or http://www.billbuxton.com/Lincoln.html
Evenson, S. (2005, February). Design and HCI highlights. Presented at the HCIC 2005 Conference, Winter Park, Colorado.
Fano, R., & Corbató, F. (1966). Timesharing on computers. Scientific American, 214(9), 129–140.
Feigenbaum, E. A., & McCorduck, P. (1983). The fifth generation: Artificial intelligence and Japan's computer challenge to the world. Reading, MA: Addison-Wesley.
Foley, J. D., & Wallace, V. L. (1974). The art of natural graphic man-machine conversation. Proceedings of the IEEE, 62(4), 462–471.
Forbus, K. (2003, May). Sketching for knowledge capture. [Lecture.] Unpublished raw data.
Forbus, K. D., Usher, J., & Chapman, V. (2003). Qualitative spatial reasoning about sketch maps. Proceedings of the Innovative Applications of AI. Menlo Park, CA: AAAI.
Friedman, A. (1989). Computer systems development: History, organization and implementation. New York: Wiley.
Gould, J. D., & Lewis, C. (1983). Designing for usability: Key principles and what designers think. Proceedings of CHI '83 (pp. 50–53). New York: ACM.
Grandjean, E., & Vigliani, A. (1980). Ergonomics aspects of visual display terminals. London: Taylor and Francis.
Gray, W. D., John, B. E., Stuart, R., Lawrence, D., & Atwood, M. E. (1990). GOMS meets the phone company: Analytic modeling applied to real-world problems. Proceedings of INTERACT '90 (pp. 29–34). Amsterdam: North Holland.
Greenbaum, J. (1979). In the name of efficiency. Philadelphia: Temple University Press.
Greif, I. (1985). Computer-supported cooperative groups: What are the issues? Proceedings of the AFIPS Office Automation Conference. Montvale, NJ: AFIPS Press.
Greif, I. (Ed.). (1988). Computer-supported cooperative work: A book of readings. San Mateo, CA: Morgan Kaufmann.


Grudin, J. (1990). The computer reaches out: The historical continuity of interface design. Proceedings of CHI '90 (pp. 261–268). New York: ACM.
Grudin, J. (1991). Interactive systems: Bridging the gaps between developers and users. IEEE Computer, 24(4), 59–69.
Grudin, J. (1993). Interface: An evolving concept. Communications of the ACM, 36(4), 110–119.
Grudin, J. (2005). Three faces of HCI. IEEE Annals of the History of Computing, 27(4), 46–62.
Grudin, J. (2006). Human factors, CHI, and MIS. In P. Zhang & D. Galletta (Eds.), HCI in MIS (I): Foundations. Armonk, NY: M. E. Sharpe.
Grudin, J., & MacLean, A. (1984). Adapting a psychophysical method to measure performance and preference tradeoffs in HCI. Proceedings of INTERACT '84 (pp. 338–342). Amsterdam: North Holland.
Hertzfeld, A. (2005). Revolution in the valley: The insanely great story of how the Mac was made. Sebastopol, CA: O'Reilly Media.
Hiltzik, M. A. (1999). Dealers of lightning: Xerox PARC and the dawn of the computer age. New York: HarperCollins.
Hopper, G. (1952). The education of a computer. Proceedings of ACM Conference; reprinted in Annals of the History of Computing, 9(3–4), 271–281, 1987.
Hutchins, E. L., Hollan, J. D., & Norman, D. A. (1986). Direct manipulation interfaces. In D. A. Norman & S. W. Draper (Eds.), User centered system design (pp. 87–124). Mahwah, NJ: Lawrence Erlbaum Associates.
Israelski, E., & Lund, A. M. (2003). The evolution of HCI during the telecommunications revolution. In J. A. Jacko & A. Sears (Eds.), The HCI handbook. Mahwah, NJ: Erlbaum.
Johnson, T. (1985). Natural language computing: The commercial applications. London: Ovum Ltd.
Kao, E. (1998). The history of AI. Retrieved March 13, 2007, from http://www.generation5.org/content/1999/aihistory.asp
Kay, A., & Goldberg, A. (1977). Personal dynamic media. IEEE Computer, 10(3), 31–42.
Keen, P. G. W. (1980). MIS research: Reference disciplines and a cumulative tradition. In First International Conference on Information Systems (pp. 9–18). Chicago: Society for Management Information Systems.
Landau, R., Bair, J., & Siegman, J. (Eds.). (1982, March). Emerging office systems. Extended proceedings of the Stanford International Symposium on Office Automation, Norwood, NJ.
Lenat, D. (1989). When will machines learn? Machine Learning, 4, 255–257.
Lewis, C. (1983). The thinking aloud method in interface evaluation. Tutorial given at CHI '83.
Lewis, C., & Mack, R. (1982). Learning to use a text processing system: Evidence from thinking aloud protocols. Proceedings of the Conference on Human Factors in Computing Systems (pp. 387–392). New York: ACM.
Licklider, J. C. R. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1(1), 4–11.
Licklider, J. C. R. (1965). Libraries of the future. Cambridge, MA: MIT Press.
Licklider, J. C. R. (1976). User-oriented interactive computer graphics. Proceedings of the SIGGRAPH Workshop on User-Oriented Design of Interactive Graphics Systems (pp. 89–96). New York: ACM.
Licklider, J. C. R., & Clark, W. (1962). On-line man-computer communication. AFIPS Conference Proceedings, 21, 113–128.
Lighthill, J. (1973). Artificial intelligence: A general survey. In J. Lighthill, N. S. Sutherland, R. M. Needham, H. C. Longuet-Higgins, & D. Michie (Eds.), Artificial intelligence: A paper symposium. London: Science Research Council of Great Britain.
Long, J. (1989). Cognitive ergonomics and HCI. In J. Long & A. Whitefield (Eds.), Cognitive ergonomics and HCI (pp. 4–34). Cambridge: Cambridge University Press.
March, A. (1994). Usability: The new dimension of product design. Harvard Business Review, 72(5), 144–149.


Marcus, A. (2004). Branding 101. ACM Interactions, 11(5), 14–21.
Markoff, J. (2005). What the dormouse said: How the 60s counterculture shaped the personal computer. London: Viking.
Martin, J. (1973). Design of man-computer dialogues. New York: Prentice-Hall.
McCarthy, J. (1960). Recursive functions of symbolic expressions and their computation by machine, part 1. Communications of the ACM, 3(4), 184–195.
McCarthy, J. (1988). Review of B. P. Bloomfield, The question of artificial intelligence: Philosophical and sociological perspectives. Annals of the History of Computing, 10(3), 224–229.
Meister, D. (1999). The history of human factors and ergonomics. Mahwah, NJ: Lawrence Erlbaum Associates.
Meister, D. (2005). HFES history. HFES 2005–2006 directory and yearbook (pp. 2–3). Santa Monica, CA: Human Factors and Ergonomics Society.
Mumford, E. (1971). A comprehensive method for handling the human problems of computer introduction. IFIP Congress, 2, 918–923.
Myers, B. A. (1998). A brief history of HCI technology. ACM Interactions, 5(2), 44–54.
Mylonopoulos, N. A., & Theoharakis, V. (2001). Global perceptions of IS journals. Communications of the ACM, 44(9), 29–32.
National Geographic. (1970, November). Behold the computer revolution.
National Science Foundation. (1993). NSF 93-2: Interactive Systems Program description. 13 January 1993.
National Science Foundation. (2003). NSF Committee of Visitors report: Information and Intelligent Systems Division. 28 July 2003.
Negroponte, N. (1970). The architecture machine: Towards a more humane environment. Cambridge, MA: MIT Press.
Nelson, T. (1968). A file structure for the complex, the changing, and the indeterminate. Proceedings of the ACM National Conference (pp. 84–100). New York: ACM.
Nelson, T. (1973). A conceptual framework for man-machine everything. Proceedings of the National Computer Conference (pp. M21–M26). Montvale, NJ: AFIPS Press.
Newell, A., & Card, S. K. (1985). The prospects for psychological science in HCI. HCI, 1(3), 209–242.
Newell, A., & Simon, H. A. (1956). The logic theory machine: A complex information processing system. IRE Transactions on Information Theory, IT-2, 61–79.
Newell, A., & Simon, H. A. (1972). Human problem solving. New York: Prentice-Hall.
Nielsen, J. (1989). Usability engineering at a discount. In G. Salvendy & M. J. Smith (Eds.), Designing and using human-computer interfaces and knowledge based systems (pp. 394–401). Amsterdam: Elsevier.
Norberg, A. L., & O'Neill, J. E. (1996). Transforming computer technology: Information processing for the Pentagon, 1962–1986. Baltimore: Johns Hopkins University Press.
Norman, D. A. (1982). Steps toward a cognitive engineering: Design rules based on analyses of human error. Proceedings of the Conference on Human Factors in Computing Systems (pp. 378–382). New York: ACM.
Norman, D. A. (1983). Design principles for human-computer interfaces. Proceedings of CHI '83 (pp. 1–10). New York: ACM.
Norman, D. A. (1986). Cognitive engineering. In D. A. Norman & S. W. Draper (Eds.), User centered system design (pp. 31–61). Mahwah, NJ: Lawrence Erlbaum Associates.
Norman, D. A. (1988). The psychology of everyday things. New York: Basic Books.
Norman, D. A. (2004). Emotional design: Why we love (or hate) everyday things. New York: Basic Books.
Nunamaker, J., Briggs, R. O., Mittleman, D. D., Vogel, D. R., & Balthazard, P. A. (1997). Lessons from a dozen years of group support systems research: A discussion of lab and field findings. Journal of Management Information Systems, 13(3), 163–207.
Oakley, B. W. (1990). Intelligent knowledge-based systems: AI in the U.K. In R. Kurzweil (Ed.), The age of intelligent machines (pp. 346–349). Cambridge, MA: MIT Press.
Palen, L., & Grudin, J. (2002). Discretionary adoption of group support software. In B. E. Munkvold (Ed.), Implementing collaboration technologies in industry (pp. 159–180). London: Springer-Verlag.
Pew, R. (2003). Evolution of HCI: From MEMEX to Bluetooth and beyond. In J. A. Jacko & A. Sears (Eds.), The HCI handbook (pp. 1–17). Mahwah, NJ: Lawrence Erlbaum Associates.
Proceedings of the Joint Conference on Easier and More Productive Use of Computer Systems. (1981). New York: ACM. (See http://portal.acm.org)
Pruitt, J., & Adlin, T. (2005). The persona lifecycle: Keeping people in mind throughout product design. San Francisco: Morgan Kaufmann.
Remus, W. (1984). An empirical evaluation of the impact of graphical and tabular presentations on decision-making. Management Science, 30(5), 533–542.
Roscoe, S. N. (1997). The adolescence of engineering psychology. Santa Monica, CA: Human Factors and Ergonomics Society.
Sammet, J. (1992). Farewell to Grace Hopper: End of an era! Communications of the ACM, 35(4), 128–131.
Shackel, B. (1959). Ergonomics for a computer. Design, 120, 36–39.
Shackel, B. (1962). Ergonomics in the design of a large digital computer console. Ergonomics, 5, 229–241.
Shackel, B. (1997). HCI: Whence and whither? Journal of ASIS, 48(11), 970–986.
Shannon, C. E. (1950). Programming a computer for playing chess. Philosophical Magazine, 7(41), 256–275.
Sheil, B. A. (1981). The psychological study of programming. ACM Computing Surveys, 13(1), 101–120.
Shneiderman, B. (1980). Software psychology: Human factors in computer and information systems. Cambridge, MA: Winthrop.
Smith, H. T., & Green, T. R. G. (1980). Human interaction with computers. Orlando, FL: Academic.
Smith, S. L. (1963). Man-computer information transfer. In J. H. Howard (Ed.), Electronic information display systems (pp. 284–299). Washington, DC: Spartan Books.
Smith, S. L., Farquhar, B. B., & Thomas, D. W. (1965). Color coding in formatted displays. Journal of Applied Psychology, 49, 393–398.
Smith, S. L., & Goodwin, N. C. (1970). Computer-generated speech and man-computer interaction. Human Factors, 12, 215–223.
Smith, S. L., & Mosier, J. N. (1986). Guidelines for designing user interface software (ESD-TR-86-278). Bedford, MA: MITRE.
Sutherland, I. (1963). Sketchpad: A man-machine graphical communication system. Unpublished doctoral dissertation, MIT.
Taylor, F. W. (1911). The principles of scientific management. New York: Harper.
Turing, A. (1949, June 11). Letter in London Times. See Highlights from the Computer Museum report, Vol. 20, Summer/Fall 1987. http://ed-thelen.org/comp-hist/TheCompMusRep/TCMR-V20.html
Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460. Republished as "Can a machine think?" in J. R. Newman (Ed.), The world of mathematics (Vol. 4, pp. 2099–2123). New York: Simon & Schuster.
Vessey, I., & Galletta, D. (1991). Cognitive fit: An empirical test of information acquisition. Information Systems Research, 2(1), 63–84.
Waldrop, M. M. (2001). The dream machine: J. C. R. Licklider and the revolution that made computing personal. New York: Viking.
Weinberg, G. (1971). The psychology of computer programming. New York: Van Nostrand Reinhold.
Zhang, P. (2004). AIS SIGHCI three-year report. SIGHCI Newsletter, 3(1), 2–6.
Zhang, P., Nah, F. F.-H., & Preece, J. (2004). HCI studies in management information systems. Behaviour & Information Technology, 23(3), 147–151.
