INTERNATIONAL JOURNAL OF HUMAN–COMPUTER INTERACTION
https://doi.org/10.1080/10447318.2019.1619259
SURVEY ARTICLE
Seven HCI Grand Challenges
Chairs
Constantine Stephanidis^a and Gavriel Salvendy^b

a University of Crete and FORTH-ICS, Greece; b University of Central Florida, USA

Members of the Group
Margherita Antona^c, Jessie Y. C. Chen^d, Jianming Dong^e, Vincent G. Duffy^f, Xiaowen Fang^g, Cali Fidopiastis^h, Gino Fragomeni^i, Limin Paul Fu^j, Yinni Guo^k, Don Harris^l, Andri Ioannou^m, Kyeong-ah (Kate) Jeong^n, Shin’ichi Konomi^o, Heidi Krömker^p, Masaaki Kurosu^q, James R. Lewis^r, Aaron Marcus^s, Gabriele Meiselwitz^t, Abbas Moallem^u, Hirohiko Mori^v, Fiona Fui-Hoon Nah^w, Stavroula Ntoa^c, Pei-Luen Patrick Rau^x, Dylan Schmorrow^y, Keng Siau^z, Norbert Streitz^aa, Wentao Wang^ab, Sakae Yamamoto^ac, Panayiotis Zaphiris^m, and Jia Zhou^ad

c FORTH-ICS, Greece; d U.S. Army Research Laboratory, USA; e Huawei Inc., P.R. China; f Purdue University, USA; g DePaul University, USA; h Design Interactive, USA; i U.S. Army Futures Command, USA; j Alibaba Group, USA; k Google, USA; l Coventry University, UK; m Cyprus University of Technology, Cyprus; n Intel, USA; o Kyushu University, Japan; p Ilmenau University of Technology, Germany; q The Open University of Japan, Japan; r IBM Corporation, USA; s Aaron Marcus and Associates, USA; t Towson University, USA; u San Jose State University, USA; v Tokyo City University, Japan; w Missouri University of Science and Technology, USA; x Tsinghua University, P.R. China; y SoarTech, USA; z Missouri University of Science and Technology, USA; aa Smart Future Initiative, Germany; ab Baidu, Inc., P.R. China; ac Tokyo University of Science, Japan; ad Chongqing University, P.R. China
ABSTRACT
This article aims to investigate the Grand Challenges which arise in the current and emerging landscape
of rapid technological evolution towards more intelligent interactive technologies, coupled with
increased and widened societal needs, as well as individual and collective expectations that HCI, as
a discipline, is called upon to address. A perspective oriented to humane and social values is adopted,
formulating the challenges in terms of the impact of emerging intelligent interactive technologies on
human life both at the individual and societal levels. Seven Grand Challenges are identified and
presented in this article: Human-Technology Symbiosis; Human-Environment Interactions; Ethics,
Privacy and Security; Well-being, Health and Eudaimonia; Accessibility and Universal Access; Learning
and Creativity; and Social Organization and Democracy. Although not exhaustive, they summarize the
views and research priorities of an international interdisciplinary group of experts, reflecting different
scientific perspectives, methodological approaches and application domains. Each identified Grand
Challenge is analyzed in terms of: concept and problem definition; main research issues involved and
state of the art; and associated emerging requirements.
BACKGROUND
This article presents the results of the collective effort of a group of 32 experts involved in the
community of the Human Computer Interaction International (HCII) Conference series. The group’s
collaboration started in early 2018 with the collection of opinions from all group members, each
asked to independently list and describe five HCI grand challenges. During a one-day meeting held
on the 20th July 2018 in the context of the HCI International 2018 Conference in Las Vegas, USA, the
identified topics were debated and challenges were formulated in terms of the impact of emerging
intelligent interactive technologies on human life both at the individual and societal levels. Further
analysis and consolidation led to a set of seven Grand Challenges presented herein. This activity was
organized and supported by the HCII Conference series.
1. Introduction
In the current wave of technological evolution, a near future is
foreseen where technology is omnipresent, machines predict
and anticipate human needs, robotic systems are an integral
part of everyday life, and humans’ abilities are technologically
supported. Home, work, and public environments are
expected to be smart, anticipating and adapting to the
needs of their inhabitants and visitors, empowered with
Artificial Intelligence (AI) and employing big data towards
training and perfecting their reasoning. Interactions in such
environments will be not only conscious and intentional, but
also subconscious and even unintentional. Users’ location,
postures, emotions, habits, intentions, culture and thoughts
can all constitute candidate input commands to a variety of
visible and invisible technological artifacts embedded in the
environment. Robotics and autonomous agents will be typically embedded in such technologically enriched environments. Information will be communicated from one
interaction counterpart to another ‘naturally’, while the digital
world will coexist with and augment physical reality, resulting
in hybrid worlds.
The focus of HCI has traditionally been the human and
how to ensure that technology serves users’ needs in the best
possible way, a perspective that is claimed to – and so it
should – also constitute the ultimate goal of the new intelligent technological realm. HCI has evolved over the years, has
considerably enlarged its domain of enquiry, and has achieved
remarkable advances. However, as new technologies bring
increased complexity and escalate the need for interaction
and communication in numerous ways, the human counterpart of technology is also changing, becoming more conscious
of the impact that interactive systems and devices have on
everyday life: humans have become more attentive and
demanding, yet also appear to be less optimistic, as well as
more concerned and critical. As a consequence, human-centered approaches have to face new challenges, calling for
shifts in both focus and methods, in order to formulate and
address the critical issues that underlie a more trustful and
beneficial relationship between humankind and technology.
In recent years, many proposals have been put forward regarding the future of HCI and its renewed research agenda. Norman
(2007) in his book “The Design of Future Things” reshaped the
concepts and methods of product design under the light of the
freshly emerging dilemma of automation vs. control and machine
intelligence vs. human intelligence. Ten years later, as artificial
intelligence began demonstrating practical feasibility and impact,
following a number of rise and fall cycles, Kaplan (2016) in
“Artificial Intelligence: Think Again” strove to clarify common
misunderstandings and myths around machine intelligence,
bringing the issue back to one of sensible design that accommodates social, cultural, and ethical conventions.
With respect to societal aspects of HCI research, Hochheiser
and Lazar (2007) in “HCI and Societal Issues: A Framework for
Engagement”, define the factors and mechanisms which underlie HCI responses to societal demands and call for proactive and
principled engagement in design. Along these lines, Value
Sensitive Design uses explicit consideration of specific values as
a means of achieving goals – such as democracy, fairness, inclusion, and appropriate use of technology – by addressing questions such as: which values are important in a given design case;
whose values are they and how are they defined with respect to
the given context; which methods are suited to discover, elicit
and define values; what kind of social science knowledge or skills
are needed; etc. (Friedman, Kahn, Borning, & Huldtgren, 2013).
In the first decade of the new millennium, seeking to
address human values in the development of intelligent interaction,
Harper, Rodden, Rogers, and Sellen (2008) reflect upon ongoing
changes and outline a new paradigm for understanding human
relationship with technology, whereby the boundaries between
computers and people, and between computers and the physical
world are reconsidered, also taking into account increasing
techno-dependency, collection of digital footprints and increasing
creative engagement. In such a context, HCI needs to develop new
views about the role, function, and consequences of design, form
new partnerships with other disciplines, and re-examine and
reflect on its basic terms and concepts.
Also, Shneiderman et al. (2016), in their article “Grand
Challenges for HCI Researchers” analyze the role of HCI in
addressing important societal challenges, identify 16 grand challenges related to both society-oriented and technology-oriented
issues, and highlight the need for improved interdisciplinary
methods that emerge from science, engineering, and design.
This article presents the results of the collective effort of
a group of 32 experts involved in the community of the Human
Computer Interaction International (HCII) Conference series.
The goal is to investigate the grand challenges which arise in the
current landscape of rapid technological evolution towards more
intelligent interactive technologies, coupled with increased and
widened societal needs, as well as individual and collective expectations that HCI, as a discipline, is called upon to address.
The group’s collaboration started in early 2018 with the collection of opinions from all group members, each asked by email to
independently list and describe five challenges of greatest importance to their area. The expressed opinions were collected and
classified into 10 categories which were summarized in a draft
document. During a one-day meeting held on the 20th July 2018
in the context of the HCI International 2018 Conference in Las
Vegas, USA, the identified items were analyzed and discussed, and
a condensed set of challenges was produced. The group took the
decision to formulate the challenges in terms of the impact of
emerging interactive technologies on human life, both at the
individual and societal levels. Further consolidation led to a set
of seven challenges, as depicted in Figure 1. A definition and the
rationale behind each challenge is presented in Table 1. It should
be noted that challenges are not presented in a hierarchical manner, nor in order of importance; all the identified challenges are
considered equally important in the context of future technological environments. It is also inevitable that the discussions that
follow are interconnected, involving issues that are common to
more than one challenge. For example, privacy and ethics are
a major concern for human-technology symbiosis, e-health services, and technology-supported learning activities; therefore,
although they are discussed in detail in Section 4 (Ethics,
Privacy, and Security), they are also briefly introduced in other
sections. To the extent possible, cross-references to relevant
discussions within the paper are provided.
Some of the identified challenges are similar to those suggested
by earlier efforts published in the articles mentioned above,
confirming their perceived importance and urgency among the
scientific community. However, this paper engages a larger group
of experts from various sub-disciplines of HCI, who discussed
and debated the issues at greater depth. Further, this group of
experts not only identified the challenges, but also attempted to
analyze at a deeper level the current debate around each challenge
and to propose a detailed list of related research topics for each
challenge in a systematic manner. Thus, this paper advances the
discussion on the future of HCI research and contributes to the
development of current and future HCI research.
Figure 1. The Seven Grand Challenges.
The effort was motivated by the “intelligence” and the
smart features that already exist in current technologies (e.g.
smartphone applications monitoring activity and offering tips
for a more active and healthy lifestyle, cookies recording web
transactions in order to produce personalized experiences)
and that can be embedded in future environments we will
live in. Yet, the perspective adopted is profoundly human-centered, bringing to the foreground issues that should be
further elaborated to achieve a human-centered approach –
catering for meaningful human control, human safety and
ethics – in support of humans’ health and well-being, learning
and creativity, as well as social organization and democracy.
In the subsequent sections of this article, each identified
challenge is analyzed in terms of concepts and problem definition, main research issues involved and state of the art, as
well as the associated emerging requirements, with the ultimate goal of offering the HCI community food for thought
and inspiration towards identifying and addressing compelling topics for investigation and lines of research.
2. Human-technology symbiosis
2.1. Definitions and rationale
Many drops of sci-fi ink have been devoted to describing
futures in which technology is ubiquitous and integrated in
everyday objects, holograms and robots are typical interaction
counterparts, humans live in virtual worlds, and often they are
technologically augmented. Although featured in sci-fi novels
and movies, some of the above technological advances are
already a reality, while others are soon expected to permeate
society. There is already a trend that everything must be
‘smart’, whether it is a device, software, service, car, environment, or even an entire city. Technological advancements will
eventually make it possible to inject some AI into even the
most common products, a revolution occurring so gradually
that it may have passed unnoticed (Holmquist, 2017). The
advent of smart ecosystems, comprising smart devices, services, materials, and environments that cooperate in
a seamless and invisible manner, imposes the need for considering, defining, and optimizing the terms of symbiosis of
the two main counterparts, namely humans and technology.
Several terms in the HCI literature have been introduced to
reflect how humans experience the new Information and
Communication Technology (ICT) era, including Human-Computer Confluence and Human-Computer Integration.
Human-Computer Confluence describes a research area
studying how the emerging symbiotic relations between
humans and ICT can be based on radically new forms of
sensing, perception, interaction, and understanding (Ferscha,
2016). Human-Computer Integration broadly refers to the
partnership or symbiotic relationship in which humans and
software act with autonomy, giving rise to patterns of behavior that must be considered holistically (Farooq & Grudin,
2016). Human-Computer Symbiosis was introduced by
Licklider back in 1960, who envisioned a future when
human brains and computing machines – tightly coupled
together – would “think as no human brain has ever thought
and process data in a way not approached by the information-handling machines we know today” (Licklider, 1960, p. 1).
Table 1. Definition and rationale for each identified challenge.

Human-Technology Symbiosis
Definition: Human-technology symbiosis refers to defining how humans will live and work harmoniously together with technology, which in the near future will exhibit characteristics that until now were typically associated with human behavior and intelligence, namely understanding language, learning, reasoning, and problem solving (keeping in mind that these capabilities are limited to specific application domains).
Rationale: The advent of smart ecosystems, comprising smart devices, services, materials, and environments that cooperate in a seamless and transparent manner, imposes the need for considering, defining, and optimizing the terms of symbiosis of the two main counterparts, namely humans and technology.

Human-Environment Interactions
Definition: Human-environment interactions refer to the interaction of people not only with a single artifact, but with entire technological ecosystems featuring increased interactivity and intelligence.
Rationale: In technologically enriched, autonomous, and smart environments, interactions will become more implicit, often concealed in the continuum between the physical and the digital. Therefore, the topic of supporting human interactions in these environments brings about novel implications and challenges.

Ethics, Privacy and Security
Definition: Ethics refers to moral principles that govern behavior; in this paper, it refers to the moral principles that govern the conduct of activities in the context of HCI, and in particular design. Privacy refers to the ability of users to be in control and to determine what data can be collected and exploited by a computer system and then be shared with third parties. Security in the context of computing refers to the protection of computer systems from theft or damage to their hardware, software or electronic data, as well as from disruption or misdirection of the services they provide.
Rationale: Intelligent systems need to behave so that they are beneficial to people beyond simply reaching functional goals or addressing technical problems, by serving human rights and the values of their users, and by ensuring privacy and cybersecurity. Ethics, privacy, trust and security have always been important concerns in relation to technology, acquiring new dimensions in the context of technologically augmented and intelligent environments.

Well-being, Health and Eudaimonia
Definition: Health refers to a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity. Well-being also includes achieving high life satisfaction, happiness, prosperity, and a sense of meaning or purpose. Eudaimonia refers to a person’s state of excellence characterized by objective flourishing across a lifetime, brought about through the exercise of moral virtue, practical wisdom, and rationality.
Rationale: Technological advances, coupled with advances in medicine, offer the opportunity to provide more effective and less expensive ways of fostering a healthy life. Beyond physical health, technology can also be used to promote human well-being, including not only health aspects, but also psychological well-being through the fulfilment of life goals (eudaimonia). Overall, technology in the context of healthcare is already widely used, yet there are still open research issues. Moreover, in a world where technology is omnipresent, the question arises of how its role in enhancing well-being and human eudaimonia can be optimized, especially by addressing interaction issues and ensuring a human-centered approach.

Accessibility and Universal Access
Definition: Accessibility refers to the design of products, devices, services, or environments suitable for people with disabilities. Universal Access refers to the accessibility and usability of Information Society Technologies by anyone, anywhere, anytime.
Rationale: Intelligent environments bring new challenges regarding accessibility and universal access, mainly stemming from the increased technological complexity, with considerable impact on access not only to information or technology, but to a wide variety of services and activities of daily life. As HCI has always focused on the human, in the new technology-augmented environments, efforts in the field will be extended towards improving the quality of life of various populations, including disabled and older persons. Accessibility and Universal Access are not new concepts; however, in view of demographic developments (aging society) and of the constantly increasing technological complexity, they become not only timely, but also pivotal for the prosperity of future societies.

Learning and Creativity
Definition: Learning refers to the activity or process of gaining knowledge or skill by studying, practicing, being taught, or experiencing something. Creativity refers to the ability to produce original and unusual ideas, or to make something new or imaginative.
Rationale: As technologies continue to mature, new opportunities will emerge for fostering individual growth through multimodal stimulation of how humans learn and apply creativity; people with diverse backgrounds, skills, and interests will be able to collaborate to solve challenging problems, by cooperatively learning and creating knowledge together. New technologies have the potential to support new and emerging learning styles, which have recently evolved under the influence of the pervasiveness of technology in the everyday life of the new generations. At the same time, the discussion of how technology should be applied in the learning context has become timelier than ever, expanding to issues such as privacy and ethics, learning theories and models, and pedagogical aspects. In any case, the success of technology in education depends to a large extent on HCI issues. On the other hand, human creativity is expected to have a central role in the future society; therefore, it is important not only to cultivate it, but also to explore how it can be assisted.

Social Organization and Democracy
Definition: Social organization refers to the formation of a stable structure of relations inside a group, which provides a basis for the smooth functioning of the group. Democracy refers to a form of government in which the people freely govern themselves and where the executive (or administrative) and law-making (or legislative) power is given to persons elected by the population.
Rationale: As humanity moves from smart environments towards smart societies where an abundance of ethical concerns arise, social organization should be supported. HCI research will have a multifaceted pivotal role in the forthcoming technological developments, addressing major societal and environmental challenges towards societies where the ideals of democracy, equality, prosperity, and stability are pursued and safeguarded. The critical times we live in, as well as future dark scenarios, have already directed research towards creating technology to assist humanity in coping with major problems, such as resource scarcity, climate change, poverty and disasters. Social participation, social justice, and democracy are ideals that should not only be desired in this context, but also actively and systematically pursued and achieved.
Symbiosis – a term mentioned in all the aforementioned
approaches – is a composite Greek word meaning “to live
together”. Emphasizing not only “together” but also “living”,
symbiosis is an ideal term to use when discussing the challenges
stemming from the co-existence and interactions of two counterparts: humankind and intelligent computer systems that exhibit
characteristics, which until now were typically associated with
human behavior and intelligence, namely understanding language, learning, reasoning, and problem solving (Cohen &
Feigenbaum, 2014). At the same time, one has to be aware that
the results refer only to specific domains. This section discusses the
challenges of this symbiosis, with a focus on supporting and
empowering humans, ensuring at the same time human control
and safety.
2.2. Main research issues and state of the art
2.2.1. Meaningful human control
A major concern is that, despite the “intelligence” of the technology and its potential to make automatic inferences and decisions, humans should be kept in the loop through supervisory
control and monitoring of intelligent autonomous systems.
A representative example of human control over automated
systems is that of pilots, who have supervisory control of the
systems and the flight deck. Pilots exercise control in an outer-loop manner (setting high-level goals and monitoring systems),
rather than in an inner-loop manner (‘hands on’, minute
to minute). This has dramatically changed the nature of the
pilot’s control task on the flight deck. A similar design problem,
relevant for a much larger population of users, arises with the
advent and diffusion of automated driving at levels which are not
fully automated. Here, the issue of design trade-offs between
human control and automation has to be addressed in terms of
transferring control back and forth between driver and vehicle
when switching between autonomy levels.
Meaningful human control has been defined as one of the
short-term research priorities for robust and beneficial AI,
whether it is through human in the loop, on the loop, or
some other protocol (Russell, Dewey, & Tegmark, 2015). The
various protocols refer to how human interference affects
acting entities (e.g. an AI agent); for example, when human
interference directly affects the acting entity it is referred as
“human in the loop”, while when it indirectly affects the
actions or the community of the entity it is referred as
“human on the loop” (Hexmoor, McLaughlan, & Tuli,
2009). In any case, meaningful human control is a principle
that goes beyond any specific protocol; it advocates that
humans, not computers and their algorithms, should ultimately remain in control of – and thus be morally responsible
for – actions mediated by autonomous systems (Chen &
Barnes, 2014; Santoni de Sio & van Den Hoven, 2018).
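To make the distinction concrete, a minimal Python sketch of the two protocols follows; the class and method names are illustrative assumptions of ours and are not drawn from the cited literature. The essential difference is where human interference sits: as a blocking approval gate before execution (in the loop), or as monitoring with after-the-fact revocation (on the loop).

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str

class HumanInTheLoopAgent:
    """Human interference directly affects the acting entity:
    every action blocks on explicit human approval before execution."""
    def __init__(self, approve: Callable[[Action], bool]):
        self.approve = approve  # e.g. a confirmation prompt to the operator

    def act(self, action: Action) -> bool:
        if self.approve(action):  # blocking approval gate
            print(f"executing {action.name}")
            return True
        print(f"vetoed {action.name}")
        return False

class HumanOnTheLoopAgent:
    """Human interference is indirect: the agent acts autonomously,
    while a supervisor monitors the log and can revoke actions later."""
    def __init__(self):
        self.log: List[Action] = []

    def act(self, action: Action) -> bool:
        self.log.append(action)  # exposed for supervisory monitoring
        print(f"executing {action.name} autonomously")
        return True

    def revoke(self, action: Action) -> None:
        print(f"supervisor revokes {action.name}")  # after-the-fact control

# Usage: the in-the-loop agent cannot act without the human's consent.
agent = HumanInTheLoopAgent(approve=lambda a: a.name != "delete_all_data")
agent.act(Action("send_reminder"))    # executed
agent.act(Action("delete_all_data"))  # vetoed
```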
To achieve human control, important features of intelligent
systems are transparency, understandability and accountability,
which also contribute towards building a relationship of trust
between the user and the system and boost the performance of
the human-automation team (Chen et al., 2018; Pynadath,
Barnes, Wang, & Chen, 2018; Siau & Wang, 2018). In particular,
transparent user interfaces make their own decisions and outcomes visible, clarify users’ responsibility, and can promote
more appropriate behaviors (Chen et al., 2018; Shneiderman
et al., 2016). Transparency is closely related to system interpretability/explainability, a property of machine learning systems
indicating their ability to explain and present information in
a manner understandable by humans (Došilović, Brčić, &
Hlupić, 2018). Benefits of explainable AI include the capability
for verification and improvement of the system, compliance with
legislation, as well as the potential for human learning and
acquisition of new insights (Samek, Wiegand, & Müller, 2017).
On the other hand, the ideal of complex systems’ transparency
faces technical and temporal limitations, as well as major shortcomings (Ananny & Crawford, 2018). For instance, transparency can lose its power if it does not have meaningful effects,
while it can also create privacy threats. More importantly, making a system visible does not necessarily make it understandable; achieving understandability is a truly challenging task when it comes to
complex systems. In this respect, transparency also entails accessibility concerns, in terms of providing appropriate explanations
in a comprehensible manner (see also Section 5.2.3).
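As a purely illustrative sketch of the explainability property discussed above, the following Python fragment pairs a system decision with a human-readable account of its main factors and of who is responsible for it; all names, fields, and values here are invented for the example rather than taken from the cited works.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class ExplainedDecision:
    """A decision object that carries its own explanation and provenance,
    one possible way to surface transparency and accountability to users."""
    outcome: str
    # intelligibility: the top factors behind the outcome ("how does this work?")
    factor_weights: Dict[str, float]
    # accountability: "who is responsible for the way it works?"
    responsible_party: str
    model_version: str

    def explain(self, top_k: int = 3) -> str:
        factors = sorted(self.factor_weights.items(),
                         key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
        lines = [f"Decision: {self.outcome}"]
        lines += [f"  because {name} contributed {w:+.2f}" for name, w in factors]
        lines.append(f"Responsible: {self.responsible_party} "
                     f"(model {self.model_version})")
        return "\n".join(lines)

decision = ExplainedDecision(
    outcome="loan application flagged for manual review",
    factor_weights={"income_stability": -0.45, "credit_history": -0.30,
                    "debt_ratio": +0.12},
    responsible_party="ACME Bank credit team",
    model_version="risk-model-2.3",
)
print(decision.explain())
```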
Besides transparency, understandability is fostered by intelligibility and accountability, which allow users to better
understand underlying computational processes, and therefore give them the potential to better control system actions
(Shneiderman et al., 2016).
answer to the question “how does this work?”, while accountability as the answer to the question “who is responsible for
the way it works?” (Floridi et al., 2018). Assuming that people
have decided to provide personal data in order to use a smart
service (see also Section 4), it is an important challenge to
keep people in the loop and in control so as to determine how
the service is provided and under which conditions. Other
challenges related to making intelligent systems understandable to users refer to these systems’ inherent opacity and
unpredictability. Systems featuring AI will have the ability to
evolve themselves without human influence in a way that is
not explicit, even to their developers, drawing their own
conclusions from given data. This can result in systems behaving in a manner that is not predictable, a feature that must be
effectively communicated to users, so as to avoid jeopardizing
their trust and confidence in the system (Chen et al., 2018;
Holmquist, 2017). One could even argue that – depending on
the application domain – such unpredictable, opaque,
and independent/autonomous behavior should be prohibited, e.g., in safety-critical situations, where the implications can be devastating.
2.2.2. Humane digital intelligence
The above issues point toward the bigger picture of a humane
digital intelligence, which does not merely incorporate human
values in the innovation process, but actually brings them to
the forefront, emphasizing people and their experience with
technology, not just ‘intelligent functionality’ and ‘intuitive
usability’ (Bibri, 2015). The key criteria for technology adoption will move away from technical or User Experience (UX)
issues towards how aligned technology is with human values
(José, Rodrigues, & Otero, 2010).
Therefore, smart should stand for “smart, but only if
cooperative and humane” (Streitz, 2018). In brief, the humane
face of intelligence (Tegmark, 2017) entails the establishment
of a calm technology (Weiser, 1991) supporting and respecting individual and social life, the respect for human rights and
privacy, supporting humans in their activities, pursuing trust,
as well as enabling humans to exploit their individual, creative, social and economic potential, and to live and enjoy
a self-determined life (Streitz, 2017).
2.2.3. Adaptation and personalization to human needs
In the same context, intelligent environments should be able
to think and behave in ways that support humans, by providing personalized, adaptive, responsive, and proactive services
in a variety of settings: living spaces, work spaces, social and
public places, and on the move (Bibri, 2015). Such a behavior
constitutes a fundamental characteristic of Ambient
Intelligence (AmI) environments, as they were initially introduced in 2001, through the elaboration of the IST Advisory
Group (ISTAG) Ambient Intelligence scenarios in the near
future of 2010 (Ducatel, Bogdanowicz, Scapolo, Leijten, &
Burgelman, 2001). The AmI approach already adopted at
that time an orientation towards the goals and values which
are discussed now again in the scientific community and also
reflected in this paper.
To support the vision of personalized, adaptive, responsive,
and proactive services, adaptation and personalization methods
and techniques will need to consider how to incorporate AI and
big data (Siau & Wang, 2018). These technologies are expected
to work in synergy in intelligent environments, as deep learning
algorithms need comparatively large volumes of data in order to learn
sufficient knowledge and perfect their decision-making (Lan
et al., 2018). Meanwhile, AI has already been used in several
different ways to facilitate capturing and structuring big data,
and to analyze big data for key insights (O’Leary, 2013; Siau et al.,
2018). For example, in smart and Ambient Assisted Living
(AAL) environments, it is relatively easy to detect users’ current
activity but it is difficult to understand or even predict their
intention of movement. To deal with this problem, big data
fused from an increasing number of sensors and sensing devices
are used to understand the full extent of users’ personal movement patterns (Suciu, Vulpe, Craciunescu, Butca, & Suciu, 2015).
Furthermore, personalization/adaptation should not only consider individuals’ preferences, but also other organizational and
social factors, such as organizational culture and level of sophistication in the use of technology to collect big data (Vimarlund &
Wass, 2014).
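As a simplified illustration of the AAL example above, the following Python sketch learns a user’s room-to-room movement patterns from motion-sensor logs and predicts the likely next location, so that the environment can adapt proactively. The first-order Markov assumption and all names are our own simplification, not the method of the cited works.

```python
from collections import Counter, defaultdict

class MovementModel:
    """Learns a user's room-to-room transition frequencies from
    motion-sensor logs (a first-order Markov model) and predicts
    the most likely next room, enabling proactive adaptation."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, sensor_log):
        # sensor_log: chronologically ordered room names, e.g. from PIR sensors
        for current, following in zip(sensor_log, sensor_log[1:]):
            self.transitions[current][following] += 1

    def predict_next(self, current_room):
        counts = self.transitions.get(current_room)
        if not counts:
            return None  # no data yet: fall back to non-adaptive behavior
        return counts.most_common(1)[0][0]

model = MovementModel()
model.observe(["bedroom", "bathroom", "kitchen", "living_room",
               "kitchen", "bedroom", "bathroom", "kitchen"])
likely = model.predict_next("bathroom")
if likely:
    print(f"pre-adapting lighting in the {likely}")  # e.g. 'kitchen'
```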
2.2.4. Human skills’ support
As machines acquire capabilities to learn deeply and actively
from data, fundamental changes can occur to what humans
are expected to learn in order to live and work meaningfully
(Siau, 2018). For instance, under the perspective of symbiotic
learning, the human could handle the qualitative subjective
judgements and the machine could handle the quantitative
elements, an approach that requires tools to support human
skill and ingenuity rather than machines which would objectify knowledge (Gill, 2012).
Advanced technologies, including AI, can be used to support
human memory, as well as human problem-solving, especially
in situations in which a human has to process a huge amount of
complex data, or in which a human must make a critical decision
quickly or under extreme stress (Hendler & Mulvehill, 2016).
Technology to compensate for human functional limitations is
already in use today in multiple forms, such as personal assistants, reminders, memory and cognition aids, as well as decision
support and recommender systems. AI technology is also
employed for assisting individuals in managing daily tasks, making purchasing decisions, and managing finances (Hendler &
Mulvehill, 2016). In the future, technology can also be used to
extend human perception and overcome human senses’ limitations, by providing the ability to capture experiences holistically
(Schmidt, Langheinrich, & Kersting, 2011). It can also extend
human memory capacity and learning ability, by ensuring
humans’ access to interconnected cognitive intelligent technologies that can assist them in solving typical everyday problems
(Wang, 2014). Ultimately, technology could be used to augment
human perception and cognition, through the confluence of ICT
with the biological brain (Ferscha, 2016). In any case, the view of
symbiosis emphasizes that human judgement, tacit knowledge,
and intuition, should be united in a symbiotic totality with the
potential of machine intelligence, in terms of computability,
capacity, and speed (Gill, 2012). Such a development of course
currently remains a vision and research objective, but not
a tangible reality (Tegmark, 2017). Future endeavors in this
area will require further advancements in cognitive sciences, so
as to better understand human cognition and the human brain.
To that end, cognitive science research can be assisted by recent
achievements in ICT, such as big data stemming from human
behavior that can provide clues towards understanding basic
principles of cognition (Jones, 2016).
2.2.5. Emotion detection and simulation
Humans are emotional and empathetic beings, expressing emotions not only in human-human interactions, but also in interactions with machines. As a result, an important challenge with
regard to optimizing human-technology symbiosis refers to how
ICT captures and correlates emotional expressions, as well as
how technology can express emotions and exhibit empathic
behavior (Ferscha, 2016). As technological environments have
the potential to affect humans psychologically, it is critical to
consider how and when intelligent artifacts may be used for
“nudging” individuals for their own benefit or that of humanity as
a whole (The IEEE Global Initiative on Ethics of Autonomous
and Intelligent Systems, 2019). For example, “nudging” can be
used to help humans manage health conditions, improve lifestyle, or reduce biased judgement (see also the discussion in
Section 5.2.1).
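Purely as an illustration, such empathic behavior involves at least two steps, detection and response selection; in the Python sketch below, a deliberately naive keyword-based detector stands in for a real affect-recognition model (which would analyze facial expressions, speech prosody, or physiology), and all cues and responses are invented.

```python
# A simple stand-in for affect recognition: real systems would use trained
# models over facial expressions, speech prosody, or physiological signals.
NEGATIVE_CUES = {"frustrated", "angry", "annoying", "hate", "stuck"}
POSITIVE_CUES = {"great", "thanks", "love", "nice"}

def detect_emotion(utterance: str) -> str:
    words = set(utterance.lower().split())
    if words & NEGATIVE_CUES:
        return "negative"
    if words & POSITIVE_CUES:
        return "positive"
    return "neutral"

def empathic_response(emotion: str) -> str:
    # Response selection: adapt tone and offer help, without pretending
    # the system actually "feels" anything (cf. the deception concerns below).
    return {
        "negative": "This seems frustrating. Would you like step-by-step help?",
        "positive": "Glad that worked! Anything else I can do?",
        "neutral":  "How can I help?",
    }[emotion]

print(empathic_response(detect_emotion("I am stuck and frustrated")))
```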
Simulated affect poses a moral dilemma, as on the one
hand, it promotes human-technology interaction and collaboration, while on the other hand, it may deceive humans
who will find it hard to keep in mind that machines do not
actually have these affective states and may become emotionally over-attached (Beavers & Slattery, 2017). Besides such
inevitable deceptions, there is also the case of deliberate
deceptions, when an affective computing system is used in
order to persuade people to believe something that is actually
false, for example to proceed to a purchase (Cowie, 2015). In
this respect, the establishment of a new code of ethics is
imperative. The reinforcement of such an ethical code should
be safeguarded through a multi-level approach, starting with
the education of designers and developers, and involving
extensive testing and new benchmarking procedures, as also
addressed by the IEEE Global Initiative (2019).
2.2.6. Human safety
In this new context, the targeted autonomous nature of smart
environments brings to the surface the paramount importance
of human safety, which should be accounted for during design
and implementation, and also safeguarded through
novel testing procedures. In particular, the development and use
of intelligent systems with the potential to self-evolve entails
considerable risks, due to technology misuse or poor design
(The IEEE Global Initiative on Ethics of Autonomous and
Intelligent Systems, 2019). To this end, a recommendation is
that advanced intelligent systems should be “safe-by-design,”
which involves designing architectures using known-safe and
more-safe technical paradigms as early in the lifecycle as possible
(The IEEE Global Initiative on Ethics of Autonomous and
Intelligent Systems, 2019).
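One possible, hypothetical reading of “safe-by-design” is to wrap the learning component in a fixed, human-authored safety envelope that the system’s acquired knowledge cannot alter. The Python sketch below illustrates the pattern; the limits, names, and the stand-in controller are invented for the example.

```python
class SafetyEnvelope:
    """A fixed, human-authored constraint layer: the learned controller may
    propose any action, but proposals outside known-safe bounds are clipped
    or rejected before execution. The envelope itself never learns."""
    def __init__(self, max_speed: float, min_obstacle_distance: float):
        self.max_speed = max_speed
        self.min_obstacle_distance = min_obstacle_distance

    def filter(self, proposed_speed: float, obstacle_distance: float) -> float:
        if obstacle_distance < self.min_obstacle_distance:
            return 0.0  # hard stop: safety overrides whatever was learned
        return min(proposed_speed, self.max_speed)  # clip to the safe region

def learned_controller(sensor_state: dict) -> float:
    # Stand-in for a self-evolving policy whose outputs are not fully
    # predictable, even to its developers.
    return 5.0 * sensor_state.get("urgency", 1.0)

envelope = SafetyEnvelope(max_speed=2.0, min_obstacle_distance=0.5)
state = {"urgency": 3.0, "obstacle_distance": 1.2}
safe = envelope.filter(learned_controller(state), state["obstacle_distance"])
print(safe)  # 2.0: the learned proposal (15.0) was clipped to the safe bound
```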
At the same time, testing becomes one of the biggest challenges, as existing software testing methodologies cannot
cope with the demands of testing autonomous systems, mainly
because systems’ behavior not only depends on design
and implementation, but also on the acquired knowledge of the
system (Helle, Schamai, & Strobel, 2016). Furthermore, as AI
matures, it will be critical to study its influence on humans and
society, on a short-term and long-term basis, which requires
engagement with interdisciplinary groups, including computer
scientists, social scientists, psychologists, economists, and lawyers, developing assessments and guidance through focused
studies, monitoring, and analysis (Horvitz, 2017).
2.2.7. Cultural shift
Beyond technical challenges and shifts, intelligent environments
and AI present a cultural shift as well (Crawford & Calo, 2016;
Siau & Wang, 2018). As a result, this technological change has to
evolve mutually with societal change (Bibri, 2015). In order to
achieve a more holistic and integrated understanding of the
impacts of smart ecosystems, a social-systems analysis is required,
drawing on philosophy, law, sociology, anthropology, science-and
-technology studies, as well as on studies of how social, political
and cultural values affect – and are affected by – technological
change and scientific research (Crawford & Calo, 2016).
Currently, the dominant public narrative about AI is that
increasingly intelligent machines will ultimately surpass human
capabilities, render us useless and steal our jobs, and possibly even
escape human control and kill humans (Husain, 2017;
Kaplan, 2016). Creating smart ecosystems that will be broadly
trusted and utilized requires changing current beliefs and attitudes
of the general public, which can be achieved through a careful
reassessment of the purpose, goals, and potential of the field
(Kaplan, 2016; Siau, 2018). In order to be convincing to the public
and to the scientific community, detailed analyses of the challenges regarding ethics and privacy, as well as design approaches
with their inherent views about the role of humans are needed
when generating claims for future developments (Streitz,
Charitos, Kaptein, & Böhlen, 2019).
2.3. Summary of emerging requirements
The issue of symbiosis of humans with smart ecosystems is complex and multi-faceted, extending beyond technical boundaries
and requiring a multi-disciplinary approach that also addresses
compelling ethical, societal, and philosophical issues (Table 2). This symbiosis entails a number of considerations, such as incorporating
human values in design choices and trade-offs (for
example, between automation and human control), working strategically towards being driven more by humanistic concerns
than by deterministic ones, and taking into account the social
dynamics involved (Bibri, 2015). The above factors should be
combined with a number of practical concerns, which include,
but are not limited to, designing meaningful human control,
ensuring systems’ transparency and accountability, and accounting for intelligent systems’ inherent opacity and unpredictability.
A collection of resources compiled by the “People + AI Research”
initiative at Google offers guidelines on how to build better AI-enabled products, including how to plan for co-learning between
users and the AI, how to calibrate user trust given actual AI
capabilities, how to explain AI output, and how to support collaborative decision-making between people and AIs (Google, 2019).
Ultimately, a key success factor for intelligent systems will be
whether they are designed to work truly in concert with users
(Holmquist, 2017). To this end, such environments should support and empower humans, by providing personalized, adaptive,
responsive, and proactive services; they should support human
skills and ingenuity, recognize and respond to human emotions
and exhibit empathy without undue manipulation, and above all,
they should foster human safety.
3. Human-environment interactions
3.1. Definitions and rationale
Interactions in smart environments and ecosystems are in the
process of being radically transformed, shifting from “conventional interaction and user interfaces towards human-centric
interaction and naturalistic user interfaces” (Bibri, 2015, p. 259).
Nowadays, technology is used beyond the organizational context;
user interfaces are already far beyond the desktop metaphor, and
interactions have advanced from keyboard-and-pointer to touch-and-gesturing.
Streitz (2007) formulated a shift from human-computer
interaction (HCI) to human-environment interaction (HEI),
because people will increasingly be confronted with collections of devices constituting smart environments. In many
cases, computing devices will be embedded in everyday artefacts as “disappearing computers” (Streitz et al., 2007), creating new challenges for providing appropriate interaction
affordances. Since then, new technological advancements
have already shaped an expanded user interface design
space, where the notion and definition of user interfaces
acquire new perspectives, and the interactive materials are
far more enriched (Janlert & Stolterman, 2015). There is
already a rich discussion regarding the implications of
increased interactivity (Agger, 2011; David, Roberts, &
Christenson, 2018; Elhai, Dvorak, Levine, & Hall, 2017;
Janlert & Stolterman, 2017; Sarwar & Soomro, 2013).
Table 2. Summary of main issues and challenges for human-technology symbiosis.

Meaningful human control:
● Design trade-offs: human control vs. automation
● Transparency
● Accountability
● Understandability: inherent systems’ opacity and unpredictability

Humane digital intelligence:
● Support for individual and social life
● Human rights and privacy
● Enabling humans to exploit their potential
● Incorporation of human values in design choices
● Technology adoption determined by new factors, such as technology alignment with human values

Adaptation and personalization to human needs:
● Evolution of techniques to incorporate AI and big data
● Consideration of organizational and social factors besides individual preferences

Human skills’ support:
● Support of human skills and ingenuity
● Enhancement and extension of human memory and human problem solving
● Extension of human perception
● Advancement of the understanding of human cognition and the human brain
● Augmentation of human perception and cognition

Emotion detection and simulation:
● Capturing and correlation of human emotional expressions
● Technology exhibiting emotions and empathic behavior
● Addressing inevitable and deliberate deceptions
● Development of a new code of ethics for designers and developers

Human safety:
● “Safe by design” advanced intelligent systems
● New testing methodologies
● Multidisciplinary approaches to the influence of AI on humans and society

Cultural shift:
● Evolution of societal change along with technological change
● Social-systems analysis
● Detailed analysis of the challenges regarding ethics and privacy
● New design approaches with their inherent views about the role of humans
Taking into consideration that the nature of the new technologically
enriched, autonomous, and intelligent environments will
bring about novel implications and challenges, the topic of
supporting human interactions in these environments
becomes a challenge that should not be overlooked in view
of the other prominent issues related to user control, transparency, ethics, and privacy (see Section 2 and Section 4).
3.2. Main research issues and state of the art
3.2.1. Interactions in the physical and digital continuum
Interactions in smart environments and ecosystems will
become more implicit (Bibri, 2015), often concealed in the
continuum between the physical and the digital (Conti et al.,
2012). In such ‘hybrid worlds’, one also has to consider representations of real objects in the virtual domain and vice versa:
virtual/digital objects in the physical/architectural domain
(Streitz, Geißler, & Holmer, 1998). Of course, there is no one-to-one mapping of all objects between the two domains,
because not all objects have counterpart representations in all
domains, which has strong implications for interaction design.
A major challenge for designing interaction with smart
artifacts, embedded and integrated into the environment,
and soon with smart materials, concerns the disappearance
of the ‘computer’ as a ‘visible’ distinctive device (Norman,
1998). This takes place either physically, through integration
in the environment, or mentally from our perception (Streitz,
2001), thus providing the basis for establishing a ‘calm technology’ as envisioned already by Weiser (1991). In such
environments, computers become part of the furniture (e.g.
as shown in the Roomware® environment with interactive
tables, walls, and chairs) (Tandler, Streitz, & Prante, 2002),
and decoration (Aylett & Quigley, 2015). The fact that
computers “disappear” when embedded in the environment
raises the concern of how humans perceive the interactivity of
an object and how they regard it as an interaction partner
(Sakamoto & Takeuchi, 2014).
One of the main design challenges in such contexts is that
‘users’ are in many cases no longer fully aware of the interaction options provided in their current smart environments, because traditional ‘affordances’ (Norman, 1999) are no longer
available. However, it is reassuring in this respect that
users are able to extend existing knowledge and interaction
habits from the digital to the physical world, as reported in the
study of Interactive Maps, a system featuring touch-based interaction with physical printed maps (Margetis, Ntoa, Antona, &
Stephanidis, 2019). Additional challenges for interactions in
environments comprising multiple interactive objects and systems include how users can successfully address a specific system, how the system acquires the appropriate context for
a given command, and how fundamental design principles –
such as feedback and recovery from error – can be effectively applied
(Bellotti et al., 2002).
Moreover, the miniaturization of computers has made it
possible to also embed interaction in jewelry and haute couture
(Aylett & Quigley, 2015; Seymour & Beloff, 2008; Versteeg, van
Den Hoven, & Hummels, 2016). Smart wearable devices, such
as glasses, watches, and bracelets, are already commercially
available today; however, they have not become mainstream
and exhibit a slower diffusion than other portable technologies,
such as smartphones (Adapa, Nah, Hall, Siau, & Smith, 2018;
Kalantari, 2017). It is noteworthy that the acceptance of wearable devices is influenced by aesthetic factors, such as compelling design and uniqueness (Adapa et al., 2018), even involving
cuteness as an emotional appeal (Marcus, Kurosu, Ma, &
Hashizume, 2017). Therefore, such artifacts have a twofold
essentiality, being perceived by consumers as a combination of
‘fashion’ and ‘technology’, namely as “fashnology” artifacts
(Rauschnabel et al., 2016). Nevertheless, despite the excitement
and promises that are brought by the new potentiality of hiding
computers in everyday objects, furniture, and accessories, there
is the risk of eventually reaching a society of ever-increasing
information overload, information exploitation, and information inequality (Aylett & Quigley, 2015).
3.2.2. Implicit interactions
Interactions are anticipated to be not only conscious and intentional, but also subconscious and even unintentional/incidental,
or in the periphery of attention, where they are subconscious yet
intentional with direct but imprecise control (Bakker &
Niemantsverdriet, 2016; Dix, 2002). Despite the fertility in
terms of interaction possibilities, a major point of concern
when designing for such environments is how to create interaction-intensive, but still not too invasive, experiences.
It is also important to consider how to better support the
inevitable shifts of interaction between the center and periphery of attention (Bakker, Hoven, & Eggen, 2015). In any case,
it must be ensured that the interactive environment does not
impose high perceptual and cognitive demands upon users,
nor does it create other negative consequences, such as confusion or frustration. Besides design concerns, implicit interactions in intelligent environments raise privacy concerns.
From a technical point of view, in order to support such
interactions, it is required that the environment records contextual and user data (such as movement or speech), and
carries out all the required analysis so as to respond accurately
and appropriately (McMillan, 2017). A more detailed discussion of the privacy and ethics concerns, and the challenges
entailed is presented in Section 4.
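From this technical point of view, an implicit-interaction pipeline must decide not only what an observed event means, but also whether and how to respond without becoming invasive. The Python sketch below illustrates one such response policy; the event types, confidence threshold, and response channels are illustrative assumptions rather than an established design.

```python
from dataclasses import dataclass

@dataclass
class ContextEvent:
    kind: str          # e.g. "motion", "speech", "posture"
    confidence: float  # recognizer confidence in [0, 1]
    user_busy: bool    # simple stand-in for attention/engagement sensing

def respond(event: ContextEvent) -> str:
    """Choose a response channel for an implicit input: low-confidence or
    attention-sensitive situations get peripheral, non-invasive feedback
    rather than interruptive dialogs."""
    if event.confidence < 0.5:
        return "do nothing"                      # avoid acting on noise
    if event.user_busy:
        return "peripheral cue (ambient light)"  # stay out of the user's focus
    return "explicit prompt (screen/voice)"

print(respond(ContextEvent("motion", 0.9, user_busy=True)))
# -> peripheral cue: the environment reacts without demanding attention
```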
3.2.3. Novel and escalated interactions
At the same time, in the near future, novel forms of interaction can potentially arise, which exploit environment sensing and the simulation of human senses. Currently, the senses
mostly employed in interaction with technology are vision,
hearing, and touch, whereas taste and smell remain largely
unexplored (Obrist et al., 2016). Interactions in future environments, however, have the potential to be multisensorial,
including digitized chemical senses, such as taste and smell
(Spence, Obrist, Velasco, & Ranasinghe, 2017), as well as
haptic sensations (Schneider, MacLean, Swindells, & Booth,
2017). Furthermore, other forms of natural interaction are
a sine qua non for intelligent environments, including facial
expressions, eye movements, hand gestures, body postures,
and speech, which can be used multi-functionally to acquire
context as implicit input, to recognize emotions, to denote
explicit inputs, and to detect multimodal communication
behavior (Bibri, 2015).
It is therefore evident that interaction will be considerably
escalated in terms of available options, potential combinations,
and technical requirements. Additionally, design will no longer
focus on a single user-artifact interaction. It will have to account
for entire environments and ecologies of artifacts, services, and
data, as well as for larger user populations, distributed in various
different physical locations and contexts of use (Brown, Bødker,
9
& Höök, 2017). This larger context cannot be addressed by
simply adding more users, data, or artifacts in the design process;
instead, it challenges existing methods to scale up, calling for
new design methods (Brown et al., 2017).
Design approaches focusing on the human, such as usercentered design and human-centered design, are highly relevant
and extremely important in the context of the forthcoming
intelligent environments. This is particularly true since their
main objective is how to ensure that technology serves users’
needs in the best possible way, a perspective that is claimed to –
and apparently should – constitute the ultimate goal of the new
technological realm. Still, such design approaches need to evolve,
so as to face the emerging challenges, in terms of acquiring and
framing the potential contexts of use, eliciting and analyzing user
requirements, producing designs, and carrying out evaluations
(Marcus, 2015a; Stephanidis, 2012).
3.2.4. Interactions in public spaces
The aforementioned concerns and challenges become even more
intricate in the context of public spaces with multiple users.
Research in the area is active, exploring various user roles and
interaction types, and proposing models and frameworks for the
design of interactions in public spaces. In brief, users can be
characterized with regard to their interaction with a public system as passers-by, bystanders, audience members, participants,
actors, or dropouts (Wouters et al., 2016). A major consideration
is how interactive systems can attract the attention of passers-by
and motivate them to engage with the system (Müller, Alt,
Michelis, & Schmidt, 2010). Factors that have been found to
motivate engagement with public systems include challenge,
curiosity, choices offered, fantasy, collaboration with other
users, one’s self-efficacy with technology, as well as the content
and appeal of the topic of an interactive artifact (Hornecker &
Stifter, 2006; Margetis et al., 2019; Müller et al., 2010).
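To illustrate how such a system might operationalize these ideas, the Python sketch below maps a subset of the user roles mentioned above (Wouters et al., 2016) onto content-escalation stages driven by sensed proximity and attention; the thresholds and content choices are invented for the example.

```python
from enum import Enum, auto

class Role(Enum):
    PASSER_BY = auto()
    BYSTANDER = auto()
    AUDIENCE = auto()
    PARTICIPANT = auto()

def next_role(distance_m: float, looking: bool, touching: bool) -> Role:
    """Infer a visitor's current role from sensed cues, so the display can
    adapt its content: eye-catching at a distance, interactive up close.
    Thresholds are illustrative, not empirically derived."""
    if touching:
        return Role.PARTICIPANT
    if looking and distance_m < 3.0:
        return Role.AUDIENCE
    if distance_m < 6.0:
        return Role.BYSTANDER
    return Role.PASSER_BY

CONTENT = {
    Role.PASSER_BY: "ambient animation to attract attention",
    Role.BYSTANDER: "teaser showing that the display is interactive",
    Role.AUDIENCE: "preview of possible interactions",
    Role.PARTICIPANT: "full interactive interface",
}

print(CONTENT[next_role(distance_m=2.0, looking=True, touching=False)])
# -> preview of possible interactions
```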
Another concern for interaction design is the ubiquity of
technologies in public settings that “blurs” the boundaries
between private and public interaction (Reeves, 2011). For
instance, how should a third party experience the interaction of
a user with a public interactive system, and how can such a system
accommodate transitions between users? In this respect, the
design of interactive experiences in public spaces needs to provide
clear and timely feedback about who is in control of the interaction, to clarify what each user is in control of (in the case of multi-user systems), and to appropriately support the various user roles
in terms of (social) interaction and content (Hespanhol &
Dalsgaard, 2015; Hespanhol & Tomitsch, 2015). The “blurred”
boundaries between private and public interaction also raise privacy concerns regarding personal information that may be publicly presented, and about how such “harmless” personal information can be defined (Vogel & Balakrishnan, 2004) (see also Section 4).
Furthermore, a characteristic of interactions in public
spaces is that they are transient. Cities and airports are good
examples of “transient spaces”, where multi-user as well as multi-device activities take place, and which are increasingly transformed into smart cities and smart airports (Streitz,
2018). The transient nature of interactions in public spaces, as
well as the need for catering for a wide variety of potential
user characteristics (e.g. age, gender, cultural background,
technology familiarity) pose additional design requirements
(Hespanhol & Tomitsch, 2015). These include calm aesthetics,
support for short-duration fluid interactions, and immediate
usability (Vogel & Balakrishnan, 2004). New methods for user
participation in the design process are also required
(Christodoulou, Papallas, Kostic, & Nacke, 2018). Other challenges stemming from the need to serve different users simultaneously include the balance between the single-user and
multi-user contexts, and the facilitation of collaboration
among multiple users who may be strangers (Ardito, Buono,
Costabile, & Desolda, 2015; Lin, Hu, & Rauterberg, 2015).
3.2.5. Interactions in virtual and augmented reality
Virtual Reality (VR) is one of the “scientific, philosophical, and
technological frontiers of our era” (Lanier, 2017, p. 1), posing
unprecedented challenges, as it provides a realistic representation of imaginary worlds and allows to navigate and interact in
an illusionary environment with virtual objects and characters,
who may actually represent physical persons located anywhere
in the world. Recent technological advancements have made VR
available at consumer prices, while most market forecasts suggest that VR will soon have a major impact (Steinicke, 2016).
Key elements to the VR experience are immersion, presence,
and interactivity, as well as the virtual world, its creators and
participants (Ryan, 2015; Sherman & Craig, 2018). In the recent
past, the greatest challenges in the field revolved around developing better hardware systems (Steinicke, 2016). Having already
achieved substantial progress in terms of devices, the challenge
now lies in creating realistic experiences, exhibiting increased
feelings of immersion and presence. This involves advances in
sense of embodiment, which includes sense of self-location,
sense of agency, and sense of body ownership (Kilteni, Groten,
& Slater, 2012). Obstacles in delivering realistic experiences,
which need to be overcome, include user cyber-sickness, lack
of realistic simulation of locomotion, inadequate self-representation, and lack of realistic visual-haptic interaction
(Steinicke, 2016). As Virtual Reality becomes more and more
immersive and realistic, moving towards real virtuality
(Chalmers, Howard, & Moir, 2009), a tangible risk lies in users
becoming addicted to virtual worlds and over-attached to virtual
agents (see also Section 4.2.4).
Recent trends towards interconnected VR (Bastug,
Bennis, Médard, & Debbah, 2017) reveal new possibilities for
social experiences in VR and stimulate research to address novel
UI and interaction design requirements, evaluation methods, as
well as privacy and ethics concerns. At the same time, pertinent
design guidelines need to be further elaborated and expanded to
also include aspects of social interactions in virtual environments
(Sutcliffe et al., 2019). With regard to user experience evaluation,
current approaches typically employ subjective user assessment
of presence and immersion, whereas future activities should
move towards merging such assessments with objective metrics
and observations. Advancing research on VR user experience
could further produce new guidelines and practices for the
design of VR environments.
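By way of illustration only, the following minimal Python sketch (with invented variable names and synthetic numbers, not a validated instrument) shows one way a subjective presence rating could be merged with an objective behavioral measurement into a single standardized indicator, in the spirit of the combined assessments advocated above.

```python
# Illustrative sketch: combining a subjective presence rating with an
# objective behavioral metric into one standardized UX indicator.
# All names and data are hypothetical; this is not a validated instrument.
from statistics import mean, stdev

def z_scores(values):
    """Standardize a list of values (zero mean, unit variance)."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Per-participant data from a hypothetical VR study.
presence_ratings = [5.2, 6.1, 4.8, 6.5, 5.9]   # questionnaire, 1-7 scale
task_times = [41.0, 33.5, 52.0, 30.2, 36.8]    # seconds; lower is better

# Invert task time so that higher always means a better experience,
# then average the two standardized components per participant.
z_presence = z_scores(presence_ratings)
z_speed = z_scores([-t for t in task_times])
composite = [(p + s) / 2 for p, s in zip(z_presence, z_speed)]

for i, score in enumerate(composite, 1):
    print(f"participant {i}: composite UX indicator = {score:+.2f}")
```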
As technological advances are gained and user acceptance
increases, the future also holds promise for Augmented Reality
(AR). AR allows the user to interact with a digital layer superimposed on their physical real world. The technology is still in
the early stages, but when it reaches its full potential, it is
expected to disrupt and transform the way we communicate,
work, and interact with our world (Van Krevelen & Poelman,
2010). Some say the combination of voice commands, AI, and
AR will make screens obsolete (Scoble & Israel, 2017). The
potential for geographic AR experiences, messages (overt or
covert), and storytelling is immense (Yilmaz & Goktas, 2017).
‘Reality’ related technologies that are part of the current and
emerging information landscape have the potential to alter the
perception of reality, form new digital communities and allegiances, mobilize people, and create reality dissonance. These
realities also contribute to the evolving ways that information is
consumed, managed, and distributed. A major challenge in this
respect is how AR as a new medium will combine the real and
virtual in such a unique way that the provided experience cannot
be derived exclusively either from the real or from the virtual
content (Azuma, 2016).
3.2.6. Evaluation
Evaluation in intelligent environments should go beyond performance-based approaches to the evaluation of the overall user
experience, and should take place in real-world contexts
(Gaggioli, 2005). Traditional evaluation practice has been
pointed out as insufficient for new interactive systems that
feature new sensing possibilities, shifts in initiative, diversifications of physical interfaces, and shifts in application purpose
(Poppe, Rienks, & van Dijk, 2007). Challenges include the interpretation of signals from multiple communication channels in
the natural interaction context, context awareness, the unsuitability of task-specific measures in systems which are often task-less, as well as the need for longitudinal studies to assess the
learning process of users (Poppe et al., 2007).
Taking into account the immense number of quality characteristics that should be evaluated in such environments
(Carvalho, de Castro Andrade, de Oliveira, de Sousa Santos, &
Bezerra, 2017), it is evident that new assessment methods and
tools are required, complementing self-reported or observed
metrics with automatically acquired user experience indications
(Ntoa, Margetis, Antona, & Stephanidis, 2019). Finally, new
frameworks and models are needed in order to provide holistic
and systematic approaches for the evaluation of UX in intelligent
environments, taking into account a wide range of characteristics and qualities of such environments (Ntoa, Antona, &
Stephanidis, 2017; Scholtz & Consolvo, 2004).
In the future, intelligence can transform into a service and
become a new design material (Holmquist, 2017). In fact,
Artificial Intelligence as a Service (AIaaS) is already available
today, with the aim of assisting data scientists and developers in delivering applications employing AI without requiring deep technical
know-how (Janakiram, 2018; Sharma, 2018). Similarly, AI as
a design material could be plugged into applications or artifacts,
facilitating designers throughout iterative design and evaluation.
Such a perspective points to a direction in which intelligence is
used to truly empower experts, rendering technology an even more valuable tool (Klein, Shneiderman, Hoffman, &
Ford, 2017). At the same time, it brings to the foreground the
need for establishing concrete guidelines and a code of ethics for
the design and development of AI applications, services, and
artifacts; it also highlights the requirement for evolving the
existing design, evaluation, and testing procedures.
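Purely as a hypothetical sketch of what AI as a design material might look like in practice, the fragment below plugs a stand-in sentiment scorer (a local stub playing the role of a remote AIaaS endpoint; the function names and the scoring rule are invented) into a toy design-evaluation loop.

```python
# Hypothetical sketch: plugging an AI service into a design-evaluation loop.
# `score_sentiment` is a local stub standing in for a remote AIaaS call;
# no real service or API is implied here.

def score_sentiment(text: str) -> float:
    """Toy stand-in for an AIaaS sentiment endpoint; returns -1..1."""
    positive = {"love", "clear", "easy", "fast"}
    negative = {"confusing", "slow", "lost", "hate"}
    words = text.lower().split()
    hits = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, hits / max(len(words), 1) * 5))

def evaluate_design(feedback: list[str]) -> float:
    """Aggregate user feedback for one design iteration."""
    return sum(score_sentiment(f) for f in feedback) / len(feedback)

iteration_feedback = [
    "I love how clear the new menu is",
    "Checkout felt slow and I got lost",
]
print(f"iteration sentiment: {evaluate_design(iteration_feedback):+.2f}")
```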
3.3. Summary of emerging requirements
In summary, interaction in forthcoming technological environments will radically shift and will be considerably escalated as
users’ location, posture, emotions, habits, and intentions will
constitute candidate input data to a variety of visible and invisible
technological artifacts embedded in the environment. Robotics
and autonomous agents will typically be included in such technologically enriched environments. Information will be communicated from one interaction counterpart to the other naturally,
while the digital will coexist with and augment the physical. These
new challenges pave the way towards evolving the existing design
and evaluation methodologies and techniques to encounter,
embrace, and eventually employ future technologies to their benefit. Table 3 summarizes the aforementioned concerns and challenges, as they stem from interaction-related issues. The important role of Human-Environment Interaction will become increasingly evident in the context of designing interaction in smart cities.
Last, but definitely not least, it is crucial to comprehend how
the new interaction possibilities in technologically enriched
environments affect the human. As Norman (2014) put it:
“Technology is not neutral; it dominates”, as it imposes a way of thinking on those who are directly or indirectly influenced by it; the more successful and widespread a technology is,
the greater its impact is upon the entire society. As technologies
become smarter, pervasive, yet invisible, and able to recognize,
respond to, and even anticipate human needs and wishes, users
tend to attribute personality, agency, and intentionality to them
(Marenko & Van Allen, 2016). The design of interactions therefore can be seen as “designing relations between humans and the
world, and, ultimately designing the character of the way in
which we live our lives” (Verbeek, 2015, p. 31).
4. Ethics, privacy and security
4.1. Definitions and rationale
Aristotle wrote in his Rhetoric that ethos, pathos, and logos are the three main attributes that affect the ability of
orators to convince their audiences. Ethos is the characteristic
of speakers that can make them trustworthy, because we tend
to give our trust to honorable persons. Pathos is related to
putting the audience into a certain frame of mind, as persuasion may occur when the speech evokes specific emotions.
Logos refers to the proof provided by the words of the speech
itself, meaning that persuasion may be achieved when
the truth is proven through appropriate argumentation.
Humans’ relation with technology-augmented environments resembles a dialogue more than a rhetorical address; still, people will be persuaded by these environments (and will eventually accept and adopt them) if they are ethical, if they evoke positive emotions in their users, and if they actually prove worthy of trust.
Ethics, privacy, trust and security have always been important
concerns in relation to technology, acquiring new dimensions in the context of technologically augmented and intelligent
environments. It is indicative of the importance and timeliness
of ethics that there is a plethora of relevant articles in the
literature for every newfangled technological domain, such as
AI, AmI, big data, Internet of Things (IoT), autonomous agents
and robotics, as well as mobility services, e.g. automated driving
(Biondi, Alvarez, & Jeong, 2019; Dignum, 2018; Tene &
Polonetsky, 2013; Ziegeldorf, Morchon, & Wehrle, 2014). It is
also noteworthy that there are specialized communities and
organizations1 dedicated to exploring and developing the topic
of ethics and ethical intelligence (Becker, 2008; Marcus, 2015b).
Aligned with these concerns, and motivated by the importance
of ethics for human-technology symbiosis in technology-augmented and intelligent environments, this section explores
critical issues related to ethics, privacy, and security as they
emerge in such contexts.
4.2. Main research issues and state of the art
4.2.1. HCI research
As interactive technologies pervade every life domain, HCI
research is challenged to move beyond lab studies and expand to
new fields and contexts, carrying out research “in the wild”
(Crabtree et al., 2013). Research in public spaces, in particular,
faces the dilemma of following typical procedures to inform participants and acquire their consent vs. studying the actual user
experience and behavior, which can be influenced if participants
are aware that they are being observed (Williamson & Sundén,
2016). Similar ethical concerns about engaging participants in
studies revolve around museums or research at the intersection
of art and technology, where the distinction between research
participants and event audience is not clear (Fiesler et al., 2018).
Another point of caution for HCI research refers to involving vulnerable user populations, such as older adults, people
with disabilities, immigrants, socially isolated individuals,
patients, children, etc. Besides issues regarding participants’
understanding of the study and giving their consent, there are
also concerns related to managing participants’ misunderstandings, handling potentially erroneous and optimistic
expectations about technology, and attending to problems
that may occur when the technology does not perform as
expected (Waycott et al., 2016).
The recent technological evolution has raised new issues
related to the usage of online data in HCI research, such as
the usage of data without subjects’ consent. In this case,
a fundamental question is whether researchers are entitled to use
data without consent, especially in cases that data are publicly
available (Frauenberger, Bruckman, Munteanu, Densmore, &
Waycott, 2017; Vitak, Shilton, & Ashktorab, 2016). Other
open questions refer to what constitutes public data and what the best practices are for acquiring informed consent
(Frauenberger et al., 2017). Participant anonymity is also
a challenging undertaking, as studies have identified that it
is possible to de-anonymize data when paired with other
datasets (Vitak et al., 2016).
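To make the de-anonymization risk concrete, the following sketch (synthetic records and invented field names) demonstrates a classic linkage attack: an “anonymized” study dataset is joined to a public register on quasi-identifiers alone, re-identifying a participant although no direct identifier was ever shared.

```python
# Illustrative linkage attack on synthetic data: an "anonymized" study
# record is re-identified by joining on quasi-identifiers alone.

study_data = [  # released without names; looks harmless in isolation
    {"zip": "10115", "birth_year": 1986, "gender": "F", "diagnosis": "asthma"},
    {"zip": "80331", "birth_year": 1990, "gender": "M", "diagnosis": "diabetes"},
]

public_register = [  # e.g., a public roll or scraped profile data
    {"name": "A. Example", "zip": "10115", "birth_year": 1986, "gender": "F"},
    {"name": "B. Sample", "zip": "80331", "birth_year": 1975, "gender": "M"},
]

QUASI_IDS = ("zip", "birth_year", "gender")

def link(records, register):
    """Join two datasets on quasi-identifiers (no direct identifiers used)."""
    for rec in records:
        key = tuple(rec[k] for k in QUASI_IDS)
        for person in register:
            if tuple(person[k] for k in QUASI_IDS) == key:
                yield person["name"], rec["diagnosis"]

for name, diagnosis in link(study_data, public_register):
    print(f"re-identified: {name} -> {diagnosis}")
```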
4.2.2. Online Social Networks (OSNs)
Privacy in OSNs is a major point of concern, and indicative of
the discussion on privacy in all technological domains. Privacy
can be defined as “a human value consisting of a set of rights
including solitude, the right to be alone without disturbances;
anonymity, the right to have no public personal identity; intimacy, the right not to be monitored; and reserve, the right to control one’s personal information, including the dissemination methods of that information” (Kizza, 2017, p. 303).
Table 3. Challenges stemming from interaction-related issues in intelligent environments.

Interactions in the physical and digital continuum:
● The computer “disappears” as a “visible” distinctive device
● New types of affordances for disappearing computers/devices
● Successful direction of user commands to the appropriate interactive artifact
● Appropriate user command interpretation according to the current context
● Effective application of established design principles (e.g. feedback and recovery from error)
● Perceivable interactivity of everyday objects
● “Fashnology” artifacts perceived as a combination of fashion and technology
● Risk of information overload, information exploitation, and information inequality

Implicit interactions:
● Appropriate support for shifts of interaction between the center and periphery of attention
● Design for interaction-intensive experiences that are not invasive
● Risk of high perceptual and cognitive demands, confusion, or frustration
● Ethics and privacy issues

Novel and escalated interactions:
● Natural interaction and novel forms of interaction
● Multisensorial interactions
● Escalated interaction, involving ecologies of artifacts, services and data, and addressing larger user populations
● New/updated methods for acquiring and framing contexts of use, eliciting and analyzing user requirements, as well as producing designs

Interactions in public spaces:
● Attracting the attention of passers-by and motivating them to engage with the system
● Ubiquity of technologies
● Transient spaces in smart cities and smart airports
● Support for various user roles and styles of engagement with the system in terms of (social) interaction and content
● Privacy of personal information
● Transient use
● Wide variety of potential user characteristics
● Accommodation of user transitions and balance between single-user and multi-user contexts
● Facilitation of user collaboration even among strangers

Interactions in virtual and augmented reality:
● Realistic VR experiences
● Advanced sense of embodiment, realistic simulation of locomotion and adequate self-representation in VR environments
● Overcoming limitations of cyber-sickness in VR environments
● Achieving realistic visual-haptic interaction in VR environments
● Pursuit of social VR User Experience
● User experience evaluation in VR, combining subjective assessments with objective measurements
● Successfully blending the real and virtual worlds to provide a unique seamless experience in AR

Evaluation:
● Interpretation of signals from multiple communication channels, context awareness, unsuitability of task-specific measures in systems which are often task-less
● Evaluation of the overall UX beyond performance-based approaches
● New methods and tools, taking advantage of the intelligent infrastructure towards automatically calculating UX metrics, besides self-reported or observed metrics
● Intelligence offered as a service and as a design material
● Comprehension of the impact of the new interaction possibilities in intelligent environments on the human
● Development of a code of ethics for the design and development of intelligent applications, services, and artifacts
Privacy concerns in OSNs include who can view one’s
private information, the ability to hide information from
a specific individual or group, the degree to which other
users can post and share information about an individual,
data retention issues, the ability of the employees of the
OSN to browse private information, selling of data, and targeted marketing (Beye et al., 2012). Interestingly though,
despite their privacy concerns, individuals reveal personal
information for relatively small rewards, a phenomenon
which is referred to as the “privacy paradox”. Kokolakis (2017)
identifies a number of potential factors for this paradox,
including that the use of OSNs has become a habit and is
integrated into daily life, that the perceived benefits of participation outweigh observed risks, that individuals do not
make information-sharing decisions as entirely free agents,
and that privacy decisions are affected by cognitive biases
and heuristics, as well as by bounded rationality and incomplete information. Privacy in OSNs becomes even more crucial when these are used by sensitive user groups for
specialized purposes (e.g. school communities and learning
OSNs used by students) (Parmaxi, Papadamou, Sirivianos, &
Stamatelatos, 2017). Privacy and content sharing (and therefore sociability) constitute two conflicting fundamental features of OSNs, which nevertheless need to be seamlessly reconciled, a challenge that designers of OSNs have to master (Brandtzæg, Lüders, & Skjetne, 2010).
Towards raising awareness and protecting user rights, the
European Parliament and Council of the European Union
issued Regulation 2016/679, the General Data Protection Regulation (GDPR), on the protection of natural persons with regard to the processing of personal data and on the free movement of such data. Legally in effect since May 25, 2018, the GDPR regulates the processing of personal data,
and the rights of the data subject, including transparent
information and communication for the exercise of rights,
the right of access, as well as the right to rectification,
erasure, restriction to processing, and data portability
(European Commission, 2016).
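As a minimal and purely illustrative sketch (not a compliance implementation; all class and method names are invented), the fragment below shows how two of these rights, data portability and erasure, might surface as explicit operations in a service’s data layer.

```python
# Illustrative sketch of exposing two GDPR data-subject rights
# (portability and erasure) as explicit operations. Invented names;
# not a compliance implementation.
import json

class UserDataStore:
    def __init__(self):
        self._records: dict[str, dict] = {}

    def save(self, user_id: str, data: dict) -> None:
        self._records.setdefault(user_id, {}).update(data)

    def export(self, user_id: str) -> str:
        """Right to data portability: machine-readable copy of the data."""
        return json.dumps(self._records.get(user_id, {}), indent=2)

    def erase(self, user_id: str) -> bool:
        """Right to erasure: remove all personal data held for the subject."""
        return self._records.pop(user_id, None) is not None

store = UserDataStore()
store.save("u42", {"email": "u42@example.org", "prefs": {"ads": False}})
print(store.export("u42"))   # portability
print(store.erase("u42"))    # erasure -> True
```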
4.2.3. Healthcare technologies
Social media are often used in the context of healthcare ICT
to facilitate information exchange, and to create media content individually or shared with specific groups (Denecke
et al., 2015). In this context, ethical issues include the use of
social media for underage or older persons, as well as using
social media for research purposes, such as to harness patient-reported data, to conduct online surveys, and to recruit participants (Denecke et al., 2015).
Data privacy, accuracy, integrity and confidentiality become
an even greater concern in the context of healthcare, whether it
pertains to social media, telehealth, electronic health records,
wearable health sensors, or any other eHealth domain (George,
Whitehouse, & Duquenoy, 2013). Additional ethical principles
that should be addressed in the eHealth context include access
for all to eHealth (see also Section 5.2.5), anonymity, autonomy,
beneficence and non-maleficence, dignity, no discrimination,
free and fully informed consent, justice, safety, and value-sensitive design (Wadhwa & Wright, 2013). Telehealth ethical
concerns also extend to the impact that distant care may have on
the healing relationship that is typically developed between
patients and health providers, the loss of touch, and the danger
that virtual visits may replace actual doctor visits in the name of
cost/time effectiveness (Fleming, Edison, & Pak, 2009). With
regard to patients’ data, concerns pertain to the question of
legitimacy of purpose, and the potential for data exploitation
(Fleming et al., 2009). Furthermore, an often neglected yet
critical issue is that of liability, as patients become active participants in the delivery of healthcare, raising the issues of potential mistakes, misreports, or misinterpretations (Kluge, 2011).
Another aspect common to all of the above technologies is the
prevalence of persuasion techniques to change people’s behavior.
While some objectives for the purposes of health (quitting smoking, weight maintenance, nutrition habits, exercise routines, etc.)
are usually valuable and desirable, persuasive technologies could,
in the wrong circumstances, be used to delude people or to
persuade them to engage in undesirable behavior. Mobile products (Marcus, 2015b) and social media are especially vulnerable
to these distortions and need to be assessed as technologies
develop and mature. An extended discussion, including ethical
concerns, on technology that fosters well-being, health and human eudaimonia is presented in Section 5.
4.2.4. Virtual reality
VR is a technological domain, in which – due to the illusion it
creates – two major social and ethical themes are raised: (i) reactions and feelings of the user, such as over-attachment to virtual
agents, or feeling out of control and behaving with hostility in the
virtual environment and outside in the physical world, as well as
(ii) the intentions of the VR environment creator, which may be
obscure and dangerous for the user, e.g. by collecting information
or inducing mental/psychological transformations without the
user’s knowledge (Kizza, 2017). VR allows the user to step into a “reality”, which can be an entirely synthetic, created digital environment, or a suspended moment of an actual real-world environment. The synthetic environment could be modeled
after the real world, a fantasy, or both. Most virtual realities do not
fully cross over the uncanny valley (Mori, MacDorman, & Kageki,
2012), but this is an issue that is expected to improve in the future.
A recent tangible example of attachment to virtual characters is the marriage of a man to a virtual character, which the company producing the holograms attested with a marriage certificate,2 opening a wide discussion regarding the indisputable freedom of the individual, the role of technology in affecting free will, and the ethical dilemmas/responsibilities of technology creators and designers. Even more
crucial than over-attachment to virtual agents is the concern
of how social interactions may be reshaped in a VR context,
leading for example individuals to opt out of societal engagements (which can have far-reaching implications for fertility
rates, the economy, and existing social fabrics), or providing
the option to virtually extend one’s life beyond physical death
(The IEEE Global Initiative on Ethics of Autonomous and
Intelligent Systems, 2019).
4.2.5. IoT and big data
The IoT paradigm is characterized by heterogeneous technologies, including smart objects that are interconnected and
cooperate over the Internet infrastructure, enabling many
new services and making buildings, cities, and transport smarter (Ziegeldorf et al., 2014). In this highly interconnected and
heterogeneous environment, where any interactions may
occur between humans, devices, and autonomous agents,
new privacy and security threats are manifested.
In particular, privacy will become even more important in
the smart hybrid cities to come (Streitz, 2018). In the virtual
world, people can use fake identities and anonymization services. This will be difficult or even impossible in the real
world. Data existing about people in the virtual world are
now complemented by and combined with real world data
and vice versa. Public and private Closed Circuit Television
(CCTV) cameras are taking pictures of people entering a shop
or a restaurant with known locations, while face recognition
identifies personal identities. Real objects that people are
wearing, carrying, using, buying will be recognized by sensors
in the environment because these objects are tagged.
Increased instrumentation of vehicles in the context of autonomous driving affects privacy. Personal walking behavior is
transparent when carrying a smartphone. Thus, it will become
more and more difficult to avoid object and person tracking,
and the challenge of preserving privacy in the real/hybrid
world will be immense in IoT enabled environments.
Big data goes hand in hand with IoT, both being recent technological evolutions that will definitely constitute core components of future technologically augmented environments. Big data,
due to the harvesting of large sets of personal data coupled with the
use of state of the art analytics, outlines additional threats to
privacy, such as automated decision making (when decisions
about an individual’s life are handed to automated processes),
which raises concerns regarding discrimination, selfdetermination, and the narrowing of choices (Tene &
Polonetsky, 2013). For example, predictive analytics may carry adverse implications for individuals prone to illness, crime, or other
socially unacceptable characteristics or behaviors (Tene &
Polonetsky, 2013). Other troubling possibilities include individuals (mistakenly) being denied opportunities based on the actions of others,
the reinforcement of existing inequalities for vulnerable user
groups (e.g. low-income consumers), as well as malevolent
attempts and misleading offers to vulnerable individuals, such as
seniors with Alzheimer’s disease or individuals with addictions (Federal
Trade Commission, 2016). Responsible and fair data management
and analysis is required from researchers to avoid inducing bias
and discrimination (Stoyanovich, Abiteboul, & Miklau, 2016).
4.2.6. Intelligent environments
It is evident that ethics, privacy, and trust are topics that span
all technological domains, with their main questions being
common. Nevertheless, the different domains pose supplementary concerns. For instance, biometrics in general pose
the same threats to data privacy as any other user data (e.g.
unwarranted identification and threats to the individual,
undesired collection of personal data, and unauthorized access
to personal information); however, they also impose an additional moral dilemma, because biocentric data has an impact
on one’s right to control the use and disposition of one’s body
(Alterman, 2003).
Likewise, intelligent environments invoke the same ethical
concerns with other developing technologies, especially with
those technologies that raise questions about how humans understand themselves and their place in the world (Boddington, 2017). In
general, intelligent systems entail a number of risks, including
users’ identification based on collected data, permanence of personal/sensitive data, profiling and implicit deduction and attribution of new properties to individuals, use of data for monitoring,
misinterpretation of data, public disclosure of confidential information, as well as collection of data and applying persuasion
techniques without the user’s awareness (Jacucci, Spagnolli,
Freeman, & Gamberini, 2014). Despite the potential of the system
to acquire and retain large amounts of (personal) data, this collection should be limited (Könings, Wiedersheim, & Weber, 2011).
In fact, there is a tricky trade-off between creating smartness and
providing or maintaining privacy. Obviously, a smart system can
usually be ‘smarter’ with respect to a service offered, if it has more
knowledge about the person compared to a system with no or
insufficient data. The challenge is now to find the right balance.
Determining the balance should be under the control of the
involved person and would also imply that people are willing to
pay for a service – not with their data but with money. This
requires transparency about the options and real differentiated
choices. Extending the rules and concepts laid out in the GDPR
(European Commission, 2016) would be one direction to further
develop these ideas. Privacy should be also examined from the
perspective of rules that govern information flows according to
our values and norms, so that an ethical system of privacy rules
develops for the benefit of humans in intelligent environments,
embedded as an essential component of these future environments (Richards & King, 2016).
When it comes to AI and autonomous agents, a fundamental
ethical concern is that of responsibility: where does responsibility lie, what are the moral, societal and legal consequences of
actions and decisions made by an AI system, and can an AI
system be held accountable for its actions (Dignum, 2018)?
Ethical decision making in AI is a multi-disciplinary field of
research that is called to provide answers to such dilemmas
and safeguard our future. Moral decision-making by humans
involves utilitarian considerations and moral rules, which often
involve sacred values that may be acquired from past example
cases and may also be culturally sensitive (Yu et al., 2018). As AI
systems are constructed by humans, an approach is to integrate
societal, legal and moral values into all the development stages of
AI systems, so that AI reasoning takes into account these values,
weighs their priorities when it comes to different stakeholders in
various multicultural contexts, explains its reasoning and guarantees transparency (Dignum, 2018). Asimov’s three laws of
robotics (Asimov, 1950) are often considered an ideal set of
rules for machine ethics, however, there are arguments that
show these laws may not be adequate (Anderson, 2008). An
alternative approach advocates for shifting the burden of moral
reasoning to autonomous agents, and enabling agents to behave
ethically and to judge the ethics of other agents. This can be
achieved by developing primary rules that will allow the creation
of secondary rules, as well as the modification and substitution of
rules as situations evolve (Yu et al., 2018). Man-Machine Rules
can help organize dialog around questions, such as: how to
secure personal data, how ethical are chips embedded in people
and in their belongings, what degrees and controls need to be
taken into account for personal freedoms and risks, and whether
consumer rights and government organizations will audit algorithms (Dellot, 2017). Challenges involved in embedding values
and norms in autonomous intelligent systems include the need
for norm updating similar to how humans update their norms
and learn new ones, the conflicting norms that AIs will face and
how to resolve those conflicts, that not all norms of a target
community apply equally to human and artificial agents, and
that biases may be introduced that will disadvantage specific
groups (The IEEE Global Initiative on Ethics of Autonomous
and Intelligent Systems, 2019).
As autonomous intelligent agents will make increasingly
complex and important ethical decisions, humans will need to
know that their decisions are trustworthy and ethically justified (Alaieri & Vellino, 2016). Therefore, transparency is
a requirement (see also Section 2.2.1), so that humans can
understand, predict, and appropriately trust AI, whether it is
manifested as traceability, verifiability, non-deception, or
intelligibility (The IEEE Global Initiative on Ethics of
Autonomous and Intelligent Systems, 2019). Intelligible AI,
in particular, will further help humans identify AI mistakes
and will also facilitate meaningful human control (Weld &
Bansal, 2019). Nevertheless, depending on how the explanations are used, a balance needs to be achieved in the level of
details, because full transparency may be too overwhelming in
certain cases, while not enough transparency may jeopardize
human trust in AI (Chen et al., 2018; Yu et al., 2018). At the
same time, system transparency and knowing that AI decisions follow ethics will influence human-AI interaction
dynamics, giving the opportunity to some people to adapt
their behaviors in order to render AI systems unable to
achieve their design objectives (Yu et al., 2018).
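One minimal route to such intelligibility, sketched below with invented weights and inputs rather than any deployed system’s logic, is to decompose a linear decision into per-feature contributions so that a user can see which inputs drove the outcome.

```python
# Illustrative sketch: per-feature contributions of a linear scoring model,
# a minimal form of "intelligible AI". Weights and inputs are invented.
FEATURES = ["income", "debt_ratio", "years_employed"]
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.4}
BIAS = -0.2

def explain(x: dict) -> None:
    contributions = {f: WEIGHTS[f] * x[f] for f in FEATURES}
    score = BIAS + sum(contributions.values())
    decision = "approve" if score > 0 else "decline"
    print(f"decision: {decision} (score {score:+.2f})")
    # List features by how strongly each one pushed the decision.
    for f, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {f:>15}: {c:+.2f}")

explain({"income": 1.2, "debt_ratio": 0.9, "years_employed": 3.0})
```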
4.2.7. Cybersecurity
The issues discussed above mainly revolve around ethics and
privacy, highlighting challenges and potential threats. Privacy,
however, is coupled with cybersecurity (aka IT security), an issue
that has become prominent for two main reasons. First, the
transformation of our societies, through the expansion of digital
technologies, offers more opportunities for cyber-criminal activity (Moallem, 2018). It is not only public organizations and
institutions, but also residential units that are now highly digitized (e.g. including surveillance cameras, IoT-connected home
appliances and medical devices, home control and automation
systems, etc.). It can be said that every aspect of human activity is
managed, recorded, and tracked in the digital realm, even “in
person” meetings (Moallem, 2018). Second, cyber-attacks
are inexpensive, geographically unconstrained, and involve less risk to the perpetrator than physical attacks
(Jang-Jaccard & Nepal, 2014).
Security is a challenging task, especially since the high number of interconnected devices raises scalability issues, making
traditional security countermeasures inapplicable (Sicari,
Rizzardi, Grieco, & Coen-Porisini, 2015). In addition, as most
of the current commercial IoT devices have limited on-board
security features, they can constitute an easy target for hacking,
blocking, altering their communication, changing their configuration, or sending them false commands (Tragos, Fragkiadakis,
Kazmi, & Serrano, 2018). Overall, the main security threats
involve data breach and privacy, as well as attacks against the
devices or the software of both devices and servers.
Data anonymity and confidentiality are threatened by the
connectedness of everyday things, which opens up possibilities for identification of devices through fingerprinting and
the possibility to create huge databases with identification
data (e.g. speech) (Ziegeldorf et al., 2014). At the same time,
devices may manage sensitive information (e.g. user habits or
health data), which entails privacy threats in the case of
inventory attacks (by non-legitimate parties), as well as in
lifecycle transitions of the devices (Sicari et al., 2015;
Ziegeldorf et al., 2014). Additional privacy threats in the IoT
context include: the possibility for advanced profiling through
inferences by correlations with other profiles and data, exposure of private information through a public medium, as well
as linking different systems such that the combination of data
sources reveals (truthful or not) information that the individual had not provided and may not wish to reveal (Ziegeldorf
et al., 2014). Threats may also occur by malicious attacks
against the sensors and actuators of intelligent environments.
For instance, attackers may steal information regarding the
health status of a user who is being monitored and eventually
identify when the user is at home or absent (Tragos et al.,
2018). By attacking actuators, it may also be possible to control or tamper with house elements (e.g. doors and windows,
air-conditioning, alarms, etc.), which not only causes physical
security threats, but also decreases the reliability of the system
and therefore the trust that users put in it.
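To illustrate the fingerprinting threat mentioned above, the following sketch (with synthetic attribute values; no real protocol fields are implied) shows how a handful of passively observable device attributes can be hashed into a stable identifier that tracks a device across sessions without any explicit ID being exchanged.

```python
# Illustrative sketch: passive device fingerprinting. A few observable
# attributes suffice to derive a stable tracking identifier.
# Attribute values are synthetic; no real protocol fields are implied.
import hashlib

def fingerprint(attrs: dict) -> str:
    """Hash a canonical serialization of observable device attributes."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

session_1 = {"vendor": "AcmeCam", "firmware": "2.1.7",
             "mtu": 1400, "tls_ciphers": "suiteA,suiteB"}
session_2 = dict(session_1)  # same device seen later; no cookie or ID shared

print(fingerprint(session_1))
print(fingerprint(session_2))
print(fingerprint(session_1) == fingerprint(session_2))  # True: trackable
```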
The discussion on cybersecurity in the context of IoT and
smart cities is rich from a technical point of view, identifying
challenges, proposing architectures, and suggesting future
research endeavors. However, despite technical advancements
that should be pursued, it has been recognized that the main
weak point in breached security is the human agent, be it
through error or ignorance (Moallem, 2018; Still, 2016).
Therefore, the role of HCI becomes crucial in pursuing usable
cybersecurity, in educating individuals so as to raise their
awareness and appropriately shape their behavior, as well as
in training organizations and institutions on the human side
of cybersecurity.
4.3. Summary of emerging requirements
In summary, trust is hard to come by, and requires initial
trust formation and continuous trust development, not only
through transparency, but also through usability, collaboration and communication, data security and privacy, as well as
goal congruence (Siau & Wang, 2018). To this end, technological systems need to behave so that they are beneficial to
people beyond simply reaching functional goals or addressing
technical problems, by serving human rights and the values of
their users, “whether our ethical practices are Western (e.g.
Aristotelian, Kantian), Eastern (e.g. Shinto, Confucian),
African (e.g. Ubuntu), or from a different tradition” (The
IEEE Global Initiative on Ethics of Autonomous and
Intelligent Systems, 2019, p. 2). The main concerns in this
respect refer to privacy and the challenges it raises in the
context of the new digital realm, to ethical issues as they
appear in the various domains, and to cybersecurity. Table 4
presents a summary of these concerns, as they have been
discussed in this section. Additional ethical concerns, pertaining to specific contexts (e.g. symbiosis, health, learning), are
discussed in the various corresponding sections of this paper.
In conclusion, a new code of ethics needs to be established,
pursued in three directions: ethics by design, in design, and
for design (Dignum, 2018). In this new code, user privacy
should be further shielded, especially since the intelligent
technological environments feature such an abundance of
information and knowledge about the user, as well as automated information analysis and the potential for opinion
forming. In any case, the idea that people can fix the tech
world through a voluntary ethical code emergent from itself paradoxically implies that the people who created the problems will fix them (Simonite, 2018). HCI research should
also support regulation activities about privacy, safety, and
security in the new intelligence era.
5. Well-being, health and eudaimonia
5.1. Definitions and rationale
Technological advances, coupled with advances in medicine,
offer the opportunity to provide more effective and less expensive ways of fostering a healthy life, by promoting and supporting healthy behaviors, encouraging disease prevention, offering
new forms of therapy, and managing chronic illness.
Improvements in image analytics, computing, large-scale databases, and cloud capabilities in combination with precision
medicine, next-generation sequencing and molecular diagnostics that deepen understanding of one’s unique biology, have
contributed to the development of precision health. Precision
health aims to make health care more tailored to each person
based on their individual differences and can eventually lead to
precision aging, allowing individuals to individualize and optimize care over their lifespan (Dishman, 2019).
Moreover, in a world where technology is omnipresent, the
question arises of how its role towards enhancing well-being and
human eudaimonia can be optimized. Eudaimonia3 is a concept
that can be traced to classic Hellenic philosophy, and also constitutes a topic of contemporary philosophy. It stands for realizing one’s potential, and is associated with several variables
including self-determination, a balance of challenges and skills,
and the investment of considerable effort (Marcus, 2015d;
Waterman et al., 2010). Eudaimonic experiences are related to need fulfillment, long-term importance, positive affect, and feelings of meaningfulness, in contrast to happiness, which is considered hedonic or momentary pleasure, such as unwinding and
relaxing (Mekler & Hornbæk, 2016). The distinction between
eudaimonia and happiness can be useful in our understanding of
how technology use may contribute to eudaimonia and people’s
well-being, and could also inspire new technology designs pursuing meaningful experiences with interactive technology
(Mekler & Hornbæk, 2016).
This section discusses the topic of fostering health, well-being, and human eudaimonia through technology and
highlights points of concern and challenges as they arise in
this context.
5.2. Main research issues and state of the art
5.2.1. Personal Medical Devices (PMDs) and self-tracking
Medical technologies tailored to individuals have proliferated in
recent years through PMDs, “devices that are attached to, worn
by, interacted with, or carried by individuals for the purposes of
generating biomedical data and/or carrying out medical interventions on the person concerned” (Lynch & Farrington, 2017, p. 3).
Whether or not consumer wearable technology will be adopted
and accepted by the medical community, and how this technology
can best serve medicine remain unclear and will be determined by
two major concerns: (i) how health practitioners will be prepared
to accommodate the increasing number of patients who will bring
wearable data to their medical consultation appointments, and (ii)
the high potential for errors, when patients without medical
training attempt to interpret symptoms based on data stemming
from devices that may be unreliable (Piwek, Ellis, Andrews, &
Joinson, 2016). A point of concern that needs to be addressed for
improving the medical trustworthiness of such devices is the
trade-off between users’ comfort, sensor unobtrusiveness, and
signal quality (Arnrich, Mayora, Bardram, & Tröster, 2010).
PMDs do not refer only to dedicated wearable devices (e.g.
smartwatches), but also to activity monitoring and health
promotion applications deployed in smartphones (Lynch &
Farrington, 2017). Such applications have the benefit of offering a self-management intervention that is adaptable, low
cost, and easily accessible, while research has suggested that
the use of such apps has the potential to improve health
outcomes for patients with chronic diseases (Sun, Rau, Li,
Owen, & Thimbleby, 2016; Whitehead & Seaton, 2016), as
well as the potential to promote a healthy lifestyle and physical activity (Dallinga, Mennes, Alpay, Bijwaard, & de la Faille-Deutekom, 2015).
In such cases, persuasive technologies are often employed to
aid and motivate people to adopt positive behaviors and avoid
harmful ones (see Marcus, 2015c for an example of how persuasion design was employed in a mobile phone application to
reduce food consumption and increase exercise). In brief, the
persuasive strategies – besides tracking and monitoring – that
are typically employed are social support, sharing and comparison, reward points and credits, praise, persuasive images and
messages, suggestion and advice, reminders and alerts, as well as
collaboration and cooperation with peers (Orji & Moffatt, 2018).
Overall, persuasive technology has proved to be effective (Orji &
Moffatt, 2018), however, it raises ethical concerns (see also
Section 4.2.3). For instance, the widespread manipulation of
humans by autonomous intelligent agents could result in loss
of human free agency and autonomy and even to deceptions of
humans (e.g. agents pretending to be another human being)
(The IEEE Global Initiative on Ethics of Autonomous and
Intelligent Systems, 2019).
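For illustration only, the sketch below (with invented thresholds, goals, and messages; no specific product is implied) shows how a few of these strategies, namely reminders, praise, and suggestion, might be wired into a simple rule-based nudge engine driven by tracked step counts.

```python
# Illustrative rule-based persuasive nudge engine. Thresholds, goals,
# and messages are invented for the sketch; no specific product implied.

DAILY_GOAL = 8000  # steps

def choose_nudge(steps_today: int, hour: int) -> str:
    progress = steps_today / DAILY_GOAL
    if progress >= 1.0:
        return "Goal reached - great job today!"             # praise / reward
    if hour >= 18 and progress < 0.5:
        return "An evening walk could still get you there."  # suggestion
    if hour >= 12 and progress < 0.25:
        return f"Reminder: you're at {progress:.0%} of today's goal."
    return ""  # stay quiet: persuasion should not become invasive

for steps, hour in [(8200, 20), (3100, 19), (1500, 13), (4000, 9)]:
    print(f"{steps:>5} steps at {hour:02d}:00 -> {choose_nudge(steps, hour)!r}")
```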
The possibility for self-tracking – that is, of “monitoring and
recording and often measuring elements of an individual’s behaviors or bodily functions” (Lupton, 2016, p. 2) – as it is offered by
contemporary devices (including PMDs) has recently constituted
a popular topic for discussion and debate.

Table 4. Privacy, ethics and security concerns as they appear in different technological domains.

Fundamental privacy concerns:
● Solitude: the right to be alone without disturbances
● Anonymity: the right to have no public personal identity
● Intimacy: the right not to be monitored
● Reserve: the right to control one’s personal information, including its dissemination

HCI research:
● Research in public spaces with the contradicting requirements of informed consent and observing original (uninfluenced) user behavior
● Involvement of vulnerable user populations
● Research using online data

Online Social Networks:
● Selling of data
● Targeted marketing
● Reduced user caution regarding privacy, due to habituation of OSNs, incomplete information, and cognitive bias

Healthcare technologies:
● Persuasive technology used for questionable objectives
● Technology recipients include vulnerable user groups (e.g. older persons, children, patients)
● Use of social media to harness patient-reported data
● Data accuracy and integrity
● Accessibility and access for all
● Autonomy
● Beneficence and non-maleficence
● Dignity
● No discrimination
● Free and fully informed consent
● Justice
● Safety
● Loss of human contact
● Legitimacy of purpose
● Liability, as users become active participants in the delivery of healthcare

Virtual Reality:
● Over-attachment to virtual agents
● Hostile behavior in the virtual environment
● Potentially obscure and dangerous intentions of the VR environment creator
● Freedom of the individual vs. impact of technology on one’s free will
● Reshaping of social interactions
● Altered perception of reality and reality dissonance

IoT and Big Data:
● Data about people in the virtual world are complemented by and combined with real world data
● Data anonymity is further threatened by the connectedness of everyday things (e.g. identification of personal devices, huge databases with identification data)
● Privacy threats in the case of inventory attacks or in lifecycle transitions of devices that carry personal information
● Possibility of advanced profiling through correlations with other profiles and data
● Privacy and identification threats realized by linking different systems and combining data
● Automated decision making may lead to discrimination and narrowing of choices
● Potential for reinforcement of existing inequalities for vulnerable user groups
● Malevolent attempts and misleading offers towards vulnerable individuals

Biometrics:
● Violation of one’s right to control the use and disposition of one’s body

Intelligent environments:
● Attribution of new implicitly derived properties to the individual
● Use of data for monitoring
● Misinterpretation of data
● Applying persuasion techniques without the user’s awareness
● Design trade-off: smartness vs. privacy
● Development of an ethical system of privacy rules
● Responsibility
● Ethical decision making
● Transparency, featuring a balance in the level of details

Cybersecurity:
● The high number of interconnected devices raises scalability issues making traditional security countermeasures inapplicable
● Most commercial devices feature limited on-board security settings
● Malicious attacks against sensors and actuators can threaten physical security and jeopardize trust
● The human agent is the main weak point in cybersecurity

The attributes tracked
may include body state (e.g. physical and physiological), psychological state, activities (e.g. exercise, diet, and sleep), social interactions, and environmental and property states (Oh & Lee, 2015).
Self-trackers vary in their approaches, as some of them may collect
data on some dimensions of their lives and only for a short time,
while others may do so for a multitude of phenomena and for long
time periods (Lupton, 2016). Key UX issues for the adoption of
self-tracking have been found to be data controllability, data
integration and accuracy, data visualization, input complexity,
privacy, aesthetics, and engagement (Oh & Lee, 2015).
Self-tracking provides the potential to actively involve individuals in the management of their health and to generate data
that can benefit clinical decision making and research, resulting
in improved overall health and greater self-knowledge (Sharon,
2017). PMDs and self-tracking constitute two main sources of
small data (rich temporal data from a single person), which
have the potential to become more effective when combined
and thus lead to more accurate predictions (Dodge & Estrin,
2019). However, integrated analysis and tools to achieve it are
still in early stages and constitute a near future goal to achieve,
overcoming the current isolation of data analytics that are
based on a single data source.
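A minimal sketch of such integrated “small data” analysis (synthetic values; standard-library Python only) might align two per-person streams, wearable step counts and phone-logged sleep, by day and test whether they co-vary:

```python
# Illustrative sketch: fusing two single-person "small data" streams
# (steps from a wearable, sleep hours from a phone app) by date and
# computing their Pearson correlation. All values are synthetic.
from math import sqrt

steps = {"2024-05-01": 9100, "2024-05-02": 4300, "2024-05-03": 7800,
         "2024-05-04": 12000, "2024-05-05": 5100}
sleep = {"2024-05-01": 7.5, "2024-05-02": 5.9, "2024-05-03": 7.1,
         "2024-05-04": 8.2, "2024-05-05": 6.0}

days = sorted(steps.keys() & sleep.keys())   # align the two sources
xs = [steps[d] for d in days]
ys = [sleep[d] for d in days]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"steps-sleep correlation over {len(days)} days: {pearson(xs, ys):+.2f}")
```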
On the other hand, the digitization and automation of self-tracking entail privacy threats and ethical concerns regarding
the usage of the data generated (Lupton, 2017; Lynch &
Farrington, 2017). In addition, from a sociological perspective,
a number of questions arise, including how the concepts of the
body, self, social relationships, and behaviors are reconfigured,
and what are the implications for data politics, data practices and
the digital data economy, as well as what are the power inequalities inherent in self-tracking cultures (Lupton, 2017).
Self-tracking is strongly linked with the concept of quantified bodies, constructing dynamic bodies that produce data
and are knowable through specific forms of data, constituting
sites of possible intervention, and introducing new risks
(Lynch & Farrington, 2017). Critiques to the self-tracking
movement include that it opens the path for disengagement
of the state from its responsibility for citizens’ health, that
humans are under surveillance and discipline, that the scientific accuracy and objectivity of self-tracking activities are questionable, and that the acquired metrics are simple numbers
that cannot represent the richness and complexity of human
nature (Sharon, 2017). All things considered, it is evident that
developing self-tracking technologies is much more than
a technical task, and hence, a multi-disciplinary approach
should be followed that involves software developers, interface
designers, clinicians, and behavioral scientists (Piwek et al.,
2016).
5.2.2. Serious games for health
A technological intervention that has been used in the context of
health is serious games, that is, games used to drive health-related
outcomes (Johnson et al., 2016). There are various types of serious
games for health, including games for physical fitness (exergaming), education in self-healthcare, distraction therapy (e.g. to help
individuals with chronic illness to deal with pain), recovery and
rehabilitation, training and simulation, and cognitive functioning
support (Susi, Johannesson, & Backlund, 2007). An important
advantage of using serious games for health purposes is the ability
of games to motivate, and as a result they constitute a good way to
influence users and keep them engaged in health behavior change
(Johnson et al., 2016).
Despite their potential, it has been observed that the broad
adoption of health games is difficult to achieve. A considerable
challenge faced is that their development involves high design
complexity and multi-disciplinary teams, ideally including users
in the process, all of which slow down implementation (Fleming et al., 2017). At the same time, user expectations
regarding gaming evolve rapidly (Fleming et al., 2017).
Therefore, a challenge for serious games approaches is to keep
up with user expectations, which are formed through users’
experience with the entertainment industry. Nevertheless, trying
to “compete” with professional, big studio entertainment games
induces additional costs that cannot be practically compensated,
as there is no developed market in the field (Johnson et al., 2016).
An additional concern refers to the evaluation of serious
games with target users. In particular, the majority of evaluation
efforts mainly focus on the usability of the designed games and
not on the actual long-term impact of a game on the individual
(Kostkova, 2015). Although such long-term evaluation efforts
are costlier and more demanding, they are crucial for the evolution
and user acceptance of health games. IoT and big data are
technological advancements that have the potential to assist
towards the long-term evaluation and validation of these
games’ impact. Hence, future developments in the field will
feature “integrated gaming”, fusing data from games and social
networks with personal data, and providing feedback to diagnostic systems (McCallum, 2012). Obviously, all the issues that
have been previously discussed regarding ethics and privacy are
of paramount importance in this context.
5.2.3. Ambient Assisted Living
A well-known domain pioneering the support of a person’s living conditions, aimed at older and disabled users, is Ambient Assisted Living (AAL). AAL refers to the use of ICT in a person’s living environment to improve their quality of life and enable them to stay
independent and active for as long as possible (Moschetti,
Fiorini, Aquilano, Cavallo, & Dario, 2014; see also Section 6 for
a discussion on technologies to support the disabled and the
aging population). AAL, which is rooted in Assistive
Technologies and “Design for All” approaches (Pieper, Antona,
& Cortés, 2011) and has emerged as a technological response to
the phenomenon of population aging, benefits from the AmI
computing paradigm to provide intelligent, unobtrusive, and
ubiquitous assistance (Blackman et al., 2016).
The main solutions conceived to support older people in need
of care fall into three main AAL service areas: prevention; compensation and support; as well as independent and active living.
Indicative services include the prevention of early degeneration of
cognitive abilities, promotion of healthy living lifestyle, management of chronic diseases, fall prevention, management of daily
activities, maintaining social contacts, and having fun (Blackman
et al., 2016; Moschetti et al., 2014). Such environments can often be
equipped with robotic systems for assisting older adults in daily
activities, such as cleaning, picking up things, as well as in their
mobility. However, they can also contribute to the social, emotional, and relational aspects of older adults’ lives (Breazeal,
Ostrowski, Singh, & Park, 2019). Future efforts should focus on
person-centered care, by improving personalization of care, and
considering different needs, expectations, and preferences of individuals (Kachouie, Sedighadeli, Khosla, & Chu, 2014).
Given the technical capabilities of AAL environments, and
their potential for recording personal and sensitive data implicitly and explicitly, privacy and ethics (see also Section 4) are
dominant concerns. Privacy and confidentiality should be
safeguarded, while technology should not replace human
care, resulting in older adults’ isolation (Rashidi &
Mihailidis, 2013), nor should it lead to abuse and violation
of human rights (e.g. excessive control by caregivers).
5.2.4. Intelligence in healthcare
With the advent of IoT and environment intelligence, it has
been foreseen that the traditional professional-centric model
of healthcare will transform into a distributed healthcare
system, where the individual becomes an active partner in
the care process (Arnrich et al., 2010). Intelligent environments have the technological infrastructure and ability to
support this transformation. In particular, they are substantially adept at discovering patterns, detecting anomalies and
deviations from daily routines (e.g. indicating an imminent
health problem or a crisis that needs to be attended to), as
well as planning and scheduling (Acampora, Cook, Rashidi, &
Vasilakos, 2013; Consel & Kaye, 2019). An important potential pitfall refers to overreliance on the technology, which may
tamper with individuals’ self-confidence to manage their life
and result in preliminary loss of abilities (Acampora et al.,
2013). Moreover, overreliance can also lead to patient isolation and loss of personal care (Acampora et al., 2013; Andrade
et al., 2014) (see also Section 4.2.3).
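As a concrete illustration of the routine-deviation detection described above, the Python sketch below z-scores a resident's activity durations for one day against a per-activity history and flags large deviations. The activity names, durations, and threshold are hypothetical.

```python
from statistics import mean, stdev

# Illustrative sketch: flag deviations from a resident's daily routine by
# z-scoring today's activity durations against a per-activity history.
# Activity names, durations, and the threshold are hypothetical.

history = {  # minutes per day over previous days (hypothetical)
    "sleep":   [430, 445, 440, 450, 435, 442, 438],
    "kitchen": [95, 100, 90, 105, 98, 92, 101],
    "outdoor": [60, 55, 70, 65, 58, 62, 66],
}
today = {"sleep": 445, "kitchen": 20, "outdoor": 5}  # hypothetical readings

def routine_anomalies(history, today, z_threshold=3.0):
    alerts = []
    for activity, minutes in today.items():
        mu, sigma = mean(history[activity]), stdev(history[activity])
        z = (minutes - mu) / sigma if sigma else 0.0
        if abs(z) > z_threshold:
            alerts.append((activity, round(z, 1)))
    return alerts

# Large negative z-scores for "kitchen" and "outdoor" would prompt
# a caregiver check rather than an automatic intervention.
print(routine_anomalies(history, today))
```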
Intelligent environments for healthcare also extend to environments for therapy and rehabilitation, as well as smart hospitals that support medical staff (Acampora et al.,
2013). Intelligence may include AI techniques, which are used
in healthcare and medicine. Typically, the AI methods
employed are machine learning, expert systems, and knowledge representation techniques which are mainly exploited in
the context of diagnosis, prognosis, and medical training
(Acampora et al., 2013). However, new potential emerges through the evolution of deep learning methods, which achieve high accuracy and performance comparable to that of experienced physicians (Jiang et al., 2017). This evolution does not imply that technology will substitute for humans; instead, it suggests that technology can effectively support them by taking care of trivial tasks, empowering humans towards enhanced performance and the attainment of higher goals. Robots and autonomous agents can also constitute a component of an intelligent
environment in the context of healthcare. In brief, robots for
healthcare can be classified as inside the body (e.g. surgical
robotics), on the body (e.g. prosthetics), and outside the body
(e.g. robots for physical task assistance, robots as patient
simulators, or care robots) (Riek, 2017). Two challenging
issues for the adoption of robotics in healthcare are their
acceptance by end-users, as well as their high cost, which is
still prohibitive (Andrade et al., 2014). User acceptance in the
case of robots is determined not only by usability factors, but
also by the robot’s form and function (Riek, 2017). In the case
of intelligence and robots used to support clinical practices,
the medical staff (physicians and nurses) also confronts the
danger of becoming over-dependent on technology. This
highlights again the need for addressing ethical concerns
and designing technological environments that bring to the
forefront human values and autonomy.
Future research in the field should focus on pervasive, continuous and reliable long-term sensing and monitoring (Arnrich
et al., 2010). Safety and reliability are also key requirements to
address in the context of healthcare robotics (Riek, 2017). This
will result in reliable and trustworthy systems that can be
accepted by both patients and doctors. In addition, research should focus on developing new, and evolving existing, design and evaluation methods for ubiquitous patient-centric technologies (Arnrich et al., 2010), where evaluation should move
beyond classical HCI aspects to healthcare aspects, as well as to
the long-term impact of these technologies on patients’ quality of
life. Moreover, the assessment of the clinical effectiveness of
clinical intelligent systems and robots constitutes a key factor
for their adoption in the healthcare domain (Riek, 2017).
Predictions for the future of the field indicate that "health-enabling and ambient assistive technologies will not even be
recognized as such; they will be an integrated part of the health
system” (Haux et al., 2016). By using big data, IoT, and AI, it will
be possible to collect data for a wide number of medical issues
stemming from a wide variety of contexts and train AI models
that will be able to predict, diagnose, and even suggest appropriate treatments. In this respect, two main issues arise: data
privacy and ethics, as well as the assessment of the performance
of intelligent environments. With regard to the latter, it is crucial
to identify the impact that any misjudgments of the technological environment may have on the individual, and to realize that
any technological failures will no longer be as “innocent” as in
the past. They will entail risks not only for users’ work or leisure
activities, but also for their health, well-being, and even their life.
This realization opens a breadth of discussions regarding the
education and training of designers and software engineers, the
development of an ethical code, as well as the establishment of
novel evaluation and testing procedures.
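To illustrate why the assessment of such systems must go beyond overall accuracy, the short Python sketch below computes sensitivity and specificity for a hypothetical diagnostic classifier: false negatives (missed cases) are precisely the kind of no-longer-"innocent" failure discussed above. All labels and predictions are placeholder data, not results from any real system.

```python
# Minimal sketch: for a diagnostic classifier, false negatives carry direct
# health risk, so sensitivity must be reported alongside overall accuracy.
# Labels and predictions are hypothetical placeholders.

def confusion(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

y_true = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]   # hypothetical ground truth
y_pred = [1, 0, 0, 0, 1, 0, 1, 0, 1, 0]   # hypothetical model output

tp, tn, fp, fn = confusion(y_true, y_pred)
sensitivity = tp / (tp + fn)   # missed cases (fn) are the riskiest errors
specificity = tn / (tn + fp)
accuracy = (tp + tn) / len(y_true)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"accuracy={accuracy:.2f}")
```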
5.2.5. Well-being and eudaimonia
Beyond physical health, self-tracking and ICT in general have
been used to pursue mental health (Thieme, Wallace, Meyer, &
Olivier, 2015) and mood regulation (Desmet, 2015), in the wider
context of technology for user eudaimonia and happiness.
Aligned with this approach is the concept of positive computing,
which aims to develop technologies to support well-being and
human potential for individuals with diagnosed mental health
conditions, but also for anyone through preventative mental
health support, strengthening mental health through positive
behavior change and self-reflection, and through increasing
empathy towards and awareness of mental health (Wilson,
Draper, Brereton, & Johnson, 2017). Assisted by the advancement
that UX has brought by encompassing emotional and more
pleasure-oriented aspects, and by positive psychology that advocates positive human development, positive design offers the
framework for pursuing the design of technologies for lasting
wellbeing by targeting eudemonic experiences of products, services, and of the activities they enable (Pohlmeyer, 2013).
A major concern in the direction of technology for well-being and eudaimonia is that – at least for the time being – this partnership between positive psychology and interactive technology remains mainly at a conceptual level (Diefenbach, 2018). In the new era, HCI will need to change focus, shifting from user experience to user eudaimonia, studying how technology ensures users' well-being. This is a challenging undertaking, principally due to the inherent difficulty entailed in defining and measuring these concepts (Gilhooly, Gilhooly, & Jones, 2009).
In addition, in contrast to physical well-being, human eudaimonia is entirely subjective and therefore a challenging target to
analyze, model, and address through technology. Reflecting
back to topics discussed in the previous three challenges
(Section 2, Section 3, and Section 4), it is evident that in order
to effectively and efficiently support people in their pursuit of
happiness, well-being, and eudaimonia, a harmonious symbiosis
of technology and humans is required, featuring humane intelligence and respecting human rights.
In addition, despite advances in the field of technology for well-being, an open issue that needs to be resolved in order to develop technologies that demonstrably advance the well-being of humanity is the lack of concise and useful indicators to measure such advancements (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, 2019). Such
metrics should take into account not only individual and community well-being, but also environmental and planet goodness
(see also Section 7), as well as human rights, capabilities, and fair
labor, as these circumstances among others constitute the basis for
human well-being (The IEEE Global Initiative on Ethics of
Autonomous and Intelligent Systems, 2019).
5.3. Summary of emerging requirements
Overall, technology in the context of healthcare is already widely
used, for instance through personal medical and self-tracking
devices, serious games, and AAL environments. More advanced
technologies, such as AAL and intelligent environments, featuring AI and robotics, have already seen their application in the
field, yet there are still open research issues. Beyond physical health, technology can also be used to promote human well-being, encompassing not only health aspects but also psychological well-being through the fulfilment of life goals, by helping to prevent, reduce, and manage stress, depression, and psychiatric illness, as well as by fostering purposefulness, self-confidence, and positive feelings. Table 5 summarizes the challenges entailed, as
they have been highlighted throughout this section.
6. Accessibility and universal access
6.1. Definitions and rationale
Accessibility of Information Technology (IT) is a topic that has
actively engaged the HCI research community, and which aims
to ensure that IT applications and services can be used on an
equal basis by users with disabilities and older users. First efforts
in the field pursued accessibility via a posteriori adaptation, that
is, by employing assistive technologies to provide access to
applications that were originally designed and developed for
non-disabled users (Emiliani & Stephanidis, 2005). The reactive
nature of these approaches has been criticized for its failure to keep up with the fast pace at which technology evolves, for its cost-ineffectiveness, as well as for the fact that it cannot always
ensure equal access without functionality loss (Stephanidis &
Emiliani, 1999).
This criticism stimulated the conceptualization of theories,
methodologies, and tools of a proactive and more generic nature
that could more accurately adapt to the increased interactivity of
new technologies. Universal access advocates the right of all
citizens, regardless of age and disability, to obtain equitable
access to, and maintain effective interaction with, IT information
resources and artifacts (Stephanidis et al., 1998). In the context of
universal access, design for all has been defined as a general
framework catering for conscious and systematic efforts to proactively apply principles, methods, and tools in order to develop IT products and services that are accessible and usable by all citizens, thus avoiding the need for a posteriori adaptation or specialized
design. Indicative of the importance and interest that this area
has received is the number of relevant terms and approaches
used to describe the notion of proactive solutions to accessibility, including inclusive design, barrier-free design, universal design, and accessible design (Persson, Åhman, Yngling, &
Gulliksen, 2015).
Two decades later, and in view of the technologically enriched environments that are already pervasive and the intelligent environments that are about to materialize, several questions arise. Have the aforementioned approaches been fruitful? How can the knowledge and experience acquired from these past approaches be applied to the new technological environments? What promises and challenges do these new environments bring with regard to universal access? This section attempts to provide insights into these questions.
6.2. Main research issues and state of the art
6.2.1. Adoption of proactive approaches
Accessibility has been established in national legislation and is also addressed by international standardization organizations, which signifies that, on a political level, it is recognized as important not only for people with disabilities; anyone can benefit from universal access approaches, as it is now clear that one's abilities are constantly changing (Persson et al., 2015). Yet it is a fact that industry – at large – has not embraced proactive approaches.
A potential reason, identified early on, is that although the total number of older or disabled persons is large, each individual disability or impairment area represents only a small portion of the population; it would therefore be impractical, if not impossible, to design everything so that it is accessible to everyone regardless of their limitations (Vanderheiden, 1990). On the other hand, the range of human abilities and the range of situations or limitations that users may find themselves in are too large; therefore, products can only aim to be as flexible as is commercially practical (Vanderheiden, 2000).
Proactive approaches do not advocate a “one-size-fits-all”
approach; instead they aim to promote accessibility for everyone through the adaptation of the design to each user. As
a result, many companies perceive universal design as an extra
Table 5. Challenges for technology supporting well-being, health, and eudaimonia.

Personal medical devices and self-tracking:
● Acceptance by the medical community
● Reliability of devices
● High potential for misinterpretation when patients without medical training attempt to interpret symptoms based on PMD data
● Integrated analytics, effectively combining data from various sources
● Ethical concerns regarding persuasive strategies
● Loss of human free agency and autonomy
● Privacy and autonomy threats (humans are under surveillance and discipline)
● Ethical concerns regarding the usage of the generated data
● Sociological concerns regarding quantified bodies
● Scientific accuracy and objectivity of self-tracking activities
● Inadequacy of numbers to represent the complexity of human nature

Serious games for health:
● Limited adoption of serious games for health
● High design and development complexity, leading to slow implementation
● Demanding user expectations, shaped by the entertainment industry
● High cost vs. an undeveloped market
● Evaluation of the actual impact of serious games on individuals' health

Ambient Assisted Living:
● Privacy and confidentiality
● Replacement of human care by technology
● Isolation of older adults and patients
● Excessive control by caregivers, leading to violations of human rights

Intelligence in healthcare:
● Over-reliance on the intelligent environment, leading to premature loss of abilities
● Patient isolation and loss of personal care
● Over-dependence of the medical staff on technology
● Safety and reliability
● Accuracy
● Assessment of the clinical effectiveness of medical technologies
● Long-term evaluation of the impact of these technologies on patients' quality of life
● New design and evaluation methodologies for patient-centric design
● New development and testing procedures ensuring the accuracy of technologies and eliminating risks to patients' well-being and life

Well-being and eudaimonia:
● Difficulty in defining and measuring well-being and eudaimonia
● Subjective nature of these concepts, impeding their analysis and modelling
● Broad scope of the concepts, extending beyond the individual to community well-being and planet goodness, as well as to other concepts such as human rights and fair labor
cost or an extra feature, a perception that is, however, not accurate (Meiselwitz, Wentz, & Lazar, 2010). On the contrary,
by adopting a universal access approach, companies could
achieve a number of business-oriented objectives, including
to increase market share, take market leadership, enter new
product markets, achieve technology leadership, and improve
customer satisfaction (Dong, Keates, & Clarkson, 2004).
Overall, proactive approaches do not propose the elimination
of assistive technologies. Such technologies have always been
a good solution to many problems of individuals with disabilities. Besides representing a societal need, assistive technologies
constitute a notable niche market opportunity (Vanderheiden,
2000), and according to current market predictions assistive
technologies are even expected to experience some growth
(Rouse & McBride, 2019). Nevertheless, as technology evolves
and taking into account the frequent comorbidity of disabilities
(especially in the case of older adults), proactive approaches can
become a more realistic and plausible solution in the near future
(Stephanidis & Emiliani, 1999).
6.2.2. Population aging
In view of the technologically enriched environments where all citizens will have to live, the approach of adapting already developed technologies as a solution to the problem of integrating people with disabilities will not be tenable (Emiliani, 2006). An important influencing factor is the rapid aging of the population, which means that a considerable proportion of the users of future technological environments will be older people, who – in a literal sense – perceive technology differently, due to functional limitations and age-related changes in cognitive processes (Charness & Boot, 2009; Czaja & Lee, 2007). Within this perspective, universal access in the new technologically enriched environments constitutes a challenge of renewed and increased interest, where open issues need to be promptly explored.
Moreover, the rapid aging of the population requires
appropriate approaches for “aging in place”, that is remaining
active in homes and communities for as long as possible,
avoiding residential care, and maintaining independence,
autonomy, and social connectedness (Pfeil, Arjan, &
Zaphiris, 2009; Wiles, Leibing, Guberman, Reeve, & Allen,
2012). AAL environments (see also Section 5.2.3) have the
potential to support independence and quality of life of older
adults (Blackman et al., 2016). Assisted living systems can also contribute to addressing emergency situations, which are expected to increase dramatically due to the aging of the population and the consequent rise of chronic diseases (Kleinberger, Becker, Ras, Holzinger, & Müller, 2007). At the same time, following a similar strategy, ambient assisted working can foster adaptations of the workplace, thus ensuring that the aging and disabled population remain active and participate in the workforce for as long as possible (Bühler, 2009). Besides living and working environments, it is necessary that public environments, and especially transportation systems, are revisited and redesigned in order to be age-friendly and age-ready (Coughlin & Brady, 2019). Overall, a major concern is how older users will be motivated to use ICT and how acceptable they will find it, factors which are both expected to change in the near future, as a new generation of technologically adept older users emerges (Vassli & Farshchian, 2018).
6.2.3. Accessibility in technologically enriched
environments
These new environments bring about several risks, yet they also
bear the potential to effectively address old and new accessibility
issues and requirements, due to their technological richness. In
particular, the abundance of interactive and distributed devices
can result in a relaxed and enjoyable interaction, employing
multimodal interfaces, thus providing for each user those interaction modes that are more natural and suitable (Burzagli,
Emiliani, & Gabbanini, 2007). Other benefits ensured by these
new environments include the possibility of task delegation to
the environment and its agents, which can reduce physical and
cognitive strain, as well as the breadth of applications and
services that will be available, addressing a wide variety of
domains that are critical for the disabled and older users.
Hence, a fundamental benefit of these environments is
that they will be able to further support independent living,
and provide higher quality of healthcare, aligned with the AAL
vision to solve problems caused by the aging of the population
(Stephanidis, Antona, & Grammenos, 2007).
Despite their inherent assets, several matters pertaining to universal access, mainly stemming from the environments' increased technological complexity, need to be resolved. For
instance, it has been identified that the use of “natural” interaction techniques, through spoken commands and gestures,
may actually “ban” users with disabilities and that spoken
communication from the environment, or complex messages
in large mural screens or in small wearable displays may be
prohibitive for some users (Abascal, De Castro, Lafuente, &
Cia, 2008). Moreover, the multitude of devices, applications,
and environment intelligence can impose high cognitive
demands on users, if not properly designed. This complexity
is further increased by the fact that technology “disappears” in
the environment and interaction becomes implicit (Streitz,
2007). Another cause of complexity refers to the potential of
the environment to make decisions and learn from users’
behavior. As already discussed in Section 2.2.1, intelligent
environments need to be transparent to their users, however
this constitutes a new accessibility challenge on its own, especially for older people and individuals with cognitive disabilities. Moreover, complexity is also induced by the increased
involvement and interaction with digital artifacts that will be
in abundance. In this context, the challenge for intelligent environments is not to be too interaction-intensive, despite
the fact that humans will be surrounded by a wide range of
computing devices of different functionality and scale.
Another important concern pertains to the different levels of
accessibility that need to be ensured: accessibility of the individual devices for their owners and potentially other users
with different requirements, accessibility of the environment
as a whole (devices, content, and functions provided), as well as the combined accessibility of the virtual and physical worlds. Finally, ethical issues (Section 4), as well as issues
related to the symbiosis of human and technology
(Section 2), are obviously of utmost significance.
6.2.4. Methods, techniques, and tools
Strategies followed for the development of Assistive
Technologies will no longer be appropriate in the context of
component-based, distributed technological environments
(Treviranus, Clark, Mitchell, & Vanderheiden, 2014). Taking
into account past knowledge and experience, as well as the
opportunities and difficulties inherent in technologically
enriched environments, it is evident that several research
directions need to be explored. Understanding the evolving
human needs and context of use, developing appropriate user
models (Casas et al., 2008), as well as advancing knowledge of
user requirements and of the appropriateness of the various
solutions for the different combinations of user and environment characteristics (Margetis, Antona, Ntoa, & Stephanidis, 2012) constitute paths that should be explored. The development of appropriate architectures, ready-to-use accessibility
solutions, and appropriate tools are also essential elements of
pursuing universal accessibility in technologically enriched
environments (Margetis et al., 2012; Smirek, Zimmermann,
& Ziegler, 2014). Automatic generation of accessible user
interfaces is another research direction worth exploring for
its potential to create accessible personalized interfaces
(Jordan & Vanderheiden, 2017). Finally, new evaluation
methodologies and tools need to be targeted, so that all the
aspects of a user’s experience (including accessibility) in such
environments are assessed, way beyond usability and UX
assessment in typical evaluation setups.
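As a toy illustration of the automatic generation of accessible user interfaces mentioned above, the Python sketch below derives interface settings from a simple user model via explicit rules. The profile attributes, rule set, and setting names are assumptions made for illustration, not a standard user-profile vocabulary.

```python
from dataclasses import dataclass

# Minimal sketch, under assumed profile attributes, of rule-based
# generation of interface settings from a user model. Attribute names,
# rules, and setting values are illustrative only.

@dataclass
class UserProfile:
    low_vision: bool = False
    motor_impairment: bool = False
    cognitive_load_sensitive: bool = False

def generate_ui_settings(profile: UserProfile) -> dict:
    settings = {"font_px": 16, "input": "touch", "layout": "standard",
                "output": "visual"}
    if profile.low_vision:
        settings.update(font_px=28, output="visual+speech")
    if profile.motor_impairment:
        settings.update(input="voice", layout="large-targets")
    if profile.cognitive_load_sensitive:
        settings.update(layout="simplified")  # fewer items per screen
    return settings

# usage: one profile, one adapted interface configuration
print(generate_ui_settings(UserProfile(low_vision=True, motor_impairment=True)))
```

Research systems in this direction typically replace such hand-written rules with learned or model-driven adaptations, but the underlying idea of mapping a user model to interface parameters remains the same.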
6.2.5. Universal access in future technological
environments
Forthcoming technological environments will not be simply
the smart home or workplace, but entire smart cities, such as
the “Humane, Sociable and Cooperative” hybrid cities envisioned by Streitz (2011). Taking into account that technology
will be a vital component of any daily activity and it will cater
for humans’ needs, prosperity, and well-being (see
Section 4.3), the stake for universal access is now much higher
than in the past.
Particular attention needs to be paid to any individual at
risk of exclusion, so that the new futures do not exclude,
isolate, or exploit anyone (Chatterton & Newmarch, 2017).
Research has identified several factors that have an impact on
digital inequality, including race and ethnicity, gender, and
socioeconomic status (Robinson et al., 2015). Other influential
factors include age, education, occupational status, health,
social connectedness, and availability of infrastructure (e.g. people in rural areas have lower levels of access to high-quality internet connections) (Van Deursen & Helsper,
2015). Further elaborating on the causes of the digital divide
is a subject of social science research and beyond the scope of
this paper. Whatever the reasons for which individuals may be at risk of exclusion, the challenge is that universal access becomes a matter of paramount importance in the near future, when access to technology will mean access not only to information but also to well-being and eudaimonia (see also Section 4.3).
6.3. Summary of emerging requirements
In summary, as HCI has always focused on the human, in the
new technology-augmented environments, efforts in the field
will be extended towards improving the quality of life of
various populations, including the disabled and older persons.
Accessibility and Universal Access are not new concepts; however, in view of the aging population and the constantly increasing technological complexity, they become not only timely, but also pivotal for the prosperity of future societies. Undoubtedly, the advent of intelligent environments entails universal access risks (see Table 6); nonetheless, it also offers new opportunities that should be exploited. What is certain, though, is that reactive approaches to accessibility will fail to address the inherent complexity and scalability requirements of future interactive environments. The requirement for more holistic approaches now becomes more prominent than ever, constituting a near-future research direction for the HCI field.
7. Learning and creativity
7.1. Definitions and rationale
As technologies continue to mature, new opportunities for
fostering individual growth through multimodal stimulation
of how humans learn and apply creativity will emerge. People
with diverse backgrounds, skills, and interests will be able to
collaborate to solve challenging problems, by cooperatively
learning and creating knowledge together. In this new era,
technology will support and promote new learning styles,
multimodal learning affordances, as well as lifelong learning.
This flexibility encourages, nurtures, and amplifies human
creative skills in fields such as arts, science, and design. At
the same time, the discussion of how technology should be
applied in the learning context has become timelier than ever,
expanding to issues such as privacy and ethics, learning theories and models, and pedagogical aspects.
Learning assisted, enhanced and mediated by technology is
not a new concept, and many types of technologies are already
used in schools today. Hardware includes computers, projectors,
interactive whiteboards, and smart mobile devices, while software involves office applications (e.g. word processors and presentation tools), online learning management systems, social
networking sites and applications, online encyclopedias and
search engines, communication software, as well as serious
games (De Gloria, Bellotti, & Berta, 2014; Ng, 2015; Spector,
Merrill, Elen, & Bishop, 2014). The contribution of ICT in
education supports the sharing of material and teaching tasks,
while also enhancing connectivity, cooperative work, and
expands experiential learning opportunities outside of the classroom (Augusto, 2009). In addition, it has been argued that
digital technologies: (i) support learning by increasing students’
motivation, providing demonstrations of the topics taught and
adapting to each student’s individual pace; (ii) contribute to the
development of the so-called twenty-first century skills, such as
Table 6. Accessibility and Universal Access challenges for technologically enriched environments.

Adoption of proactive approaches:
● Industry reluctance due to a number of reasons: niche market, commercial practicality of addressing a wide range of human abilities, and cost

Population aging:
● Users perceive (in a literal sense) technology differently
● Aging in place
● Increased incidence of emergency situations
● Active participation in the workforce
● Age-friendly and age-ready public environments and transportation systems

Accessibility in technologically enriched environments:
● Inadequacy of reactive approaches in the context of distributed technological environments
● Exploitation of the technological infrastructure and multimodality offered
● Natural interaction techniques may be prohibitive for some users
● High cognitive demands imposed on users due to the technological complexity of the environment, implicit interactions, and the need for AI transparency
● Different levels of accessibility: accessibility of systems for their owners and potentially other users with different requirements, accessibility of the environment as a whole (devices, content, and functions provided), as well as the combined accessibility of the virtual and physical worlds

Methods, techniques, and tools:
● Advancement of knowledge regarding human needs and context of use in technologically enriched environments
● Development of appropriate user models
● Analysis and classification of the appropriateness of the various solutions for the different combinations of user and environment characteristics
● Development of appropriate architectures for universal accessibility in technologically enriched environments
● Development of ready-to-use accessibility solutions
● Automatic generation of accessible user interfaces
● New evaluation methodologies

Universal Access in future technological environments:
● Technology will be a vital component of any daily activity
● Intelligent environments will cater for humans' needs, prosperity, and well-being
● Digital inequality and lack of universal access will pertain not only to access to information, but also to humans' health, well-being, and eudaimonia
communication, collaboration, problem-solving, critical and
creative thinking; and (iii) foster the development of digital
citizenship and lifelong learning (Ng, 2015).
Research has also debated customary educational approaches and advocated that technology can now support varying learning styles and approaches, whether situated learning,
authentic learning, or ubiquitous learning. Situated learning
argues that learning occurs from our experience of participating in daily life and is shaped by the activity, context and
culture in which it occurs (Lave & Wenger, 1991). Authentic learning requires students to address real-world problems in
contexts that mimic the work of professionals, and is typically
associated with the apprenticeship model in traditional educational paradigms (Burden & Kearney, 2016). Ubiquitous
learning, empowered by mobile technology, stands for learning anywhere and at any time (Hwang & Tsai, 2011). All these
approaches promote learning beyond the typical classroom
boundaries, a potential that can liberate education towards
new directions. After all, if civilization were to invent higher
education today, rather than centuries ago, it is not at all
certain if campuses would be dominated by lecture rooms,
libraries, and labs, or if learning would be organized in fixed
time blocks (Dede, 2005).
This section discusses the main issues regarding the use of
technology for learning and creativity, as they are shaped by
the advent of new technologies and are expected to be influenced by intelligent environments. Its focus is to highlight the
unique contributions of each technology to learning and to
discuss challenges and implications from the use of technology in learning.
7.2. Main research issues and state of the art
7.2.1. A new generation of learners
Besides technology advancements, it has been claimed that students' interaction with learning materials has changed. Neo-millennials, digital natives, or Generation Z – a generation whose world was shaped by the Internet and who are fluent technology users (Seemiller & Grace, 2016) – show different learning styles from those of previous learners. For neo-millennials,
learning is characterized by seeking and synthesizing, rather
than by assimilating a single “validated” source of knowledge
(e.g. books, lectures), multitasking is the norm, and personalization is pursued (Dede, 2005). Other attributes of neo-millennials
are that they prefer active learning based on experience (real and
simulated), and co-designing of learning experiences (Dede,
2005). But there is also the danger that students adopt a "copy and paste" attitude, exploiting the wide range of information available on the web (Comas, Sureda, & Santos, 2006), instead of actually creating new content or drawing new inferences. There are two aspects to this: (i) students might use wrong information, because not all sources on the web are equally trustworthy; and (ii) it may encourage a tendency towards plagiarism, a problem so widespread that special software has had to be developed to detect it.
Although it seems evident that technological pervasiveness has influenced, and will continue to influence, humans on multiple levels, including learning styles and behaviors, we should be careful with our generalizations. The aforementioned characteristics of young learners hold true for some of them (whether few or many), yet they are not valid for every learner (Bennett, Maton, & Kervin, 2008). As a result, technological
developments in the field of learning should not adopt any
such generalizations and assume that all new learners are
technologically adept or that they exhibit specific behaviors
and preferences. New technologies can certainly accommodate new and emerging learning styles, providing additional
functionalities and tools to support tutors and learners.
However, they should keep the human at the center and
study the various needs and requirements of each individual,
avoiding the assumption that there is an “average learner”;
instead they should support personalized learning and adaptation to each individual learner. Therefore, there is a need to
understand the influences of human factors in order to design
digital learning environments that best suit each and every
learner (Chen & Wang, 2019).
7.2.2. Extended reality
Extended reality technologies (i.e., VR, AR, and Mixed Reality
(MR)) offer the opportunity for blending the digital world with
the physical, in a multimodal interactive fashion that can personalize and elevate the learner’s experience (Kidd & Crompton,
2016) (see also Section 3.2.5 for a discussion of the interactionrelated challenges in VR and AR environments). Several benefits
have been reported for the use of these technologies in learning.
An important advantage refers to their power to visualize complex and abstract concepts (e.g. airflow or magnetic fields, chemical reactions), as well as to support hands-on experiences with
places that could not be visited (e.g. a historical site) (Akçayır,
Akçayır, Pektaş, & Ocak, 2016; Lee & Wong, 2008). Additionally,
as an interactive technology medium, extended reality has the
potential to create stimulating experiences and increase learners’
motivation, support collaboration between students, and foster
student creativity and imagination (Yuen, Yaoyuneyong, &
Johnson, 2011).
The use of extended reality technologies in learning may also interfere with learning and pedagogical goals, for example by diverting students' attention to the technology itself, or by altering learning to accommodate technology limitations
(FitzGerald et al., 2013). An additional concern refers to the
ease of use of the extended reality environment for learners
and tutors. For example, a common difficulty that learners
encounter in VR environments is navigation using a 3D interface, while creating educational content in an extended reality
environment can be a cumbersome task (Huang, Rauch, & Liaw,
2010; Yuen et al., 2011). Hence, a challenging aspect for the
design of these technologies is how to employ them in order to
appropriately support the entire educational process, from the
design of the content to the delivery of the learning experience.
Evidently, this requires a multi-disciplinary approach and the
active engagement of tutors, not merely as providers of educational content, but as users of these technologies and knowledge
conveyors.
7.2.3. Mobile learning
Mobile devices are often used in the context of AR applications, but also in a wider learning context, promoting situated,
authentic, and ubiquitous learning. Mobile devices’ suitability
to the aforementioned learning approaches mainly stems from their characteristics, namely portability, connectedness and social interactivity, context sensitivity, and
individuality – as personal devices can fully support customization and personalization (Naismith, Lonsdale, Vavoula, &
Sharples, 2004).
Users are already familiar with mobile devices; hence, challenges mostly refer to how mobile technologies can be
effectively and efficiently employed in learning. Incorporating
mobile technologies in education raises concerns as to how
learning activities should be designed by instructors and how
educators’ and learners’ thinking is reconceptualized when
mobile devices are used seamlessly across the “traditional
boundaries between formal and informal contexts, virtual
and physical worlds and planned and emergent spaces”
(Burden & Kearney, 2016). A major concern is that although
several qualitative studies report positive results regarding mobile learning and its relevance to new learning approaches, there is a notable lack of quantitative reports on
long-term impact (Pimmer, Mateescu, & Gröhbiel, 2016). In
any case, the full potential of mobile learning will be reached
when technological developments intersect with ongoing educational trends, and when learning experiences are truly
mobile (Pegrum, 2016). As Naismith et al. (2004) put it, "the success
of learning and teaching with mobile technologies will be
measured by how seamlessly it weaves itself into our daily
lives, with the greatest success paradoxically occurring at the
point where we don’t recognize it as learning at all” (p. 36).
7.2.4. Serious games for learning
Several studies have discussed the positive impact of games in
the learning context. Positive effects include the high engagement of players, the presentation of complex notions and experiences, the collaboration among learners, the facilitation of deep
understanding, as well as the connection with young learners’
dispositions and orientations (Beavis, 2017). Other benefits
include the enhanced cognitive, skill-based, and affective outcomes that are achieved through games used for education and
training (Wilson et al., 2009). Despite such claims, however, real evidence from long-term studies on the educational impact of games is scarce (Bellotti, Berta, & De Gloria, 2010).
An important concern in the field is how to design for and
how to measure fun in the user experience of serious games
(Raybourn & Bos, 2005). The right balance between seriousness and gamification needs to be found: if too much educational content prevails, learners’ motivation may decrease; on
the contrary, if too much fun is incorporated, this can undermine learning (Gros, 2017). There are several questions that
remain to be answered regarding serious games for education,
which all highlight the need for deepening our understanding
in the field. In this respect, design processes, metrics and
evaluation tools focusing on the assessment of students’ learning progress need to be developed (Bellotti et al., 2010). In
these activities, all stakeholders (including teachers, policymakers, and parents, as well as commercial game companies
and educational researchers) should be involved and collaborate, so as to ensure that game objectives and learning objectives are aligned (Young et al., 2012).
7.2.5. Intelligent environments
The technological fabric of smart environments, AI, and big data has made possible the realization of scenarios that would have seemed fictional a few decades ago. Big data, and in particular
the availability of very large data sets from students’ interactions with educational software and online learning, constitutes the basis for the evolution of two research fields, namely
learning analytics and educational data mining (Konomi et al.,
2018; Siemens & Baker, 2012). The potential application areas
for these fields include supporting instructors through appropriate feedback, providing recommendations to students, predicting students’ performance, detecting undesirable student
behaviors, grouping students (according to personal characteristics or preferences), offering pedagogical support, or
developing concept maps (Romero & Ventura, 2010).
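As a minimal illustration of the learning-analytics applications listed above (e.g. predicting student performance and flagging undesirable patterns), the Python sketch below scores hypothetical weekly aggregates from a learning platform. The feature names, weights, and threshold are invented for illustration; a real system would be trained and validated on actual course data.

```python
# Illustrative learning-analytics sketch: flag potentially at-risk students
# from interaction-log aggregates. Features, weights, and the threshold are
# hypothetical assumptions, not results from any deployed system.

students = [  # hypothetical weekly aggregates from a learning platform
    {"id": "s1", "logins": 9, "quiz_avg": 0.82, "forum_posts": 4},
    {"id": "s2", "logins": 1, "quiz_avg": 0.35, "forum_posts": 0},
    {"id": "s3", "logins": 5, "quiz_avg": 0.58, "forum_posts": 1},
]

def risk_score(s):
    # lower engagement and performance -> higher risk (weights assumed)
    return (0.4 * (1 - min(s["logins"], 10) / 10)
            + 0.5 * (1 - s["quiz_avg"])
            + 0.1 * (1 - min(s["forum_posts"], 5) / 5))

at_risk = [s["id"] for s in students if risk_score(s) > 0.5]
print(at_risk)  # ['s2'] -> surfaced to the instructor as a prompt, not a verdict
```

Even in this toy form, the sketch makes the governance questions concrete: the flag is only as trustworthy as the data and weights behind it, which is why such outputs should support, rather than replace, instructor judgment.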
Advances in analytics can be used in the context of
Intelligent Tutoring Systems (ITS) or smart learning environments. ITSs provide individualized instruction and have the potential for significantly higher achievement outcomes than other modes of instruction (except for small-group and individual human tutoring, with which results were comparable) (Ma, Adesope, Nesbit, & Liu, 2014).
Smart environments for learning may well support context-aware and ubiquitous learning, by identifying learners' context
and providing integrated, interoperable, pervasive, and seamless
learning experiences (Mikulecký, 2012). In such environments,
learning can be viewed as a lifetime process that happens whenever and wherever the learner chooses, in a collaborative and
self-paced style (Mikulecký, 2012), with support for multi-generational and co-creational learning activities (Konomi et al., 2018). From this perspective, the concept of self-regulated learning – according to which learners are active participants who monitor and apply strategies to control their learning process – is highly relevant to the notion of technological intelligence (Guerra, Hosseini, Somyurek, & Brusilovsky, 2016). In ubiquitous learning environments, a major challenge is
how to provide learners with the right material at the right time
and in the right way, especially taking into account that when
many users are involved in such environments, the decisions of
one can be affected by the desires of others (Mikulecký, 2012).
From the teachers’ viewpoint, one of the most important difficulties that needs to be faced is how teachers can keep track of
the learning activities in progress, as in technologically complex
and ubiquitous educational settings activities frequently involve
a number of separate groups of students interacting simultaneously from distant locations using different technologies
(Muñoz-Cristóbal et al., 2018).
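A toy sketch of the "right material, at the right time, in the right way" challenge follows: select the learning resource whose delivery constraints match the learner's current context. The context fields, catalogue entries, and matching rule are hypothetical simplifications of what a real ubiquitous learning environment would infer.

```python
# Toy sketch of context-aware material selection in ubiquitous learning:
# pick the resource whose required channels and duration fit the learner's
# current context. Context fields and catalogue are assumed for illustration.

def select_material(context, catalogue):
    def fits(item):
        return (item["needs_channels"] <= context["channels"]
                and item["minutes"] <= context["minutes_free"])
    candidates = [item for item in catalogue if fits(item)]
    # prefer the longest resource that still fits the available time slot
    return max(candidates, key=lambda item: item["minutes"], default=None)

catalogue = [
    {"title": "Lecture video", "needs_channels": {"audio", "screen"}, "minutes": 45},
    {"title": "Interactive quiz", "needs_channels": {"screen"}, "minutes": 10},
    {"title": "Podcast recap", "needs_channels": {"audio"}, "minutes": 20},
]

commute = {"channels": {"audio"}, "minutes_free": 25}   # hypothetical context
print(select_material(commute, catalogue)["title"])     # Podcast recap
```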
Smart classrooms, on the other hand, are closely related to formal education. They pertain to educational environments equipped with intelligence, and are therefore capable of detecting underperforming students and providing advice and recommendations on how to better support them. Other smart classroom
functional characteristics include that they can profile students
according to their activity and tailor educational material, automatically identify students, and provide reminders and advice to
each student according to their goals, activities, and performance
(Augusto, 2009). In such environments, the technological infrastructure makes it possible to perceive undesired behaviors such
as mind wandering (Bixler & D’Mello, 2016) and also to detect
affect through interaction data, facial expressions, and body
posture, in order to identify off-task behavior and provide
encouragement or stimulate the students’ interest through alternative learning materials (Bosch et al., 2015).
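To make the simplest variant of such monitoring concrete, the Python sketch below flags possible off-task behavior from interaction data alone (a learner whose last interaction event is older than an idle threshold during an active exercise). The event layout and threshold are hypothetical; real systems fuse richer signals, and, as discussed next, any such flag raises ethical questions and should trigger gentle support rather than penalties.

```python
# Toy sketch of off-task detection from interaction data alone: flag
# a learner whose last interaction event is older than an idle threshold.
# Event layout and threshold are hypothetical assumptions.

events = [  # (seconds_since_start, student_id) from a hypothetical log
    (5, "s1"), (70, "s1"), (150, "s1"),
    (4, "s2"), (6, "s2"),
]

def off_task(events, now, idle_threshold=120):
    last_seen = {}
    for t, student in events:
        last_seen[student] = max(t, last_seen.get(student, 0))
    return [s for s, t in last_seen.items() if now - t > idle_threshold]

print(off_task(events, now=200))  # ['s2'] -> a prompt for support, not a penalty
```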
In brief, the technological infrastructure and advances in
fields such as big data and AI give the opportunity for monitoring the learning environment and the learners, delivering
adapted and personalized instruction, tracking affect and undesired behaviors, and proposing solutions to tackle them.
However, the main challenge is not what technology can do; instead, it is whether it should do it, and how. For instance, a student behavior that is commonly criticized, and which technology has attempted to tackle, is mind wandering; however, recent studies
have shown that mind wandering may play a crucial role in
a person’s life planning and creative problem solving
(Mooneyham & Schooler, 2013). Furthermore, not every experimental finding on mind wandering can be generalized over
every person and situation, as self-generated thought is
a process that depends on multiple cognitive processes, on an
individual’s profile of affective traits, cognitive ability, or motivation (Smallwood & Andrews-Hanna, 2013). In a nutshell, the
enhanced monitoring capabilities raise concerns regarding
ethics, privacy, and human rights. Do students (and their parents) trust the privacy levels of a system that, in order to provide personalized learning, demands to record their data, to access their notes, and to have more open connectivity with peers and tutors (Augusto, 2009)? Who owns the collected data, and what rights do learners have with regard to their personal data (Ferguson, 2012)? How acceptable and ethical is it to monitor users, especially in the case of underage individuals?
How can excessive control (by the technological environment,
educators or parents) be avoided? How can/should persuasive
technologies be incorporated into learning environments
(Marcus, 2015b)? Last but not least in this series of questions,
do excessive monitoring and classroom statistics serve existing
and tangible educational needs of instructors and learners (e.g.
do instructors actually need a system to tell them when
a student’s mind is wandering)? The new learning technologies
should not only support the acquisition of knowledge, but also
enhance human learning abilities. New technologies are often
applied to make learning as easy as possible. By doing so, they
inadvertently make learners more dependent on technologies.
The challenge is how to improve human learning abilities while
trying to make learning effective and effortless.
7.2.6. Pedagogical impact of learning technologies
A large proportion of the issues still open for discussion refer to the pedagogical impact of these technologies and the role of human tutors. A major concern is that technology
should focus on the perspectives and needs of learners themselves and push the boundaries of education towards other
criteria beyond grades, such as motivation, confidence, enjoyment, satisfaction, and meeting career goals (Ferguson, 2012).
It is also important to first consider what the learning goals of
an educator are before considering how to achieve any specific learning objectives through technology, and to admit that in certain cases there may exist more efficient, appropriate, and resilient means of achieving them without the involvement of technology (FitzGerald et al., 2013). Another
challenge to the effective pedagogical use of technology in
education is that in different economic, social, and cultural
environments, the same technology may perform differently
(Spector et al., 2014); therefore, a “one solution to fit all”
approach would not be viable, and technology should be
customizable and adaptable. More importantly, in order to
mark real progress in the field and deliver technologies that
address existing problems of the educational process, educators themselves should be extensively involved in the development of relevant technology (Spector et al., 2014).
7.2.7. Creativity
Recently, a discussion stimulated by a large part of society (artists, writers, teachers, psychologists, and philosophers) concerns the lack of creativity in educational curricula
(Loveless, 2007). Research on creativity identifies two trends: the
“big C” creativity of the genius, where the achievements are
unique, excelling, and novel, and the “little C” creativity, which
can be defined as a behavioral and mental attitude or the ability to
find new and effective solutions to everyday problems (Ferrari,
Cachia, & Punie, 2009). Although the first type of creativity is very
important for humanity, this discussion refers to the second type
of creativity which does not address only a few gifted and extraordinary individuals, but everyone. This type of creativity not only
depends on education and training, but it can also be fun, involves
play and discovery, and requires hard work, good field knowledge
and the development of thinking skills (Ferrari et al., 2009).
Creativity and learning are therefore two strongly associated
concepts that have shaped the notion of creative learning, which
should be pursued in the context of current and future education
curricula, and is aligned with the so-called twenty-first century
skills. Creative learning refers to any learning that is learner-centered and involves understanding and new awareness, which allows the learner to go beyond knowledge acquisition, and
focuses on thinking skills (Cachia, Ferrari, Ala-Mutka, & Punie,
2010). Creative learning can be achieved by appropriately
designed educational environments, activities and teaching practices, while it can be supported by the use of digital technologies in
education (Ferrari et al., 2009; Loveless, 2007). For instance, digital
technologies can be used in the context of creative learning to
develop ideas, to facilitate connections with other people, projects,
information and resources, to collaborate and co-create, as well as
to present and promote one’s work (Loveless, 2007). A relevant
concept is that of creative classrooms, which constitute live ecosystems that emphasize the need to develop and assess skills beyond factual knowledge, numeracy, and literacy, such as problem-finding, problem-solving, and collaboration (Bocconi, Kampylis, & Punie, 2012). Such classrooms also value both formal and informal learning, and provide opportunities for creative and personalized learning. Overall, creative learning, creative classrooms, and the support for creativity in education are fully aligned with the notion of ubiquitous learning and can be fully supported by the use of new technologies. An important concern regarding creativity in educational environments is that creativity goes beyond the "I-paradigm" to the "We-paradigm" (Glăveanu, 2018); therefore, technology should not focus only on the individual learner. It should consider learners as creative actors who collaborate within the learning environment and explore how they can be supported (Glăveanu, Ness, Wasson, & Lubart, 2019).
Beyond the learning context, human creativity is expected to have a central role in the forthcoming intelligent era, and it has been claimed that creativity, imagination, and innovation are three key resources of humankind that will assist in facing some of its most daunting challenges (Robinson, 2011). Therefore, it is important not only to cultivate creativity, but also to explore how it can be assisted.
Creativity support tools aim to assist users and extend their capabilities in making discoveries or inventions from the first stages of
the creation process (information gathering and hypothesis formulation) until its last stages (refinement, validation and dissemination), and pertain to any potential application domain, such as
sciences, design, engineering, or arts (Shneiderman, 2007). Such
systems can be classified with regard to the creation process of
participants as group or individual creativity support systems
(Wang & Nickerson, 2017), or according to the assistance provided as “coaches” that give advice and assistance, “nannies” that
monitor and provide an overall framework, or “colleagues” that
generate their own ideas and solutions (Gabriel, Monticolo,
Camargo, & Bourgault, 2016).
Although there is a considerable number of such systems, in
view of the future technological environments, this is a field that
has a long way to go in order to better support the complex nature
of human creativity. Indicative advancements include to fully
support the entire creative process, to provide automatic retrieval
and dynamic delivery of stimuli for different levels of relevance to
the creation task, and to ensure personalized support for each
contributor in a collaborative creation process (Gabriel et al., 2016;
Wang & Nickerson, 2017). Additionally, tools will need to reconsider how creation in real world environments can be fostered
when the digital will be blended with the physical. The role of big
data and AI towards empowering human creativity needs to be
further explored, taking into account the potential risks for creativity entailed by over-automation.
7.3. Summary of emerging requirements
Overall, technology has traditionally been used in the contexts of education and creativity, and hence this is not a novel field of research. New technologies have the potential to support the emerging learning styles, as these have recently evolved and been influenced by the pervasiveness of technology in the everyday life of the new generations. Personalization and individualization of learning will be paramount in the future, and training that today takes place in physical institutions will become the exception, with learning occurring at the point of need. This transformation will not be limited to lesson plans or even learning styles, but will also extend to the incorporation of intelligent tutors, AI-driven instruction, targeted mentoring/tutoring, tailored timing and pacing of learning, and collaborative teaming. In any case, the success of technology in education depends to a large extent on HCI issues. How to design learning systems that are appealing, attractive, and engaging to learners of different ages, personalities, educational backgrounds, and cultures is of paramount importance. See Table 7 for a summary of the challenges involved, as discussed throughout this section.
In order to truly empower humans towards learning and creativity, the new technological environments need to be truly unobtrusive, avoiding drawing focus away from the learning or creativity activities, and supporting such activities only when appropriate and at the discretion of the participants involved. In this respect, the way that such technologies will blend into the process and how they will support both digital and physical worlds remain points of future research. In parallel, considerations regarding ethics, privacy, and human rights should be central to the design of future learning and creativity technologies, regulating how big data and AI will come into play.
8. Social organization and democracy
8.1. Definitions and rationale
As humanity moves from smart environments to smart – yet hopefully humane and sociable – cities, and eventually towards smart societies where an abundance of ethical concerns arise, social organization should be supported. With the appropriate technological support, people will be able to better address contemporary fundamental problems such as energy use, pollution, climate change, immigration, and poverty. This role becomes even more crucial in an AI context, in which concerns and fears regarding employment and poverty are already being discussed.
HCI research will have a multifaceted, pivotal role in the forthcoming technological developments. In that respect, an important strand will be to address major societal and environmental challenges towards societies where the ideals of democracy, equality, prosperity, and stability are pursued and safeguarded. This section explores issues regarding social organization – in particular sustainability, social justice, and active citizen participation – and the role of technology. Furthermore, it discusses how democracy is influenced by the new technological advancements. In all the issues discussed, the challenges that arise for society – to which HCI can actively contribute – are highlighted.
8.2. Main research issues and state of the art
8.2.1. Sustainability
The need to use the power of technology to develop more sustainable patterns of production and consumption is evident from the richness of the literature and the breadth of relevant domains, including environmental informatics, computational sustainability, sustainable HCI, green IT and green
ICT, as well as ICT for sustainability (Hilty & Aebischer,
2015). Slow HCI is also a relevant research area promoting
well-being for individuals, society, and the natural environment (Coventry, 2012). In the face of the new technological era and its associated consumer attitudes, as already shaped by the widespread use of mobile devices, IoT, and big data – and expected to escalate further in the future – sustainability research needs to address a number of novel challenges.
The discussion on sustainability is rich; however, an approach that is gaining interest is that of systems thinking. According to
this approach, “sustainability is the ability of systems to persist,
adapt, transform or transition in the face of constantly changing
conditions” (Williams, Kennedy, Philipp, & Whiteman, 2017, p.
13). Therefore, in terms of “systems thinking”, sustainability needs to be reconsidered in the context of different scales, including a greater diversity of stakeholders and ecologies of connected devices (Knowles, Bates, & Håkansson, 2018).
Table 7. Challenges for learning and creativity in technologically advanced and intelligent environments.
Main Issues and Challenges:
New generation of learners
● New learning styles of young learners and novel attitudes
● Generalizations about the technology skills and learning styles of young learners may lead to exclusion
● Support for personalized learning and adaptation to each individual learner
Extended Reality
● Interference with learning and pedagogical goals
● Distraction of students’ attention towards the technology
● Learning adaptation to accommodate technology limitations
● Skills required by students and teachers to use the technology
● Difficulty in creating content
● Multi-disciplinary development approach
Mobile learning
● Design of learning activities to accommodate formal and informal contexts, as well as physical and virtual worlds
● Lack of quantitative studies regarding the long-term impact of mobile technologies for learning
● Learning experiences are mostly classroom-oriented and not yet mobile
Serious games for learning
● Scarcity of real evidence from long-term studies on the educational impact of serious games
● Design for fun and fun evaluation in the user experience
● Balance between seriousness and fun
● Development of design processes, metrics and evaluation tools focusing on the assessment of students’ learning progress
● Active involvement and collaboration of various stakeholders
Intelligent environments
● Provision of the right material to learners, at the right time and in the right way
● Potentially conflicting needs of co-located learners: the decisions of one can be affected by the desires of others
● Privacy and ethics concerns, raised by the monitoring capabilities of the environment
● Human rights concerns (e.g. avoid excessive control by the technological environment, educators or parents)
● Use of persuasive technologies in learning environments
● Over-abundance of technology without serving tangible educational needs
● Risk of making learners dependent on technologies
● Improvement of human learning ability, making learning at the same time effective and effortless
Pedagogical impact of learning technologies
● Technology should focus on the perspectives and needs of learners themselves and push the boundaries of education
● Consideration of learning goals before pursuing specific objectives through technology
● Different performance of a technology in varying economic, social, and cultural environments
● Adaptable and customizable technology to serve a wide range of user (learner and educator) and context attributes
● Extensive involvement of educators in the design and development process
Creativity
● Support for learners not only as individual actors, but also as creative actors that collaborate with the learning environment
● Support for the complex nature of human creativity
● Automatic retrieval and dynamic delivery of stimuli for different levels of relevance to the creation task
● Personalized support for each contributor in a collaborative creation process
● Creation in technologically augmented environments, where the digital will be blended with the physical
Climate rules and ecological limits, in terms of natural resources, should also be taken into account towards future-proofing
(Knowles et al., 2018). The population explosion, as well as food
sustainability constitute points of concern too (Norton et al.,
2017). To address this, HCI can contribute throughout the
process, by mapping requirements stemming from the analysis
to design ideas and solutions. As Coiera (2007) put it, we need to “put the technical back into sociotechnical”, through a formal and structured way of describing events and insights at the sociotechnical level and associating them with system behaviors and design specifications, in order to achieve better interventions.
Research should also be carried out towards the design of
technologies that will be appropriate for a future with
a scarcity of resources, handling crisis response issues and
designing for situations when the availability of infrastructure may be low (e.g. natural disasters), healthcare may be deficient, food supply may be unreliable, and governments may be weak or corrupt (Chen, 2016).
Sustainability also encompasses a variety of other issues
related, for example, to population, poverty, peace and security,
and social transformation (Silberman et al., 2014). In this context, technology should be used to support local communities
and infrastructures, such as through decentralized infrastructures and support for local activities (e.g. locally generated energy
supplies, or local agriculture) (Knowles et al., 2018). The technological interventions necessary in all the above sustainability
issues are not easy or straightforward. However, HCI can contribute with knowledge and expertise to analyze requirements,
address design and interaction issues, and assess the actual usage
and impact of the designed technologies.
8.2.2. Social justice
Technology can advocate for social justice and, when possible, reduce inequality and injustice (Knowles et al., 2018).
Currently, inequalities mainly pertain to specific social groups
(identifiable by gender, class, race, physical ability, etc.). In the
future, as it will be shaped by technological evolution, other
forms of inequalities may become alarming. Such inequalities
may be spatial (the rural–urban divide, which may increase further as new modes of transportation will not be equally available everywhere), informational, and structural
(unequal power relationships determining who will eventually
impact the future) (Chatterton & Newmarch, 2017). In this
context, an acute current and future concern is to design
technologies for migrants, who face social difficulties with
communication and socialization, as well as with language
and literacy (Brown & Grinter, 2016; Fisher, Yefimova, &
Yafi, 2016). From this perspective, technology should not
only be designed for long-term use, but also in anticipation
of transient use when appropriate, assisting specific users to
overcome particular challenges they face at a given time, without making them dependent on technology in the long term
(Brown & Grinter, 2016).
Within a broader social justice perspective, when designing
a technology, a principal concern should be who benefits from this technology and whether there is a way to design it more inclusively, in order to benefit other social classes more equitably (Ekbia & Nardi, 2016). A social justice orientation
can be enacted through appropriate design strategies, such as
designing for transformation in a constantly evolving context,
recognition of unjust practices and phenomena, reciprocity,
enablement of people to fulfill their potential, equitable distribution of resources, and accountability (Dombrowski,
Harmon, & Fox, 2016). Eventually, in the long run, future
narratives will also require an understanding of how inequalities could be changed, and the pursuit of social and cultural
changes along with the imminent technological changes
(Chatterton & Newmarch, 2017). Addressing inequalities
should also include issues regarding the digital divide (van
Dijk, 2012), and how it will evolve in the near future (see also
the discussion in Section 5.2.5). In any case, HCI will have an
active role in pursuing social change, through seeking to
understand the needs of individuals and communities at the
risk of exclusion, as well as through the design and development of appropriate solutions. More importantly, HCI can
contribute with novel design strategies and frameworks to
enact social justice design in practice.
8.2.3. Active citizen participation
The above highlight a public dimension of design, whereby
designers engage with issues that are relevant for the society in
which they live (Teli, Bordin, Blanco, Orabona, & De Angeli,
2015). Such a public design approach, dealing with complex and
diverse contexts and addressing societal and political issues,
cannot be fruitful without the active participation of technology
recipients themselves. Engaging citizens (aka digital citizens) in design activities regarding their environment and communities can lead to valuable outcomes such as job creation, social
cohesion and inclusion, quality of life enhancement, and innovation capacity (Barricelli, Fischer, Mørch, Piccinno, &
Valtolina, 2015). For instance, the approach of digital or city
commons proposes the design of shared artifacts (e.g. community-managed resources) which can be taken over and self-governed by concerned people, thus nourishing social relations
and making technology an object of collaborative production
(Balestrini et al., 2017; Teli et al., 2015).
Active citizen participation is closely related to the concept of
Citizen Science (CS). CS broadly refers to involving citizens in
science and can involve participatory research and participatory
monitoring, that may be targeted to a specific research question or
be open-ended (Lukyanenko, Parsons, & Wiersma, 2016; Preece,
2016). In brief, CS – a field that has gained increased popularity
and attention over the course of the past few decades – is the
process in which citizens, including non-scientists, can contribute
to the scientific knowledge of a large-scale scientific study. The
involvement of citizens in design and science activities raises
numerous challenges regarding user participation, methods and
results, as well as technology itself. A major concern regarding
participation is how to engage citizens in impactful participation,
and how to encourage long-term participation and inclusion
(Knowles et al., 2018; Preece, 2016). An important aspect in this
context is the notion of “open data” as they are collected and
provided by different cities and organizations.4 Open data also play an important role in the concept that being a “smart” city also means being a ‘self-aware’ city (Streitz, 2018), where the city
“knows” about its states and processes and provides these data in
a reciprocal way to the citizens. In turn, citizens are encouraged to
provide data to the cities’ data pool as part of what is called “civic
computing” (Konomi, Shoji, & Ohno, 2013).
To this end, and towards creating cultures of participation,
the role of trust, empathy, altruism, and reciprocity (Barricelli
et al., 2015), and differences of cultural attitude towards these
emotional motivations (Marcus, 2000, 2006) need to be studied.
Regarding the methodologies involved, a point that merits attention is the potential risk of information and collaboration overload to which citizens may be exposed (Barricelli et al., 2015),
while, as far as results are concerned, the quality and reliability of the resulting artifacts are of the essence (Barricelli et al., 2015;
Preece, 2016). Lastly, the role of technology itself needs to be
further explored, studying how technology can enable coordination between civic actors (Knowles et al., 2018), as well as what
kind of technology and infrastructure is more suitable according
to the involved users, tasks, and contexts of use (Preece, 2016).
However, beyond the vision of empowerment and participation and the potential that such approaches hold, it is often
the case that participants in such activities are mostly from the
upper and middle social classes; populations at risk of exclusion, such as women or minority populations, exhibit low involvement (Ames et al., 2014). The role of social justice, and how it can genuinely be pursued and achieved through technology, is therefore a focal point, especially taking into account the potential crises and dark future scenarios that sustainability-related fields deal with.
8.2.4. Democracy
Besides the aforementioned potential benefits resulting from
the active participation of citizens in technology design, citizens also have the potential to become “agents of democracy
with and through technologies and in dialogue with the institutions that can actualize public will” (Vlachokyriakos et al.,
2016). Pervasive technologies are believed to encourage and
facilitate citizen participation and collaboration with civic
authorities (Harding, Knowles, Davies, & Rouncefield, 2015).
In this respect, the ease of retrieving information and interacting with others (including policy makers) can help to
promote democracy. For example, for e-government to be
successful, HCI contribution is important, as systems need
to be usable, accessible, and useful to all citizens and even
non-citizens. However, the perceived value of civic engagement technologies currently remains low, due to limited success in addressing the needs of all the stakeholders involved (i.e., both citizens and authorities) and in establishing a relationship of trust (Harding et al., 2015).
Democracy is an ideal which technology visions promise to
promote and make tangible in everyday life; however, reality
often contradicts such declarations and turns them into wishful
thinking. For instance, it has been argued that the Internet and
social media would increase the availability of perspectives,
ideas, and opinions, yet in reality information is eventually
“filtered” by algorithms that actually decrease information diversity (Bozdag & van Den Hoven, 2015). Besides “filter bubbles”,
other technological perils to democracy include fake news, echo
chambers (i.e., shared social media bubble with like-minded
friends, resulting in restricted access to a diversity of views),
and agenda shaping by increased visibility of the most popular
stories in media (DiFranzo & Gloria-Garcia, 2017; Flaxman,
Goel, & Rao, 2016). What’s more, it has been stressed that
technological monopolies bring the threat of molding humanity
into their desired image of it (Foer, 2017). Such fears become
even worse with powerful big data and AI technologies that can
lead to an automated society with totalitarian features, where AI
would control what we know, what we think and how we act
(Helbing et al., 2019). Moreover, the extended use of surveillance cameras in intelligent environments could, under certain circumstances, pose threats to humans’ free will. Standing at a crossroads, strategic decisions should be influenced by the principles of AI transparency, social and economic diversity, collective intelligence, and technology decentralization, reducing information distortion and pollution (Helbing et al., 2019). In this new era, HCI is
called to play an important role for the design of technologies
focusing on human needs and rights, and providing the methods
and tools to achieve this.
8.3. Summary of emerging requirements
In sum, the critical times we live in, as well as future dark
scenarios, have already directed research towards creating
technology to assist humanity in coping with major problems,
such as resource scarcity, climate change, poverty and disasters. Social participation, social justice, and democracy are
ideals that should not only be desired in this context, but
also actively and systematically pursued and achieved. In this
respect, the dynamics of new technologies bring challenges
(Table 8), but also promises. For instance, it has been claimed
that the recent development of blockchain technology could
lead to a new era of genuinely participative democracy (Jacobs
et al., 2018). Current and future decisions and practices will
determine whether promises will become guarantees or challenges will turn into dystopian realities.
9. Discussion and conclusions
This paper has discussed seven main challenges that arise in
the current socio-technological landscape, with a view to
exploiting the increasingly available interaction intelligence
in order to respond to compelling human and societal
needs. Although motivated by recent technological advancements and intelligence, the discussion has principally advocated a future technological fabric where intelligence will be
employed to better serve the needs of humans and truly
empower them. In this context, the HCI community is called
upon to undertake an important endeavor and safeguard the
design of a future in which intelligence integration does not
undermine human self-efficacy and control; instead it
becomes a powerful tool (Farooq, Grudin, Shneiderman,
Maes, & Ren, 2017).
A central issue in this context is that of the symbiosis
of humans with smart ecosystems (Challenge 1), which
extends well beyond technical boundaries and requires
multi-disciplinary approaches with the aim of also addressing compelling ethical, societal, and philosophical issues. This
entails a number of considerations, such as incorporating
human values in design methods and choices, revaluing
humanistic concerns, and considering social dynamics.
A number of important concerns also arise, including
designing for meaningful human control, ensuring systems’ transparency and accountability, and accounting
for intelligent systems’ inherent opacity and unpredictability. Ultimately, designing intelligent systems that can
work truly in concert with the user is anticipated to be
one of the key success factors of intelligent technologies.
To this end, intelligence means supporting humans,
anticipating their needs, recognizing and responding to
human emotions and fostering human safety.
Interaction in forthcoming technological environments
(Challenge 2) will radically shift. Information such as
users’ location, posture, emotions, habits, and intentions
will constitute input data.
Table 8. Challenges of technologies for the support of social organization and democracy.
Main Issues and Challenges:
Sustainability
● Adopt a systems thinking approach
● Escalated device ecologies and stakeholders involved in the design of technology
● Climate change and ecological limits (e.g. natural resource limits)
● Population explosion and food sustainability
● Crisis response
● Focus of sustainability undertakings on the social aspect of issues
Social justice
● Digital divide
● New forms of inequalities: spatial, informational, and structural
● Appropriate design strategies and frameworks for the enactment of social justice
● Design of technology not only for long-term use, but also for transient use when appropriate
● Pursuit of social and cultural changes along with technological change
Active citizen participation
● Impactful and long-term citizen participation
● Information and collaboration overload of the citizens
● Quality and reliability of artifacts
● Low involvement of minority populations
● Creating cultures of participation
● Coordination between civic actors
Democracy
● Low civic engagement
● Information control by technology monopolies and opinion forming
● AI control of what we know, what we think, and how we act, leading to an automated society with totalitarian features
● Impact of increased surveillance on humans’ free will
A variety of visible and invisible technological artifacts, as well as robotics and autonomous agents, will be embedded in the environment.
Information will be communicated, from one interaction
counterpart to the other, naturally, while the digital will
coexist with (and augment) the physical. This evolution paves the way towards evolving existing design and evaluation methodologies, taking into account: the shift from explicit to implicit interaction; the integration of interaction devices into furniture and accessories; the escalation of interaction towards ecologies of artifacts, services, and data, addressing larger user populations; the need to reduce information overload; and the need to scale up and evolve existing methods in terms of acquiring and framing contexts of use, eliciting and analyzing user requirements, producing designs, and taking advantage of the inherent technological infrastructure for assessing the user experience.
Ethics, privacy, trust and security (Challenge 3) have
always been important concerns in relation to technology,
acquiring yet new dimensions in the context of technologically augmented and intelligent environments. Such topics
span across all technological domains, with their main questions being common, although different domains may pose
supplementary concerns. In general, trust is hard to come by and requires initial trust formation and continuous trust development, which in turn demand not only transparency, but also usability, collaboration and communication, data security and privacy, as well as goal congruence. To this end, intelligent
systems need to behave so that they are beneficial to people
beyond simply reaching functional goals or addressing technical problems, by serving human rights and the values of
their users. A new code of ethics should be pursued in three
directions: ethics by design, in design and for design. In this
new code, user privacy should be further shielded, especially
since the intelligent technological environments feature such
an abundance of information and knowledge about the user,
as well as automated information analysis and the potential
for opinion forming.
Today’s technological advances, coupled with advances in
medicine, offer the opportunity to provide more effective and
less expensive ways of fostering a healthy life by promoting
and supporting healthy behaviors, encouraging prevention,
offering new forms of therapy and managing chronic illness
(Challenge 4). Moreover, in a world where technology is
omnipresent, the question arises of how its role towards
enhancing well-being and human happiness can be optimized.
In fact, technology offers the opportunity to promote not only
health, but also psychological well-being through life goals’
fulfillment by helping to prevent, reduce and manage stress,
depression and psychiatric illness, as well as by fostering
purposefulness, self-confidence, and positive feelings. To this end, concise and useful indicators need to be developed to measure well-being, considering not only individual and community factors but also environmental ones.
As HCI has always focused on the human, in the new
technology-augmented environments it will lead efforts towards
improving the quality of life of various populations, including
the disabled and older persons (Challenge 5). Forthcoming
technological environments will not be simply the smart home
or workplace, but entire smart cities. Taking into account that
technology will be a vital component of any daily activity and it
will cater for humans’ needs, prosperity, and well-being, the
stakes for universal access are now much higher than in the past.
Particular attention needs to be paid to any individual at risk of
exclusion, so that the new technologies do not exclude or isolate
anyone. In this context, universal access becomes a matter of paramount importance in the near future, when access to technology will mean access not only to information, but also to well-being and eudaimonia.
Technology has traditionally been used in the context of
education and creativity. New technologies have the potential
to support the emerging learning styles of the neo-millennial
generation, as they have recently evolved and have been influenced by the pervasiveness of technology in everyday life
(Challenge 6). However, in order to truly empower humans
towards learning and creativity, new technological environments
need to be truly unobtrusive, avoid taking the focus away from
learning or creative activities, and support such technological
activities only when appropriate and at the discretion of the
involved participants. In this respect, the way that such technologies will blend in the process and how they will support both
digital and physical worlds remain open research issues.
Finally, the critical times we live in, as well as potential
future dark scenarios, have already directed research towards
creating technology to assist humanity in coping with major
societal problems, such as resource scarcity, climate change,
poverty and disasters (Challenge 7). Social participation,
social justice, and democracy are ideals that should be
actively and systematically pursued and achieved in this
context. In this respect, the dynamics of new technologies
bring challenges, but also promises, in particular concerning
sustainability, citizens’ involvement and democracy. Current
and future decisions and practices will determine if promises
will be fulfilled, or if challenges will turn into dystopian
realities.
The main research issues for HCI, emerging from the
analysis conducted, are summarized in Table 9.
The above research issues are not exhaustive. Instead,
they summarize the views and research priorities of an
international group of 32 experts, reflecting different
scientific perspectives, methodological approaches and
application domains. There is a long way to go to adopt
a deeply human(e) perspective on intelligent technologies.
Tackling these challenges and investigating the emerging
research issues require synthetic activities under a broad
multidisciplinary scope: international and global research
collaboration, enhanced collaboration between academic
and research institutions and industry, revaluing the role
of humanities and social sciences, novel approaches to
HCI academic education and a rethinking of the role
and training of HCI professionals and practitioners at
a global level.
Table 9. Main research issues stemming from the analysis of the seven Grand Challenges for living and interacting in technology augmented environments.
Human-Technology Symbiosis
● Foster meaningful human control supported by technology transparency, accountability, and understandability
● Design for humane intelligence that brings to the forefront human values
● Develop new methods and techniques for adaptation and personalization to human needs, based on big data, smart environments’ infrastructure and AI
● Support and enhancement of human skills
● Emotion detection: capturing and correlating human emotional expressions
● Affective technology exhibiting emotions and empathic behavior, without “nudging” or deceiving humans
● Human safety: ‘safety by design’ and new testing methodologies
Human-Environment Interactions
● Support for shifts of interaction and attention
● Design for interaction intensive experiences that are not invasive
● Avoid imposing high perceptual and cognitive demands, confusion, or frustration
● Ensure control and accountability of the intelligent environments
● Blend the physical with the digital and make everyday objects intuitively interactive
● Design natural, multimodal and multi-sensorial interactions
● Scale up and evolve existing HCI methods and approaches to address what the new complicated and escalated interactions dictate
● Use ‘intelligence-as-a-service’ as material for design
● Develop new evaluation methods and tools, taking advantage of the intelligent infrastructure
● Design for public interactions, accommodating a wide variety of users, technologies, styles of engagement, addressing issues related to privacy, transient use,
and collaboration
● Design virtual reality environments achieving realistic user experience and supporting social interactions in the VR environment
● Evolve UX evaluation in VR environments towards the assessment of VR attributes beyond subjective assessment
Ethics, Privacy and Security
● Ethics regarding HCI research in public spaces, involving vulnerable user populations, using public online data
● Address concerns regarding over-attachment to technology (e.g. Virtual Agents, eHealth technologies)
● Privacy and data ownership in new technological environments featuring IoT, big data, smart artifacts and AI
● Account for threats induced by IoT, big data and AI (e.g. advanced profiling, automated decision-making leading to discrimination, persuasion without user
awareness)
● Explore the design trade-offs between privacy and smartness
● Foster ethical decision making and responsibility of intelligent agents
● Support AI transparency in a usable manner
● Participate in the establishment of a new code of ethics in the era of robots and AI agents
● Pursue usable cybersecurity
● Raise security awareness in individuals and organizations
Well-being, Health and Eudaimonia
● Account for privacy and confidentiality of sensitive and personal data
● Address issues related to controllability, integration and accuracy of data from multiple self-tracking devices
● Account for high error potential from patients untrained in medical data interpretations
● Address ethical concerns stemming from the use of persuasive strategies in the context of health and well-being
● Bridge the high cost induced by demanding user expectations and the need for multi-disciplinarity in serious games for health
● Evaluate the actual impact of technological interventions in health
● Develop novel design, evaluation and testing methodologies for patient-centric design, also ensuring the accuracy of technologies, eliminating risks for patients’ well-being and life
● Advance towards fostering human eudaimonia in a more holistic approach
Accessibility and Universal Access
● Reduce the increased cognitive demands that will be imposed by the inherent complexity of forthcoming technological environments and the need for
transparency
● Exploit inherent features of technologically augmented environments for universal access
● Address the escalated accessibility needs, pertaining to each and every device and service, the whole environment, as well as the combination of the physical
and the virtual
● Understand the evolving human needs and context of use and develop appropriate models
● Advance knowledge of user requirements and of the suitability of solutions for the different combinations of user and context characteristics
● Develop appropriate architectures and tools, ready-to-use accessibility solutions
● Pursue universal accessibility (to information, well-being and eudaimonia)
Learning and Creativity
● Support and promote new learning styles, creative learning and lifelong learning
● Design learning technologies for all learners, avoiding a focus solely on tech-savvy generations
● Design technologies that are gracefully embedded in the educational process and do not disrupt learners, focusing on the needs of learners and educators
● Design serious games featuring the appropriate balance between seriousness and fun
● Design for tangible educational needs, without being steered by current technological capabilities
● Address privacy concerns regarding the extensive monitoring of students (potentially under-age)
● Address ethical concerns regarding data ownership and management
● Address human rights concerns (e.g. potential for excessive control restricting the freedom of the individual)
● Extensively involve educators in the design of learning technologies
● Evaluate the actual long-term impact of learning technologies
● Provide support for personalized creativity and for the amplification of human creative skills
● Provide support for the entire spectrum of creative activities and for the creative process
● Provide support for creativity in smart environments, blending digital and physical artifacts
Social Organization and Democracy
● Adopt a systems thinking approach to sustainability, bringing the technical perspective to sociotechnical analysis and mapping requirements to tangible
solutions
● Contribute methods and tools to achieve sustainable design
● Promote appropriate design strategies and frameworks for the enactment of social justice
● Engage citizens in design activities, supporting impactful participation, avoiding information and collaboration overload of the citizens and ensuring the quality
and reliability of the designed artifacts
● Design technologies for and with minority populations
● Design for civic engagement technologies taking into account all the stakeholders involved
● Promote the ideals of social participation, social justice and democracy through technology
Notes
1. An indicative list of communities and organizations working on
ethics:
● The IEEE Global Initiative on Ethics of Autonomous and
Intelligent Systems: https://standards.ieee.org/industry-connec
tions/ec/autonomous-systems.html
● OCEANIS Open Community for Ethics in Autonomous and
Intelligent Systems: https://ethicsstandards.org/
● AI Now Institute: https://ainowinstitute.org/
● Partnership on AI Initiative: https://www.partnershiponai.org/
● Open Roboethics Institute: www.openroboethics.org/
● Foundation for Responsible Robotics: https://responsiblerobotics.
org/
● AI Global: https://ai-global.org/.
2. https://nypost.com/2018/11/13/i-married-my-16-year-old-hologram-because-she-cant-cheat-or-age/.
3. https://en.wikipedia.org/wiki/Eudaimonia.
4. https://www.europeandataportal.eu.
Acknowledgments
Our deep appreciation goes to Ben Shneiderman for his insightful comments on an earlier version of this paper.
List of abbreviations
AAL   Ambient Assisted Living
AI    Artificial Intelligence
AmI   Ambient Intelligence
AR    Augmented Reality
CCTV  Closed Circuit Television
GDPR  General Data Protection Regulation
HCI   Human–Computer Interaction
ICT   Information and Communication Technology
ITS   Intelligent Tutoring System
IoT   Internet of Things
IT    Information Technology
MR    Mixed Reality
OSN   Online Social Network
PMD   Personal Medical Device
UX    User Experience
VR    Virtual Reality
References
Abascal, J., De Castro, I. F., Lafuente, A., & Cia, J. M. (2008). Adaptive
interfaces for supportive ambient intelligence environments.
Proceedings of the 11th Conference on Computers Helping People
with Special Needs (ICCHP 2008) (pp. 30–37). Springer, Berlin,
Heidelberg. doi: 10.1007/978-3-540-70540-6_4
Acampora, G., Cook, D. J., Rashidi, P., & Vasilakos, A. V. (2013).
A survey on ambient intelligence in healthcare. Proceedings of the
IEEE. Institute of Electrical and Electronics Engineers, 101(12),
2470–2494. doi:10.1109/JPROC.2013.2262913
Adapa, A., Nah, F. F. H., Hall, R. H., Siau, K., & Smith, S. N. (2018).
Factors influencing the adoption of smart wearable devices.
International Journal of Human–Computer Interaction, 34(5),
399–409. doi:10.1080/10447318.2017.1357902
Agger, B. (2011). iTime: Labor and life in a smartphone era. Time &
Society, 20(1), 119–136. doi:10.1177/0961463X10380730
Akçayır, M., Akçayır, G., Pektaş, H. M., & Ocak, M. A. (2016). Augmented
reality in science laboratories: The effects of augmented reality on university students’ laboratory skills and attitudes toward science laboratories.
Computers in Human Behavior, 57, 334–342. doi:10.1016/j.chb.2015.12.054
Alaieri, F., & Vellino, A. (2016). Ethical decision making in robots:
Autonomy, trust and responsibility. Proceedings of the 10th
International Conference on Social Robotics (ICSR 2016) (pp.
159–168). Springer, Cham. doi: 10.1007/978-3-319-47437-3_16
Alterman, A. (2003). “A piece of yourself”: Ethical issues in biometric
identification. Ethics and Information Technology, 5(3), 139–150.
doi:10.1023/B:ETIN.0000006918.22060.1f
Ames, M. G., Bardzell, J., Bardzell, S., Lindtner, S., Mellis, D. A., &
Rosner, D. K. (2014). Making cultures: Empowerment, participation,
and democracy-or not? CHI’14 Extended Abstracts on Human Factors
in Computing Systems (pp. 1087–1092). New York, NY, USA: ACM.
doi:10.1145/2559206.2579405
Ananny, M., & Crawford, K. (2018). Seeing without knowing:
Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. doi:10.1177/
1461444816676645
Anderson, S. L. (2008). Asimov’s “three laws of robotics” and machine
metaethics. AI & Society, 22(4), 477–493. doi:10.1007/s00146-007-0094-5
Andrade, A. O., Pereira, A. A., Walter, S., Almeida, R., Loureiro, R.,
Compagna, D., & Kyberd, P. J. (2014). Bridging the gap between
robotic technology and health care. Biomedical Signal Processing and
Control, 10, 65–78. doi:10.1016/j.bspc.2013.12.009
Ardito, C., Buono, P., Costabile, M. F., & Desolda, G. (2015). Interaction
with large displays: A survey. ACM Computing Surveys (CSUR), 47(3)
Article No. 46, 1–38. Doi: 10.1145/2682623.
Arnrich, B., Mayora, O., Bardram, J., & Tröster, G. (2010). Pervasive
healthcare: Paving the way for a pervasive, user-centered and preventive healthcare model. Methods of Information in Medicine, 49(1),
67–73. doi:10.3414/ME09-02-0044
Asimov, I. (1950). Runaround. In I. Asimov (Ed.), I, Robot Collection
(pp. 40–54). New York, NY: Doubleday.
Augusto, J. C. (2009). Ambient intelligence: Opportunities and consequences of its use in smart classrooms. Innovation in Teaching and
Learning in Information and Computer Sciences, 8(2), 53–63.
doi:10.11120/ital.2009.08020053
Aylett, M. P., & Quigley, A. J. (2015). The broken dream of pervasive
sentient ambient calm invisible ubiquitous computing. Proceedings of
the 33rd Annual ACM Conference Extended Abstracts on Human
Factors in Computing Systems (CHI EA ‘15) (pp. 425–435).
New York, NY, USA, ACM. doi: 10.1145/2702613.2732508
Azuma, R. T. (2016). The most important challenge facing augmented
reality. Presence: Teleoperators and Virtual Environments, 25(3),
234–238. doi:10.1162/PRES_a_00264
Bakker, S., Hoven, E., & Eggen, B. (2015). Peripheral interaction:
Characteristics and considerations. Personal and Ubiquitous
Computing, 19(1), 239–254. doi:10.1007/s00779-014-0775-2
Bakker, S., & Niemantsverdriet, K. (2016). The interaction-attention
continuum: Considering various levels of human attention in interaction design. International Journal of Design, 10(2), 1–14. Retrieved
from: http://ijdesign.org/index.php/IJDesign/article/view/2341
Balestrini, M., Rogers, Y., Hassan, C., Creus, J., King, M., & Marshall, P.
(2017). A city in common: A framework to orchestrate large-scale citizen
engagement around urban issues. Proceedings of the 2017 CHI Conference
on Human Factors in Computing Systems (CHI ‘17) (pp. 2282–2294).
New York, NY, USA, ACM. doi: 10.1145/3025453.3025915
Barricelli, B. R., Fischer, G., Mørch, A., Piccinno, A., & Valtolina, S.
(2015). Cultures of participation in the digital age: Coping with
information, participation, and collaboration overload. In P. Díaz, V.
Pipek, C. Ardito, C. Jensen, I. Aedo, & A. Boden (Eds.), Proceedings of
the 5th International Symposium on End User Development (IS-EUD
2015) (pp. 271–275). Cham, Switzerland: Springer. doi:10.1007/978-3-319-18425-8_28
Bastug, E., Bennis, M., Médard, M., & Debbah, M. (2017). Toward
interconnected virtual reality: Opportunities, challenges, and
enablers. IEEE Communications Magazine, 55(6), 110–117.
doi:10.1109/MCOM.2017.1601089
Beavers, A. F., & Slattery, J. P. (2017). On the moral implications and
restrictions surrounding affective computing. Emotions and Affect in
Human Factors and Human-Computer Interaction, 143–161.
doi:10.1016/b978-0-12-801851-4.00005-7
Beavis, C. (2017). Serious play: Literacy, learning and digital games. In
C. Beavis, M. Dezuanni, & J. O’Mara (Eds.), Serious Play (pp. 17–34).
New York, NY: Routledge.
Becker, L. (2008). Design and ethics: Rationalizing consumption through
the graphic image. PhD Dissertation, University of California/
Berkeley.
Bellotti, F., Berta, R., & De Gloria, A. (2010). Designing effective serious
games: Opportunities and challenges for research. International
Journal of Emerging Technologies in Learning (Ijet), 5(2010), 22–33.
Retrieved from https://www.learntechlib.org/p/44949/
Bellotti, V., Back, M., Edwards, W. K., Grinter, R. E., Henderson, A., &
Lopes, C. (2002). Making sense of sensing systems: Five questions for
designers and researchers. Proceedings of the SIGCHI conference on
Human factors in computing systems (CHI ‘02) (pp. 415–422).
New York, NY, USA, ACM. doi: 10.1145/503376.503450
Bennett, S., Maton, K., & Kervin, L. (2008). The ‘digital natives’ debate:
A critical review of the evidence. British Journal of Educational
Technology, 39(5), 775–786. doi:10.1111/j.1467-8535.2007.00793.x
Beye, M., Jeckmans, A. J., Erkin, Z., Hartel, P., Lagendijk, R. L., & Tang, Q.
(2012). Privacy in online social networks. In A. Abraham (Ed.),
Computational Social Networks: Security and Privacy (pp. 87–113).
London, UK: Springer-Verlag. doi:10.1007/978-1-4471-4051-1_4
Bibri, S. E. (2015). The human face of ambient intelligence: Cognitive,
emotional, affective, behavioral and conversational aspects. In series:
I. Khalil (Ed.), Atlantis Ambient and Pervasive Intelligence (Vol. 9.).
Paris, France: Atlantis Press. doi:10.2991/978-94-6239-130-7
Biondi, F., Alvarez, I., & Jeong, K. A. (2019). Human–Vehicle cooperation in automated driving: A multidisciplinary review and appraisal.
International Journal of Human–Computer Interaction, Published
online: 24 Jan 2019. doi: 10.1080/10447318.2018.1561792
Bixler, R., & D’Mello, S. (2016). Automatic gaze-based user-independent
detection of mind wandering during computerized reading. User
Modeling and User-Adapted Interaction, 26(1), 33–68. doi:10.1007/
s11257-015-9167-1
Blackman, S., Matlo, C., Bobrovitskiy, C., Waldoch, A., Fang, M. L.,
Jackson, P., … Sixsmith, A. (2016). Ambient assisted living technologies for aging well: A scoping review. Journal of Intelligent Systems, 25
(1), 55–69. doi:10.1515/jisys-2014-0136
Bocconi, S., Kampylis, P. G., & Punie, Y. (2012). Innovating learning: Key
elements for developing creative classrooms in Europe. Luxembourg:
JRC-Scientific and Policy Reports. http://publications.jrc.ec.europa.eu/
repository/handle/JRC72278
Boddington, P. (2017). Towards a Code of Ethics for Artificial Intelligence (pp.
27–37). Cham, Switzerland: Springer. doi:10.1007/978-3-319-60648-4_3
Bosch, N., D’Mello, S., Baker, R., Ocumpaugh, J., Shute, V.,
Ventura, M., … Zhao, W. (2015). Automatic detection of
learning-centered affective states in the wild. Proceedings of the 20th
international conference on intelligent user interfaces (IUI ‘15) (pp.
379–388). New York, NY, USA, ACM. doi: 10.1145/2678025.2701397
Bozdag, E., & van Den Hoven, J. (2015). Breaking the filter bubble:
Democracy and design. Ethics and Information Technology, 17(4),
249–265. doi:10.1007/s10676-015-9380-y
Brandtzæg, P. B., Lüders, M., & Skjetne, J. H. (2010). Too many Facebook
“friends”? Content sharing and sociability versus the need for privacy in
social network sites. International Journal of Human–Computer
Interaction, 26(11–12), 1006–1030. doi:10.1080/10447318.2010.516719
Breazeal, C. L., Ostrowski, A. K., Singh, N., & Park, H. W. (2019).
Designing social robots for older adults. The Bridge, 49(1), 22–31.
Retrieved from https://www.nae.edu/205212/Spring-Bridge-on-Technologies-for-Aging
Brown, B., Bødker, S., & Höök, K. (2017). Does HCI scale?: Scale hacking and
the relevance of HCI. Interactions, 24(5), 28–33. doi:10.1145/3125387
Brown, D., & Grinter, R. E. (2016). Designing for transient use: A
human-in-the-loop translation platform for refugees. Proceedings of
the 2016 CHI Conference on Human Factors in Computing Systems
(CHI ‘16) (pp. 321–330). New York, NY, USA. ACM. doi: 10.1145/
2858036.2858230
Bühler, C. (2009). Ambient intelligence in working environments.
Proceedings of the 5th International Conference on Universal Access
in Human-Computer Interaction (UAHCI 2009) (pp. 143–149).
Springer, Berlin, Heidelberg. doi: 10.1007/978-3-642-02710-9_17
Burden, K., & Kearney, M. (2016). Conceptualising authentic mobile
learning. In D. Churchill, J. Lu, T. Chiu, & B. Fox (Eds.), Mobile
learning design (pp. 27–42). Singapore: Springer. doi:10.1007/978-981-10-0027-0_2
Burzagli, L., Emiliani, P. L., & Gabbanini, F. (2007). Ambient intelligence
and multimodality. Proceedings of the 4th International Conference on
Universal Access in Human-Computer Interaction (UAHCI 2007) (pp.
33–42). Springer, Berlin, Heidelberg. doi: 10.1007/978-3-540-73281-5_4
Cachia, R., Ferrari, A., Ala-Mutka, K., & Punie, Y. (2010). Creative
learning and innovative teaching: Final report on the study on creativity and innovation in education in the EU member states. Publications
Office of the European Union. doi: 10.2791/52913
Carvalho, R. M., de Castro Andrade, R. M., de Oliveira, K. M., de Sousa
Santos, I., & Bezerra, C. I. M. (2017). Quality characteristics and
measures for human–Computer interaction evaluation in ubiquitous
systems. Software Quality Journal, 25(3), 743–795. doi:10.1007/
s11219-016-9320-z
Casas, R., Marín, R. B., Robinet, A., Delgado, A. R., Yarza, A. R.,
Mcginn, J., … Grout, V. (2008). User modelling in ambient intelligence for elderly and disabled people. Proceedings of the 11th
International Conference on Computers Helping People with Special
Needs (ICCHP 2008) (pp. 114–122). Springer, Berlin, Heidelberg. doi:
10.1007/978-3-540-70540-6_15
Chalmers, A., Howard, D., & Moir, C. (2009). Real virtuality: A step
change from virtual reality. Proceedings of the 25th Spring Conference
on Computer Graphics (SCCG ‘09) (pp. 9–16), Budmerice, Slovakia.
New York, NY: ACM. doi: 10.1145/1980462.1980466
Charness, N., & Boot, W. R. (2009). Aging and information technology
use: Potential and barriers. Current Directions in Psychological Science,
18(5), 253–258. doi:10.1111/j.1467-8721.2009.01647.x
Chatterton, T., & Newmarch, G. (2017). The future is already here: It’s
just not very evenly distributed. Interactions, 24(2), 42–45.
doi:10.1145/3041215
Chen, J. Y. C. (2016). A strategy for limits-aware computing. Proceedings
of the Second Workshop on Computing Within Limits (LIMITS ‘16) (p.
1). New York, USA, ACM. doi: 10.1145/2926676.2926692
Chen, J. Y. C., & Barnes, M. J. (2014). Human-agent teaming for
multi-robot control: A review of human factors issues. IEEE
Transactions on Human-Machine Systems, 44(1), 13–29. doi:10.1109/
THMS.2013.2293535
Chen, J. Y. C., Lakhmani, S. G., Stowers, K., Selkowitz, A. R.,
Wright, J. L., & Barnes, M. (2018). Situation awareness-based agent
transparency and human-autonomy teaming effectiveness. Theoretical
Issues in Ergonomics Science, 19(3), 259–282. doi:10.1080/
1463922X.2017.1315750
Chen, S. Y., & Wang, J. H. (2019). Human factors and personalized
digital learning: An editorial. International Journal of Human–Computer
Interaction, 35(4–5), 297–298. doi:10.1080/10447318.2018.1542891
Christodoulou, N., Papallas, A., Kostic, Z., & Nacke, L. E. (2018).
Information visualisation, gamification and immersive technologies
in participatory planning. Extended Abstracts of the 2018 CHI
Conference on Human Factors in Computing Systems (CHI EA ‘18)
(p. SIG12). New York, USA, ACM. doi: 10.1145/3170427.3185363
Cohen, P. R., & Feigenbaum, E. A. (Eds.). (2014). The handbook of
artificial intelligence (Vol. 3). Los Altos, CA: William Kaufmann.
Coiera, E. (2007). Putting the technical back into socio-technical systems
research. International Journal of Medical Informatics, 76(Suppl 1), S98–S103. doi:10.1016/j.ijmedinf.2006.05.026
Comas, F. R., Sureda, J. N., & Santos, U. R. (2006). The “copy and
paste” generation: Plagiarism amongst students, a review of existing
literature. The International Journal of Learning: Annual Review, 12
(2), 161–168. doi:10.18848/1447-9494/CGP/v12i02/47005
Consel, C., & Kaye, J. A. (2019). Aging with the Internet of Things. The
Bridge, 49(1), 6–12. Retrieved from: https://www.nae.edu/205212/
Spring-Bridge-on-Technologies-for-Aging
Conti, M., Das, S. K., Bisdikian, C., Kumar, M., Ni, L. M.,
Passarella, A., … Zambonelli, F. (2012). Looking ahead in pervasive
computing: Challenges and opportunities in the era of cyber–Physical
convergence. Pervasive and Mobile Computing, 8(1), 2–21.
doi:10.1016/j.pmcj.2011.10.001
Coughlin, J. F., & Brady, S. (2019). Planning, designing, and engineering tomorrow’s user-centered, age-ready transportation system. The
Bridge, 49(1), 13–21. Retrieved from: https://www.nae.edu/205212/
Spring-Bridge-on-Technologies-for-Aging
Coventry, L. (Ed.) (2012). Interfaces Magazine, special issue on “Slow
HCI - Designing to promote well-being for individuals, society and
nature”, vol. 92. Retrieved from: https://www.bcs.org/upload/pdf/inter
faces92.pdf
Cowie, R. (2015). Ethical issues in affective computing. In R. A. Calvo,
S. D’Mello, J. Gratch, & A. Kappas (Eds.), The Oxford handbook of
affective computing (pp. 334–348). Oxford, New York: Oxford
University Press.
Crabtree, A., Chamberlain, A., Grinter, R. E., Jones, M., Rodden, T., &
Rogers, Y. (2013). Introduction to the special issue of “The turn to the
wild”. ACM Trans. Comput.-Hum. Interact., 20(3), 1–13. doi:10.1145/
2491500.2491501
Crawford, K., & Calo, R. (2016). There is a blind spot in AI research.
Nature, 538(7625), 311–313. doi:10.1038/538311a
Czaja, S. J., & Lee, C. C. (2007). The impact of aging on access to
technology. Universal Access in the Information Society, 5(4),
341–349. doi:10.1007/s10209-006-0060-x
Dallinga, J. M., Mennes, M., Alpay, L., Bijwaard, H., & de la Faille-Deutekom, M. B. (2015). App use, physical activity and healthy lifestyle: A cross sectional study. BMC Public Health, 15(1), 833.
doi:10.1186/s12889-015-2165-8
David, M. E., Roberts, J. A., & Christenson, B. (2018). Too much of a good
thing: Investigating the association between actual smartphone use and
individual well-being. International Journal of Human–Computer
Interaction, 34(3), 265–275. doi:10.1080/10447318.2017.1349250
De Gloria, A., Bellotti, F., & Berta, R. (2014). Serious games for education
and training. International Journal of Serious Games, 1(1).
doi:10.17083/ijsg.v1i1.11
Dede, C. (2005). Planning for neomillennial learning styles. Educause
Quarterly, 28(1), 7–12. https://er.educause.edu/articles/2005/1/educause-quarterly-magazine-volume-28-number-1-2005
Dellot, B. (2017, February 13). A hippocratic oath for AI developers? It
may only be a matter of time. Retrieved from: https://www.thersa.org/
discover/publications-and-articles/rsa-blogs/2017/02/a-hippocratic-oath-for-ai-developers-it-may-only-be-a-matter-of-time
Denecke, K., Bamidis, P., Bond, C., Gabarron, E., Househ, M.,
Lau, A. Y. S., … Hansen, M. (2015). Ethical issues of social media
usage in healthcare. Yearbook of Medical Informatics, 24(01), 137–147.
doi:10.15265/IY-2015-001
Desmet, P. M. (2015). Design for mood: Twenty activity-based opportunities to design for mood regulation. International Journal of Design, 9
(2). http://www.ijdesign.org/index.php/IJDesign/article/view/2167
Diefenbach, S. (2018). Positive technology–A powerful partnership
between positive psychology and interactive technology:
A discussion of potential and challenges. Journal of Positive
Psychology and Wellbeing, 2(1), 1–22. Retrieved from: http://www.
journalppw.com/index.php/JPPW/article/view/19
DiFranzo, D., & Gloria-Garcia, K. (2017). Filter bubbles and fake news.
XRDS: Crossroads, the ACM Magazine for Students, 23(3), 32–35.
doi:10.1145/3055153
Dignum, V. (2018). Ethics in artificial intelligence: Introduction to the
special issue. Ethics and Information Technology, 20(1), 1–3.
doi:10.1007/s10676-018-9450-z
Dishman, E. (2019). Supporting precision aging: Engineering health and
lifespan planning for all of us. The Bridge, 49(1), 47–56. Retrieved from:
https://www.nae.edu/205212/Spring-Bridge-on-Technologies-for-Aging
Dix, A. (2002). Managing the ecology of interaction. Proceedings of First
International Workshop on Task Models and User Interface Design
(Tamodia 2002) (pp. 1–9). Bucharest, Romania: INFOREC
Publishing House Bucharest.
Dodge, H. H., & Estrin, D. (2019). Making sense of aging with data big
and small. The Bridge, 49(1), 39–46. Retrieved from: https://www.nae.
edu/205212/Spring-Bridge-on-Technologies-for-Aging
Dombrowski, L., Harmon, E., & Fox, S. (2016). Social justice-oriented
interaction design: Outlining key design strategies and commitments.
Proceedings of the 2016 ACM Conference on Designing Interactive
Systems (DIS ‘16) (pp. 656–671). New York, USA, ACM. doi:
10.1145/2901790.2901861
Dong, H., Keates, S., & Clarkson, P. J. (2004, June). Inclusive design in
industry: Barriers, drivers and the business case. Proceedings of the 8th
ERCIM International Workshop on User Interfaces for All (UI4ALL
2004) (pp. 305–319). Berlin, Heidelberg: Springer. doi: 10.1007/978-3-540-30111-0_26
Došilović, F. K., Brčić, M., & Hlupić, N. (2018). Explainable artificial
intelligence: A survey. Proceedings of the IEEE 41st International
Convention on Information and Communication Technology,
Electronics and Microelectronics (MIPRO 2018) (pp. 0210–0215).
Opatija, Croatia: IEEE. doi: 10.23919/MIPRO.2018.8400040
Ducatel, K., Bogdanowicz, M., Scapolo, F., Leijten, J., & Burgelman, J. C.
(2001). ISTAG scenarios for ambient intelligence in 2010. European
Commission. Information Society Directorate-General. ISBN 92-894-0735-2.
Ekbia, H., & Nardi, B. (2016). Social inequality and HCI: The view from
political economy. Proceedings of the 2016 CHI Conference on Human
Factors in Computing Systems (CHI ‘16) (pp. 4997–5002). New York,
USA, ACM. doi: 10.1145/2858036.2858343
Elhai, J. D., Dvorak, R. D., Levine, J. C., & Hall, B. J. (2017). Problematic
smartphone use: A conceptual overview and systematic review of
relations with anxiety and depression psychopathology. Journal of
Affective Disorders, 207, 251–259. doi:10.1016/j.jad.2016.08.030
Emiliani, P. L. (2006). Assistive technology (AT) versus mainstream
technology (MST): The research perspective. Technology and
Disability, 18(1), 19–29.
Emiliani, P. L., & Stephanidis, C. (2005). Universal access to ambient
intelligence environments: Opportunities and challenges for people
with disabilities. IBM Systems Journal, 44(3), 605–619. doi:10.1147/
sj.443.0605
European Commission (2016). Regulation (EU) 2016/679 of the European
Parliament and of the Council of 27 April 2016 on the protection of
natural persons with regard to the processing of personal data and on
the free movement of such data, and repealing Directive 95/46/EC
(General Data Protection Regulation). http://data.europa.eu/eli/reg/
2016/679/2016-05-04
Farooq, U., & Grudin, J. (2016). Human-computer integration.
Interactions, 23(6), 26–32. doi:10.1145/3001896
Farooq, U., Grudin, J., Shneiderman, B., Maes, P., & Ren, X. (2017).
Human computer integration versus powerful tools. Proceedings of the
2017 CHI Conference Extended Abstracts on Human Factors in
Computing Systems (pp. 1277–1282). New York, USA, ACM. doi:
10.1145/3027063.3051137
Federal Trade Commission. (2016). Big data: A tool for inclusion or
exclusion? Understanding the Issues. https://www.ftc.gov/system/files/
documents/reports/big-data-tool-inclusion-or-exclusion-understanding-issues/160106big-data-rpt.pdf
Ferguson, R. (2012). Learning analytics: Drivers, developments and
challenges. International Journal of Technology Enhanced Learning, 4
(5/6), 304–317. doi:10.1504/IJTEL.2012.051816
Ferrari, A., Cachia, R., & Punie, Y. (2009). ICT as a driver for creative
learning and innovative teaching. In E. Villalba (Ed.), Measuring
Creativity: Proceedings of the conference “Can creativity be measured?”
(pp. 345–367). Publications Office of the European Union.
Ferscha, A. (2016). A research agenda for human computer confluence.
In A. Gaggioli, A. Ferscha, G. Riva, S. Dunne, & I. Viaud-Delmon
(Eds.), Human Computer Confluence Transforming Human Experience
Through Symbiotic Technologies (pp. 7–17). Warsaw, Berlin: De
Gruyter Open.
Fiesler, C., Hancock, J., Bruckman, A., Muller, M., Munteanu, C., &
Densmore, M. (2018). Research Ethics for HCI: A roundtable
discussion. Extended Abstracts of the 2018 CHI Conference on
Human Factors in Computing Systems (CHI EA ‘18) (p. panel05),
Montreal QC, Canada. doi: 10.1145/3170427.3186321
Fisher, K. E., Yefimova, K., & Yafi, E. (2016). Future’s butterflies: Co-designing ICT wayfaring technology with refugee Syrian youth.
Proceedings of the 15th International Conference on Interaction
Design and Children (IDC ‘16) (pp. 25–36). New York, NY, USA,
ACM. doi: 10.1145/2930674.2930701
FitzGerald, E., Ferguson, R., Adams, A., Gaved, M., Mor, Y., &
Thomas, R. (2013). Augmented reality and mobile learning: The
state of the art. International Journal of Mobile and Blended
Learning (IJMBL), 5(4), 43–58. doi:10.4018/ijmbl.2013100103
Flaxman, S., Goel, S., & Rao, J. M. (2016). Filter bubbles, echo chambers,
and online news consumption. Public Opinion Quarterly, 80(S1),
298–320. doi:10.1093/poq/nfw006
Fleming, D. A., Edison, K. E., & Pak, H. (2009). Telehealth ethics.
Telemedicine Journal and E-Health: The Official Journal of the American
Telemedicine Association, 15(8), 797–803. doi:10.1089/tmj.2009.0035
Fleming, T. M., Bavin, L., Stasiak, K., Hermansson-Webb, E., Merry, S. N.,
Cheek, C., … Hetrick, S. (2017). Serious games and gamification for
mental health: Current status and promising directions. Frontiers in
Psychiatry, 7, 215. doi:10.3389/fpsyt.2016.00215
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P.,
Dignum, V., … Schafer, B. (2018). AI4People: An ethical framework
for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. doi:10.1007/
s11023-018-9482-5
Foer, F. (2017). World without mind: The existential threat of big tech.
New York, NY: Penguin Books.
Frauenberger, C., Bruckman, A. S., Munteanu, C., Densmore, M., &
Waycott, J. (2017). Research ethics in HCI: A town hall meeting.
Proceedings of the 2017 CHI Conference Extended Abstracts on Human
Factors in Computing Systems (CHI EA ‘17) (pp. 1295–1299). New York,
NY, USA, ACM. doi: 10.1145/3027063.3051135
Friedman, B., Kahn, P. H., Borning, A., & Huldtgren, A. (2013). Value
sensitive design and information systems. In N. Doorn, D. Schuurbiers, I. van de Poel, & M. Gorman (Eds.), Early engagement and new technologies: Opening up the laboratory (Philosophy of Engineering and Technology, Vol. 16, pp. 55–95). Dordrecht, Netherlands: Springer. doi:10.1007/978-94-007-7844-3_4
Gabriel, A., Monticolo, D., Camargo, M., & Bourgault, M. (2016).
Creativity support systems: A systematic mapping study. Thinking
Skills and Creativity, 21, 109–122. doi:10.1016/j.tsc.2016.05.009
Gaggioli, A. (2005). Optimal experience in ambient intelligence. In
G. Riva, F. Vatalaro, F. Davide, & M. Alcañiz (Eds.), Ambient intelligence (pp. 35–43). Amsterdam, The Netherlands: IOS Press.
George, C., Whitehouse, D., & Duquenoy, P. (2013). Assessing legal,
ethical and governance challenges in eHealth. In C. George,
D. Whitehouse, & P. Duquenoy (Eds.), eHealth: Legal, Ethical and
Governance Challenges (pp. 3–22). Berlin, Heidelberg: Springer. doi:10.1007/978-3-642-22474-4_1
Gilhooly, M. L., Gilhooly, K. J., & Jones, R. B. (2009). Quality of life:
Conceptual challenges in exploring the role of ICT in active ageing (pp.
49–76). Amsterdam, The Netherlands: IOS Press.
Gill, K. S. (Ed.). (2012). Human machine symbiosis: The foundations of
human-centred systems design. London, UK: Springer-Verlag London.
doi: 10.1007/978-1-4471-3247-9
Glăveanu, V. P. (2018). Creativity in perspective: A sociocultural and
critical account. Journal of Constructivist Psychology, 31(2), 118–129.
doi:10.1080/10720537.2016.1271376
Glăveanu, V. P., Ness, I. J., Wasson, B., & Lubart, T. (2019). Sociocultural
perspectives on creativity, learning, and technology. In C. Mullen (Ed.),
Creativity Under Duress in Education? Creativity Theory and Action in
Education (Vol. 3, pp. 63–82). Cham, Switzerland: Springer. doi:10.1007/978-3-319-90272-2_4
Google (2019). People+AI Guidebook. Retrieved from: https://pair.withgoogle.com/
Gros, B. (2017). Game dimensions and pedagogical dimension in serious
games. In R. Zheng & M. K. Gardner (Eds.), Handbook of Research on
Serious Games for Educational Applications (pp. 402–417). Hershey, PA,
USA: IGI Global.
Guerra, J., Hosseini, R., Somyurek, S., & Brusilovsky, P. (2016). An
intelligent interface for learning content: Combining an open learner
model and social comparison to support self-regulated learning and
engagement. Proceedings of the 21st International Conference on
Intelligent User Interfaces (IUI ‘16) (pp. 152–163). New York, NY,
USA, ACM. doi: 10.1145/2856767.2856784
Harding, M., Knowles, B., Davies, N., & Rouncefield, M. (2015). HCI, civic
engagement & trust. Proceedings of the 33rd Annual ACM Conference on
Human Factors in Computing Systems (pp. 2833–2842). New York, NY,
USA, ACM. doi: 10.1145/2702123.2702255
Harper, R., Rodden, T., Rogers, Y., & Sellen, A. (2008). Being human:
HCI in 2020. Cambridge, UK: Microsoft.
Haux, R., Koch, S., Lovell, N. H., Marschollek, M., Nakashima, N., &
Wolf, K. H. (2016). Health-enabling and ambient assistive technologies: Past, present, future. Yearbook of Medical Informatics, 25(S 01),
S76–S91. doi:10.15265/IYS-2016-s008
Helbing, D., Frey, B. S., Gigerenzer, G., Hafen, E., Hagner, M.,
Hofstetter, Y., … Zwitter, A. (2019). Will democracy survive big data
and artificial intelligence? In D. Helbing (Ed.), Towards Digital
Enlightenment (pp. 73–98). Cham, Switzerland: Springer. doi:10.1007/
978-3-319-90869-4_7
Helle, P., Schamai, W., & Strobel, C. (2016, July). Testing of autonomous
systems–Challenges and current state-of-the-art. Proceedings of the
26th Annual INCOSE International Symposium (IS2016), 26(1),
571–584. doi:10.1002/j.2334-5837.2016.00179.x
Hendler, J., & Mulvehill, A. M. (2016). Social machines: The coming
collision of artificial intelligence, social networking, and humanity.
New York, NY: Apress. doi:10.1007/978-1-4842-1156-4
Hespanhol, L., & Dalsgaard, P. (2015). Social interaction design patterns for urban media architecture. Proceedings of the 13th
International Conference on Human-Computer Interaction
(INTERACT 2015) (pp. 596–613). Springer, Cham. doi: 10.1007/
978-3-319-22698-9_41
Hespanhol, L., & Tomitsch, M. (2015). Strategies for intuitive interaction
in public urban spaces. Interacting with Computers, 27(3), 311–326.
doi:10.1093/iwc/iwu051
Hexmoor, H., McLaughlan, B., & Tuli, G. (2009). Natural human role in
supervising complex control systems. Journal of Experimental &
Theoretical Artificial Intelligence, 21(1), 59–77. doi:10.1080/
09528130802386093
Hilty, L. M., & Aebischer, B. (2015). ICT for sustainability: An emerging
research field. In L. Hilty & B. Aebischer (Eds.), ICT Innovations for Sustainability. Advances in Intelligent Systems and Computing (Vol. 310, pp. 3–36). Cham, Switzerland: Springer. doi:10.1007/978-3-319-09228-7_1
Hochheiser, H., & Lazar, J. (2007). HCI and societal issues: A framework
for engagement. International Journal of Human-Computer Interaction, 23(3), 339–374. doi:10.1080/10447310701702717
Holmquist, L. E. (2017). Intelligence on tap: Artificial intelligence as
a new design material. Interactions, 24(4), 28–33. doi:10.1145/3085571
Hornecker, E., & Stifter, M. (2006). Learning from interactive museum
installations about interaction design for public settings. Proceedings
of the 18th Australia conference on Computer-Human Interaction:
Design: Activities, Artefacts and Environments (pp. 135–142).
New York, NY, USA, ACM. doi: 10.1145/1228175.1228201
Horvitz, E. (2017). AI, people, and society. Science, 357(6346), 7.
doi:10.1126/science.aao2466
Huang, H. M., Rauch, U., & Liaw, S. S. (2010). Investigating learners’
attitudes toward virtual reality learning environments: Based on
a constructivist approach. Computers & Education, 55(3),
1171–1182. doi:10.1016/j.compedu.2010.05.014
Husain, A. (2017). The sentient machine: The coming age of Artificial
Intelligence. New York, NY: Scribner.
Hwang, G. J., & Tsai, C. C. (2011). Research trends in mobile and
ubiquitous learning: A review of publications in selected journals
from 2001 to 2010. British Journal of Educational Technology, 42(4),
E65–E70. doi:10.1111/j.1467-8535.2011.01183.x
Jacobs, G., Caraça, J., Fiorini, R., Hoedl, E., Nagan, W. P., Reuter, T., &
Zucconi, A. (2018). The future of democracy: Challenges & prospects.
Cadmus, 3(4), 7–31. Retrieved from: http://www.cadmusjournal.org/
article/volume-3/issue-4/future-democracy-challenges-prospects
Jacucci, G., Spagnolli, A., Freeman, J., & Gamberini, L. (2014). Symbiotic
interaction: A critical definition and comparison to other
human-computer paradigms. Proceedings of the 3rd International Workshop on Symbiotic Interaction (Symbiotic 2014) (pp. 3–20). Springer, Cham. doi:
10.1007/978-3-319-13500-7_1
Janakiram, M. S. V. (2018, February 22). The rise of artificial intelligence
as a service in the public cloud. Retrieved from: https://www.forbes.
com/sites/janakirammsv/2018/02/22/the-rise-of-artificial-intelligence-as-a-service-in-the-public-cloud/#be799198ee93
Jang-Jaccard, J., & Nepal, S. (2014). A survey of emerging threats in
cybersecurity. Journal of Computer and System Sciences, 80(5),
973–993. doi:10.1016/j.jcss.2014.02.005
Janlert, L. E., & Stolterman, E. (2015). Faceless interaction: A conceptual examination of the notion of interface: Past, present, and future.
Human–Computer Interaction, 30(6), 507–539. doi:10.1080/
07370024.2014.944313
Janlert, L. E., & Stolterman, E. (2017). The meaning of interactivity—
Some proposals for definitions and measures. Human–Computer
Interaction, 32(3), 103–138. doi:10.1080/07370024.2016.1226139
Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., … Wang, Y. (2017).
Artificial intelligence in healthcare: Past, present and future. Stroke
and Vascular Neurology, 2(4), 230–243. doi:10.1136/svn-2017-000101
Johnson, D., Deterding, S., Kuhn, K. A., Staneva, A., Stoyanov, S., &
Hides, L. (2016). Gamification for health and wellbeing: A systematic
review of the literature. Internet Interventions, 6, 89–106. doi:10.1016/
j.invent.2016.10.002
Jones, M. N. (2016). Developing cognitive theory by mining large-scale
naturalistic data. In M. N. Jones (Ed.), Big data in cognitive science
(pp. 10–21). New York, NY: Psychology Press.
Jordan, J. B., & Vanderheiden, G. C. (2017, July). Towards accessible
automatically generated interfaces part 1: An input model that bridges
the needs of users and product functionality. Proceedings of the 3rd
International Conference on Human Aspects of IT for the Aged
Population (ITAP 2017) (pp. 129–146), Vancouver, Canada. Cham,
Switzerland: Springer. doi: 10.1007/978-3-319-58530-7_9
José, R., Rodrigues, H., & Otero, N. (2010). Ambient intelligence: Beyond the
inspiring vision. Journal of Universal Computer Science, 16(12), 1480–1499. doi:10.3217/jucs-016-12-1480
Kachouie, R., Sedighadeli, S., Khosla, R., & Chu, M. T. (2014). Socially
assistive robots in elderly care: A mixed-method systematic literature
review. International Journal of Human-Computer Interaction, 30(5),
369–393. doi:10.1080/10447318.2013.873278
Kalantari, M. (2017). Consumers’ adoption of wearable technologies:
Literature review, synthesis, and future research agenda.
International Journal of Technology Marketing, 12(3), 274–307.
doi:10.1504/IJTMKT.2017.089665
Kaplan, J. (2016). Artificial intelligence: Think again. Communications of
the ACM, 60(1), 36–38. doi:10.1145/2950039
Kidd, S. H., & Crompton, H. (2016). Augmented learning with augmented reality. In D. Churchill, J. Lu, T. Chiu, & B. Fox (Eds.), Mobile
Learning Design (pp. 97–108). Singapore: Springer. doi:10.1007/978-981-10-0027-0_6
Kilteni, K., Groten, R., & Slater, M. (2012). The sense of embodiment in
virtual reality. Presence: Teleoperators and Virtual Environments, 21
(4), 373–387. doi:10.1162/PRES_a_00124
Kizza, J. M. (2017). Ethical and social issues in the information age (6th ed.). Cham, Switzerland: Springer. doi:10.1007/978-3-319-70712-9
Klein, G., Shneiderman, B., Hoffman, R. R., & Ford, K. M. (2017). Why
expertise matters: A response to the challenges. IEEE Intelligent
Systems, 32(6), 67–73. doi:10.1109/MIS.2017.4531230
Kleinberger, T., Becker, M., Ras, E., Holzinger, A., & Müller, P. (2007,
July). Ambient intelligence in assisted living: Enable elderly people to
handle future interfaces. Proceedings of the 12th International
Conference on Universal Access in Human-Computer Interaction
(UAHCI 2007) (pp. 103–112). Springer, Berlin, Heidelberg. doi:
10.1007/978-3-540-73281-5_11
Kluge, E. H. W. (2011). Ethical and legal challenges for health telematics
in a global world: Telehealth and the technological imperative.
International Journal of Medical Informatics, 80(2), e1–e5.
doi:10.1016/j.ijmedinf.2010.10.002
Knowles, B., Bates, O., & Håkansson, M. (2018, April). This changes
sustainable HCI. Proceedings of the 2018 CHI Conference on Human
Factors in Computing Systems (CHI ‘18) (p. 471). New York, USA,
ACM. doi: 10.1145/3173574.3174045
Kokolakis, S. (2017). Privacy attitudes and privacy behaviour: A review of
current research on the privacy paradox phenomenon. Computers &
Security, 64, 122–134. doi:10.1016/j.cose.2015.07.002
Könings, B., Wiedersheim, B., & Weber, M. (2011). Privacy & trust in
ambient intelligence environments. In T. Heinroth & W. Minker
(Eds.), Next Generation Intelligent Environments: Ambient Adaptive
Systems (pp. 227–250). New York, NY: Springer Science & Business
Media.
Konomi, S., Shoji, K., & Ohno, W. (2013). Rapid development of civic
computing services: Opportunities and challenges. Proceedings of the
1st International Conference on Distributed, Ambient, and Pervasive
Interactions (DAPI 2013) (pp. 309–315). Berlin, Heidelberg: Springer.
doi: 10.1007/978-3-642-39351-8_34
Konomi, S. I., Hatano, K., Inaba, M., Oi, M., Okamoto, T., Okubo, F., …
Yamada, Y. (2018). Towards supporting multigenerational co-creation
and social activities: Extending learning analytics platforms and
beyond. Proceedings of the 6th International Conference on
Distributed, Ambient and Pervasive Interactions (DAPI 2018) (pp.
82–91). Springer, Cham. doi: 10.1007/978-3-319-91131-1_6
Kostkova, P. (2015). Grand challenges in digital health. Frontiers in
Public Health, 3, 134. doi:10.3389/fpubh.2015.00134
Lan, K., Wang, D. T., Fong, S., Liu, L. S., Wong, K. K., & Dey, N. (2018).
A survey of data mining and deep learning in bioinformatics. Journal of Medical Systems, 42, 139. doi:10.1007/s10916-018-1003-9
Lanier, J. (2017). Dawn of the new everything: Encounters with reality and
virtual reality. New York, NY: Henry Holt and Company.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral
participation. New York, NY: Cambridge University Press.
Lee, E. A. L., & Wong, K. W. (2008). A review of using virtual reality for
learning. In Z. Pan, A. D. Cheok, W. Müller, & A. El Rhalibi (Eds.),
Transactions on Edutainment I. Lecture Notes in Computer Science
(Vol. 5080, pp. 231–241). Berlin, Heidelberg: Springer. doi:10.1007/
978-3-540-69744-2_18
Licklider, J. C. R. (1960). Man-computer symbiosis. IRE Transactions on
Human Factors in Electronics, HFE-1(1), 4–11. doi:10.1109/
THFE2.1960.4503259
Lin, X., Hu, J., & Rauterberg, M. (2015). Review on interaction design for
social context in public spaces. Proceedings of the 7th International
Conference on Cross-Cultural Design (CCD 2015) (pp. 328–338).
Springer, Cham. doi: 10.1007/978-3-319-20907-4_30
Loveless, A. (2007). Literature review in creativity, new technologies and
learning. A NESTA Futurelab research report - report 4. https://telearn.archives-ouvertes.fr/hal-00190439
Lukyanenko, R., Parsons, J., & Wiersma, Y. F. (2016). Emerging problems of data quality in citizen science. Conservation Biology, 30(3), 447–449.
doi:10.1111/cobi.12706
Lupton, D. (2016). The quantified self. Cambridge, UK: Polity Press.
Lynch, R., & Farrington, C. (Eds.). (2017). Quantified lives and vital
data: Exploring health and technology through personal medical
devices. London, UK: Palgrave Macmillan. doi:10.1057/978-1-349-95235-9
Ma, W., Adesope, O. O., Nesbit, J. C., & Liu, Q. (2014). Intelligent
tutoring systems and learning outcomes: A meta-analysis. Journal of
Educational Psychology, 106(4), 901–918. doi:10.1037/a0037123
Marcus, A. (2000). International and intercultural user interfaces. In C.
Stephanidis (Ed.), User interfaces for all: Concepts, methods, and tools
(pp. 47–63). Mahwah, NJ: Lawrence Erlbaum Associates.
Marcus, A. (2006). Cross-cultural user-experience design. Proceedings of
the 4th International Conference on Diagrammatic Representation and
Inference (Diagrams 2006) (pp. 16–24). Berlin, Heidelberg: Springer.
doi: 10.1007/11783183_4
Marcus, A. (2015a). HCI and user-experience design: Fast-forward to the
past, present, and future. London, UK: Springer. doi:10.1007/978-1-4471-6744-0
Marcus, A. (2015b). Mobile persuasion design. London, UK: Springer.
doi:10.1007/978-1-4471-4324-6
Marcus, A. (2015c). The health machine: Combining information design/
visualization with persuasion design to change people’s nutrition and
exercise behavior. In A. Marcus (Ed.), Mobile Persuasion Design (pp.
35–77). London, UK: Springer. doi:10.1007/978-1-4471-4324-6_3
Marcus, A. (2015d). The happiness machine: Combining information
design/visualization with persuasion design to change behavior. In
A. Marcus (Ed.), Mobile Persuasion Design (pp. 539–604). London,
UK: Springer. doi:10.1007/978-1-4471-4324-6_10
Marcus, A., Kurosu, M., Ma, X., & Hashizume, A. (2017). Cuteness
engineering: Designing adorable products and services. Cham,
Switzerland: Springer. doi:10.1007/978-3-319-61961-3
Marenko, B., & Van Allen, P. (2016). Animistic design: How to reimagine digital interaction between the human and the nonhuman.
Digital Creativity, 27(1), 52–70. doi:10.1080/14626268.2016.1145127
Margetis, G., Antona, M., Ntoa, S., & Stephanidis, C. (2012). Towards
accessibility in ambient intelligence environments. Proceedings of the
3rd International Joint Conference on Ambient Intelligence (AmI 2012)
(pp. 328–337). Springer, Berlin, Heidelberg. doi: 10.1007/978-3-642-34898-3_24
Margetis, G., Ntoa, S., Antona, M., & Stephanidis, C. (2019). Augmenting
natural interaction with physical paper in ambient intelligence
environments. Multimedia Tools and Applications. Advance online publication. doi:10.1007/s11042-018-7088-9
McCallum, S. (2012). Gamification and serious games for personalized
health. In B. Blobel, P. Pharow, F. Sousa (Eds.), Proceedings of the 9th
International Conference on Wearable Micro and Nano Technologies
for Personalized Health (PHealth 2012) (pp. 85–96), Porto, Portugal.
Amsterdam, The Netherlands: IOS Press.
McMillan, D. (2017). Implicit interaction through machine learning:
Challenges in design, accountability, and privacy. Proceedings of the
4th International Conference on Internet Science (INCI 2017) (pp.
352–358). Springer, Cham. doi: 10.1007/978-3-319-70284-1_27
Meiselwitz, G., Wentz, B., & Lazar, J. (2010). Universal usability: Past,
present, and future. Foundations and Trends in Human–Computer
Interaction, 3(4), 213–333. doi:10.1561/1100000029
Mekler, E. D., & Hornbæk, K. (2016). Momentary pleasure or lasting
meaning? Distinguishing eudaimonic and hedonic user experiences.
Proceedings of the 2016 CHI Conference on Human Factors in
Computing Systems (CHI ‘16) (pp. 4509–4520). San Jose, CA: ACM.
doi: 10.1145/2858036.2858225
Mikulecký, P. (2012, April). Smart environments for smart learning.
Proceedings of the 9th International Scientific Conference on Distance
Learning in Applied Informatics (DIVAI 2012) (pp. 213–222). Štúrovo,
Slovakia. Retrieved from: https://conferences.ukf.sk/public/conferences/1/divai2012_conference_proceedings.pdf
Moallem, A. (Ed.). (2018). Human-Computer Interaction and cybersecurity handbook. Boca Raton, FL: CRC Press.
Mooneyham, B. W., & Schooler, J. W. (2013). The costs and benefits of
mind-wandering: A review. Canadian Journal of Experimental
Psychology/Revue Canadienne De Psychologie Expérimentale, 67(1),
11–18. doi:10.1037/a0031569
Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley
[from the field]. IEEE Robotics & Automation Magazine, 19(2),
98–100. doi:10.1109/MRA.2012.2192811
Moschetti, A., Fiorini, L., Aquilano, M., Cavallo, F., & Dario, P. (2014).
Preliminary findings of the AALIANCE2 ambient assisted living
roadmap. Proceedings of the 4th Italian Forum on Ambient assisted living
(pp. 335–342). Springer, Cham. doi: 10.1007/978-3-319-01119-6_34
Müller, J., Alt, F., Michelis, D., & Schmidt, A. (2010). Requirements and
design space for interactive public displays. Proceedings of the 18th
ACM international conference on Multimedia (MM ‘10) (pp.
1285–1294). New York, NY, USA: ACM. doi: 10.1145/1873951.1874203
Muñoz-Cristóbal, J. A., Rodríguez-Triana, M. J., Gallego-Lema, V.,
Arribas-Cubero, H. F., Asensio-Pérez, J. I., & Martínez-Monés, A.
(2018). Monitoring for awareness and reflection in ubiquitous learning environments. International Journal of Human–Computer
Interaction, 34(2), 146–165. doi:10.1080/10447318.2017.1331536
Naismith, L., Lonsdale, P., Vavoula, G. N., & Sharples, M. (2004).
Literature review in mobile technologies and learning. Futurelab
Series, Report 11. ISBN: 0-9548594-1-3. Retrieved from: http://hdl.
handle.net/2381/8132
Ng, W. (2015). New digital technology in education (pp. 3–23). Cham,
Switzerland: Springer. doi:10.1007/978-3-319-05822-1_1
Norman, D. (1998). The invisible computer: Why good products can fail,
the personal computer is so complex, and information appliances are
the solution. Cambridge, MA: MIT press.
Norman, D. (1999). Affordance, conventions and design. Interactions, 6
(3), 38–43. doi:10.1145/301153.301168
Norman, D. (2007). The design of future things. New York, NY: Basic
Books.
Norman, D. (2014). Things that make us smart: Defending human attributes in the age of the machine. New York, NY: Diversion Books.
Norton, J., Raturi, A., Nardi, B., Prost, S., McDonald, S., Pargman, D., …
Dombrowski, L. (2017). A grand challenge for HCI: Food+
sustainability. Interactions, 24(6), 50–55. doi:10.1145/3137095
Ntoa, S., Antona, M., & Stephanidis, C. (2017). Towards technology
acceptance assessment in Ambient Intelligence environments.
Proceedings of the Seventh International Conference on Ambient
Computing, Applications, Services and Technologies (AMBIENT 2017)
(pp. 38–47). Barcelona, Spain. https://www.thinkmind.org/index.php?view=article&articleid=ambient_2017_3_20_40022
Ntoa, S., Margetis, G., Antona, M., & Stephanidis, C. (2019). UXAmI
Observer: An automated User Experience evaluation tool for Ambient
Intelligence environments. Proceedings of the 2018 Intelligent Systems
Conference (IntelliSys 2018) (pp. 1350–1370). Springer, Cham. doi:
10.1007/978-3-030-01054-6_94
O’Leary, D. E. (2013). Artificial intelligence and big data. IEEE Intelligent
Systems, 28(2), 96–99. doi:10.1109/MIS.2013.39
Obrist, M., Velasco, C., Vi, C., Ranasinghe, N., Israr, A., Cheok, A., …
Gopalakrishnakone, P. (2016). Sensing the future of HCI: Touch,
taste, and smell user interfaces. Interactions, 23(5), 40–49.
doi:10.1145/2973568
Oh, J., & Lee, U. (2015, January). Exploring UX issues in quantified self
technologies. Proceedings of the 8th International Conference on
Mobile Computing and Ubiquitous Networking (ICMU 2015) (pp.
53–59). IEEE. doi: 10.1109/ICMU.2015.7061028
Orji, R., & Moffatt, K. (2018). Persuasive technology for health and
wellness: State-of-the-art and emerging trends. Health Informatics
Journal, 24(1), 66–91. doi:10.1177/1460458216650979
Parmaxi, A., Papadamou, K., Sirivianos, M., & Stamatelatos, M. (2017).
E-safety in Web 2.0 learning environments: A research synthesis and
implications for researchers and practitioners. Proceedings of the 4th
International Conference on Learning and Collaboration Technologies
(LCT 2017) (pp. 249–261). Springer, Cham. doi: 10.1007/978-3-319-58509-3_20
Pegrum, M. (2016). Future directions in mobile learning. In D. Churchill,
J. Lu, T. Chiu, & B. Fox (Eds.), Mobile Learning Design (pp. 413–431).
Singapore: Springer. doi:10.1007/978-981-10-0027-0_24
Persson, H., Åhman, H., Yngling, A. A., & Gulliksen, J. (2015).
Universal design, inclusive design, accessible design, design for all:
Different concepts—One goal? On the concept of accessibility—
Historical, methodological and philosophical aspects. Universal
Access in the Information Society, 14(4), 505–526. doi:10.1007/
s10209-014-0358-z
Pfeil, U., Arjan, R., & Zaphiris, P. (2009). Age differences in online social
networking–A study of user profiles and the social capital divide
among teenagers and older users in MySpace. Computers in Human
Behavior, 25(3), 643–654. doi:10.1016/j.chb.2008.08.015
Pieper, M., Antona, M., & Cortés, U. (2011). Ambient assisted living.
Ercim News, 87, 18–19.
Pimmer, C., Mateescu, M., & Gröhbiel, U. (2016). Mobile and ubiquitous
learning in higher education settings. A systematic review of empirical
studies. Computers in Human Behavior, 63, 490–501. doi:10.1016/j.
chb.2016.05.057
Piwek, L., Ellis, D. A., Andrews, S., & Joinson, A. (2016). The rise of
consumer health wearables: Promises and barriers. PLoS Medicine, 13
(2), e1001953. doi:10.1371/journal.pmed.1001953
Pohlmeyer, A. E. (2013). Positive design: New challenges, opportunities,
and responsibilities for design. Proceedings of the 2nd International
Conference on Design, User Experience, and Usability (DUXU 2013)
(pp. 540–547). Springer, Berlin, Heidelberg. doi: 10.1007/978-3-642-39238-2_59
Poppe, R., Rienks, R., & van Dijk, B. (2007). Evaluating the future of
HCI: Challenges for the evaluation of emerging applications. In
T. S. Huang, A. Nijholt, M. Pantic, & A. Pentland (Eds.), Artificial Intelligence for Human Computing (pp. 234–250). Berlin, Heidelberg: Springer. doi:10.1007/978-3-540-72348-6_12
Preece, J. (2016). Citizen science: New research challenges for human–computer interaction. International Journal of Human-Computer
Interaction, 32(8), 585–612. doi:10.1080/10447318.2016.1194153
Pynadath, D. V., Barnes, M. J., Wang, N., & Chen, J. Y. C. (2018).
Transparency communication for Machine Learning in human-automation interaction. In J. Zhou & F. Chen (Eds.), Human and
Machine Learning. Human–Computer Interaction Series (pp. 75–90).
Cham, Switzerland: Springer. doi:10.1007/978-3-319-90403-0_5
Rashidi, P., & Mihailidis, A. (2013). A survey on ambient-assisted living
tools for older adults. IEEE Journal of Biomedical and Health
Informatics, 17(3), 579–590. doi:10.1109/JBHI.2012.2234129
Rauschnabel, P. A., Hein, D. W., He, J., Ro, Y. K., Rawashdeh, S., &
Krulikowski, B. (2016). Fashion or technology? A fashnology perspective on the perception and adoption of augmented reality smart
glasses. I-Com, 15(2), 179–194. doi:10.1515/icom-2016-0021
Raybourn, E. M., & Bos, N. (2005). Design and evaluation challenges of
serious games. CHI’05 extended abstracts on Human factors in computing systems (pp. 2049–2050). New York, USA: ACM. doi: 10.1145/
1056808.1057094
Reeves, S. (2011). Designing Interfaces in Public Settings (pp. 9–27).
London, UK: Springer. doi:10.1007/978-0-85729-265-0_2
Richards, M. N., & King, H. J. (2016). Big data and the future of privacy.
In F. X. Olleros & M. Zhegu (Eds.), Research handbook on digital
transformations (pp. 272–290). Cheltenham, UK: Edward Elgar
Publishing.
Riek, L. D. (2017). Healthcare robotics. Communications of the ACM, 60
(11), 68–78. doi:10.1145/3127874
Robinson, K. (2011). Out of our minds: Learning to be creative. Oxford,
UK: Capstone Publishing Ltd.
Robinson, L., Cotten, S. R., Ono, H., Quan-Haase, A., Mesch, G.,
Chen, W., … Stern, M. J. (2015). Digital inequalities and why they
matter. Information, Communication & Society, 18(5), 569–582.
doi:10.1080/1369118X.2015.1012532
Romero, C., & Ventura, S. (2010). Educational data mining: A review of
the state of the art. IEEE Transactions on Systems, Man, and
Cybernetics, Part C (Applications and Reviews), 40(6), 601–618.
doi:10.1109/TSMCC.2010.2053532
Rouse, W. B., & McBride, D. (2019). A systems approach to assistive
technologies for disabled and older adults. The Bridge, 49(1), 32–38.
Retrieved from: https://www.nae.edu/205212/Spring-Bridge-on-Technologies-for-Aging
Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for
robust and beneficial artificial intelligence. AI Magazine, 36(4),
105–114. doi:10.1609/aimag.v36i4.2577
Ryan, M. L. (2015). Narrative as virtual reality 2: Revisiting immersion
and interactivity in literature and electronic media (Vol. 2). Baltimore,
MD: JHU Press.
Sakamoto, T., & Takeuchi, Y. (2014). Stage of subconscious interaction in embodied interaction. Proceedings of the second international conference on Human-agent interaction (HAI ‘14) (pp.
391–396). New York, NY, USA, ACM. doi: 10.1145/
2658861.2658876
Samek, W., Wiegand, T., & Müller, K. R. (2017). Explainable artificial
intelligence: Understanding, visualizing and interpreting deep learning models. ITU Journal: ICT Discoveries, Special Issue (1), 39–48.
https://www.itu.int/en/journal/001/Pages/05.aspx
Santoni de Sio, F., & van den Hoven, J. (2018). Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5, 15. doi:10.3389/frobt.2018.00015
Sarwar, M., & Soomro, T. R. (2013). Impact of smartphone’s on society.
European Journal of Scientific Research, 98(2), 216–226.
Schmidt, A., Langheinrich, M., & Kersting, K. (2011). Perception beyond
the Here and Now. IEEE Computer, 44(2), 86–88. doi:10.1109/
MC.2011.54
Schneider, O., MacLean, K., Swindells, C., & Booth, K. (2017). Haptic
experience design: What hapticians do and where they need help.
International Journal of Human-Computer Studies, 107, 5–21.
doi:10.1016/j.ijhcs.2017.04.004
Scholtz, J., & Consolvo, S. (2004). Toward a framework for evaluating
ubiquitous computing applications. IEEE Pervasive Computing, 3(2),
82–88. doi:10.1109/MPRV.2004.1316826
Scoble, R., & Israel, S. (2017). The fourth transformation: How augmented
reality and artificial intelligence change everything. New York, NY:
Patrick Brewster Press.
Seemiller, C., & Grace, M. (2016). Generation Z goes to college. San
Francisco, CA: Jossey-Bass.
Seymour, S., & Beloff, L. (2008). Fashionable technology–The next generation of wearables. In C. Sommerer, L. C. Jain, & L. Mignonneau
(Eds.), The Art and Science of Interface and Interaction Design. Studies
in Computational Intelligence (Vol. 141, pp. 131–140). Berlin,
Heidelberg: Springer.
Sharma, R. (2018, April 12). Understanding Artificial Intelligence as a
service. Retrieved from: https://hackernoon.com/understanding-artificial-intelligence-as-a-service-aiaas-780f2e3f663c
Sharon, T. (2017). Self-tracking for health and the quantified self:
Re-articulating autonomy, solidarity, and authenticity in an age of
personalized healthcare. Philosophy & Technology, 30(1), 93–121.
doi:10.1007/s13347-016-0215-5
Sherman, W. R., & Craig, A. B. (2018). Understanding virtual reality:
Interface, application, and design. Cambridge, MA: Morgan
Kaufmann.
Shneiderman, B. (2007). Creativity support tools: Accelerating discovery
and innovation. Communications of the ACM, 50(12), 20–32.
doi:10.1145/1323688.1323689
Shneiderman, B., Plaisant, C., Cohen, M., Jacobs, S., Elmqvist, N., &
Diakopoulos, N. (2016). Grand challenges for HCI researchers.
Interactions, 23(5), 24–25. doi:10.1145/2977645
Siau, K. (2018). Education in the age of artificial intelligence: How will technology shape learning? The Global Analyst, 7(3), 22–24.
Siau, K., Hilgers, M., Chen, L., Liu, S., Nah, F., Hall, R., & Flachsbart, B.
(2018). FinTech empowerment: Data science, Artificial Intelligence, and
Machine Learning. Cutter Business Technology Journal, 31(11/12), 12–18.
Siau, K., & Wang, W. (2018). Building trust in Artificial Intelligence,
Machine Learning, and Robotics. Cutter Business Technology Journal,
31(2), 47–53.
Sicari, S., Rizzardi, A., Grieco, L. A., & Coen-Porisini, A. (2015). Security,
privacy and trust in Internet of Things: The road ahead. Computer
Networks, 76, 146–164. doi:10.1016/j.comnet.2014.11.008
Siemens, G., & Baker, R. S. J. d. (2012). Learning analytics and educational data mining: Towards communication and collaboration.
Proceedings of the 2nd international conference on learning analytics
and knowledge (LAK ‘12) (pp. 252–254). New York, USA: ACM. doi:
10.1145/2330601.2330661
Silberman, M., Nathan, L., Knowles, B., Bendor, R., Clear, A.,
Håkansson, M., … Mankoff, J. (2014). Next steps for sustainable
HCI. Interactions, 21(5), 66–69. doi:10.1145/2651820
Simonite, T. (2018, August 2). Should data scientists adhere to
a Hippocratic oath? Retrieved from: https://www.wired.com/story/
should-data-scientists-adhere-to-a-hippocratic-oath/
Smallwood, J., & Andrews-Hanna, J. (2013). Not all minds that wander are
lost: The importance of a balanced perspective on the mind-wandering
state. Frontiers in Psychology, 4, 441. doi:10.3389/fpsyg.2013.00441
Smirek, L., Zimmermann, G., & Ziegler, D. (2014). Towards universally usable smart homes - How can MyUI, URC and openHAB contribute to an adaptive user interface platform? Proceedings of the 7th International Conference on Advances in Human-oriented and Personalized Mechanisms, Technologies, and Services (CENTRIC 2014) (pp. 29–38). Nice, France: IARIA.
Spector, J. M., Merrill, M. D., Elen, J., & Bishop, M. J. (Eds.). (2014).
Handbook of research on educational communications and technology
(pp. 413–424). New York, NY: Springer.
Spence, C., Obrist, M., Velasco, C., & Ranasinghe, N. (2017). Digitizing
the chemical senses: Possibilities & pitfalls. International Journal of
Human-Computer Studies, 107, 62–74. doi:10.1016/j.ijhcs.2017.06.003
Steinicke, F. (2016). Being really virtual. Cham, Switzerland: Springer.
doi:10.1007/978-3-319-43078-2
Stephanidis, C. (2012). Human factors in ambient intelligence environments. In G. Salvendy (Ed.), Handbook of Human Factors and
Ergonomics (4th ed.) (pp. 1354–1373). Hoboken, NJ: John Wiley and
Sons.
Stephanidis, C., Antona, M., & Grammenos, D. (2007). Universal access
issues in an ambient intelligence research facility. Proceedings of the
4th International Conference on Universal Access in Human-Computer
Interaction (UAHCI 2007) (pp. 208–217). Beijing, P.R. China:
Springer. doi: 10.1007/978-3-540-73281-5_22
Stephanidis, C., & Emiliani, P. L. (1999). Connecting to the information
society: A European perspective. Technology and Disability, 10(1), 21–44.
Stephanidis, C., Salvendy, G., Akoumianakis, D., Bevan, N., Brewer, J.,
Emiliani, P. L., … Ziegler, J. (1998). Toward an Information Society for
all: An international R&D agenda. International Journal of Human-Computer Interaction, 10(2), 107–134. doi:10.1207/s15327590ijhc1002_2
Still, J. D. (2016). Cybersecurity needs you! Interactions, 23(3), 54–58.
doi:10.1145/2899383
Stoyanovich, J., Abiteboul, S., & Miklau, G. (2016). Data, responsibly:
Fairness, neutrality and transparency in data analysis. Proceedings of
the 19th International Conference on Extending Database Technology
(EDBT 2016) (pp. 718–719). Bordeaux, France. doi: 10.5441/002/
edbt.2016.103
Streitz, N. (2001). Augmented reality and the disappearing computer. In
M. Smith, G. Salvendy, D. Harris, & R. Koubek (Eds.), Usability
Evaluation and Interface Design: Cognitive engineering, intelligent
agents and virtual reality (pp. 738–742). Mahwah, NJ: Lawrence
Erlbaum.
Streitz, N. (2007). From human–Computer interaction to human–
Environment interaction: Ambient intelligence and the disappearing
computer. Proceedings of the 9th ERCIM Workshop on User Interfaces
for All (pp. 3–13). Springer, Berlin, Heidelberg. doi: 10.1007/978-3-540-71025-7_1
Streitz, N., Prante, T., Röcker, C., van Alphen, D., Stenzel, R.,
Magerkurth, C., … Plewe, D. (2007). Smart artefacts as affordances for
awareness in distributed teams. In N. Streitz, A. Kameas, &
I. Mavrommati (Eds.), The Disappearing Computer, LNCS vol 4500 (pp.
3–29). Berlin, Heidelberg: Springer. doi:10.1007/978-3-540-72727-9_1
Streitz, N. (2011). Smart cities, ambient intelligence and universal access.
Proceedings of the 6th International Conference on Universal Access in
Human-Computer Interaction (UAHCI 2011) (pp. 425–432). Springer,
Berlin, Heidelberg. doi: 10.1007/978-3-642-21666-4_47
Streitz, N. (2017). Reconciling humans and technology: The role of
ambient intelligence. Proceedings of the 13th European Conference on Ambient Intelligence (AmI 2017) (pp. 1–16). Springer, Cham. doi:
10.1007/978-3-319-56997-0_1
Streitz, N. (2018). Beyond ‘smart-only’ cities: Redefining the ‘smart-everything’ paradigm. Journal of Ambient Intelligence and
Humanized Computing, 1–22. doi:10.1007/s12652-018-0824-1
Streitz, N., Charitos, D., Kaptein, M., & Böhlen, M. (2019). Grand
challenges for ambient intelligence and implications for design contexts and smart societies. Journal of Ambient Intelligence and Smart Environments (Tenth Anniversary Issue), 11(1), 87–107. doi:10.3233/AIS-180507
Streitz, N., Geißler, J., & Holmer, T. (1998). Roomware for cooperative
buildings: Integrated design of architectural spaces and information
spaces. Proceedings of the 1st International Workshop on Cooperative
Buildings (CoBuild’98) (pp. 4–21). Springer, Heidelberg. doi: 10.1007/
3-540-69706-3_3
Suciu, G., Vulpe, A., Craciunescu, R., Butca, C., & Suciu, V. (2015). Big
data fusion for eHealth and ambient assisted living cloud applications.
Proceedings of the 2015 IEEE International Black Sea Conference on
Communications and Networking (BlackSeaCom) (pp. 102-106).
Constanta, Romania. doi: 10.1109/BlackSeaCom.2015.7185095
Sun, N., Rau, P. L. P., Li, Y., Owen, T., & Thimbleby, H. (2016). Design
and evaluation of a mobile phone-based health intervention for
patients with hypertensive condition. Computers in Human
Behavior, 63, 98–105. doi:10.1016/j.chb.2016.05.001
Susi, T., Johannesson, M., & Backlund, P. (2007). Serious games: An
overview. Retrieved from: http://www.diva-portal.org/smash/record.jsf?pid=diva2%3A2416&dswid=-9292
Sutcliffe, A. G., Poullis, C., Gregoriades, A., Katsouri, I., Tzanavari, A., &
Herakleous, K. (2019). Reflecting on the design process for virtual
reality applications. International Journal of Human–Computer
Interaction, 35(2), 168–179. doi:10.1080/10447318.2018.1443898
Tandler, P., Streitz, N., & Prante, T. (2002). Roomware - moving toward
Ubiquitous Computers. IEEE Micro, 22(6), 36–47. doi:10.1109/
MM.2002.1134342
Tegmark, M. (2017). Life 3.0 – Being human in the age of Artificial
Intelligence. New York, NY: Knopf.
Teli, M., Bordin, S., Blanco, M. M., Orabona, G., & De Angeli, A. (2015).
Public design of digital commons in urban places: A case study.
International Journal of Human-Computer Studies, 81, 17–30.
doi:10.1016/j.ijhcs.2015.02.003
Tene, O., & Polonetsky, J. (2013). Big data for all: Privacy and user
control in the age of analytics. Northwestern Journal of Technology
and Intellectual Property, 11(5), 239–273. Retrieved from: https://scholarlycommons.law.northwestern.edu/njtip/vol11/iss5/1
The IEEE Global Initiative on Ethics of Autonomous and Intelligent
Systems. (2019). Ethically aligned design: A vision for prioritizing
human well-being with autonomous and intelligent systems. First
Edition. IEEE. Retrieved from: https://standards.ieee.org/industry-connections/ec/autonomous-systems.html
Tragos, E. Z., Fragkiadakis, A., Kazmi, A., & Serrano, M. (2018). Trusted
IoT in ambient assisted living scenarios. In A. Moallem (Ed.), Human-Computer Interaction and Cybersecurity Handbook (pp. 191–208).
Boca Raton, FL: CRC Press.
Treviranus, J., Clark, C., Mitchell, J., & Vanderheiden, G. C. (2014).
Prosperity4All–Designing a multi-stakeholder network for economic
inclusion. Proceedings of the 8th International Conference on Universal
Access in Human-Computer Interaction (UAHCI 2014) (pp. 453–461).
Springer, Cham. doi: 10.1007/978-3-319-07509-9_43
Van Deursen, A. J., & Helsper, E. J. (2015). The third-level digital divide:
Who benefits most from being online? In L. Robinson, S. R. Cotten, J.
Schulz, T. M. Hale, & A. Williams (Eds.), Communication and information technologies annual (pp. 29–52). Emerald Group Publishing
Limited. doi:10.1108/s2050-206020150000010002
van Dijk, J. (2012). The evolution of the digital divide. In J. Bus (Ed.),
Digital Enlightenment Yearbook 2012 (pp. 57–75). Amsterdam, The
Netherlands: IOS Press. doi:10.3233/978-1-61499-057-4-57
Van Krevelen, D. W. F., & Poelman, R. (2010). A survey of augmented
reality technologies, applications and limitations. International
Journal of Virtual Reality, 9(2), 1–20.
Vanderheiden, G. (1990). Thirty-something million: Should they be
exceptions?. Human Factors, 32(4), 383–396. doi:10.1177/
001872089003200402
Vanderheiden, G. (2000). Fundamental principles and priority setting for
universal usability. Proceedings on the 2000 conference on Universal
Usability (CUU ‘00) (pp. 32–37). New York, NY, USA, ACM. doi:
10.1145/355460.355469
Vassli, L. T., & Farshchian, B. A. (2018). Acceptance of health-related
ICT among elderly people living in the community: A systematic
review of qualitative evidence. International Journal of Human–Computer Interaction, 34(2), 99–116. doi:10.1080/10447318.2017.1328024
Verbeek, P. P. (2015). Beyond interaction: A short introduction to
mediation theory. Interactions, 22(3), 26–31. doi:10.1145/2751314
Versteeg, M., van den Hoven, E., & Hummels, C. (2016, February).
Interactive jewellery: A design exploration. Proceedings of the Tenth
International Conference on Tangible, Embedded, and Embodied
Interaction (TEI’16) (pp. 44–52). New York, NY, USA, ACM. doi:
10.1145/2839462.2839504
Vimarlund, V., & Wass, S. (2014). Big data, smart homes and ambient
assisted living. Yearbook of Medical Informatics, 9(1), 143–149.
doi:10.15265/IY-2014-0011
Vitak, J., Shilton, K., & Ashktorab, Z. (2016). Beyond the Belmont principles: Ethical challenges, practices, and beliefs in the online data research
community. Proceedings of the 19th ACM Conference on ComputerSupported Cooperative Work & Social Computing (CSCW ‘16) (pp.
941–953). New York, NY, USA, ACM. doi: 10.1145/2818048.2820078
Vlachokyriakos, V., Crivellaro, C., Le Dantec, C. A., Gordon, E., Wright, P.,
& Olivier, P. (2016). Digital civics: Citizen empowerment with and
through technology. Proceedings of the 2016 CHI conference extended
abstracts on human factors in computing systems (CHI EA ‘16) (pp.
1096–1099). New York, NY, USA, ACM. doi: 10.1145/2851581.2886436
Vogel, D., & Balakrishnan, R. (2004). Interactive public ambient displays:
Transitioning from implicit to explicit, public to personal, interaction
with multiple users. Proceedings of the 17th annual ACM symposium
on User interface software and technology (UIST ‘04) (pp. 137–146).
New York, NY, USA, ACM. doi: 10.1145/1029632.1029656
Wadhwa, K., & Wright, D. (2013). eHealth: Frameworks for assessing
ethical impacts. In C. George, D. Whitehouse, & P. Duquenoy (Eds.),
eHealth: Legal, Ethical and Governance Challenges (pp. 183–210).
Heidelberg, Germany: Springer. doi:10.1007/978-3-642-22474-4_8
Wang, K., & Nickerson, J. V. (2017). A literature review on individual
creativity support systems. Computers in Human Behavior, 74,
139–151. doi:10.1016/j.chb.2017.04.035
Wang, Y. (2014). From information revolution to intelligence revolution:
Big data science vs. intelligence science. Proceedings of the 13th IEEE
International Conference on Cognitive Informatics & Cognitive
Computing (ICCI* CC 2014) (pp. 3–5). London, UK: IEEE. doi:
10.1109/ICCI-CC.2014.6921432
Waterman, A. S., Schwartz, S. J., Zamboanga, B. L., Ravert, R. D.,
Williams, M. K., Bede Agocha, V., … Brent Donnellan, M. (2010).
The questionnaire for eudaimonic well-being: Psychometric properties, demographic comparisons, and evidence of validity. The Journal
of Positive Psychology, 5(1), 41–61. doi:10.1080/17439760903435208
Waycott, J., Munteanu, C., Davis, H., Thieme, A., Moncur, W.,
McNaney, R., … Branham, S. (2016). Ethical encounters in
human-computer interaction. Proceedings of the 2016 CHI
Conference Extended Abstracts on Human Factors in Computing
Systems (CHI EA ‘16) (pp. 3387–3394). New York, NY, USA, ACM.
doi: 10.1145/2851581.2856498
Weiser, M. (1991). The computer for the 21st century. Scientific
American, 265(3), 94–105. doi:10.1038/scientificamerican0991-94
Weld, D. S., & Bansal, G. (2019, to appear). The challenge of crafting intelligible intelligence. Communications of the ACM. Retrieved from:
https://arxiv.org/abs/1803.04263
Whitehead, L., & Seaton, P. (2016). The effectiveness of self-management
mobile phone and tablet apps in long-term condition management:
A systematic review. Journal of Medical Internet Research, 18(5), e97.
doi:10.2196/jmir.4883
Wiles, J. L., Leibing, A., Guberman, N., Reeve, J., & Allen, R. E. (2012).
The meaning of “aging in place” to older people. The Gerontologist, 52
(3), 357–366. doi:10.1093/geront/gnr098
Williams, A., Kennedy, S., Philipp, F., & Whiteman, G. (2017). Systems
thinking: A review of sustainability management research. Journal of
Cleaner Production, 148, 866–881. doi:10.1016/j.jclepro.2017.02.002
Williamson, J. R., & Sundén, D. (2016). Deep cover HCI: The ethics of
covert research. Interactions, 23(3), 45–49. doi:10.1145/2897941
Wilson, C., Draper, S., Brereton, M., & Johnson, D. (2017). Towards
thriving: Extending computerised cognitive behavioural therapy.
Proceedings of the 29th Australian Conference on Computer-Human
Interaction (OZCHI ‘17) (pp. 285–295). New York, NY, USA, ACM. doi: 10.1145/3152771.3152802
Wilson, K. A., Bedwell, W. L., Lazzara, E. H., Salas, E., Burke, C. S.,
Estock, J. L., … Conkey, C. (2009). Relationships between game
attributes and learning outcomes: Review and research proposals.
Simulation & Gaming, 40(2), 217–266. doi:10.1177/1046878108321866
Wouters, N., Downs, J., Harrop, M., Cox, T., Oliveira, E., Webber, S., …
Vande Moere, A. (2016). Uncovering the honeypot effect: How audiences engage with public interactive systems. Proceedings of the 2016
ACM Conference on Designing Interactive Systems (DIS ‘16) (pp.
5–16). New York, NY, USA, ACM. doi: 10.1145/2901790.2901796
Yilmaz, R. M., & Goktas, Y. (2017). Using augmented reality technology
in storytelling activities: Examining elementary students’ narrative
skill and creativity. Virtual Reality, 21(2), 75–89. doi:10.1007/s10055-016-0300-1
Young, M. F., Slota, S., Cutter, A. B., Jalette, G., Mullin, G., Lai, B., …
Yukhymenko, M. (2012). Our princess is in another castle: A review
of trends in serious gaming for education. Review of Educational
Research, 82(1), 61–89. doi:10.3102/0034654312436980
Yu, H., Shen, Z., Miao, C., Leung, C., Lesser, V. R., & Yang, Q. (2018).
Building ethics into Artificial Intelligence. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI
2018) (pp. 5527–5533). Stockholm, Sweden: AAAI Press. doi:
10.24963/ijcai.2018/779
Yuen, S. C. Y., Yaoyuneyong, G., & Johnson, E. (2011). Augmented
reality: An overview and five directions for AR in education. Journal
of Educational Technology Development and Exchange (JETDE), 4(1),
Article 11, 119–140. doi:10.18785/jetde.0401.10
Ziegeldorf, J. H., Morchon, O. G., & Wehrle, K. (2014). Privacy in the
Internet of Things: Threats and challenges. Security and
Communication Networks, 7(12), 2728–2742. doi:10.1002/sec.795