Augmented Reality and Mobile Learning: The State of The Art: October 2012
ABSTRACT
In this paper, we examine the state of the art in augmented reality (AR) for mobile learning. Previous work in the field of
mobile learning has included AR as a component of a wider toolkit for mobile learning but, to date, little has been done
that discusses the phenomenon in detail or that examines its potential for learning, in a balanced fashion that identifies
both positive and negative aspects of AR. We seek to provide a working definition of AR and examine how it is
embedded within situated learning in outdoor settings. We also attempt to classify AR according to several key aspects
(device/technology; mode of interaction; type of media involved; personal or shared experiences; if the experience is
portable or static; and the learning activities/outcomes). We discuss the technical and pedagogical challenges presented
by AR before looking at ways in which AR can be used for learning. Lastly, the paper looks ahead to what AR
technologies may be on the horizon in the near future.
Author Keywords
Augmented reality, mobile learning, education, situated learning, technology enhanced reality, state of the art, taxonomy.
INTRODUCTION
Augmented reality (AR) is a growing phenomenon on mobile devices, reflecting the rise of mobile computing in
recent years and the near-ubiquity of Internet access across the world. The NMC Horizon Report for 2011 named
augmented reality as the highest-rated topic by its Advisory Board, with widespread time-to-adoption being only two to
three years (Johnson et al., 2011). What was once seen by many as being a mere gimmick with few applications outside
of training, marketing/PR or sport and entertainment, is now becoming more mainstream with real opportunities for it to
be used for educational purposes.
This paper examines the state of the art in the application of augmented reality for education, with a particular focus on
mobile learning that occurs in specific locations and outdoor settings. One of the most compelling affordances of AR is
its resonance with immediate surroundings and the way in which information can be overlaid on top of these
surroundings, enabling us not only to learn about our environment but also giving us the tools to annotate it.
The paper begins with a definition of what we mean by augmented reality, before then exploring the pedagogical theories
that underpin AR. We offer a framework that can be used to classify AR in mobile learning, before examining the
criticisms and limitations of AR. Lastly, we suggest how AR can be embedded within mobile learning. We make two
important contributions to the field: firstly, a discussion of the underlying pedagogies surrounding the use of AR; and
secondly the taxonomy that seeks to classify the different aspects of mobile AR for learning in outdoor situations.
middle, where the two extremes meet, and is considered a blend of both the virtual and the real:
Recent advances in GPS and network technologies have enabled location accuracy to within 5-10 metres for single-point
receivers (Ordnance Survey, 2012); with carrier positioning accuracy (or ‘survey grade GPS’), this can be improved to
less than 1 centimetre. Larger, thinner and lighter touch-sensitive screens and advances in cameras and sensors also
increase the potential for creating and viewing information anytime and anywhere. Combining these technologies has led
to the recent emergence of mobile applications that utilise location sensing to provide users with relevant geo-referenced
information. Smartphone apps, such as Wikitude and Layar, orient users to information about their surrounding area (e.g.
approximate distance and direction to a point of interest). What these systems need now are appropriate visualisations (or
other methods of feedback), representations and guidance to provide or enhance situated meaning-making.
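As a rough illustration (ours, not code from any of the systems discussed here), the core geo-referencing step such apps perform, estimating a user's approximate distance and direction to a point of interest from two latitude/longitude pairs, can be sketched as follows; the function name and constants are our own assumptions:

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees,
    clockwise from north) from a user's position to a point of interest."""
    R = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    # Haversine formula for the great-circle distance
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    distance = 2 * R * math.asin(math.sqrt(a))
    # Forward azimuth, normalised to the range 0-360 degrees
    y = math.sin(dlam) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return distance, bearing

# One degree of latitude is roughly 111 km due north of the user
d, b = distance_and_bearing(52.0, 0.0, 53.0, 0.0)
```

Any displayed distance would, of course, only be as reliable as the 5-10 metre receiver accuracy discussed above.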
Table 1. A taxonomy of mobile AR learning projects (in date order).

Zapp (Meek et al., 2012)
- Device or technology used: Smartphone
- Mode of interaction: Passive/assimilative (information layer) + active/exploratory (data logging/querying tool)
- Method of sensory feedback to the user: Visual overlay: label/text
- Personal or shared experience: Both personal and shared (small groups)
- Fixed/static or portable experience: Portable
- Learning activities or outcomes: Interpreting the geological features of a rural landscape through situated inquiry and collaboration

Out There, In Here (Adams et al., 2011)
- Device or technology used: Laptops, tablet devices, smartphones
- Mode of interaction: Passive/assimilative (information layer) + exploratory
- Method of sensory feedback to the user: Mixed: visual; auditory; text
- Personal or shared experience: Shared (small groups)
- Fixed/static or portable experience: Portable
- Learning activities or outcomes: Collaborative inquiry-based learning, to enable sharing of data; development of hypotheses; access to information/resources etc. between in-field students and those indoors in a lab

CONTSENS (Cook, 2010)
- Device or technology used: PDA (Personal Digital Assistant); mobile phones
- Mode of interaction: Constructionist (AR-based modelling)
- Method of sensory feedback to the user: Mixed: visual (3D wireframe model); video
- Personal or shared experience: Both personal and shared between 2 users
- Fixed/static or portable experience: Portable
- Learning activities or outcomes: Archaeological and architectural surveying of the ruins of an abbey; specifically, providing different visual perspectives on the mobile devices

Augmenting the Visitor Experience (Priestnall et al., 2010)
- Device or technology used: PDAs; mobile phones; tablet devices; head-up display (HUD)
- Mode of interaction: Passive/assimilative (information layer) + active/exploratory
- Method of sensory feedback to the user: Mixed: visual, audio, text, video
- Personal or shared experience: Both personal and shared between 2-3 users
- Fixed/static or portable experience: Both static and portable
- Learning activities or outcomes: Comparing different technologies/techniques to provide information about the landscape to the casual visitor; student-generated criteria focused upon usability and sustainability

History Unwired (Epstein and Vergani, 2006)
- Device or technology used: Smartphones, PDAs (Pocket PC) + headphones
- Mode of interaction: Passive/assimilative (information layer) + active/exploratory
- Method of sensory feedback to the user: Mixed: audio, video
- Personal or shared experience: Both personal and shared
- Fixed/static or portable experience: Portable
- Learning activities or outcomes: Informal learning about the Castello region of Venice, via a walking tour that uses local citizens to depict local experiences of art and craft, history and folklore, public and private spaces

Mudlarking in Deptford (Futurelab, 2006)
- Device or technology used: PDAs + headphones
- Mode of interaction: Passive/assimilative (information layer) + active/exploratory
- Method of sensory feedback to the user: Mixed: text; audio; visual
- Personal or shared experience: Shared (small groups + pairs)
- Fixed/static or portable experience: Portable
- Learning activities or outcomes: Students acted as co-designers to help create local tour guides on mobile devices, using multimedia relating to the local area and their own observations
The modes of interaction have generally focused on either providing passive information overlays to the learner,
depending on their physical location or movements/gestures, or engaging the learner in an exploratory mode where they
are encouraged to actively discover or create/log media nearby in order to e.g. solve a problem or meet characters from a
story. More modes of interaction may evolve in the future although some, such as the ‘constructionist’ mode, may be
more relevant to specific knowledge domains (e.g. architecture or structural engineering). The ‘active/exploratory’ mode
is also akin to AR games such as Environmental Detectives (Klopfer and Squire, 2008), although we have concentrated
on non-gaming examples here as this is another research area in its own right.
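To make the passive/assimilative mode concrete, the sketch below (ours, not taken from any of the projects in Table 1; the names, coordinates and overlay text are illustrative placeholders) shows how a location-triggered information layer might be selected:

```python
import math

def approx_distance_m(lat1, lon1, lat2, lon2):
    """Equirectangular approximation: adequate over the tens of metres
    typical of a geofence trigger radius."""
    R = 6371000.0  # mean Earth radius in metres
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return R * math.hypot(x, y)

# Hypothetical points of interest: all values are invented placeholders,
# not data from any cited project.
POI_LAYERS = [
    {"name": "Abbey gatehouse", "lat": 52.9500, "lon": -1.1500, "radius_m": 40,
     "overlay": "Gatehouse, c. 1300: original entrance to the precinct."},
    {"name": "Cloister", "lat": 52.9510, "lon": -1.1490, "radius_m": 30,
     "overlay": "Cloister garth: compare the wireframe reconstruction."},
]

def overlays_for_position(lat, lon, pois=POI_LAYERS):
    """Passive/assimilative mode: return the information layers whose
    trigger region contains the user's current position."""
    return [p["overlay"] for p in pois
            if approx_distance_m(lat, lon, p["lat"], p["lon"]) <= p["radius_m"]]
```

The active/exploratory mode would add a write path, logging learner-created media against the current position rather than only reading layers from it.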
The media used have primarily been a mixture of visual (still images), video, audio and text, although the ‘Augmenting the
Visitor Experience’ project also used a non-digital format: printed acetates showing an outline of the landscape at
fixed viewpoints, with written annotations/labels. Whilst this is not strictly AR according to our earlier definitions, it
nevertheless presents an interesting vision for the future, as its transparent properties mean that information can be
overlaid whilst the user is still able to see beyond the augmentation to the landscape behind it.
This property is being exploited by Google with their ‘Google Glass’ product, which is discussed later in this paper. It is
interesting to see how the nature of the device or technology used for engaging with AR has also changed; early projects
tended towards the use of PDAs and mobile phones, whereas more recent projects are, not surprisingly, utilising
smartphones and tablet devices. What is exciting is how personal smartphones contain an increasingly sophisticated array
of sensors, enabling AR to become more personally meaningful and situated: what was once the domain of a researcher
intervention, loaning specialist equipment for a short user trial, is now much more likely to be accessible to students or
the general public on an everyday basis, providing a more sustainable technology for everyday learning.
subject domains); the ease of use in reference to installation/mobility of hardware; and the immersive and engaging
nature of three-dimensional AR visualisations (Luckin and Stanton Fraser, 2011).
Several projects have focused on the way in which AR can be used to encourage problem solving and independent
research amongst learners. For example, Squire (2010) notes that “from a classroom management perspective, the
narrative elements of the unit enabled teachers to create a dramatically different classroom culture, one that was built
around students performing as scientists. … Most noteworthy to teachers was how the technology-enhanced curriculum
enacted students' identities as problem solvers and knowledge builders rather than as compliant consumers of
information…”. The idea of learners engaging in collaborative problem solving has also been examined by Cook and his
work on Augmented Contexts for Development (ACD) (Cook, 2010) that extends the original Vygotskyian concept of
Zones of Proximal Development (ZPD) (Vygotsky, 1978). Cook suggests that mobile devices and their surrounding
physical environment enable learners to generate their own contexts for development, which can be interpreted or
assisted through AR. From the analysis of a video blog recorded by students on an architecture field trip, it seems that
students used physical and digital representations to interact synchronously and inform one another, leading to
co-constructed knowledge formed through the interaction of the learners with the AR tools and media. In this
respect, the mobile devices acted as contextual sensors, enabling particular visualisations to be portrayed to the learners
in a situated manner. Another example of how situated visualisations and mobile connectivity to larger processors and
server infrastructure can enable learning can be seen through translational overlays of AR, when viewing a foreign
language text through e.g. smartphone apps.
Another way in which AR can be used to support learning is through haptic interfaces, particularly when used with blind
or visually-impaired users. The ‘Haptic Lotus’ used a handheld plastic flower with embedded sensors that
responded to its user’s position in an indoor gallery by opening its petals and delivering haptic feedback; it was used as a
way of encouraging exploration whilst also providing reassurance about the nearby environment for both sighted and blind
people (van der Linden et al., 2012). Mehigan (2009) also discusses the use of sensors in smartphones – particularly
accelerometers – to develop opportunities for mobile learning for vision-impaired students and also reduce the ‘digital
divide’ that exists between sighted and blind students. Audio may also be a valuable AR tool here: for example, a sound-
rendering system can transform the visual data of objects and places into auditory information, overcoming a major
difficulty currently experienced by visually impaired learners. In addition, being able to integrate visual labels and audio
tracks into learning environments offers teaching opportunities for all learners and may be particularly helpful for pupils
with learning difficulties, whose cognitive abilities may not allow them to visualise abstract or hidden parts of systems
(Lin et al., 2012). Sprake also talks about haptic referencing as a means of connecting more fully with our local
surroundings (Sprake, 2012).
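As a hedged sketch of how such a sound-rendering system might map spatial data onto audio (ours, not drawn from the systems cited; the parameter names and the linear roll-off are assumptions), an object's relative bearing and distance can be folded into a stereo pan and a gain:

```python
import math

def spatial_audio_params(distance_m, bearing_deg, heading_deg, max_range_m=100.0):
    """Map an object's position relative to the listener onto simple audio
    parameters: a stereo pan in [-1, 1] (left to right) and a gain in [0, 1]
    that falls off linearly with distance (an assumed roll-off model)."""
    # Angle of the object relative to the direction the user is facing,
    # folded into the range [-180, 180) degrees
    rel = math.radians((bearing_deg - heading_deg + 180.0) % 360.0 - 180.0)
    pan = math.sin(rel)  # 0 = straight ahead (or behind), +1 = hard right
    gain = max(0.0, 1.0 - distance_m / max_range_m)
    return pan, gain

# An object 50 m away, due east of a user facing north: hard-right pan, half gain
pan, gain = spatial_audio_params(50.0, 90.0, 0.0)
```

A real system would use richer cues (head-related transfer functions, reverberation), but even this crude mapping conveys direction and proximity non-visually.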
The notion that embodiment in mobile learning can be facilitated by AR, through techniques such as haptic referencing, is
an intriguing one. Beckett and Morris (2001) argue that learning has become disconnected both physically and conceptually
from the student and that educational research must return to the physicality of students’ bodies for two reasons: firstly, the
growth of the corporatized virtual university (and thus a diminished importance attached to a physical place of study by
corporate managers); and secondly the commoditisation and packaging of learning (digital content is placed online and
"left to die" in a VLE - see timbuckteeth, 2011). Mobile learning where learning is situated, or embodied in a particular
reality, which is itself augmented, could help to counteract these problems.
In the last decade, augmented reality has progressed from a specialist, relatively expensive technology to one that is now
commonly available to the general public, due to technological advances in mobile computing and sensor integration.
Although it could be argued that AR has yet to become completely mainstream, smartphone apps such as Layar and
Wikitude mean that its adoption and use are becoming increasingly commonplace. Large corporations such as Google
have shown their commitment to developing AR technology, e.g. through the Google Goggles product (visual recognition
of landmarks in photos taken by e.g. a smartphone camera, with relevant information subsequently overlaid) and more
recently Google’s Project Glass, a proposed ‘heads-up display’ that will overlay contextual information on top of
transparent glass, e.g. through AR spectacles (Eddy, 2012). EyeTap uses a similar premise to Project Glass, with the
device worn in front of the eye like a pair of conventional glasses, that can record what the wearer is seeing but also
superimpose computer-generated imagery on top of their normal physical world (see http://eyetap.org for details).
One use of AR that will become increasingly available and promoted through Higher Education institutions builds on the
campus map, providing navigation for newcomers (the 'augmented campus', see e.g. Genco et al., 2005), as well as
linking in with friendship circles to arrange common meet-up times and locations. This extends prior experiences of the
use of mobile devices for more immediate use of ‘dead time’ (Pettit and Kukulska-Hulme, 2007). While this has
previously been managed through plan views and mapping tied with social networks, overlays are increasingly used to
convey this information, as can be seen by the recent advent of Blackboard Mobile Central AR features (Blackboard,
2012).
SUMMARY
We have presented what we consider to be the current state of the art of augmented reality for mobile learning. We have
discussed the theoretical underpinning of AR in relation to situated learning and created a taxonomy of AR mobile
learning projects as an interesting way of analysing current trends and exploring the potential for future development. We
have also discussed the limitations and challenges inherent in the application of AR for mobile learning experiences, as
well as offering some suggestions of how AR can enhance learning and how it might be used by students and educators.
The use of AR in education, and particularly mobile learning, is still in its infancy and it remains to be seen how useful it
is for creating effective learning experiences. From examining the learning activities shown in Table 1, it is clear that AR
can be used very successfully for situated and constructivist learning, particularly where collaboration and student inquiry
form key aspects. It can also be used for informal learning and more touristic experiences.
We hope to revisit this work in another few years and report on new innovations and the way in which they have been
adopted (or not) by the learning community. What is clear is that we currently have the opportunity to provide
immersive, compelling and engaging learning experiences through augmented reality, which are situated in real world
contexts and can provide a unique and personal way of making sense of the world around us. We believe this is a
powerful tool, provided we can harness it appropriately, and look forward to seeing how other academics and
practitioners advance this research field further.
ACKNOWLEDGMENTS
The authors are grateful to friends and colleagues who have commented on earlier drafts of this paper and provided
invaluable feedback and advice.
REFERENCES
Adams, A., Coughlan, T., Lea, J., Rogers, Y., Davies, S.-J. and Collins, T. (2011). Designing interconnected distributed
resources for collaborative inquiry based science education. Proceedings of ACM/IEEE Joint Conference on Digital
Libraries, (pp. 395-396).
Azuma, R. (1997). "A survey of augmented reality." Presence: Teleoperators and Virtual Environments 6(4): 355-385.
Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S. and MacIntyre, B. (2001). "Recent advances in augmented
reality." IEEE Computer Graphics and Applications 21(6): 34-47.
Barab, S., Zuiker, S., Warren, S., Hickey, D., Ingram-Goble, A., Kwon, E.-J., Kouper, I. and Herring, S. C. (2007).
"Situationally embodied curriculum: relating formalisms and contexts." Science Education 91(5): 750-782.
Beckett, D. and Morris, G. (2001). "Ontological performance: Bodies, identities and learning." Studies in the Education
of Adults 33(1): 35-48.
Benford, S., Seager, W., Flintham, M., Anastasi, R., Rowland, D., Humble, J., Stanton, D., Bowers, J., Tandavanitj, N.,
Adams, M., Farr, J. R., Oldroyd, A. and Sutton, J. (2004). The Error of Our Ways: The experience of Self-Reported
Position in a Location-Based Game. Proceedings of the 6th International Conference on Ubiquitous Computing
(UbiComp 2004), (pp. 70-87) LNCS/Springer Press.
Bezemer, J. and Kress, G. (2008). "Writing in Multimodal Texts A Social Semiotic Account of Designs for Learning."
Written Communication 25(2): 166-195.
Blackboard (2012) Blackboard Launches Augmented Reality for Mobile Campus Apps. [Online] Available at:
http://www.blackboard.com/About-Bb/News-Center/Press-Releases.aspx?releaseid=122627 [Accessed 25 May 2012].
Bowker, G. C. and Star, S. L. (2000). Sorting Things Out: Classification and Its Consequences. Cambridge, MA, MIT
Press.
Bruner, J. (1990). Acts of Meaning: Four Lectures on Mind and Culture (Jerusalem-Harvard Lectures). Cambridge, MA,
Harvard University Press.
Clough, G. (2010). "Geolearners: Location-Based Informal Learning with Mobile and Social Technologies." IEEE
Transactions on Learning Technologies 3(1): 33-44.
Cook, J. (2010). "Mobile Phones as Mediating Tools within Augmented Contexts for Development." International
Journal of Mobile and Blended Learning 2(3): 1-12.
Coulton, P., Lund, K. and Wilson, A. (2010). Harnessing Player Creativity to Broaden the Appeal of Location-Based
Games. Proceedings of the 24th BCS Conference on Human-Computer Interaction (BCS '10), (pp. 143-150) British
Computer Society, Swinton, UK.
Davies, S.-J., Collins, T., Gaved, M., Bartlett, J., Valentine, C. and McCann, L. (2010). Enabling remote activity: using
mobile technology for remote participation in geoscience fieldwork. Proceedings of the European Geosciences Union
General Assembly 2010 (EGU 2010).
Eddy, N. (2012) Google Releases Project Glass Images. [Online] Available at:
http://www.techweekeurope.co.uk/news/google-releases-project-glass-images-80115 [Accessed 29 May 2012].
Epstein, M. and Vergani, S. (2006). History Unwired: Mobile Narrative in Historic Cities. Proceedings of the Working
Conference on Advanced Visual Interfaces (AVI '06), (pp. 302-305) ACM.
Futurelab (2006) Mudlarking in Deptford: mini report. [Online] Available at:
http://www2.futurelab.org.uk/resources/documents/project_reports/mini_reports/mudlarking_mini_report.pdf
[Accessed 20 May 2012].
Gaved, M., Mulholland, P., Kerawalla, L., Collins, T. and Scanlon, E. (2010). More notspots than hotspots: strategies for
undertaking networked learning in the real world. Proceedings of the 9th World Conference on Mobile and Contextual
Learning (mLearn2010), (pp. 1-4).
Genco, A., Reina, G., Raccuglia, P., Santoro, G., Lovecchio, L., Sorce, S., Messineo, R. and Stefano, G. D. (2005). An
augmented campus design for context-aware service provision. Proceedings of the 33rd annual ACM SIGUCCS Fall
Conference (SIGUCCS '05), (pp. 92-97) ACM, New York, USA.
Höllerer, T., Feiner, S., Terauchi, T., Rashid, G. and Hallaway, D. (1999). "Exploring MARS: developing indoor and
outdoor user interfaces to a Mobile Augmented Reality System." Computers and Graphics 23(6): 779-785.
Johnson, L., Smith, R., Willis, H., Levine, A. and Haywood, K. (2011). The Horizon Report: 2011 Edition. Austin,
Texas, The New Media Consortium.
Kanjo, E., Benford, S., Paxton, M., Chamberlain, A., Stanton Fraser, D., Woodgate, D., Crellin, D. and Woolard, A.
(2008). "MobGeoSen: facilitating personal geosensor data collection and visualization using mobile phones." Personal
and Ubiquitous Computing 12(8): 599-607.
Klopfer, E. and Squire, K. (2008). "Environmental Detectives – The Development of an Augmented Reality Platform for
Environmental Simulations." Educational Technology Research and Development 56: 203-228.
Latour, B. (1999). Pandora’s Hope. Essays on the Reality of Science Studies. Cambridge, MA, Harvard University Press.
Leinhardt, G., Crowley, K. and Knutson, K., Eds. (2002). Learning Conversations in Museums, Routledge.
Lin, C.-Y., Lin, C.-C., Chen, C.-J. and Huang, M.-R. (2012) Real-Time Interactive Teaching Materials for Students with
Disabilities. In Y. Zhang (ed.) Future Communication, Computing, Control and Management: Volume 2. Springer
Berlin Heidelberg, pp. 369-375.
Luckin, R. and Stanton Fraser, D. (2011). "Limitless or pointless? An evaluation of augmented reality technology in the
school and home." International Journal of Technology Enhanced Learning 3(5): 510-524.
Meek, S., Goulding, J. and Priestnall, G. (2012). The influence of Digital Surface Model choice on visibility-based
Mobile Geospatial Applications. Proceedings of the GIS Research UK 20th Annual Conference (GISRUK 2012), (pp.
125-131) The University of Lancaster, Lancaster, United Kingdom.
Mehigan, T. J. (2009). Harnessing accelerometer technology for inclusive mobile learning. Proceedings of the 11th
International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '09), (pp. 1-
2) ACM, New York, NY, USA.
Milgram, P. and Kishino, F. (1994). "A Taxonomy of Mixed Reality Visual Displays." IEICE Transactions on
Information and Systems (Special Issue on Networked Reality) E77-D(12): 1321-1329.
Milgram, P., Takemura, H., Utsumi, A. and Kishino, F. (1994). Augmented Reality: A class of displays on the reality-
virtuality continuum. Proceedings of Telemanipulator and Telepresence Technologies, (pp. 282-292) SPIE Press.
Núñez, R., Edwards, L. and Matos, J. F. (1999). "Embodied cognition as grounding for situatedness and context in
mathematics education." Educational Studies in Mathematics 39(1-3): 45-65.
Ofcom (2009) Ofcom 3G coverage maps. [Online] Available at: http://licensing.ofcom.org.uk/binaries/spectrum/mobile-
wireless-broadband/cellular/coverage_maps.pdf [Accessed 20 May 2012].
Ordnance Survey (2012) Accuracies (single GPS receiver). [Online] Available at:
http://www.ordnancesurvey.co.uk/oswebsite/gps/information/gpsbackground/beginnersguidetogps/whatisgps_08.html
[Accessed 29 May 2012].
Papert, S. and Harel, I. (1991) Situating Constructionism. In S. Papert and I. Harel (eds.) Constructionism. Norwood, NJ:
Ablex Publishing Corporation, pp. 1-11.
Pettit, J. and Kukulska-Hulme, A. (2007). "Going with the grain: mobile devices in practice." Australasian Journal of
Educational Technology 23(1): 17-33.
Priestnall, G., Brown, E., Sharples, M. and Polmear, G. (2010) Augmenting the field experience: A student-led
comparison of techniques and technologies. In E. Brown (ed.) Education in the wild: contextual and location-based
mobile learning in action. Nottingham, UK: University of Nottingham: Learning Sciences Research Institute (LSRI),
pp. 43-46.
Radford, L. (2005). Body, Tool, and Symbol: Semiotic Reflections on Cognition. Proceedings of the 2004 Annual
Meeting of the Canadian Mathematics Education Study Group, (pp. 111-117).
Robinson, S., Eslambolchilar, P. and Jones, M. (2008). Point-to-GeoBlog: gestures and sensors to support user generated
content creation. Proceedings of the 10th international Conference on Human Computer interaction with Mobile
Devices and Services (MobileHCI '08), (pp. 197-206) ACM, New York, NY.
Sprake, J. (2012) Haptic Referencing. In J. Sprake (ed.) Learning-Through-Touring: Mobilising Learners and Touring
Technologies to Creatively Explore the Built Environment. SensePublishers, pp. 149-183.
Squire, K. D. (2010). "From Information to Experience: Place-Based Augmented Reality Games as a Model for Learning
in a Globally Networked Society." Teachers College Record 112(10): 2565-2602.
Sutherland, I. E. (1968). A head-mounted three-dimensional display. Proceedings of Fall Joint Computer Conference of
American Federation of Information Processing Societies (AFIPS), (pp. 757-764) ACM, New York, NY, USA.
timbuckteeth (2011). The institutional VLE is where content goes to die [Twitter post], Retrieved from
https://twitter.com/timbuckteeth/status/107064421716733952.
van der Linden, J., Braun, T., Rogers, Y., Oshodi, M., Spiers, A., McGoran, D., Cronin, R. and O'Dowd, P. (2012).
Haptic lotus: a theatre experience for blind and sighted audiences. Proceedings of the 2012 ACM Annual Conference -
Extended Abstracts on Human Factors in Computing Systems (CHI EA '12), (pp. 1471-1472) ACM, New York, USA.
Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA, Harvard
University Press.