
Empathic Robotic Tutors

2015, Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts


Empathic Robotic Tutors - Map Guide

Amol Deshmukh (1), Aidan Jones (2), Srinivasan Janarthanam (1), Mary Ellen Foster (1), Tiago Ribeiro (4), Lee J. Corrigan (2), Ruth Aylett (1), Ana Paiva (4), Fotios Papadopoulos (2), Ginevra Castellano (2,3)

(1) School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK
(2) School of Electronic, Electrical and Computer Engineering, University of Birmingham, UK
(3) Department of Information Technology, Uppsala University, Sweden
(4) GAIPS, INESC-ID and Instituto Superior Tecnico, Lisboa, Portugal

a.deshmukh@hw.ac.uk

ABSTRACT
In this demonstration we describe a scenario developed in the EMOTE project (http://www.emote-project.eu/). The overall goal of the project is to develop an empathic robot tutor for 11-13 year old school students in an educational setting. In this scenario, the robot tutor teaches map-reading skills on a touch-screen device.

Categories and Subject Descriptors
I.2.9 [Artificial Intelligence]: Robotics

Keywords
Robotic Tutors, Human-Robot Interaction, Empathy

1. INTRODUCTION
We will demonstrate a map task scenario in which the robotic tutor (a NAO torso robot) presents the learner with a series of clues in an art trail on a map application installed on a large touch-screen device. The tasks have been designed to let the learner develop map-reading skills: directions, distance and map symbols. The objective is to obtain clues that help find a treasure. In each task, the learner is asked to find a feature based on its symbol, its distance from the current location and its direction. The robot tutor uses empathic and pedagogical strategies: it helps the learner with pedagogical actions such as prompts, pumps and splices, conceding different skills so that the learner can move forward when stuck. It also monitors the learner's engagement level and adapts its actions accordingly. This interactive demo, using a large tablet and the NAO, will allow attendees (one user at a time) to complete the art trail by interacting with the robot and to select the correct location for a new landmark based on clues found along the trail.

2. ARCHITECTURE
Figure 1 shows the architecture components and the data flow between the modules in the system (a brief illustrative sketch of this flow follows the list below).

[Figure 1: Architecture of the system]

• Messages from the Activity (the map application) are sent to the Learner Model.
• The Learner Model uses this information, together with information from the Perception and Affect Perception modules, to estimate the current state of the interaction.
• The Interaction Manager (IM) uses this state information to select an appropriate next high-level system action.
• The Skene module transforms the high-level action specification into a concrete set of words and behaviours for the robot to perform.
• The Skene module also uses low-level information from the other modules to gaze at the learner and at map locations on the touch-screen device.
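The sketch below illustrates one pass through this pipeline. The module names (Activity, Learner Model, Interaction Manager, Skene) are taken from the paper; every class interface, field, threshold and the example clue are illustrative assumptions, not the EMOTE implementation.

```python
# Minimal sketch of the data flow described above, under the assumptions
# stated in the lead-in: all interfaces and values are hypothetical.
from dataclasses import dataclass


@dataclass
class ActivityMessage:
    """A hypothetical event from the map application (Activity)."""
    symbol: str        # map symbol the learner tapped, e.g. "church"
    distance_m: int    # distance from the current location in metres
    direction: str     # compass direction, e.g. "NE"
    correct: bool      # whether the tap solved the current clue


class LearnerModel:
    """Fuses Activity input with Perception / Affect Perception estimates."""
    def update(self, msg: ActivityMessage, engagement: float) -> dict:
        return {
            "task_solved": msg.correct,
            "engaged": engagement > 0.5,   # assumed engagement threshold
        }


class InteractionManager:
    """Selects the next high-level system action from the estimated state."""
    def select_action(self, state: dict) -> str:
        if not state["engaged"]:
            return "re_engage"             # empathic strategy
        return "praise" if state["task_solved"] else "prompt"


class Skene:
    """Turns a high-level action into concrete words and behaviours."""
    def realise(self, action: str) -> dict:
        utterances = {
            "praise": "Well done, you found the landmark!",
            "prompt": "Which direction is north-east on the map?",
            "re_engage": "Let's look at the next clue together.",
        }
        return {"say": utterances[action], "gaze": "learner"}


# One pass through the pipeline for a single (made-up) learner action.
msg = ActivityMessage(symbol="church", distance_m=300, direction="NE", correct=False)
state = LearnerModel().update(msg, engagement=0.8)
action = InteractionManager().select_action(state)
print(Skene().realise(action))
```

In this toy run the learner is engaged but has not yet solved the clue, so the Interaction Manager picks a prompt and Skene renders it as an utterance plus a gaze target for the robot.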
The scenario to be deployed in the school is being developed with the robot and a large 55" touch-screen device. Given the logistical challenges of transporting a full-size touch table, the scenario can be demonstrated with a smaller touch-enabled 18" tablet and the NAO robot (Figure 1), which will be brought to the event.

Acknowledgements: This work was supported by the European Commission (EC) and funded by the EU FP7 ICT-317923 project EMOTE. The authors are solely responsible for the content of this publication; it does not represent the opinion of the EC, and the EC is not responsible for any use that might be made of the data appearing therein.