Alice (to appear in Cast01)
Introduction
One important aspect of living with mixed reality is showing how its techniques can define new media forms for artistic expression and for entertainment. Such forms will help to convince our society of the cultural and even commercial value of the new technologies of mixed reality. For this reason, our work focuses on creating the formal conventions and the technology to support dramatic and narrative experiences in augmented reality (AR). AR combines the physical world with virtual elements. Typically, the user wears a head-mounted display (HMD) that mixes the view of the physical environment with computer-generated elements, either through a semi-transparent HMD or through an opaque video-mixed HMD, in which real-time video of the physical environment is mixed with virtual elements and displayed on an opaque screen [7].
Often a new medium such as AR develops from the work of technical innovators. Initial research focuses on the mechanics of the technology, while issues of effective use of the technology as a medium are often secondary. The new medium may enjoy some initial success as an entertainment form based entirely on the novelty of the technology. In contrast, established media rarely depend solely on technology to provide a gratifying experience. The development of an experience for an established medium is more often a synergy of technical mechanics and storytelling: narrative conventions are accepted and understood by the audience culture, while production tools and methods are in place to support the creation of experiences. A new medium faces the challenge of developing both of these, its narrative conventions and its production methods, largely from scratch.
Aims of Research
Our research has three main goals. The first is to borrow and refashion a sense of authenticity from one or more earlier media, such as film and interactive CD-ROMs. We are drawing here on the theory of remediation by Bolter and Grusin [1]. Remediation is important because it promotes acceptance and understanding of AR by showing how it relates to earlier, established media. The second goal is to "surpass" the earlier media in some way, in this case by exploring interaction techniques to which AR is particularly well suited, namely interaction between virtual and physical elements in the user's environment. Finally, we are developing tools that enable both artists and technologists to work, experiment, and collaborate in AR as a new interactive narrative form.
Figure 1. The Mad Hatter (at the user's left) has just been splashed with tea.
1. Project Description
The experience is based on "A Mad Tea-Party," a chapter from Lewis Carroll's Alice's Adventures in Wonderland [8]. The user assumes the role of Alice and sits at the tea party with three interactive characters: the Mad Hatter, the Dormouse, and the March Hare. The characters are computer-controlled video actors displayed in the user's HMD and appear to be sitting at the same physical table as the user (we describe video actors in [9]). The characters can interact with the user and with each other. The user's objective is to get directions to the garden, located somewhere in Wonderland. The characters view the user as an interruption to the party already in progress and continue about their own business. They are frequently reluctant to acknowledge the user and often ignore the user altogether; the user discovers that she cannot simply ask for directions and must participate in the tea party. Each character has a set of primitive actions that it can perform, including serving tea, receiving tea, sipping tea, asking riddles, and various reactions to events that may occur in the story environment. If properly provoked, a character may splash the user (or another character) with tea. Procedural behaviors govern how each character acts or reacts to events in the environment, whether instigated by the user or by other characters. An example is shown in Figures 1-3. In Figure 1, the user splashes the Mad Hatter with tea. The March Hare reacts with laughter in Figure 2. Finally, the sleepy Dormouse is awakened by all the noise (Figure 3). The user has a range of gestures for virtually serving, receiving, sipping, and throwing tea, and can also address a character through simple audio level sensing. Each action represents a primitive story element; the progression of these elements builds the overall narrative experience.
Figure 2. The March Hare (at the user's right) reacts with laughter.
Figure 3. The Dormouse (opposite the user) will soon awaken from the noise.
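One way to organize such event-driven procedural behaviors is a per-character table of reaction rules mapping story events to primitive actions. The following is a minimal sketch with hypothetical names, not the project's actual code:

```python
# Hypothetical sketch: each character owns a table of reaction rules that
# maps a story event to the primitive action the character performs.
REACTIONS = {
    "mad_hatter": {"splashed": "protest", "riddle_answered": "ask_riddle"},
    "march_hare": {"splash_nearby": "laugh", "tea_offered": "receive_tea"},
    "dormouse": {"loud_noise": "wake_up"},
}

def react(character, event):
    """Return the primitive action a character performs for an event,
    falling back to an idle action when the character ignores it."""
    return REACTIONS.get(character, {}).get(event, "idle")

# The user splashes the Mad Hatter; the commotion cascades through the party.
print(react("march_hare", "splash_nearby"))  # laugh
print(react("dormouse", "loud_noise"))       # wake_up
```

The fallback to an idle action mirrors the characters' tendency to ignore the user: an event with no matching rule simply leaves the party to continue on its own.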
In many earlier interactive forms, story time is disjointed from real time, frozen while the user contemplates a choice. In VR, the environment can change in an instant, transporting the participant from one place to another, and users have grown accustomed to this convention. Many VR environments strive to be fantasy-like rather than simulations of reality [3]. In contrast, most virtual objects and characters in AR are world-stabilized to the physical realm, a world that is not completely replaced by the machine and continues whether the machine is on or off. The medium depends on a delicate tension between the virtual and the physical to immerse the user in the story. If the video actors in Alice had to freeze and reload new video segments each time the user interacted, the discontinuity would disrupt the immersive experience. Alice requires story time to be in sync with real time: it strives to create an environment responsive to the user, as well as the illusion that the story world exists whether or not the user takes action. While these goals may seem contradictory at first, they are complementary; each helps to synchronize story time with real time. Interactivity and spatial characteristics also distinguish AR from film. Film has the advantage of a well-defined form of linear writing embodied in the formulaic screenplay. A non-linear equivalent is needed to enable widespread production of AR experiences.
Procedural authorship has roots in the oral bardic storytelling tradition of ancient Greece. The bardic tradition worked by using a formulaic system to substitute basic story elements or phrases to construct a coherent narrative [10]. In a procedurally authored story, the author creates basic building blocks, or primitives, that can be arranged in different ways to construct a coherent story. In Murray's model, the primitives are the basic actions or gestures of the user as structured by the author. The computer as story-presenter responds to the user's gestures first by capturing and analyzing them, then by applying procedural rules to determine the appropriate story element to present [10]. Rather than producing several varied linear scenes as in a cul-de-sac structure, this project focuses on developing primitive story elements attached to the basic scripted actions of the user and the interactive characters. The actions and the corresponding story primitives fit within the framework of a linear narrative spine. The user's actions vary the arrangement of story primitives and influence the actions of the other characters.
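This arrangement can be sketched in a few lines. The beat, gesture, and primitive names below are hypothetical and purely illustrative: the author scripts a linear spine of beats, and procedural rules map the user's gesture at the current beat to the story primitive to present.

```python
# Hypothetical sketch of procedural authorship over a linear narrative spine.
SPINE = ["arrival", "tea_service", "riddles", "directions"]

# Author-defined rules: (current beat, user gesture) -> story primitive.
RULES = {
    ("tea_service", "serve_tea"): "hatter_accepts_tea",
    ("tea_service", "throw_tea"): "hatter_splashed",
    ("riddles", "speak"): "hatter_asks_riddle",
}

def next_primitive(beat, gesture):
    # When no rule matches, an ambient primitive keeps the party going,
    # preserving the illusion that the story world continues regardless.
    return RULES.get((beat, gesture), "party_continues")

def run(gestures):
    """Walk the spine, choosing one story primitive per user gesture."""
    return [next_primitive(beat, g) for beat, g in zip(SPINE, gestures)]
```

The spine keeps the overall narrative linear, while the rule table lets the user's gestures vary which primitives are presented along the way.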
The intent is to create the illusion of independent character action, not to create truly intelligent agents.
To date, gesture recognition and only a limited set of procedural behaviors have been completed; work continues toward a full implementation. Additionally, our use of bitmap sequences extracted from video to create character actions allows us to integrate novel video playback techniques. In particular, we are attempting to address looping video artifacts using Video Textures [12], a technique that can generate infinitely long, non-repetitious video sequences, potentially providing more natural-looking transitions between character actions and idle states.
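The core idea behind Video Textures can be illustrated with a toy sketch (our simplification, with frames reduced to short lists of pixel values): compute pairwise frame distances, and during playback allow a jump from frame i to the frame following any frame j that frame i closely resembles, so playback need never repeat the same fixed loop.

```python
# Toy illustration of the Video Textures idea: jump between similar frames
# to produce an endless, non-repeating playback sequence.
import random

def distances(frames):
    """Pairwise Euclidean distances between frames (lists of pixel values)."""
    n = len(frames)
    return [[sum((a - b) ** 2 for a, b in zip(frames[i], frames[j])) ** 0.5
             for j in range(n)] for i in range(n)]

def play(frames, steps, threshold, seed=0):
    """Random playback: from frame i, step to the successor of any frame j
    that frame i resembles (distance <= threshold); restart if none."""
    rng = random.Random(seed)
    d = distances(frames)
    i, out = 0, []
    for _ in range(steps):
        out.append(i)
        succ = [j + 1 for j in range(len(frames) - 1) if d[i][j] <= threshold]
        i = rng.choice(succ) if succ else 0
    return out
```

Because the last frame of a captured idle loop typically resembles several interior frames, playback can branch among them rather than snapping back to a fixed start, which is what makes the loop appear seamless.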
References
[1] Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge: MIT Press, 1999.
[2] Bonime, Andrew, and Ken C. Pohlmann. Writing for New Media: The Essential Guide to Writing for Interactive Media, CD-ROMs, and the Web. New York: John Wiley & Sons, 1998.
[3] Craven, Mike, Ian Taylor, Adam Drozd, Jim Purbrick, Chris Greenhalgh, Steve Benford, Mike Fraser, John Bowers, Kai-Mikael Jää-Aro, Bernd Lintermann, and Michael Hoch. Exploiting Interactivity, Influence, Space and Time to Explore Non-Linear Drama in Virtual Worlds. In Proc. CHI 2001, Seattle, WA, March 31-April 5, 2001.
[4] Decker, Dan. Anatomy of a Screenplay: Writing the American Screenplay from Character Structure to Convergence. Chicago: The Screenwriters Group, 1998.
[5] Feiner, Steven, Blair MacIntyre, and Tobias Höllerer. Wearing It Out: First Steps Toward Mobile Augmented Reality Systems. In Proc. International Symposium on Mixed Reality, pp. 363-377, Yokohama, Japan, March 9-11, 1999.
[6] Field, Syd. Screenplay: The Foundations of Screenwriting. New York: Dell, 1982.
[7] Fuchs, Henry, and Jeremy Ackerman. Displays for Augmented Reality: Historical Remarks and Future Prospects. In Proc. International Symposium on Mixed Reality, pp. 31-41, Yokohama, Japan, March 9-11, 1999.
[8] Gardner, Martin, ed. The Annotated Alice: The Definitive Edition. New York: W. W. Norton and Company, 2000.
[9] MacIntyre, B., M. Lohse, J. D. Bolter, and E. Moreno. Ghosts in the Machine: Integrating 2D Video Actors into a 3D AR System. In Proc. International Symposium on Mixed Reality, Yokohama, Japan, March 14-15, 2001.
[10] Murray, Janet. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. New York: Simon and Schuster, 1997.
[11] Samsel, Jon, and Darryl Wimberley. Writing for Interactive Media: The Complete Guide. New York: Allworth Press, 1998.
[12] Schödl, A., R. Szeliski, D. Salesin, and I. Essa. Video Textures. In Proc. ACM SIGGRAPH 2000, pp. 489-498, New Orleans, LA, July 2000.
[13] Smith, Geoff. http://PhysicalBits.com/Xtras. June 2, 2000.