
Animating Dreams and Future Dream Recording

2017, International Association for the Study of Dreams, 34th Annual International Conference

Poster presented at the International Association for the Study of Dreams, 34th Annual International Conference, June 16-20, 2017. The poster outlines a study on avatar animation of dreamer motor behavior from electromyography collected at the University of Texas at Austin Cognitive Neuroscience Lab in 2016.

Animating Dreams and Future Dream Recording

Daniel Oldis

Current Project: Introduction

It has been well documented that dream speech elicits corresponding phasic muscle potentials in the facial, laryngeal, and chin muscles (McGuigan, 1971; Shimizu, 1986), and that muscles associated with dream motor behavior (such as leg or arm movement) show corresponding potentials (Dement and Kleitman, 1957; Wolpert, 1960), though discernible speech and movement are largely inhibited. Measuring such muscular electrical activity is the domain of electromyography (EMG), though near-infrared spectroscopy has also recently been employed. This project, a dream-animation prototype, is intended as a proof of concept for dream-movement simulation and as a partial implementation of the ambitious goal of digitally recording, i.e., reconstructing, a dream (dream imagery, transcribed dream speech, and dream motor behavior: a dream movie). The animation project is meant to demonstrate the feasibility of including dream-motor-behavior simulation in a combined protocol aimed at full, though approximate, dream reconstruction.

Method

The EMG/EOG data that powers the animation program was collected at the University of Texas at Austin Cognitive Neuroscience Lab in March 2016, under the direction of David Schnyer and funded by DreamsBook, Inc. Two sleep subjects were monitored and scored with polysomnography for one night each, for a total of seven recorded REM cycles. The right- and left-leg EMG electrodes were positioned on the quadriceps, the right-arm EMG electrodes on the lateral head of the triceps, and the [speech] EMG electrodes on the chin.

Dream Imagery and Speech Decoding Summary

Dream visualization using functional magnetic resonance imaging (fMRI) and transcription of sub-vocal speech using EMG have both seen early successes (Kamitani, 2008; Gallant, 2011; Horikawa, 2013; Jorgensen, 2005; Bandi, 2016; Khan and Jahan, 2016). Dream-image reconstruction with fMRI consists of training software to map visual pattern activity in the awake brain; if the software can then correlate the dreamed image or its features with those learned patterns, it can reverse-engineer a graphical representation of the dreamed image. From Science, 2013: "Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. The findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement."

Dream speech transcription is a category of sub-vocal (silent or imagined) speech decoding, which uses trained pattern recognition of EMG signals from the speech muscles to synthesize or transcribe words and sentences. While sub-vocal speech transcription has mostly focused on medical applications for the physically impaired and military applications for special acoustic environments, the same techniques can be applied to dreamed speech, which is generally reported as coherent (Kilroe, 2016). Decipherable EMG patterns associated with counting in dreams and with simple sentences have been observed. My own research correlating laryngeal EMG with dream reports further suggests the intriguing possibility that we covertly vocalize other dream characters' speech!
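To make the decoding step concrete, here is a minimal sketch, in Python, of one way such a pipeline could be shaped: band-pass a speech-muscle EMG epoch, summarize it as windowed RMS features, and train a word classifier on cued wake recordings. The band edges, window sizes, synthetic data, and choice of logistic regression are illustrative assumptions, not the methods of the cited studies.

```python
# Illustrative sketch: EMG features for sub-vocal speech decoding.
# All parameter values and data here are placeholder assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 500  # Hz, matching the 500 Hz channels described in the Method section

def emg_rms_features(epoch, win=100, step=50):
    """Band-pass a raw EMG epoch, then summarize it as windowed RMS values."""
    b, a = butter(4, [20, 200], btype="bandpass", fs=FS)  # assumed EMG band
    filtered = filtfilt(b, a, epoch)
    return np.array([np.sqrt(np.mean(filtered[i:i + win] ** 2))
                     for i in range(0, len(filtered) - win + 1, step)])

# Placeholder training data: fixed-length chin/laryngeal EMG epochs recorded
# while an awake subject speaks cued words, one word label per epoch.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(60, FS))            # 60 one-second epochs (synthetic)
labels = rng.choice(["one", "two", "three"], size=60)

X = np.array([emg_rms_features(e) for e in epochs])
decoder = LogisticRegression(max_iter=1000).fit(X, labels)

# Applied to a REM-sleep epoch, the same pipeline would emit a word guess.
dream_epoch = rng.normal(size=FS)
print(decoder.predict(emg_rms_features(dream_epoch)[None, :]))
```

In a sketch like this, the classifier can stay simple; the decisive factors in the literature are electrode placement and the wake-training protocol.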
UT provided the REM-scored data to me in EDF and text format, comprising six 500 Hz data channels (eyes, chin, arm, and legs). Initially, I loaded single channels into specific muscle-data columns of OpenSim leg and arm models by cutting and pasting into sample input templates. I was then able to visualize upper-leg muscle activity and simple arm movement. Yet this method was too limited for a full-body simulation of dream movement. I enlisted my brother, David Oldis, an iOS programmer, to create an animation from the data files provided by UT. He wanted an animation that could be played on an iPad or iPhone, so he selected Apple's 3D rendering framework, SceneKit. (A sketch of this data-handling step appears at the end of this poster.)

*The avatar here is assumed upright due to limited sensors; in fact, the dreamer may be sitting, lying, or flying in the dream.

**Eye movements in some of the simulation models used in this project are represented by head movements.

The Magical Mystery Dream Tour

Will lucid dreamers be the first dream video stars, escorting the world through the magical land of dreams?
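To illustrate the data-handling step described above, the minimal sketch below reads one EMG channel from an EDF file and converts it into a smoothed, normalized activation envelope that an animation layer (OpenSim muscle columns, or a SceneKit joint) could map onto a joint angle. The file name and channel label are placeholders, and pyedflib is one assumed way of reading EDF in Python.

```python
# Illustrative sketch: EDF channel -> 0-1 activation envelope for animation.
import numpy as np
import pyedflib  # assumed EDF reader; file and label below are placeholders

def activation_envelope(edf_path, channel_label, win=250):
    """Return a normalized (0-1) activation envelope for one EMG channel."""
    f = pyedflib.EdfReader(edf_path)
    try:
        idx = f.getSignalLabels().index(channel_label)
        sig = f.readSignal(idx)
    finally:
        f.close()
    rectified = np.abs(sig - np.mean(sig))  # remove DC offset, full-wave rectify
    kernel = np.ones(win) / win             # ~0.5 s moving average at 500 Hz
    envelope = np.convolve(rectified, kernel, mode="same")
    return envelope / envelope.max()

# Hypothetical usage:
# knee_drive = activation_envelope("rem_night1.edf", "EMG LegL")
# knee_angle_deg = 90.0 * knee_drive  # crude linear map onto knee flexion
```

A linear envelope-to-angle map is deliberately crude; muscle-driven models such as OpenSim's treat the activation-to-motion step far more rigorously.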