“Ayanna did a really fantastic job for us at D. H. Hill Library. Ayanna provided great customer service alongside expert technology and software support. I particularly enjoyed Ayanna's penchant for thinking of and developing ideas to further enhance the ways in which the library can support our users.”
Ayanna Seals, Ph.D.
New York, New York, United States
892 followers
500+ connections
About
Leveraging deep expertise in UX strategy and applied human-computer interaction research,…
Activity
-
Our new Predator launched yesterday! Here’s the sizzle I sequenced in Unreal Engine and Premiere. Final audio and lighting by some other…
Liked by Ayanna Seals, Ph.D.
-
I'm thrilled to announce the public release of the Speech Emotion Intensity Recognition Database (SEIR-DB)! This comprehensive, multilingual dataset…
Liked by Ayanna Seals, Ph.D.
-
Somebody asked me how an interview went, and I really don't know. It was a very intense (and traumatic) time. I was waiting for the verdict in the…
Liked by Ayanna Seals, Ph.D.
Experience
Education
Licenses & Certifications
Publications
-
Effects of Self-focused Augmented Reality on Health Perceptions During the COVID-19 Pandemic: A Web-Based Between-Subject Experiment
Journal of Medical Internet Research
Self-focused augmented reality (AR) technologies are growing in popularity and present an opportunity to address health communication and behavior change challenges. We aimed to examine the impact of self-focused AR and vicarious reinforcement on psychological predictors of behavior change during the COVID-19 pandemic. In addition, our study included measures of fear and message minimization to assess potential adverse reactions to the design interventions. A between-subjects web-based experiment was conducted to compare the health perceptions of participants in self-focused AR and vicarious reinforcement design conditions to those in a control condition. Participants were randomly assigned to the control group or to an intervention condition (ie, self-focused AR, reinforcement, self-focus AR × reinforcement, and avatar).
We found that participants who experienced self-focused AR and vicarious reinforcement scored higher in perceived threat severity (P=.03) and susceptibility (P=.01) when compared to the control. A significant indirect effect of self-focused AR and vicarious reinforcement on intention was found with perceived threat severity as a mediator (b=.06, 95% CI 0.02-0.12, SE .02). Self-focused AR and vicarious reinforcement did not result in higher levels of fear (P=.32) or message minimization (P=.42) when compared to the control. Augmenting one’s reflection with vicarious reinforcement may be an effective strategy for health communication designers. While our study’s results did not show adverse effects in regard to fear and message minimization, self-focused AR should be employed as a health communication strategy with care, given the possible adverse effects of heightened levels of fear.
-
Investigating the Effect of Sound-Event Loudness on Crowdsourced Audio Annotations
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Audio annotation is an important step in developing machine-listening systems. It is also a time-consuming process, which has motivated investigators to crowdsource audio annotations. However, there are many factors that affect annotations, many of which have not been adequately investigated. In previous work, we investigated the effects of visualization aids and sound scene complexity on the quality of crowdsourced sound-event annotations. In this paper, we extend that work by investigating the effect of sound-event loudness on both sound-event source annotations and sound-event proximity annotations. We find that the sound class, loudness, and annotator bias affect how listeners annotate proximity. We also find that loudness affects recall more than precision and that the strengths of these effects are strongly influenced by the sound class. These findings are important not only for designing effective audio annotation processes, but also for effectively training and evaluating machine-listening systems.
-
Seeing Sound: Investigating the Effects of Visualizations and Complexity on Crowdsourced Audio Annotations.
Proceedings of the ACM on Human-Computer Interaction
Audio annotation is key to developing machine-listening systems; yet, effective ways to accurately and rapidly obtain crowdsourced audio annotations are understudied. In this work, we seek to quantify the reliability/redundancy trade-off in crowdsourced soundscape annotation, investigate how visualizations affect accuracy and efficiency, and characterize how performance varies as a function of audio characteristics. Using a controlled experiment, we varied sound visualizations and the complexity of soundscapes presented to human annotators. Results show that more complex audio scenes result in lower annotator agreement, and spectrogram visualizations are superior in producing higher-quality annotations at a lower cost of time and human labor. We also found that recall is more affected than precision by soundscape complexity, and that mistakes can often be attributed to certain sound event characteristics. These findings have implications not only for how we should design annotation tasks and interfaces for audio data, but also for how we train and evaluate machine-listening systems.
Projects
-
Lenses: an open-source tool for journalists to create data visualizations
- Present
Through an NYC Media Lab seed project, faculty and students from Columbia Journalism School and NYU Polytechnic School of Engineering worked with a product team from News Corp to develop open-source components for Lenses and to user-test the tool.
Honors & Awards
-
Bloomberg D4GX Immersion Fellow
Bloomberg Data For Good Exchange 2019
Supported the data science efforts of the My Brother’s Keeper Equity Intelligence Platform
-
Cornell Social Impact Summer Program
Cornell University
Selected to attend Cornell University's summer program for design and social impact.
-
Best MS Thesis In Integrated Digital Media
NYU Tandon School of Engineering
-
Peter Barker-Homek Women in Technology Fellowship
NYU Tandon School of Engineering
-
Gertrude M. Cox Award
North Carolina State University
-
Robert L. and Marilyn D. Blanton Enhancement Grant
North Carolina State University
Languages
-
English
Native or bilingual proficiency
Recommendations received
3 people have recommended Ayanna