Ayanna Seals, Ph.D.

New York, New York, United States

About

Leveraging deep expertise in UX strategy and applied human-computer interaction research,…

Experience

  • OneSource Consulting, LLC

    Brooklyn, New York, United States

    Greater New York City Area

    Raleigh-Durham, North Carolina Area

    Washington D.C. Metro Area

Education

Licenses & Certifications

Publications

  • Effects of Self-focused Augmented Reality on Health Perceptions During the COVID-19 Pandemic: A Web-Based Between-Subject Experiment

    Journal of Medical Internet Research

    Self-focused augmented reality (AR) technologies are growing in popularity and present an opportunity to address health communication and behavior change challenges. We aimed to examine the impact of self-focused AR and vicarious reinforcement on psychological predictors of behavior change during the COVID-19 pandemic. In addition, our study included measures of fear and message minimization to assess potential adverse reactions to the design interventions. A between-subjects web-based experiment was conducted to compare the health perceptions of participants in self-focused AR and vicarious reinforcement design conditions to those in a control condition. Participants were randomly assigned to the control group or to an intervention condition (ie, self-focused AR, reinforcement, self-focus AR × reinforcement, and avatar).
    We found that participants who experienced self-focused AR and vicarious reinforcement scored higher in perceived threat severity (P=.03) and susceptibility (P=.01) when compared to the control. A significant indirect effect of self-focused AR and vicarious reinforcement on intention was found with perceived threat severity as a mediator (b=.06, 95% CI 0.02-0.12, SE .02). Self-focused AR and vicarious reinforcement did not result in higher levels of fear (P=.32) or message minimization (P=.42) when compared to the control. Augmenting one’s reflection with vicarious reinforcement may be an effective strategy for health communication designers. While our study’s results did not show adverse effects in regard to fear and message minimization, utilization of self-focused AR as a health communication strategy should be done with care due to the possible adverse effects of heightened levels of fear.

  • Investigating the Effect of Sound-Event Loudness on Crowdsourced Audio Annotations

    Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

    Audio annotation is an important step in developing machine-listening systems. It is also a time-consuming process, which has motivated investigators to crowdsource audio annotations. However, there are many factors that affect annotations, many of which have not been adequately investigated. In previous work, we investigated the effects of visualization aids and sound scene complexity on the quality of crowdsourced sound-event annotations. In this paper, we extend that work by investigating the effect of sound-event loudness on both sound-event source annotations and sound-event proximity annotations. We find that the sound class, loudness, and annotator bias affect how listeners annotate proximity. We also find that loudness affects recall more than precision and that the strengths of these effects are strongly influenced by the sound class. These findings are important not only for designing effective audio annotation processes, but also for effectively training and evaluating machine-listening systems.

  • Seeing Sound: Investigating the Effects of Visualizations and Complexity on Crowdsourced Audio Annotations.

    Proceedings of the ACM on Human-Computer Interaction

    Audio annotation is key to developing machine-listening systems; yet, effective ways to accurately and rapidly obtain crowdsourced audio annotations are understudied. In this work, we seek to quantify the reliability/redundancy trade-off in crowdsourced soundscape annotation, investigate how visualizations affect accuracy and efficiency, and characterize how performance varies as a function of audio characteristics. Using a controlled experiment, we varied sound visualizations and the complexity of soundscapes presented to human annotators. Results show that more complex audio scenes result in lower annotator agreement, and spectrogram visualizations are superior in producing higher-quality annotations at a lower cost of time and human labor. We also found recall is more affected than precision by soundscape complexity, and mistakes can often be attributed to certain sound event characteristics. These findings have implications not only for how we should design annotation tasks and interfaces for audio data, but also for how we train and evaluate machine-listening systems.


Projects

Honors & Awards

  • Bloomberg D4GX Immersion Fellow

    Bloomberg Data For Good Exchange 2019

    Supported the data science efforts of the My Brother’s Keeper Equity Intelligence Platform.

  • Cornell Social Impact Summer Program

    Cornell University

    Selected to attend Cornell University's summer program for design and social impact.

  • Best MS Thesis In Integrated Digital Media

    NYU Tandon School of Engineering

  • Peter Barker-Homek Women in Technology Fellowship

    NYU Tandon School of Engineering

  • Gertrude M. Cox Award

    North Carolina State University

  • Robert L. and Marilyn D. Blanton Enhancement Grant

    North Carolina State University

Languages

  • English

    Native or bilingual proficiency

Recommendations received
