Continuous Response Evaluation of Digital Video Clips Over The Internet
Abstract
Continuous response evaluation of video and film has long been a useful method in media research. It
has been particularly prevalent in advertising, as it allows producers to pinpoint events that evoke
particularly strong audience responses. In searching for a way to evaluate the effectiveness of digital
video clips (vidcasts), we developed a continuous response evaluation system and deployed it over
the internet. This paper discusses the preliminary results of testing the method with adult and high
school learners to evaluate a series of video podcasts on meta-cognitive success strategies. When
fully deployed, this tool will enable real-time viewer evaluation of internet video formats such as those
found on YouTube. The resulting data could be used to rate the overall interest of video programs or
to index specific scenes for educational or entertainment contexts.
Keywords - video, vidcast, evaluation, internet
1 INTRODUCTION
Broadband internet access has enabled the widespread transmission of digital video clips over the
internet. Indeed, Alexa.com, the provider of internet usage metrics, rates YouTube.com as the third
most popular internet site worldwide [1], behind Google and Yahoo. YouTube.com does provide a
simple five-star rating system by which any viewer can rate a video and also leave comments about
it; however, these popularity ratings provide little information about the usefulness of the content for
instruction. Recently, in the context of another project [2], the lead author produced a series of video
clips for use in internet instruction and wanted to evaluate their design and the interest level of the
content for on-line learners. In traditional instructional settings, it would be possible to assemble a
small focus group representative of the target audience and, after a screening of the video, discuss
its merits. Technology-based methods have also been used in preparing advertisements and
instructional videos for some time. Continuous Response Measures (CRM) first appeared in the
1930s for analyzing radio shows [3, 4]. In 1980, Nickerson [5] demonstrated a CRM system driven by
an Apple II microcomputer that allowed second-by-second analysis of a video, so that producers,
advertisers and educators could determine the key incidents that evoked audience reactions.
Baggaley [6] reports the use of push-button data collection technology for gathering continuous
audience responses when evaluating video and live events. CRM systems have become more
elaborate over the years, and at least one patent touts the correlation of EEG with galvanic skin
response and facial expressions to elicit the true audience reaction to a video event [7]. The goal of
this paper is to describe a prototype video evaluation tool developed for simple user-input CRM data
collection over the internet.
2 SYSTEM DESCRIPTION
The Continuous Response Evaluation System (CRES) is built using the following technologies: Flash,
ActionScript 3.0, HTML and PHP, with MySQL as the database server. Figure 1 illustrates the system
architecture.
Figure 1. System Architecture
As depicted in Figure 1, Flash videos are embedded in an HTML window and users access them
through their web browsers. While a video plays, users can evaluate it by clicking any number of
buttons, which can be labelled “Like”, “Dislike” and so on. The system collects this user input and
sends it to the server-side scripts, which, in turn, wrap the data with video timecode, IP address and
real-time stamp information and send it to the database.
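As a concrete illustration, the following PHP fragment sketches the database leg of Figure 1. It is a
minimal sketch only: the table name cres_events and its columns are our own illustrative
assumptions, not the schema actually deployed in the prototype.

<?php
// Minimal sketch of a table for the wrapped records of Figure 1.
// All names (cres_events, column names) are illustrative assumptions.
$db = new mysqli('localhost', 'cres_user', 'secret', 'cres');
$db->query(
    'CREATE TABLE IF NOT EXISTS cres_events (
         id          INT AUTO_INCREMENT PRIMARY KEY,
         ip          VARCHAR(45) NOT NULL,  -- viewer IP address
         received_at DATETIME    NOT NULL,  -- real-time stamp added on arrival
         session_id  VARCHAR(32) NOT NULL,  -- identifies one viewing session
         type        VARCHAR(8)  NOT NULL,  -- kind of response (see Section C)
         data        TEXT        NOT NULL   -- button presses or answers, with timecode
     )');
?>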
A. Constraints of the system
In the current prototype, videos are embedded in a Flash window, so users cannot access them
through iPhones or other mobile devices that do not support Flash. An alternative design would
separate the video window from the evaluation window and its input targets. While this is more
flexible, it makes it more difficult to communicate data such as timecode from the video component to
the evaluation component and leads to synchronization issues.
B. Video formats and encapsulation
To embed videos into Flash, the system converts .mov files into Flash video files (file extension .flv),
which, in turn, are embedded dynamically into a SWF file. The SWF file can be enclosed in HTML
pages and runs inside the browser. One advantage of this approach is that the largest source of
internet video, YouTube.com, also uses the Flash format.
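As a sketch of the enclosure step, a PHP page might write the SWF into HTML as follows; player.swf
and the clip request parameter are assumed names for illustration, not the prototype's actual files.

<?php
// embed.php -- illustrative sketch only; player.swf and the "clip"
// request parameter are assumed names, not the prototype's files.
$clip = htmlspecialchars($_GET['clip']);  // e.g. "vidcast1.flv"
?>
<object type="application/x-shockwave-flash"
        data="player.swf?video=<?php echo $clip; ?>"
        width="640" height="480">
  <param name="movie" value="player.swf?video=<?php echo $clip; ?>" />
  <p>This page requires the Adobe Flash Player.</p>
</object>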
C. Data transmission and collation
In addition to multiple videos, the continuous response evaluation prototype includes questionnaires
and text-entry questions for user responses. The current questionnaires have three parts: a pre-video
questionnaire, a video follow-up questionnaire and a post-video questionnaire. Until a questionnaire
or video is finished, user input is stored in the memory of the client-side computer. Upon completion,
the collected data are sent to the server-side scripts for persistent storage. This messaging eliminates
the constant polling that would needlessly consume bandwidth if a large number of participants at a
single site were engaged in formative evaluation activities.
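A hedged sketch of such a completion-time upload handler in PHP follows; it assumes the client
posts one session ID:type:data token per finished questionnaire or video, and it reuses the illustrative
cres_events table from the earlier sketch.

<?php
// store_batch.php -- sketch of the completion-time upload handler.
// Assumes one "sessionID:type:data" token is posted per finished
// questionnaire or video; names are illustrative, as before.
$db = new mysqli('localhost', 'cres_user', 'secret', 'cres');

list($session, $type, $data) = explode(':', $_POST['batch'], 3);

// Wrap the payload with the sender's IP address and a server-side
// real-time stamp, then store everything in a single round trip.
$stmt = $db->prepare(
    'INSERT INTO cres_events (ip, received_at, session_id, type, data)
     VALUES (?, NOW(), ?, ?, ?)');
$ip = $_SERVER['REMOTE_ADDR'];
$stmt->bind_param('ssss', $ip, $session, $type, $data);
$stmt->execute();
?>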
An extract of a typical data stream appears in Figure 2. The general format of the data is
IP address-datetime-[session ID:type:data]
The data format differs for each type of question. The type can be one of the following five
types:
Q: the pre-video questionnaire
Vn: the nth video
SQn: the questionnaire corresponding to the nth video
VQ: the post-video questionnaire
Cn: the nth text-entry question
The data format of types Q, SQn and VQ is question number-option number, with each data pair
separated by a semicolon. In the Like/Dislike scenario, the data format of type Vn is L or D
@timecode, with each data pair separated by a comma. The data format of type Cn is the user's input
text string.
Figure 2. Chart of LIKE and DISLIKE data collated by 30-second time segments
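To illustrate the collation step, the following PHP sketch splits a Vn data string into its timecoded
button presses and tallies them by 30-second segments, the form in which the Like/Dislike data are
charted in Figure 2. The function name and the default segment length are our own illustrative
choices, not part of the prototype.

<?php
// Sketch of collating Vn data (e.g. "L@12.4,D@47.9,L@95.0") into
// 30-second time segments, as charted in Figure 2.
function collateBySegment($vnData, $segmentLength = 30) {
    $bins = array();
    foreach (explode(',', $vnData) as $pair) {
        list($button, $timecode) = explode('@', $pair);  // "L" or "D", seconds
        $segment = (int) floor($timecode / $segmentLength);
        if (!isset($bins[$segment])) {
            $bins[$segment] = array('L' => 0, 'D' => 0);
        }
        $bins[$segment][$button]++;
    }
    return $bins;  // segment index => LIKE and DISLIKE counts
}

// Example: collateBySegment('L@12.4,D@47.9,L@95.0') yields
// segment 0 => 1 Like, segment 1 => 1 Dislike, segment 3 => 1 Like.
?>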
4 DISCUSSION AND CONCLUSIONS
4.3 Conclusions
The purpose of this paper was to discuss our initial prototype of a continuous response evaluation
system for internet video clips. In summary, we found that the prototype was effective in
demonstrating the potential of this type of tool; however, despite its face validity, the CRM
methodology brings with it all the analytical baggage and difficulties of interpretation that have
plagued so many previous investigators. Still, the method offers more potential for in-depth user
analysis than the current five-star rating system found on YouTube.com.
The method offers a way of tagging points of interest within video clips, which might be combined
with basic demographic information to note, for example, that “male viewers found segments A, B
and C more interesting, while female viewers found segments X, Y and Z more interesting”. This
internal metadata could lead the way to selective viewing or “compression on demand” of video
segments by information seekers having neither the time nor the interest to view a video in its entirety.
This would perhaps be of more value when viewing archival footage of longer video events, such as
political speeches, debates, lectures, scientific presentations or videoconferences. The authors also
see potential in using the technique in combination with user annotations, class notes and other social
indexing artefacts.
References
[1] Alexa Top Sites. Internet: http://www.alexa.com/topsites [May 19, 2009].
[2] G. Richards and N. Ostashewski, “Strategies for success: Meta-cognitive vidcasts for orientation
of online learners,” in Proceedings of EDULEARN09, Barcelona, 2009 (in press).
[3] B. Gunter, Media Research Methods: Measuring Audiences, Reactions and Impact. SAGE
Publications, 2000.
[4] F. Biocca, P. David, and M. West, “Continuous Response Measurement (CRM): A computerized
tool for research on the cognitive processing of communication messages,” in Measuring
Psychological Responses to Media Messages, A. Lang, Ed. Lawrence Erlbaum Associates, 1994.
[5] R. Nickerson, personal communication: demonstration of the PEAC Video Evaluation System,
1980.
[6] J. Baggaley, “Continual Response Measurement: Design and Validation,” Canadian Journal of
Educational Communication, vol. 16, no. 3, pp. 217-238, 1987.
[7] J. Maier, M. Maurer, C. Reinemann, and T. Faas, “Reliability and validity of real-time response
measurement: A comparison of two studies of a televised debate in Germany,” International
Journal of Public Opinion Research, vol. 19, no. 1, pp. 53-73, 2006.