Com 416 Multimedia
We perceive the universe through our senses. These senses, sight and hearing among them, are brought into
play as soon as we interact with our surroundings. Our sensory organs send signals to the brain, which
constructs an interpretation of this interaction. The process of communication, of sending messages from one
person to another, is dependent on our understanding of the senses. In general, the more information that is
perceived by the receiver, the more likely it is that an effective communication will take place. For example,
suppose you are talking to a friend on the telephone. What is missing from this conversation, as opposed to a
regular face-to-face conversation? For one thing, you cannot see the other person's face. The expressions and
gestures that accompany what we say have a lot to do with communication. Now consider a letter you have
written describing a fun trip you took. Here your friend only gets to read the text that you have written and
cannot hear your voice saying it. Moreover, the communication is only one-way: you have to wait a while
before finding out how your friend replies. Now suppose you send a picture of yourself along with the letter.
You are now sending additional visual information, and your friend can better visualize the fun you are having.
The impact would be greater still, however, if you sent a video shot during the trip.
As you can see, the more information you send, the greater the impact of the communication. The medium of
communication—for example, a letter or a telephone call—restricts the usage of the various elements. Indeed,
the development of communication devices is aimed at increasing the amount of information that can be
transmitted. From the early letters involving just text, to the telephone where we could speak, we now are
seeing the development of video telephony. The development of computers is moving in the same direction.
Earlier, the computer was capable of producing only simple text as output; now it can produce sound, pictures,
and more. At present, the multimedia computer—a personal computer that can play sounds, accurately
reproduce pictures, and play videos—is readily available and widely used.
DEFINITION
Digital Multimedia is the field concerned with computer-controlled integration of text, graphics, images, videos,
audio, and any other medium where every type of information can be represented, transmitted and processed
digitally. The development of powerful multimedia computers and the evolution of the Internet have led to an
explosion of multimedia applications worldwide. These days multimedia systems are used in education, in
presentations, in information kiosks, and in the gaming industry. In fact, multimedia has applications
everywhere: in businesses, at schools and universities, at home, and even in public places.
The word multimedia is a combination derived from multiple and media. The word medium (the singular of
media) means a transmission channel. For example, sound is transmitted through the medium of air, or
electricity is transmitted through the medium of wires. Similarly, poetry could be considered a medium for
transmitting our thoughts. Or for that matter, a painting is a medium for conveying what we observe. Similarly,
a Hollywood director uses the medium of movies to tell a story. Multimedia is also a medium. To use it
effectively, we have to understand not only how to create the specific elements of multimedia, but also how to
design our multimedia system so that the messages we wish to convey come across effectively. To create
effective multimedia, it is important to be sensitive to other multiple media—such as TV and films. At the same
time, it is necessary to keep in mind that the two differ in many ways.
We will understand the differences and similarities between the two as we go along. The most important
difference between traditional multiple media such as radio and television and digital multimedia is the notion
of interactivity. The power of computers allows users to interact with the programs. Since interactivity is such a
powerful concept, many experts in the field of multimedia consider interactivity to be an integral part of
multimedia. We will also follow this convention. Thus, whenever we say the word multimedia, you should
understand that we are referring to digital, interactive multimedia.
INTERACTIVITY
In a multimedia system, if the user has the ability to control what elements are delivered and when, the system
is called an interactive system. Traditional mass media include television, film, radio, and newspapers. These
are called mass media, since the communication processes are one way, originating from a source and being
delivered to a mass audience. These technologies also combine audio, video, graphics, and text, but in a way
that is inflexible. For example, a film has a predefined beginning, middle, and end, irrespective of the audience
watching it. With the power of the computer, the same media could be manipulated by the audience. In this
manner, the audience does not need to remain passive, but becomes the user of the system. Thus, the key
difference between mass media and multimedia is the shift from audience to users, and from one-way
communication to two-way communication. This is accomplished through interactivity.
To communicate with the system, the user can use a variety of devices such as the keyboard, mouse, trackball,
touch screen, and stylus. Thus, while designing a multimedia application, we have to decide on the level of
interactivity we wish to provide to the user of the system. For example, in a direct-sales application, you can
offer different choices for a single product under different schemes, and the buyers can select the products they
wish to buy. One important thing to note is that well-designed products always give feedback to the user once
the user interacts with the computer. In our example, once the user selects the products to buy, the program can
provide feedback such as, "You will receive your requested product within 36 hours from now."
MULTIMEDIA APPLICATIONS
Multimedia finds application in various areas including, but not limited to, advertisements, art, education,
entertainment, engineering, medicine, mathematics, business, scientific research, and spatial and temporal
applications. A few application areas of multimedia are listed below:
Creative industries
Creative industries use multimedia for a variety of purposes, ranging from fine arts, to entertainment, to
commercial art, to journalism, to media and software services provided for any of the industries listed below.
An individual multimedia designer may cover this whole spectrum over a career, and demand for their skills
ranges from the technical, to the analytical, to the creative.
Commercial
Much of the electronic media, old and new, used by commercial artists is multimedia. Exciting presentations
are used to grab and keep attention in advertising. Creative services firms often develop industrial, business-to-
business, and interoffice communications as advanced multimedia presentations that go beyond simple slide
shows, whether to sell ideas or to liven up training. Commercial multimedia developers may also be hired to
design applications for government services and nonprofits.
Entertainment and Fine Arts
In addition, multimedia is heavily used in the entertainment industry, especially to develop special effects in
movies and animations. Multimedia games are a popular pastime and are software programs available either as
CD-ROMs or online. Some video games also use multimedia features. Multimedia applications that allow users
to actively participate instead of just sitting by as passive recipients of information are called Interactive
Multimedia.
Education
In education, multimedia is used to produce computer-based training courses (popularly called CBTs) and
reference works like encyclopedias and almanacs. A CBT lets the user go through a series of presentations, text
about a particular topic, and associated illustrations in various information formats. Edutainment is an informal
term used to describe the combination of education with entertainment, especially multimedia entertainment.
Engineering
Software engineers may use multimedia in computer simulations for anything from entertainment to training,
such as military or industrial training. Multimedia for software interfaces is often created as a collaboration
between creative professionals and software engineers.
Industry
In the Industrial sector, multimedia is used as a way to help present information to shareholders, superiors and
coworkers. Multimedia is also helpful for providing employee training, advertising and selling products all over
the world via virtually unlimited web-based technologies.
Mathematical and Scientific Research
In Mathematical and Scientific Research, multimedia is mainly used for modeling and simulation. For example,
a scientist can look at a molecular model of a particular substance and manipulate it to arrive at a new
substance. Representative research can be found in journals such as the Journal of Multimedia.
Medicine
In medicine, doctors can train by watching a virtual surgery, or they can simulate how the human body is
affected by diseases spread by viruses and bacteria and then develop techniques to prevent them.
CHARACTERISTICS OF A MULTIMEDIA SYSTEM
1. Computer Controlled
• Producing the content of the information – e.g., by using authoring tools, image editors, and sound and video
editors.
• Storing the information – providing large and shared capacity for multimedia information.
• Transmitting the information – through the network.
• Presenting the information to the end user – making direct use of computer peripherals such as display
devices (monitors) or sound generators (speakers).
2. Integrated
• All multimedia components (audio, video, text, graphics) used in the system must be somehow integrated.
• Every device, such as the microphone and the camera, is connected to and controlled by a single computer.
• A single type of digital storage is used for all media types.
• Video sequences are shown on the computer screen instead of a TV monitor.
3. Interactivity
• Level 1: Interactivity strictly on information delivery. Users select the time at which the presentation starts,
and the order, the speed, and the form of the presentation itself.
• Level 2: Users can modify or enrich the content of the information, and this modification is recorded.
• Level 3: Actual processing of the user's input; the computer generates genuine results based on that input.
4. Digitally Represented
• Digitization: the process of transforming an analog signal into a digital signal.
CLASSIFICATION OF MEDIA
1. Perception media
2. Representation media
3. Presentation media
4. Storage media
5. Transmission media
6. Information exchange media
1. Perception media: Perception media help humans sense their environment. The central question is how
humans perceive information in a computer environment. The answer is: through seeing and hearing.
Seeing: For the perception of information through seeing, media such as text, images, and video are used.
Hearing: For the perception of information through hearing, media such as music, noise, and speech are used.
2. Representation media: Representation media are defined by the internal computer representation of
information. The central question is: how is computer information coded? The answer is that various
formats are used to represent media information in a computer (a small sketch follows the list below):
i. Text characters are coded in ASCII.
ii. Graphics are coded according to the CEPT or CAPTAIN videotext standard.
iii. Images can be coded in the JPEG format.
iv. Audio/video sequences can be coded in different TV standard formats (PAL, NTSC, SECAM) and
stored in the computer in the MPEG format.
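As a minimal illustration (not from the text) of how the same information takes different internal codings, the
following Python snippet prints a text string as ASCII code points and an image pixel as raw numeric intensity
values:

```python
# Representation media: the same information, coded differently in the computer.
text = "Hi"
ascii_codes = [ord(ch) for ch in text]   # text as ASCII code points
print(ascii_codes)                       # [72, 105]

pixel = (255, 0, 0)                      # an image pixel as RGB intensity values
print(bytes(pixel).hex())                # 'ff0000' -- its raw byte coding
```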
3. Presentation media: Presentation media refer to the tools and devices for the input and output of
information. The central question is: through which devices is information delivered by the computer,
and through which devices is it introduced to the computer?
Output media: Paper, screens, and speakers are output media.
Input media: Keyboards, mice, cameras, and microphones are input media.
4. Storage media: Storage media refer to the data carriers that enable the storage of information. The
central question is: how will information be stored? The answer is: hard disks, CD-ROMs, etc.
5. Transmission media: Transmission media are the different information carriers that enable continuous
data transmission. The central question is: over what will information be transmitted? The answer is:
coaxial cable, fiber optics, and free air.
6. Information exchange media: Information exchange media include all information carriers for
transmission, i.e., all storage and transmission media. The central question is: which information carriers
will be used for information exchange between different places? The answer is: a combined use of storage
and transmission media, e.g., an electronic mailing system.
REAL-TIME SYSTEM
A real-time process is a process that delivers the result of its processing within a given time. The main
characteristics of a real-time system are correctness of computation and a fixed response time. A deadline
represents the latest acceptable time for the presentation of the processing result. Real-time systems have both
hard and soft deadlines. A soft deadline is one that may occasionally be missed and tolerated.
A hard deadline should never be violated; violating a hard deadline constitutes a system failure.
Multimedia systems use a different scenario than traditional real-time operating systems with respect to
real-time requirements.
MULTIMEDIA PRODUCTION
Multimedia production is a complicated process, usually involving many people. Typically, one or more of the
following people may be involved in making a multimedia product: producer, multimedia designer/creative
designer, subject matter expert, programmer, instructional designer, scriptwriter, computer graphic artist,
audio/video specialist, and webmaster. A brief description of each of these roles follows:
• PRODUCER—The role of the producer is to define, coordinate, and facilitate the production of the project.
Other tasks performed by the producer include negotiating with the client; securing financial resources,
equipment, and facilities; and coordinating the development team. The producer should be aware of the
capabilities and limitations of the technology, which helps in discussions with the client.
• MULTIMEDIA DESIGNER—A multimedia designer visualizes the system and determines its structure.
The designer defines the look, feel, format, and style of the entire multimedia system.
• SUBJECT MATTER EXPERT—The subject matter expert provides the program content for the multimedia
architect.
• PROGRAMMER/AUTHOR—The programmer integrates all the multimedia elements like graphics, text,
audio, music, photos, and animation, and codes the functionality of the product.
• INSTRUCTIONAL DESIGNER—The team may include a specialist who can take the information provided
by the content specialists and decide how to present it using suitable strategies and practices. The instructional
designer makes sure that the information is presented in such a manner that the audience easily understands it.
• SCRIPTWRITER—A script is a description of events that happen in a production. The scriptwriter makes
the flowchart of the entire system and decides the level of interactivity of the system.
• COMPUTER GRAPHIC ARTIST—The computer graphic artist creates the graphic elements of the
program such as backgrounds, photos, 3-D objects, logos, animation, and so on.
• AUDIO AND VIDEO SPECIALISTS—Audio and video specialists are needed when intensive use of
narration and digitized video are integrated into a multimedia presentation. The audio specialist is responsible
for recording and editing narration and for selecting, recording, or editing sound effects. The video specialist is
responsible for video capturing, editing, and digitizing.
• WEBMASTER—This individual has the responsibility of creating and maintaining a Webpage. The person
should be capable of converting a multimedia application into a Webpage or creating a Web page with
multimedia elements.
RESEARCH AND ANALYSIS—At this stage, we should find out as much as possible about the
audience: their education, technology skill level, needs, and so on. We also gather information on the content to
be presented and the system on which the multimedia product will be used.
SCRIPTING/FLOWCHARTING—Scripting (or flowcharting) involves deciding the overall structure
of the multimedia project. This is done by placing the various segments of the project in order, using arrows to
reflect flow and interactive decision making. A flowchart has information about the major headings/options
given to the user, what comes in the main menu of the program, and the subsequent branching when the user
takes an action. For example, if we were designing our home page, with information about our education, our
interests, and our favorite sites as subpages, we would draw a flowchart starting with our main screen and
indicating the other screens and how they are linked together.
STORYBOARDING—The storyboard is a detailed design plan that the designer creates, indicating
what each screen looks like, which media elements are used in the screen, and all the specifications of the media
elements. For example, a storyboard of a screen will contain information about the buttons being used on the
screen, what they look like (a rough sketch), and what happens when the user clicks on a button. The
storyboarding stage is where the detailed visualization of the multimedia system takes place.
PROGRAMMING—When the development team has created and collected the various interface and
content elements, they are assembled into a final product using a programming language like Visual Basic. One
trend has been the development of easy-to-use authoring programs such as Macromedia Director, HyperCard,
and Authorware.
TESTING—The final production stage is the testing phase. It verifies that everything works on the
systems it is supposed to work on, and that typical users find the design intuitive enough.
2.0 VISUALISATION AND CREATIVE PROCESS
Visualization is the process of representing abstract business or scientific data as images that can aid in
understanding the meaning of the data.
Creative visualization is a mental technique that uses the imagination to make dreams and goals come true.
Used in the right way, creative visualization can improve your life and attract success and prosperity to you. It
is a power that can alter your environment and circumstances, cause events to happen, and attract money,
possessions, work, people, and love into your life.
This is a very important part, because we only have a limited amount of time to do certain things. Often you
find that the people called the most 'creative people' are very good at this stage, the evaluation stage. They
have all these ideas, but they can use self-criticism and reflection to say, "these are the ones that have the most
merit, and these are the ones I am going to work on".
5. ELABORATION
And then we have the final stage, called ELABORATION. This is the stage Edison meant when he said that
genius is "1% inspiration and 99% perspiration": the elaboration stage is the 99% perspiration. This is where
you actually do the work. Many people think that the creative process is that insight, that 'aha' moment, or the
preparation part. But a creative individual isn't complete, and cannot produce anything that really lasts, without
going through this stage and actually putting in the hard work: testing the idea, working on the idea, those late
nights in the studio, those hours working at your desk or in the laboratory if you are a scientist, those days of
testing and micro-testing products. This is the elaboration stage.
3.0 TEXT IN MULTIMEDIA
Words and symbols in any form, spoken or written, are the most common system of communication. They
deliver the most widely understood meaning to the greatest number of people. Most academic text, such as
journals and e-magazines, is available in web-browser-readable form.
Typefaces can be described in many ways, but the most common characterization of a typeface is serif versus
sans serif. A serif is the little decoration at the end of a letter stroke. Times, Times New Roman, and Bookman
are fonts in the serif category; Arial, Optima, and Verdana are examples of sans serif fonts. Serif fonts are
generally used for the body of the text for better readability, and sans serif fonts are generally used for
headings. For example, the letter "F" rendered in a serif font and in a sans serif font illustrates the two
categories.
Selecting Text fonts
Choosing the fonts to be used in a multimedia presentation is a difficult process. The following guidelines help
in choosing fonts for a multimedia presentation:
i. Any number of typefaces can be used in a single presentation; the concept of using many fonts on a
single page is called ransom-note typography.
ii. For small type, it is advisable to use the most legible font.
iii. In large-size headlines, the kerning (spacing between the letters) can be adjusted.
iv. In text blocks, the leading can be adjusted for the most pleasing line spacing.
v. Drop caps and initial caps can be used to accent words.
vi. Different effects and colors of a font can be chosen to make the text look distinct.
vii. Anti-aliasing can be used to make text look gentle and blended.
viii. For special attention, the text can be wrapped onto a sphere or bent like a wave.
ix. Meaningful words and phrases should be used for links and menu items.
x. In the case of text links (anchors) on web pages, the messages can be accented.
The most important text on a web page, such as a menu, can be placed in the top 320 pixels.
COMPUTERS AND TEXT
FONTS:
PostScript fonts are a method of describing an image in terms of mathematical constructs (Bézier curves), so
the method is used not only to describe the individual characters of a font but also to describe illustrations and
whole pages of text. Since PostScript makes use of mathematical formulas, it can easily be scaled bigger or
smaller. Apple and Microsoft announced a joint effort to develop a better and faster outline font methodology
based on quadratic curves, called TrueType. In addition to printing smooth characters on printers, TrueType can
draw characters on a low-resolution (72 dpi or 96 dpi) monitor.
Unicode
Unicode makes use of a 16-bit architecture for multilingual text and character encoding. Unicode accommodates
about 65,000 characters from all known languages and alphabets in the world. Where several languages share a
set of symbols that have a historically related derivation, the shared symbols of each language are unified into
collections of symbols (called scripts). A single script can work for tens or even hundreds of languages.
Microsoft, Apple, Sun, Netscape, IBM, Xerox, and Novell have participated in the development of this standard,
and Microsoft and Apple have incorporated Unicode into their operating systems.
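As a small sketch of the 16-bit design (example characters assumed, not from the text), each of the following
characters, drawn from a different script, occupies exactly one 16-bit code unit when encoded as UTF-16:

```python
# Each character below is one 16-bit code unit in UTF-16 (big-endian shown).
for ch in ("A", "Ω", "я", "あ"):
    print(ch, hex(ord(ch)), ch.encode("utf-16-be").hex())
# A  0x41   0041
# Ω  0x3a9  03a9
# я  0x44f  044f
# あ 0x3042 3042
```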
Special font editing tools can be used to make your own type so you can communicate an idea or graphic
feeling exactly. With these tools professional typographers create distinct text and display faces.
1. Fontographer:
Fontographer, a Macromedia product, is a specialized graphics editor for both the Macintosh and Windows
platforms. You can use it to create PostScript, TrueType, and bitmapped fonts for Macintosh and Windows.
4. Hypermedia Structures:
Two buzzwords often used in hypertext are link and node. Links are connections between the conceptual
elements, that is, between the nodes, which may consist of text, graphics, sounds, or related information in the
knowledge base.
4.0 SOUND
Voice is the predominant method by which human beings communicate. We are so accustomed to speaking and
listening that we take sound for granted. But sound exists in many different forms, and each form has its own
purpose and characteristics.
How Do We Hear?
If a tree falls in the forest, and there is no one to hear it, will there be a sound? This is a very old philosophical
dilemma, which relies on using the word sound for two different purposes. One use is as a description of a
particular type of physical disturbance—sound is an organized movement of molecules caused by a vibrating
body in some medium—water, air, rock, etc.
The other use is as a description of a sensation—sound is the auditory sensation produced through the ear by the
alteration in pressure, particle displacement, or particle velocity which is propagated in an elastic medium. Both
these definitions are correct. They differ only in the first being a cause and the second being an effect. When an
object moves back and forth (vibrates), it pushes the air immediately next to it a bit to one side and, when
coming back, creates a slight vacuum. This process of oscillation creates a wave similar to the ripples that are
created when you throw a stone into still water. The air particles that move in waves make the eardrum oscillate.
This movement is registered by a series of small bones—the hammer, the anvil, and the stirrup—that transmit
these vibrations to the inner ear nerve endings. These, in turn, send impulses to the brain, which perceives them
as sounds.
For example, consider what happens when you pluck a guitar string. The plucked string vibrates, generating
waves—periodic compressions and decompressions—in the air surrounding the vibrating string. These sound
waves move through the air. When they reach the ear, they cause the eardrum to vibrate, which in turn results in
signals being sent to the brain.
Content sound provides information to audiences, for example, dialogs in movies or theater. Some examples of
content sound used in multimedia are:
• Narration: Narration provides information about an animation that is playing on the screen.
• Testimonials: These could be auditory or video sound tracks used in presentations or movies.
• Voice-overs: These are used for short instructions, for example, to navigate the multimedia application.
• Music: Music may be used to communicate (as in a song).
Ambient sound consists of an array of background and sound effects. These include:
• Message reinforcement: The background sounds you hear in real life, such as the crowds at a ball game, can
be used to reinforce the message that you wish to communicate.
• Background music: Set the mood for the audience to receive and process information by starting and ending
a presentation with music.
• Sound effects: Sound effects are used in presentations to liven up the mood and add effects to your
presentations, such as sound attached to bulleted lists.
Properties of Sound
Many of the terms that we learned in our high school physics class are used by audio experts. In this section we
review some of these terms. As we have seen, sound waves are disturbances in the air (or other mediums of
transmission). The wave consists of compressions and rarefactions of air and is a longitudinal wave. However,
all waves can be represented by a standard waveform depicting the compressions and rarefactions. The
compressions map to the crests and the rarefactions to the troughs in Figure 1, which depicts a typical
waveform. A waveform gives a measurement of the speed of the air particles and the distance that they travel
for a given sound in a given medium. The amplitude measures the relative loudness of the sound; it is the
distance between a valley and a crest, as shown in Figure 1. The amplitude determines the volume of the sound.
The unit of measurement of volume is the decibel (dB). Have you ever stood on the tarmac when an airplane
takes off? The sound produced is of such a high decibel value that you want to cover your ears because they
hurt.
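Because the decibel expresses a logarithmic ratio rather than an absolute quantity, relative loudness can be
computed from an amplitude ratio. A minimal sketch (sample values assumed, not from the text):

```python
import math

def amplitude_ratio_db(a, a_ref):
    """Relative loudness of amplitude a versus reference a_ref, in decibels."""
    return 20 * math.log10(a / a_ref)

print(amplitude_ratio_db(10.0, 1.0))  # 20.0 dB: ten times the amplitude
print(amplitude_ratio_db(2.0, 1.0))   # ~6.02 dB: doubling adds about 6 dB
```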
1. Frequency
The difference in time between the formation of two crests is termed the period. It is measured in seconds
(see Figure 1). A number of crests (peaks) may occur within a second. The number of peaks that occur in one
second is the frequency. Another term associated with frequency is pitch. If an object oscillates rapidly, it
creates a "high-pitched" sound. A low-frequency sound, on the other hand, is produced by an object that vibrates
slowly, such as the thicker strings of a piano or guitar. Frequency is measured as the number of cycles
(vibrations) per second, and the unit of frequency is the hertz (Hz). Frequency may also be defined as the number
of waves passing a point in one second. The human ear can perceive a range of frequencies from 20 to 20,000 Hz
(20 kHz). However, it is most sensitive to sounds in the range of 2-4 kHz.
Fig. 1: Frequency
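Since frequency is the number of cycles per second, it is the reciprocal of the period. A minimal sketch of the
relationship (values assumed for illustration):

```python
# Frequency f (Hz) is the reciprocal of the period T (seconds): f = 1 / T.
period_s = 0.004           # 4 ms between successive crests
frequency_hz = 1 / period_s
print(frequency_hz)        # 250.0 Hz -- well within the audible range
```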
2. Wavelength
Wavelength is the distance from the midpoint of one crest to the midpoint of the next crest. It is represented by
the symbol λ (refer to Figure 2).
3. Doppler Effect
Sound waves, as we said earlier, are compressions and rarefactions of air. When the object making the sound is
moving toward you, the frequency goes up due to the waves getting pushed more tightly together. The opposite
happens when the object moves away from you and the pitch goes down. This is called the Doppler effect. Why
does the horn of an approaching car sound high-pitched when it is coming close to you, yet suddenly becomes
low when it moves away? As a car and its horn move toward you, the pushes of sound—the sound waves—get
crammed together, which makes them higher pitched. On the other hand, when the car and the horn move away
from you, the sound waves are spread out further apart. That makes a lower pitched sound. This is depicted in
Figure 3.
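The size of the shift follows the standard physics formula for a moving source and a stationary listener. A small
illustrative sketch (speed and frequency values assumed, not from the text):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

def doppler_frequency(f_source, v_source, approaching=True):
    """Perceived frequency for a stationary listener and a moving source:
    f' = f * v / (v - vs) when approaching, f * v / (v + vs) when receding."""
    vs = -v_source if approaching else v_source
    return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND + vs)

print(doppler_frequency(400.0, 30.0, approaching=True))   # ~438 Hz: pitch rises
print(doppler_frequency(400.0, 30.0, approaching=False))  # ~368 Hz: pitch falls
```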
4. Bandwidth
Bandwidth is defined as the difference between the highest and the lowest frequency contained in a signal.
Fig. 2: Wavelength
5. Harmonics
Few objects produce sound of a single frequency. Most musical instruments, for example, generate multiple
frequencies for each note. That is really the way one can tell the difference between musical instruments, for
example, a violin and a flute, even though both produce notes of precisely the same pitch. The combinations of
frequencies generated by an instrument are known as the timbre. The sounds that we hear from vibrating objects
are complex in the sense that they contain many different frequencies. This is due to the complex way the
objects vibrate. A "note" (say, Middle C) played on a piano sounds different from the same "note" played on a
saxophone. In both cases, different frequencies above the common fundamental note are present. These
different frequencies, along with the difference in timbre, enable you to distinguish between different
instruments. The harmonic series is a series of frequencies that are whole-number multiples of a fundamental
frequency. For example, taking the tone Middle C, whose fundamental frequency is approximately 261 Hz, the
harmonic series (HS) on this frequency is 261 Hz, 522 Hz, 783 Hz, 1044 Hz, and so on (see Figure 4).
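Since the harmonic series is just the whole-number multiples of the fundamental, it is trivial to compute. A
minimal sketch:

```python
# Harmonic series: whole-number multiples of a fundamental frequency.
fundamental_hz = 261  # approximately Middle C, as in the text

harmonics = [n * fundamental_hz for n in range(1, 6)]
print(harmonics)  # [261, 522, 783, 1044, 1305]
```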
DIGITAL AUDIO
The sound heard by the ear (also called audio) is analog in nature and is a continuous waveform. Acoustic
instruments produce analog sounds. A computer needs to transform the analog sound wave into a digital
representation consisting of discrete numbers. In this section, we will try to understand the basic principles of
digital audio that are critical in understanding the storage, transmission, and applications of audio data. With the
Internet providing an unrestricted medium for audio transmission, a large amount of research is focused on
compression techniques, speed of transmission, and audio quality. A microphone converts the sound waves into
electrical signals. This signal is then amplified, filtered, and sent to an analog-to-digital converter. This
information can then be retrieved and edited using a computer. If you want to output this data as sound, the
stream of data is sent to the speakers via a digital-to-analog converter and a reconstruction filter, and the audio
is then amplified. This produces the analog sound wave that we hear.
SAMPLING
The audio input from a source is sampled several thousand times per second. Each sample is a snapshot of the
original signal at a particular time. Let us make an analogy. Consider the making of a motion picture. A
dynamic scene is captured on film or videotape 24-30 times a second. The eye perceives a rapid succession of
individual photographic frames as movement on the screen. Due to the speed of display of the frames, the eye
perceives it as a continuum. Similarly, sound sampling transforms a continuous sound wave into discrete
numbers.
SAMPLING RATE
When sampling a sound, the computer processes snapshots of the waveform. The frequency of these snapshots
is called the sampling rate. The rate can vary typically from 5000-90,000 samples per second. Sampling rate is
an important (though not the only) factor in determining how accurately the digitized sound represents the
original analog sound. Let us take an example. Your mother is scolding you for breaking her precious vase kept
in the living room. Your sister hears only bits and pieces of the conversation because she is not interested in the
matter.
Later you ask your sister if the scolding was justified, and she replies that she did not listen to the whole
conversation. This is because she sampled the conversation at a very low rate.
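To make the idea concrete, here is a minimal sketch (function name and values assumed, not from the text) of
taking discrete snapshots of a continuous wave at a fixed sampling rate:

```python
import math

def sample_sine(freq_hz, sample_rate_hz, n_samples):
    """Take n_samples snapshots of a sine wave, sample_rate_hz times per second."""
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate_hz)
            for n in range(n_samples)]

# Five snapshots of a 440 Hz tone sampled 8000 times per second.
print(sample_sine(440.0, 8000.0, 5))
```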
DIGITIZATION
Digitization is the process of assigning a discrete value to each of the sampled values. It is performed by an
integrated chip (IC) called an A-to-D converter. In the case of 8-bit digitization, this value is between 0 and 255
(or -128 and 127). In 16-bit digitization, this value is between 0 and 65,535 (or -32,768 and 32,767). An
essential thing to remember is that a digitized signal can take only certain (discrete) values. The process of
digitization introduces noise in the signal, and this noise is related to the number of bits per sample.
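A minimal sketch of the idea (scaling convention assumed, not from the text): with 8 bits there are only 256
codes available, so nearby sample values collapse onto the same code, which is the source of quantization noise:

```python
def quantize(sample, bits=8):
    """Map a sample in [-1.0, 1.0] to the nearest of 2**bits discrete levels."""
    levels = 2 ** bits
    return round((sample + 1.0) / 2.0 * (levels - 1))  # code in 0 .. levels-1

print(quantize(0.0, bits=8))     # 128 -- mid-scale of the 0..255 range
print(quantize(0.3217, bits=8))  # 169 -- nearby inputs map to this same code
print(quantize(0.3220, bits=8))  # 169
```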
FIDELITY
Fidelity is defined as the closeness of the recorded version to the original sound. In the case of digital speech, it
depends upon the number of bits per sample and the sampling rate. A really high-fidelity (hi-fi) recording takes
up a lot of memory space (176.4 KB for every second of stereo-quality audio sampled at 16 bits, 44.1 kHz per
channel). Fortunately, for most computer multimedia applications it is not necessary to have very high fidelity
sound.
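The 176.4 KB figure follows directly from the sampling parameters. A minimal sketch of the arithmetic:

```python
def audio_bytes_per_second(sample_rate_hz, bits_per_sample, channels):
    """Uncompressed PCM storage needed for one second of audio."""
    return sample_rate_hz * bits_per_sample // 8 * channels

# CD-quality stereo: 44.1 kHz, 16 bits, 2 channels -> 176,400 bytes (176.4 KB).
print(audio_bytes_per_second(44_100, 16, 2))
```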
QUALITY OF SOUND
Quality of Sound in a CD
CD-ROMs have become the media choice for the music industry in a very short period of time. The reasons are
as follows:
• Ease of use and durability of the media
• Random access capability as compared to audiotapes
• Very high quality sound
• Large storage volumes
CD-ROMs are becoming important media for multimedia applications. The sampling rate is typically 44.1 kHz
for each channel (left and right). For example, take an audiocassette and listen to a song by your favorite singer.
Then listen to the same song on a CD. Do you hear the difference? This difference in audio quality is because of
the difference in how the song is recorded on the two different media.
COMPRESSION
An important aspect of communication is transfer of data from the creator to the recipient. Transfer of data in
the Internet age is very time-dependent. Take for instance speech, which is nothing but changes in the intensity
of sound over a fixed period. This speech is transferred across networks in the form of sound files. If the size of
the sound files is too large, the time taken to transfer the files increases. This increase in the transfer time
deteriorates the quality of the sound at the receiver's end. The time taken to transfer a file can be decreased
using compression.
Compression in computer terms means reducing the physical size of data such that it occupies less storage space
and memory. Compressed files are, therefore, easier to transfer because there is a sizable amount of reduction in
the size of data to be transferred. This results in a reduction in the time needed for file transfer as well as a
reduction in the bandwidth utilization thus providing good sound quality even over a slow network. The
following examples of digital media show the amount of storage space required for one second of playback of
an audio file:
• An uncompressed audio signal of telephone quality (8-bit samples at 8 kHz) leads to a bandwidth requirement
of 64 Kbps and a storage requirement of 8 KB for one second of playback.
• An uncompressed stereo audio signal of CD quality (16-bit samples at 44.1 kHz) leads to a bandwidth
requirement of 44.1 kHz x 16 bits = 705.6 Kbps per channel and a storage requirement of 88.2 KB per channel
for one second of playback.
Compression Requirements
In the case of audio, processing data in a multimedia system leads to storage requirements in the range of
several megabytes. Compression in multimedia systems is subject to certain constraints. These constraints
are:
• The quality of the reproduced data should be adequate for applications.
• The complexity of the technique used should be minimal, to make a cost-effective compression technique.
• The processing of the algorithm should not take too long.
• Various audio data rates should be supported. Thus, depending on specific system conditions the data rates can
be adjusted.
• It should be possible to generate data on one multimedia system and reproduce data on another system. The
compression technique should be compatible with various reproduction systems.
As many applications exchange multimedia data using communication networks, compatibility of
compression is required. Standards from bodies such as the CCITT (International Consultative Committee for
Telephone and Telegraph), the ISO (International Organization for Standardization), and MPEG (the Moving
Picture Experts Group) are used to achieve this compatibility.
Lossless Compression
In lossless compression, data are not altered or lost in the process of compression or decompression.
Decompression produces a replica of the compressed object. This compression technique is used for text
documents, databases, and text-related objects. The following are some of the commonly used lossless
standards:
• Packbits encoding (run-length encoding; a minimal sketch follows this list)
• CCITT Group 3 1-D (compression standard based on run-length encoding scheme)
• CCITT Group 3 2-D (compression standard based on run-length encoding scheme modified by two-
dimensional encoding)
• CCITT Group 4 (compression standards based on two-dimensional compression)
• Lempel-Ziv-Welch (LZW) algorithm (the technique used by tools such as ARJ/PKZIP)
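As referenced above, here is a minimal sketch of run-length encoding, the idea behind Packbits (representation
assumed; real Packbits uses a more compact byte-level scheme): each run of identical values is stored as a
(count, value) pair, and decoding restores the original exactly, which is what makes the scheme lossless.

```python
def rle_encode(data):
    """Collapse runs of identical values into [count, value] pairs."""
    runs = []
    for value in data:
        if runs and runs[-1][1] == value:
            runs[-1][0] += 1
        else:
            runs.append([1, value])
    return runs

def rle_decode(runs):
    """Expand [count, value] pairs back into the original sequence."""
    return [value for count, value in runs for _ in range(count)]

data = list("AAAABBBCCD")
encoded = rle_encode(data)
print(encoded)                      # [[4, 'A'], [3, 'B'], [2, 'C'], [1, 'D']]
assert rle_decode(encoded) == data  # lossless: decoding restores the original
```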
Lossy Compression
There is loss of some information when lossy compression is used. The loss of this data is such that the object
looks more or less like the original. This method is used where absolute data accuracy is not essential. Lossy
compression is the most commonly used form of compression.
AUDIO EDITING
One can record or manipulate audio files using various audio editors. You must have a sound card installed on
your PC to edit stored or recorded audio data. Recording sound for multimedia applications is only the first step
in the process of sound processing. After the audio has been recorded and stored, it has to be modified to
improve its quality. Unwanted sounds or silences have to be removed. Mistakes in recording can be erased or
modified. Sounds can be mixed to get a better effect. Adding effects to the audio file also gives an extra touch
for the listener. Some common audio editing software packages for Windows
are:
• Cool Edit
• Sound Forge XP
• Wave Flow
Using these wave editors, one can perform functions like copy and paste, just as one would in any text editor.
You can also concatenate, append, or mix two or more audio files. We assume that the audio files are saved in
the WAV format, so the files have a .wav extension. This is a popular format on the Windows platform;
however, many editors allow editing in other formats as well. We have used the Wave Flow editor (which is
packaged with the Sound Blaster card) to illustrate some common editing options. You can also experiment
with these effects using other editors.
Special Effects
Sound and music effects are a part of our daily lives. Every environment, whether it be a highway, an office, or
a home, has its own set of sounds to characterize it. These include a car being driven, office equipment in
operation, or household appliances. Sounds not only provide a realistic sense of the scene but can also provide
important input for sensing a scene change. Sound effects can be incorporated in audio files using audio editors.
Sound effects are generated by simply manipulating the amplitude or wavelength of the audio waveform. There
is a variety of special effects built into audio editors. The most commonly used effects are echo, reverb, fade-in,
fade-out, and amplify.
3. Select the first 25% of the waveform by holding the left mouse button down and dragging the mouse until the
selection is made.
4. Click on the Tools option from the main menu toolbar and select the Fade-in option.
The fade-in dialog box is displayed (Figure 4).
5. Set the Initial Percent to 0% and select the Progression type as Linear. Click on the OK button.
6. Click on the waveform to remove the selection and play the changed wave file to the audience.
7. Note the steady increase in volume of the wave file over a period of time.
Note the change in amplitude of the selected waveform for the audience; amplitude is linked to the volume of
the sound.
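Underneath, a linear fade-in is just an amplitude ramp applied to the first part of the samples. A minimal sketch
(function name and values assumed, not taken from Wave Flow):

```python
def fade_in(samples, fraction=0.25):
    """Linearly ramp the first `fraction` of the samples from silence to full volume."""
    n = int(len(samples) * fraction)
    return [s * (i / n) if i < n else s for i, s in enumerate(samples)]

wave = [0.8] * 8
print([round(s, 2) for s in fade_in(wave, 0.5)])
# [0.0, 0.2, 0.4, 0.6, 0.8, 0.8, 0.8, 0.8] -- steady rise, then full volume
```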
• COOL EDIT—This is shareware software used to edit audio files. This package supports a variety of sound
formats. It has a built-in CD player through which one can convert Red-Book audio (CD audio) into a
waveform and save it to any available format. Although this software is shareware, it has enough functions to be
used professionally. Its ease of use has made it one of the most popular shareware audio editing software
packages. It is freely downloadable from the Internet.
• SOUND FORGE XP—It is as powerful as Cool Edit. It is not shareware.
• WAVE STUDIO—It is packaged along with the Sound Blaster Multimedia Kit. It is a powerful aid for
editing audio files. This editor supports most of the audio file formats. The only drawback is that this editor
works only if you have a Sound Blaster soundcard.
5.0 IMAGE
Still images are an important element of a multimedia project or a web site. To make a multimedia
presentation look elegant and complete, it is necessary to spend ample time designing the graphics and the
layouts. Competent, computer-literate skills in graphic art and design are vital to the success of a
multimedia project.
Digital Image
A digital image is represented by a matrix of numeric values, each representing a quantized intensity value.
If I is a two-dimensional matrix, then I(r,c) is the intensity value at the position corresponding to row r and
column c of the matrix. The points at which an image is sampled are known as picture elements, commonly
abbreviated as pixels. The pixel values of intensity images are called gray-scale levels (we encode here the
"color" of the image). The intensity at each pixel is represented by an integer and is determined from the
continuous image by averaging over a small neighborhood around the pixel location. If there are just two
intensity values, for example black and white, they are represented by the numbers 0 and 1; such images are
called binary-valued images. If 8-bit integers are used to store each pixel value, the gray levels range from 0
(black) to 255 (white).
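A minimal sketch of this representation (sample values assumed): a tiny gray-scale image as a matrix of 8-bit
intensities, indexed by row and column.

```python
# I[r][c] is the quantized gray level (0 = black .. 255 = white) at row r, column c.
image = [
    [0,   0,   64, 255],
    [0,   64, 255, 255],
    [64, 255, 255, 128],
]
r, c = 1, 2
print(image[r][c])                 # 255 -- a white pixel at row 1, column 2
print(len(image), len(image[0]))   # 3 rows x 4 columns
```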
Multiple Monitors
When developing multimedia, it is helpful to have more than one monitor, or a single high-resolution monitor
with lots of screen real estate, hooked up to your computer. In this way, you can display the full-screen working
area of your project or presentation and still have space to put your tools and other menus. This is particularly
important in an authoring system such as Macromedia Director, where the edits and changes you make in one
window are immediately visible in the presentation window, provided the presentation window is not obscured
by your editing tools.
Bitmap Software
The abilities and features of image-editing programs for both the Macintosh and Windows range from simple to
complex. The Macintosh does not ship with a painting tool, and Windows provides only the rudimentary Paint,
so you will need to acquire this very important software separately; bitmap editing or painting programs often
come as part of a bundle when you purchase your computer, monitor, or scanner.
Capturing and Editing Images
The image that is seen on a computer monitor is a digital bitmap stored in video memory, updated about every
1/60 second or faster, depending upon the monitor's scan rate. When images are assembled for a multimedia
project, it may often be necessary to capture and store an image directly from the screen. It is possible to use
the PrtScr key on the keyboard to capture an image.
Scanning Images
Suppose that, after scanning through countless clip art collections, you still cannot find the unusual background
you want for a screen about gardening. Sometimes when you search for something too hard, you don't realize
that it's right in front of your face: you can scan a suitable object yourself. Open the scan in an image-editing
program and experiment with different filters, the contrast, and various special effects. Be creative, and don't be
afraid to try strange combinations; sometimes mistakes yield the most intriguing results.
Vector Drawing
Most multimedia authoring systems provide for use of vector-drawn objects such as lines, rectangles, ovals,
polygons, and text. Computer-aided design (CAD) programs have traditionally used vector-drawn object
systems for creating the highly complex and geometric renderings needed by architects and engineers. Graphic
artists designing for print media use vector-drawn objects because the same mathematics that puts a rectangle on
your screen can also place that rectangle on paper without jaggies, using the higher resolution of the printer and
a page description language such as PostScript. Programs for 3-D animation also use vector-drawn graphics: for
example, the various changes of position, rotation, and shading of light required to spin an extruded shape are
all computed mathematically from its vector description.
Color
Color is a vital component of multimedia. Management of color is both a subjective and a technical exercise.
Picking the right colors and combinations of colors for your project can involve many tries until you feel the
result is right.
Additive Color
In the additive color model, a color is created by combining light sources in three primary colors: red, green,
and blue (RGB). This is the process used in a TV or computer monitor.
Subtractive Color
In the subtractive color method, a new color is created by combining colored media such as paints or inks that
absorb (or subtract) some parts of the color spectrum of light and reflect the others back to the eye. Subtractive
color is the process used to create color in printing. The printed page is made up of tiny halftone dots of three
primary colors: cyan, magenta, and yellow (CMY).
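In idealized, normalized values the two models are complements of each other, which a minimal sketch (naive
conversion, ignoring real ink behavior) makes clear:

```python
# Naive RGB -> CMY conversion: C = 1 - R, M = 1 - G, Y = 1 - B.
def rgb_to_cmy(r, g, b):
    return 1.0 - r, 1.0 - g, 1.0 - b

print(rgb_to_cmy(1.0, 0.0, 0.0))  # red light -> (0.0, 1.0, 1.0): magenta + yellow ink
print(rgb_to_cmy(1.0, 1.0, 1.0))  # white -> (0.0, 0.0, 0.0): no ink on white paper
```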
Editing features
The elements of multimedia – images, animation, text, digital audio, MIDI music, and video clips – need to be
created, edited, and converted to standard file formats, and specialized applications provide these capabilities.
Editing tools for these elements, particularly text and still images, are often included in your authoring system.
Organizing features
The organization, design, and production process for multimedia involves storyboarding and flowcharting. Some
authoring tools provide a visual flowcharting system or overview facility for illustrating your project's structure
at a macro level. Storyboards and navigation diagrams, too, can help organize a project. Because designing the
interactivity and navigation flow of your project often requires a great deal of planning and programming effort,
your storyboard should describe not just the graphics of each screen but the interactive elements as well. Features
that help organize your material, such as those provided by Super Edit, Authorware, IconAuthor, and other
authoring systems, are a plus.
Programming features
Authoring tools that offer a very high level language or interpreted scripting environment for navigation control
and for enabling user inputs – such as Macromedia Director, Macromedia Flash, HyperCard, MetaCard, and
ToolBook – are more powerful. The more commands and functions provided in the scripting language, the more
powerful the authoring system. As with traditional programming tools, look for an authoring package with good
debugging facilities, robust text editing, and an online syntax reference. Other scripting augmentation facilities
are advantageous as well. In complex projects you may need to program custom extensions of the scripting
language for direct access to the computer's operating system. Some authoring tools offer direct importing of
preformatted text, indexing facilities, complex text search mechanisms, and hyperlinkage tools. These authoring
systems are useful for developing CD-ROM information products, online documentation and help systems, and
sophisticated multimedia-enhanced publications. With scripts you can perform computational tasks; sense and
respond to user input; create character, icon, and motion animation; launch other applications; and control
external multimedia devices.
Interactivity features
Interactivity empowers the end users of your project by letting them control the content and flow of
information. Authoring tools should provide one or more levels of interactivity:
• Simple branching, which offers the ability to go to another section of the multimedia production.
• Conditional branching, which supports a go-to based on the results of IF-THEN decisions or events.
• A structured language that supports complex programming logic, such as nested IF-THENs, subroutines,
event tracking, and message passing among objects and elements.
Playback features
When you are developing a multimedia project, you will continually be assembling elements and testing to see
how the assembly looks and performs. Your authoring system should let you build a segment or part of your
project and then quickly test it as if the user were actually using it.
Delivery features
Delivering your project may require building a run-time version of the project using the multimedia authoring
software. A run-time version allows your project to play back without requiring the full authoring software and
all its tools and editors. Often the run-time version does not allow users to access or change the content,
structure, and programming of the project. If you are going to distribute your project widely, you should
distribute it in the run-time version.
Cross-Platform features
It is also increasingly important to use tools that make transfer across platforms easy. For many developers, the
Macintosh remains the multimedia authoring platform of choice, but 80% of a developer's target market may
be on Windows platforms. If you develop on a Macintosh, look for tools that provide a compatible authoring
system for Windows or offer a run-time player for the other platform.
Internet Playability
Because the Web has become a significant delivery medium for multimedia, authoring systems typically provide
a means to convert their output so that it can be delivered within the context of HTML or DHTML, either with a
special plug-in or by embedding Java, JavaScript, or other code structures in the HTML document.
Image file formats
Image file formats are standardized means of organizing and storing digital images. Image files are composed
of digital data in one of these formats that can be rasterized for use on a computer display or printer. An image
file format may store data in uncompressed, compressed, or vector formats. Once rasterized, an image becomes
a grid of pixels, each of which has a number of bits to designate its color equal to the color depth of the device
displaying it.
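The last sentence implies a simple size calculation for an uncompressed raster. A minimal sketch (resolution and
color-depth values assumed):

```python
def raster_bytes(width, height, bits_per_pixel):
    """Uncompressed size of a rasterized image: one color value per pixel."""
    return width * height * bits_per_pixel // 8

# A 1024x768 grid of pixels at 24-bit color depth needs about 2.3 MB.
print(raster_bytes(1024, 768, 24))  # 2359296 bytes
```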
Lossless compression
Lossless compression algorithms reduce file size while preserving a perfect copy of the original uncompressed
image. Lossless compression generally, but not exclusively, results in larger files than lossy compression.
Lossless compression should be used to avoid accumulating stages of re-compression when editing images.
Lossy compression
Lossy compression algorithms preserve a representation of the original uncompressed image that may appear to
be a perfect copy, but it is not a perfect copy. Often lossy compression is able to achieve smaller file sizes than
lossless compression. Most lossy compression algorithms allow for variable compression that trades image
quality for file size.
In addition to straight image formats, metafile formats are portable formats that can include both raster and
vector information. Examples are application-independent formats such as WMF and EMF. The metafile format
is an intermediate format. Most Windows applications open metafiles and then save them in their own native
format. Page description language refers to formats used to describe the layout of a printed page containing
text, objects and images. Examples are PostScript, PDF and PCL.
RASTER FORMATS
JPEG/JFIF
JPEG (Joint Photographic Experts Group) is a compression method; JPEG-compressed images are usually
stored in the JFIF (JPEG File Interchange Format) file format. JPEG compression is (in most cases) lossy
compression. The JPEG/JFIF filename extension is JPG or JPEG. Nearly every digital camera can save images
in the JPEG/JFIF format, which supports 8-bit grayscale images and 24-bit color images (8 bits each for red,
green, and blue). JPEG applies lossy compression to images, which can result in a significant reduction of the
file size. The amount of compression can be specified, and the amount of compression affects the visual quality
of the result. When not too great, the compression does not noticeably detract from the image's quality, but
JPEG files suffer generational degradation when repeatedly edited and saved. (JPEG also provides lossless
image storage, but the lossless version is not widely supported.)
JPEG 2000
JPEG 2000 is a compression standard enabling both lossless and lossy storage. The compression methods used
are different from the ones in standard JFIF/JPEG; they improve quality and compression ratios, but also
require more computational power to process. JPEG2000 also adds features that are missing in JPEG. It is not
nearly as common as JPEG, but it is used currently in professional movie editing and distribution (some digital
cinemas, for example, use JPEG 2000 for individual movie frames).
Exif
The Exif (Exchangeable image file format) format is a file standard similar to the JFIF format with TIFF
extensions; it is incorporated in the JPEG-writing software used in most cameras. Its purpose is to record and to
standardize the exchange of images with image metadata between digital cameras and editing and viewing
software. The metadata are recorded for individual images and include such things as camera settings, time and
date, shutter speed, exposure, image size, compression, name of camera, color information. When images are
viewed or edited by image editing software, all of this image information can be displayed. The actual Exif
metadata as such may be carried within different host formats, e.g. TIFF, JFIF (JPEG) or PNG. IFF-META is
another example.
TIFF
The TIFF (Tagged Image File Format) format is a flexible format that normally saves 8 bits or 16 bits per color
(red, green, blue) for 24-bit and 48-bit totals, respectively, usually using either the TIFF or TIF filename
extension. TIFF's flexibility can be both an advantage and a disadvantage, since a reader that reads every type of
TIFF file does not exist. TIFFs can be lossy or lossless; some offer relatively good lossless compression for bi-
level (black & white) images. Some digital cameras can save in TIFF format, using the LZW compression
algorithm for lossless storage. The TIFF image format is not widely supported by web browsers. TIFF remains
widely accepted as a photograph file standard in the printing business. TIFF can handle device-specific color
spaces, such as the CMYK defined by a particular set of printing press inks. OCR (Optical Character
Recognition) software packages commonly generate some (often monochromatic) form of TIFF image for
scanned text pages.
RAW
RAW refers to raw image formats that are available on some digital cameras, rather than to a specific format. These formats usually use lossless or nearly lossless compression and produce file sizes smaller than TIFF. Although there is a standard raw image format (ISO 12234-2, TIFF/EP), the raw formats used by most cameras are not standardized or documented, and differ among camera manufacturers. Most camera
manufacturers have their own software for decoding or developing their raw file format, but there are also many
third-party raw file converter applications available that accept raw files from most digital cameras. Some
graphic programs and image editors may not accept some or all raw file formats, and some older ones have been
effectively orphaned already.
Adobe's Digital Negative (DNG) specification is an attempt at standardizing a raw image format to be used by
cameras, or for archival storage of image data converted from undocumented raw image formats, and is used by
several niche and minority camera manufacturers including Pentax, Leica, and Samsung. The raw image formats of more than 230 camera models, including those from manufacturers with the largest market shares
such as Canon, Nikon, Phase One, Sony, and Olympus, can be converted to DNG. DNG was based on ISO
12234-2, TIFF/EP, and ISO's revision of TIFF/EP is reported to be adding Adobe's modifications and
developments made for DNG into profile 2 of the new version of the standard. As far as video cameras are
concerned, ARRI's Arriflex D-20 and D-21 cameras provide raw 3K-resolution sensor data with Bayer pattern
as still images (one per frame) in a proprietary format (.ari file extension). Red Digital Cinema Camera
Company, with its Mysterium sensor family of still and video cameras, uses its proprietary raw format called
REDCODE (.R3D extension), which stores still as well as audio + video information in one lossy-compressed
file.
GIF
GIF (Graphics Interchange Format) is limited to an 8-bit palette, or 256 colors. This makes the GIF format
suitable for storing graphics with relatively few colors, such as simple diagrams, shapes, logos, and cartoon-style images. The GIF format supports animation and is still widely used to provide image animation effects. It also
uses a lossless compression that is more effective when large areas have a single color, and ineffective for
detailed images or dithered images.
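A minimal sketch of the 256-color limit, assuming Pillow and a hypothetical source image diagram.png: a true-color image must be quantized to an 8-bit palette before it can be written as a GIF.

from PIL import Image

img = Image.open("diagram.png").convert("RGB")
# Build an adaptive 256-color palette; this step is lossless only if
# the source already contained 256 or fewer distinct colors.
palettized = img.convert("P", palette=Image.ADAPTIVE, colors=256)
palettized.save("diagram.gif")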
BMP
The BMP file format (Windows bitmap) handles graphics files within the Microsoft Windows OS. Typically,
BMP files are uncompressed, hence they are large; the advantage is their simplicity and wide acceptance in
Windows programs.
PNG
The PNG (Portable Network Graphics) file format was created as the free, open-source successor to GIF. The
PNG file format supports 8-bit palette images (with optional transparency for all palette colors) and 24-bit true color (16 million colors) or 48-bit true color, with or without an alpha channel, while GIF supports only 256 colors and a single transparent color. Compared to JPEG, PNG excels when the image has large, uniformly colored areas. The lossless PNG format is thus best suited for pictures still being edited, while lossy formats like JPEG are better for the final distribution of photographic images, because in that case JPG files are usually smaller than PNG files. Adam7 interlacing allows an early preview even when only a small percentage of the image data has been transmitted. PNG provides a patent-free replacement for GIF and can also replace many common uses of TIFF. Indexed-color, grayscale, and true-color images are supported, plus an optional alpha channel. PNG is designed to work well in online viewing applications like web browsers, so it is fully streamable with a progressive display option. PNG is robust, providing both full file integrity checking and simple detection of common transmission errors. Also, PNG can store gamma and chromaticity data for improved color matching on heterogeneous platforms. Some programs do not handle PNG gamma correctly, which can
cause the images to be saved or displayed darker than they should be. Animated formats derived from PNG are
MNG and APNG. The latter is supported by Mozilla Firefox and Opera and is backwards
compatible with PNG.
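The following short sketch, again assuming Pillow, illustrates one capability difference noted above: PNG stores a full alpha channel, while JPEG requires the image to be flattened to RGB first.

from PIL import Image

# A half-transparent red square; PNG keeps the alpha channel.
rgba = Image.new("RGBA", (200, 200), (255, 0, 0, 128))
rgba.save("overlay.png")

# JPEG has no alpha, so the image must be converted to RGB.
rgba.convert("RGB").save("overlay.jpg", quality=85)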
PAM
A late addition to the PNM family is the PAM format (Portable Arbitrary Map).
WEBP
WebP is a new open image format that uses both lossless and lossy compression. It was designed by Google to
reduce image file size to speed up web page loading: its principal purpose is to supersede JPEG as the primary
format for photographs on the web. WebP now supports animated images and an alpha channel (transparency) in lossy images. WebP is based on VP8's intra-frame coding and uses a RIFF-based container.
IFF-RGFX
IFF-RGFX, the native format of SView5, provides a straightforward IFF-style representation of any kind of image data ranging from 1 to 128 bits (LDR and HDR), including common metadata like ICC profiles, XMP, IPTC, or Exif.
6.0 ANIMATION
Animation makes static presentations come alive. It is visual change over time, and it can add great power to our multimedia projects. Carefully planned, well-executed animated sequences can make a dramatic difference in a multimedia project. Animation is created from drawn pictures, while video is created from real-time visuals.
PRINCIPLES OF ANIMATION
Animation is the rapid display of a sequence of images of 2-D artwork or model positions in order to create an
illusion of movement. It is an optical illusion of motion due to the phenomenon of persistence of vision, and can
be created and demonstrated in a number of ways. The most common method of presenting animation is as a
motion picture or video program, although several other forms of presenting animation also exist. Animation is possible because of a biological phenomenon known as persistence of vision and a psychological phenomenon called phi.
An object seen by the human eye remains chemically mapped on the eye’s retina for a brief time after viewing.
Combined with the human mind’s need to conceptually complete a perceived action, this makes it possible for a
series of images that are changed very slightly and very rapidly, one after the other, to seemingly blend together
into a visual illusion of movement. Consider, for example, a few cels or frames of a rotating compass logo: when the images are progressively and rapidly changed, the arrow of the compass is perceived to be spinning. Television video builds entire frames or pictures many times every second (30 in NTSC, 25 in PAL); the speed with which each frame is replaced by the next one makes the images appear to blend smoothly into movement. To make an object travel across the screen while it changes its shape, just change the shape and also move (translate) it a few pixels for each frame, as in the sketch below.
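As a minimal illustration of these ideas, the following sketch (assuming the Pillow library) renders a square translated ten pixels per frame and writes the frames out as an animated GIF, so the illusion of motion can be seen directly; the file name is illustrative.

from PIL import Image, ImageDraw

frames = []
for i in range(24):                        # 24 frames of motion
    frame = Image.new("RGB", (320, 80), "white")
    draw = ImageDraw.Draw(frame)
    x = 10 + i * 10                        # translate 10 pixels per frame
    draw.rectangle([x, 20, x + 40, 60], fill="navy")
    frames.append(frame)

# duration is the per-frame display time in milliseconds (~24 fps)
frames[0].save("motion.gif", save_all=True,
               append_images=frames[1:], duration=42, loop=0)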
ANIMATION TECHNIQUES
When you create an animation, organize its execution into a series of logical steps. First, gather up in your mind
all the activities you wish to provide in the animation; if it is complicated, you may wish to create a written
script with a list of activities and required objects. Choose the animation tool best suited for the job. Then build
and tweak your sequences; experiment with lighting effects. Allow plenty of time for this phase when you are
experimenting and testing. Finally, post-process your animation, doing any special rendering and adding sound
effects.
Cel Animation
The term cel derives from the clear celluloid sheets that were used for drawing each frame, which have been
replaced today by acetate or plastic. Cels of famous animated cartoons have become sought-after, suitable-for-
framing collector’s items. Cel animation artwork begins with key frames (the first and last frame of an action).
For example, when an animated figure of a man walks across the screen, he balances the weight of his entire body on one foot and then the other in a series of falls and recoveries, with the opposite foot and leg catching up to support the body. The animation techniques made famous by Disney use a series of progressively different drawings on each frame of movie film, which plays at 24 frames per second; a minute of animation may thus require as many as 1,440 separate frames.
Computer Animation
Computer animation programs typically employ the same logic and procedural concepts as cel animation, using layer, key frame, and tweening techniques, and even borrowing from the vocabulary of classic animators. The word inks, in computer animation terminology, usually means special methods for computing RGB pixel values, providing edge detection, and layering so that images can blend or otherwise mix their colors to produce special transparencies, inversions, and effects. The primary difference among animation software programs is in how much must be drawn by the animator and how much is automatically generated by the software. In 2-D animation the animator creates an object and describes a path for the object to follow; the software takes over, actually creating the animation on the fly as the program is being viewed by your user (a minimal tweening sketch follows below). In 3-D animation the animator puts his effort into creating models of individual objects and designing the characteristics of their shapes and surfaces. On the computer, paint is most often filled or drawn with tools using features such as gradients and anti-aliasing.
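A minimal sketch of the tweening idea in plain Python: given two keyframe positions, the in-between positions are interpolated automatically, which is exactly the work the software takes over from the animator. The function name is illustrative.

def tween(key_start, key_end, n_frames):
    """Linearly interpolate (x, y) positions between two keyframes."""
    (x0, y0), (x1, y1) = key_start, key_end
    for i in range(n_frames):
        t = i / (n_frames - 1)  # 0.0 at the first frame, 1.0 at the last
        yield (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

# An object moving from (0, 0) to (100, 50) over five frames:
for pos in tween((0, 0), (100, 50), 5):
    print(pos)  # (0, 0), (25, 12.5), (50, 25), (75, 37.5), (100, 50)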
Kinematics
Kinematics is the study of the movement and motion of structures that have joints, such as a walking man. Inverse kinematics, available in high-end 3-D programs, is the process by which you link objects such as hands to arms and define their relationships and limits. Once those relationships are set, you can drag these parts around and let the computer calculate the result, as in the sketch below.
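The following sketch shows the textbook two-joint case (say, an upper arm and forearm reaching for a target) using the standard law-of-cosines solution; it assumes a reachable, nonzero target, and the function name is illustrative.

import math

def two_bone_ik(l1, l2, tx, ty):
    """Return (shoulder, elbow) angles in radians that place the tip of a
    two-segment limb of lengths l1 and l2 at the target (tx, ty)."""
    d = math.hypot(tx, ty)
    d = min(d, l1 + l2 - 1e-9)  # clamp targets that are out of reach
    # Law of cosines gives the elbow bend for this target distance.
    cos_elbow = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle: direction to the target minus the triangle offset.
    cos_offset = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_offset)))
    return shoulder, elbow

print(two_bone_ik(1.0, 1.0, 1.2, 0.8))  # angles that reach (1.2, 0.8)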
Morphing
Morphing is a popular effect in which one image transforms into another. Morphing applications and other modeling tools that offer this effect can perform transitions not only between still images but often between moving images as well. In a typical example, the morphed images might be built at a rate of 8 frames per second, with each transition taking a total of 4 seconds.
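A simplified sketch of the blending half of a morph, assuming Pillow and two hypothetical same-size images: a cross-dissolve at the 8 frames per second over 4 seconds mentioned above. A full morph also warps corresponding features, which this sketch omits.

from PIL import Image

start = Image.open("face_a.jpg").convert("RGB")  # must be the same size
end = Image.open("face_b.jpg").convert("RGB")

n_frames = 8 * 4  # 8 frames per second for 4 seconds
frames = [Image.blend(start, end, i / (n_frames - 1))
          for i in range(n_frames)]
frames[0].save("morph.gif", save_all=True,
               append_images=frames[1:], duration=125, loop=0)  # 125 ms = 8 fps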
7.0 VIDEO
PAL
The Phase Alternating Line (PAL) system is used in the United Kingdom, Europe, Australia, and South Africa. PAL is an integrated method of adding color to a black-and-white television signal that paints 625 lines at a frame rate of 25 frames per second.
SECAM
The Sequential Color with Memory (SECAM) system is used in France, Russia, and a few other countries. Although SECAM is a 625-line, 50 Hz system, it differs greatly from both the NTSC and PAL color systems in its basic technology and broadcast method.
Video Tips
A useful tool easily implemented in most digital video editing applications is "blue screen," "Ultimatte," or "chroma key" editing. Blue screen is a popular technique for making multimedia titles because expensive sets are not required: incredible backgrounds can be generated using 3-D modeling and graphics software, and one or more actors, vehicles, or other objects can be neatly layered onto that background, as in the toy sketch below. Applications such as VideoShop, Premiere, Final Cut Pro, and iMovie provide this capability.
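This toy sketch assumes NumPy and Pillow and two hypothetical same-size images: foreground pixels that are sufficiently blue are replaced by the background. Real keyers handle color spill and soft edges far better than this simple threshold test.

import numpy as np
from PIL import Image

fg = np.asarray(Image.open("actor_on_blue.png").convert("RGB"), dtype=np.int16)
bg = np.asarray(Image.open("background.png").convert("RGB"), dtype=np.int16)

# "Blue enough": blue well above both the red and green components.
r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
mask = (b > 100) & (b > r + 40) & (b > g + 40)

# Take background pixels where the mask is true, foreground elsewhere.
out = np.where(mask[..., None], bg, fg).astype(np.uint8)
Image.fromarray(out).save("composite.png")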
Recording Formats
S-VHS Video
In S-VHS video, color and luminance information are kept on two separate tracks. The result is a definite improvement in picture quality. This standard is also used in Hi-8. Still, if your ultimate goal is to have your project accepted by broadcast stations, this would not be the best choice.
Component (YUV)
In the early 1980s, Sony began to experiment with a new portable professional video format based on Betamax, and Panasonic developed its own standard based on similar technology, called "MII." Betacam SP has become the industry standard for professional video field recording. This format may soon be eclipsed by a new digital version called "Digital Betacam."
Digital Video
Full integration of motion video on computers eliminates the analog television form of video from the
multimedia delivery platform. If a video clip is stored as data on a hard disk, CD-ROM, or other mass-storage
device, that clip can be played back on the computer’s monitor without overlay boards, videodisk players, or
second monitors. This playback of digital video is accomplished using software architecture such as QuickTime
or AVI, a multimedia producer or developer; you may need to convert video source material from its still
common analog form (videotape) to a digital form manageable by the end user’s computer system. So an
understanding of analog video and some special hardware must remain in your multimedia toolbox. Analog to
digital conversion of video can be accomplished using the video overlay hardware described above, or it can be
delivered direct to disk using FireWire cables. To repetitively digitize a full-screen color video image every
1/30 second and store it to disk or RAM severely taxes both Macintosh and PC processing capabilities–special
hardware, compression firmware, and massive amounts of digital storage space are required.
Video Compression
To digitize and store a 10-second clip of full-motion video in your computer requires the transfer of an enormous amount of data in a very short amount of time. Reproducing just one frame of component digital video at 24 bits requires almost 1 MB of computer data, and 30 seconds of video will fill a gigabyte hard disk. Full-size, full-motion video requires that the computer deliver data at about 30 MB per second (a back-of-the-envelope check follows below). This overwhelming technological bottleneck is overcome using digital video compression schemes, or codecs (coders/decoders). A
codec is the algorithm used to compress a video for delivery and then decode it in real-time for fast playback.
Real-time video compression algorithms such as MPEG, P*64, DVI/Indeo, JPEG, Cinepak, Sorenson,
ClearVideo, RealVideo, and VDOwave are available to compress digital video information. Many of these schemes use the Discrete Cosine Transform (DCT), an encoding step that, together with quantization, exploits the limits of the human eye's ability to detect fine color and image detail. All of these codecs employ lossy compression algorithms. In addition to
compressing video data, streaming technologies are being implemented to provide reasonable quality low-
bandwidth video on the Web. Microsoft, RealNetworks, VXtreme, VDOnet, Xing, Precept, Cubic, Motorola,
Viva, Vosaic, and Oracle are actively pursuing the commercialization of streaming technology on the Web.
QuickTime, Apple’s software-based architecture for seamlessly integrating sound, animation, text, and video
(data that changes over time), is often thought of as a compression standard, but it is really much more than that.
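A back-of-the-envelope check of the figures quoted above, assuming 640 x 480 frames at 24 bits (3 bytes) per pixel and 30 frames per second:

width, height, bytes_per_pixel, fps = 640, 480, 3, 30

frame_bytes = width * height * bytes_per_pixel  # 921,600 bytes (~0.9 MB)
rate = frame_bytes * fps                        # ~27.6 MB per second
thirty_seconds = rate * 30                      # ~0.83 GB: roughly the
                                                # gigabyte quoted above
print(f"frame: {frame_bytes / 1e6:.2f} MB")
print(f"rate:  {rate / 1e6:.1f} MB/s")
print(f"30 s:  {thirty_seconds / 1e9:.2f} GB uncompressed")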
MPEG
The MPEG standard has been developed by the Moving Picture Experts Group, a working group convened by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) to create standards for the digital representation of moving pictures and associated audio and other data. MPEG-1 and MPEG-2 are the current standards. Using MPEG-1, you can deliver 1.2 Mbps of video and 250 Kbps of two-channel stereo audio using CD-ROM technology. MPEG-2, a completely different system from MPEG-1, requires higher data rates (3 to 15 Mbps) but delivers higher image resolution, picture quality, interlaced video formats, multiresolution scalability, and multichannel audio features.
DVI/Indeo
DVI is a proprietary, programmable compression/decompression technology based on the Intel i750 chip set. This hardware consists of two VLSI (Very Large Scale Integrated) chips to separate the image processing and display functions.
Two levels of compression and decompression are provided by DVI: Production Level Video (PLV) and Real Time Video (RTV). PLV and RTV both use variable compression rates. DVI's algorithms can compress video images at ratios between 80:1 and 160:1. DVI will play back video at full-frame size and in full color at 30 frames per second.