
COM 416 - MULTIMEDIA

1.0 OVERVIEW OF MULTIMEDIA

We perceive the universe through our senses. Senses such as sight and hearing are brought into
play as soon as we interact with our surroundings. Our sensory organs send signals to the brain, which
constructs an interpretation of this interaction. The process of communication, of sending messages from one
person to another, is dependent on our understanding of the senses. In general, the more information that is
perceived by the receiver, the more likely it is that an effective communication will take place. For example,
suppose you are talking to a friend on the telephone. What is missing from this conversation, as opposed to a
regular face-to-face conversation? For one thing, you cannot see the other person's face. The expressions, and
the gestures that accompany what we say, have a lot to do with communication. Now consider a letter you have
written describing a fun trip you took. Here your friend only gets to read the text that you have written and
cannot hear your voice saying the things you have written. Besides, the communication is just one way. You
have to wait a while before finding out what your friend wishes to reply to you. Now suppose you send a picture
of yourself to your friend, along with the letter. Now you are sending some more visual information and your
friend can visualize the fun you are having. However, the impact would have been tremendous if you had sent a
video shot during the trip.

As you can see, the more information you send, the greater the impact of the communication. The medium of
communication—for example, a letter or a telephone call—restricts the usage of the various elements. Indeed,
the development of communication devices is aimed at increasing the amount of information that can be
transmitted. From the early letters involving just text, to the telephone where we could speak, we now are
seeing the development of video telephony. The development of computers is also moving in this direction.
Earlier, the computer was capable of giving only simple text as output; now we can get sound, pictures, and
more. At present, the multimedia computer—a personal computer that has the capability to play sounds,
accurately reproduce pictures, and play videos—is easily available and widely in use.

DEFINITION
Digital Multimedia is the field concerned with computer-controlled integration of text, graphics, images, videos,
audio, and any other medium where every type of information can be represented, transmitted and processed
digitally. The development of powerful multimedia computers and the evolution of the Internet have led to an
explosion of applications of multimedia worldwide. These days multimedia systems are used for education, in
presentations, as information kiosks, and in the gaming industry. In fact, multimedia has applications
everywhere: in businesses, at schools and universities, at home, and even in public places.

The word multimedia is a combination derived from multiple and media. The word medium (the singular of
media) means a transmission channel. For example, sound is transmitted through the medium of air, or
electricity is transmitted through the medium of wires. Similarly, poetry could be considered a medium for
transmitting our thoughts. Or for that matter, a painting is a medium for conveying what we observe. Similarly,
a Hollywood director uses the medium of movies to tell a story. Multimedia is also a medium. To use it
effectively, we have to understand not only how to create specific elements of multimedia, but also how to design
our multimedia system so that the messages we wish to convey are conveyed effectively. To be able to create
effective multimedia, it is important for us to be sensitive to other multiple media—such as TV and films.
Nevertheless, it is also necessary to keep in mind that the two are different in many ways.

We will understand the differences and similarities between the two as we go along. The most important
difference between traditional multiple media such as radio and television and digital multimedia is the notion
of interactivity. The power of computers allows users to interact with the programs. Since interactivity is such a
powerful concept, many experts in the field of multimedia consider interactivity to be an integral part of
multimedia. We will also follow this convention. Thus, whenever we say the word multimedia, you should
understand that we are referring to digital, interactive multimedia.

INTERACTIVITY
In a multimedia system, if the user has the ability to control what elements are delivered and when, the system
is called an interactive system. Traditional mass media include television, film, radio, and newspapers. These
are called mass media, since the communication processes are one way, originating from a source and being
delivered to a mass audience. These technologies also combine audio, video, graphics, and text, but in a way
that is inflexible. For example, a film has a predefined beginning, middle, and end, irrespective of the audience
watching it. With the power of the computer, the same media could be manipulated by the audience. In this
manner, the audience does not need to remain passive, but becomes the user of the system. Thus, the key
difference between mass media and multimedia is the shift from audience to users, and from one-way communication
to two-way communication. This is accomplished through interactivity.

To communicate with the system, the user can use a variety of devices such as the keyboard, mouse, trackball,
touch screen, and pen-based mouse. Thus, while designing a multimedia application, we have to decide the
level of interactivity we wish to provide to the user of the system. For example, in a direct-sales application, you
can give different choices for a single product under different schemes, and the buyers can select the products they
wish to buy. One important thing to notice is that well-designed products always give feedback to the user once
the user interacts with the computer. In our example, once the user selects the products to buy, the program can
provide feedback to the user, such as, "you will get your requested product within 36 hours from now."

MULTIMEDIA BUILDING BLOCKS


Any multimedia application consists of any or all of the following components:
1. Text: Text and symbols are very important for communication in any medium. With the explosion of the
Internet and the World Wide Web, text has become more important than ever. The Web is built on HTML
(HyperText Markup Language), originally designed to display simple text documents on computer screens, with
occasional graphic images thrown in as illustrations.
2. Audio: Sound is perhaps the most important element of multimedia. It can provide the listening pleasure of
music, the startling accent of special effects, or the ambience of a mood-setting background.
3. Images: Images, whether analog or digital, play a vital role in multimedia. An image may be a still picture, a
painting, or a photograph taken with a digital camera.
4. Animation : Animation is the rapid display of a sequence of images of 2-D artwork or model positions in
order to create an illusion of movement. It is an optical illusion of motion due to the phenomenon of persistence
of vision, and can be created and demonstrated in a number of ways.
5. Video : Digital video has supplanted analog video as the method of choice for making video for multimedia
use. Video in multimedia is used to portray real-time moving pictures in a multimedia project.

MULTIMEDIA APPLICATIONS
Multimedia finds its application in various areas including, but not limited to, advertisements, art, education,
entertainment, engineering, medicine, mathematics, business, scientific research, and spatial and temporal
applications. A few application areas of multimedia are listed below:
 Creative industries
Creative industries use multimedia for a variety of purposes ranging from fine arts, to entertainment, to
commercial art, to journalism, to media and software services provided for any of the industries listed below.
An individual multimedia designer may cover the whole spectrum throughout their career. Demand for their skills
ranges from the technical to the analytical to the creative.
 Commercial
Much of the electronic old and new media utilized by commercial artists is multimedia. Exciting presentations
are used to grab and keep attention in advertising. Creative services firms often develop advanced multimedia
presentations, well beyond simple slide shows, for industrial, business-to-business, and interoffice
communications, to sell ideas or liven up training. Commercial multimedia developers may be hired to design
for governmental and nonprofit services applications as well.
 Entertainment and Fine Arts
Multimedia is heavily used in the entertainment industry, especially to develop special effects in
movies and animations. Multimedia games are a popular pastime and are available as software either on
CD-ROMs or online. Some video games also use multimedia features. Multimedia applications that allow users
to actively participate, instead of just sitting by as passive recipients of information, are called interactive
multimedia.
 Education
In education, multimedia is used to produce computer-based training courses (popularly called CBTs) and
reference works such as encyclopedias and almanacs. A CBT lets the user go through a series of presentations,
text about a particular topic, and associated illustrations in various information formats. Edutainment is an
informal term used to describe combining education with entertainment, especially multimedia entertainment.
 Engineering
Software engineers may use multimedia in computer simulations for anything from entertainment to training,
such as military or industrial training. Multimedia for software interfaces is often created as a collaboration
between creative professionals and software engineers.
 Industry
In the Industrial sector, multimedia is used as a way to help present information to shareholders, superiors and
coworkers. Multimedia is also helpful for providing employee training, advertising and selling products all over
the world via virtually unlimited web-based technologies.
 Mathematical and Scientific Research
In Mathematical and Scientific Research, multimedia is mainly used for modeling and simulation. For example,
a scientist can look at a molecular model of a particular substance and manipulate it to arrive at a new
substance. Representative research can be found in journals such as the Journal of Multimedia.
 Medicine
In Medicine, doctors can get trained by looking at a virtual surgery or they can simulate how the human body is
affected by diseases spread by viruses and bacteria and then develop techniques to prevent it.

CHARACTERISTICS OF A MULTIMEDIA SYSTEM

A Multimedia system has four basic characteristics:

• Multimedia systems must be computer controlled.
• Multimedia systems are integrated.
• The information they handle must be represented digitally.
• The interface to the final presentation of media is usually interactive.

1. Computer Controlled
• Producing the content of the information, e.g. by using authoring tools, image editors, and sound and video editors.
• Storing the information: providing large and shared capacity for multimedia information.
• Transmitting the information through the network.
• Presenting the information to the end user, making direct use of computer peripherals such as a display device
(monitor) or a sound generator (speakers).

2. Integrated
• All multimedia components (audio, video, text, graphics) used in the system must be somehow integrated.
• Every device, such as a microphone or camera, is connected to and controlled by a single computer.
• A single type of digital storage is used for all media types.
• Video sequences are shown on the computer screen instead of a TV monitor.

3. Interactivity
• Level 1: Interactivity strictly on information delivery. Users select the time at which the presentation starts, the
order, the speed, and the form of the presentation itself.
• Level 2: Users can modify or enrich the content of the information, and this modification is recorded.
• Level 3: Actual processing of the user's input, where the computer generates genuine results based on that input.

4. Digitally Represented
• Digitization: the process of transforming an analog signal into a digital signal.

CLASSIFICATION OF MEDIA
1. Perception media
2. Representation media
3. Presentation media
4. Storage media
5. Transmission media
6. Information exchange media

1. Perception media: Perception media help humans sense their environment. The central question is how
humans perceive information in a computer environment. The answer is: through seeing and hearing.

Seeing: For perception through seeing, visual media such as text, images, and video are used.
Hearing: For perception through hearing, auditory media such as music, noise, and speech are used.

2. Representation media: Representation media are defined by the internal computer representation of
information. The central question is: how is information coded in the computer? The answer is that various
formats are used to represent media information in the computer.
i. Text characters are coded in ASCII.
ii. Graphics are coded according to the CEPT or CAPTAIN videotex standard.
iii. Images can be coded in JPEG format.
iv. Audio/video sequences can be coded in different TV standard formats (PAL, NTSC, SECAM) and
stored in the computer in MPEG format.
3. Presentation media: Presentation media refer to the tools and devices for the input and output of
information. The central question is: through which devices is information delivered by the computer and
introduced to the computer?
Output media: Paper, screens, and speakers are the output media.
Input media: Keyboards, mice, cameras, and microphones are the input media.
4. Storage media: Storage media refer to the data carriers that enable storage of information. The
central question is: how will information be stored? The answer is hard disks, CD-ROMs, etc.
5. Transmission media: Transmission media are the different information carriers that enable continuous
data transmission. The central question is: over which carrier will information be transmitted? The answer is
coaxial cable, fiber optics, as well as free air.
6. Information exchange media: Information exchange media include all information carriers for
transmission, i.e. all storage and transmission media. The central question is: which information carrier
will be used for information exchange between different places? The answer is the combined use of storage
and transmission media, e.g. an electronic mail system.

REAL-TIME SYSTEM
A real-time process is a process that delivers the result of its processing within a given time. The main
characteristics of a real-time system are correctness of computation and a fixed response time. A deadline
represents the latest acceptable time for the presentation of the processing result. Real-time systems have both
hard and soft deadlines. A soft deadline may occasionally be missed and tolerated; a hard deadline must never
be violated, since a hard-deadline violation is a system failure.

CHARACTERISTICS OF REAL TIME SYSTEM


1. Predictably fast response to time-critical events and accurate timing information.
2. A high degree of schedulability: schedulability refers to the degree of resource utilization at or below which
the deadline of each time-critical task can be met. Even under system overload, time-critical tasks must still be
processed.
3. Typical application areas of real-time systems include the management of manufacturing processes and the
control of military systems.

REAL-TIME AND MULTIMEDIA SYSTEM


1. A piece of music must be played back at a constant speed.
2. To fulfill the timing requirements of continuous media, the operating system must use real-time scheduling
techniques.
3. Traditional real-time systems in application areas such as factory automation and aircraft piloting place high
demands on security and fault tolerance.
4. The requirements arising from these demands differentiate traditional real-time scheduling from the
scheduling applied to continuous media.

5. Multimedia systems thus present a different real-time scenario from that of traditional real-time operating
systems.

MULTIMEDIA PRODUCTION
Multimedia production is a complicated process, usually involving many people. Typically, one or more of the
following people may be involved in making a multimedia product: producer, multimedia designer/creative
designer, subject matter expert, programmer, instructional designer, scriptwriter, computer graphic artist,
audio/video specialist, and webmaster. A brief description of each of these roles follows:

• PRODUCER—The role of the producer is to define, coordinate, and facilitate the production of the project.
Other tasks performed by the producer include negotiating with the client; securing financial resources,
equipment, and facilities; and coordinating the development team. The person should be aware of the
capabilities and limitations of the technology, which helps in discussions with the client.
• MULTIMEDIA DESIGNER—A multimedia designer visualizes the system and determines its structure.
The designer defines the look, feel, format, and style of the entire multimedia system.
• SUBJECT MATTER EXPERT—The subject matter expert provides the program content for the multimedia
architect.
• PROGRAMMER/AUTHOR—The programmer integrates all the multimedia elements like graphics, text,
audio, music, photos, and animation, and codes the functionality of the product.
• INSTRUCTIONAL DESIGNER—The team may include a specialist who can take the information provided
by the content specialists and decide how to present it using suitable strategies and practices. The instructional
designer makes sure that the information is presented in such a manner that the audience easily understands it.
• SCRIPTWRITER—A script is a description of events that happen in a production. The scriptwriter makes
the flowchart of the entire system and decides the level of interactivity of the system.
• COMPUTER GRAPHIC ARTIST—The computer graphic artist creates the graphic elements of the
program such as backgrounds, photos, 3-D objects, logos, animation, and so on.
• AUDIO AND VIDEO SPECIALISTS—Audio and video specialists are needed when intensive use of
narration and digitized video are integrated into a multimedia presentation. The audio specialist is responsible
for recording and editing narration and for selecting, recording, or editing sound effects. The video specialist is
responsible for video capturing, editing, and digitizing.
• WEBMASTER—This individual has the responsibility of creating and maintaining a Webpage. The person
should be capable of converting a multimedia application into a Webpage or creating a Web page with
multimedia elements.

STAGES IN MULTIMEDIA PRODUCTION


Multimedia production is a complex process that can be broadly categorized into the following stages:

 RESEARCH AND ANALYSIS—At this stage, we should find out as much as possible about the
audience: their education, technology skill level, needs, and so on. We also gather information on the content to
be presented and the system on which the multimedia product will be used.
 SCRIPTING/FLOWCHARTING—Scripting (or flowcharting) involves deciding the overall structure
of the multimedia project. This is done by placing the various segments of the project in order, using arrows to
reflect flow and interactive decision making. A flowchart has information about the major headings/options
given to the user, what comes in the main menu of the program, and the subsequent branching when the user
takes an action. For example, if we were designing our home pages with information about our education, our
interests, and our favorite sites as subpages, we would draw a flowchart, starting with our main screen and
indicating the other screens and how they are linked together.
 STORYBOARDING—The storyboard is a detailed design plan that the designer creates, indicating
what each screen looks like, which media elements are used in the screen, and all the specifications of the media
elements. For example, a storyboard of a screen will contain information about the buttons being used on the
screen, what they look like (a rough sketch), and what happens when the user clicks on a button. The
storyboarding stage is where the detailed visualization of the multimedia system takes place.

 CONSTRUCTION/COLLECTION OF MEDIA ELEMENTS—Usually after the storyboard, a
prototype is made and tested, and the design is reviewed. After this stage, the graphic designer is given detailed
specifications of the project and begins creating graphics and other media elements.

 PROGRAMMING—When the development team has created and collected the various interface and
content elements, they are assembled into a final product using a programming language like Visual Basic. One
trend has been the development of easy-to-use authoring programs such as Macromedia Director,
HyperCard, and Authorware.

 TESTING—The final production stage is the testing phase. It determines whether everything works on the
system it is supposed to work on and whether typical users will find the design intuitive enough.

2.0 VISUALISATION AND CREATIVE PROCESS

Visualization is the process of representing abstract business or scientific data as images that can aid in
understanding the meaning of the data.
Creative visualization is a mental technique that uses the imagination to make dreams and goals come true.
Used in the right way, creative visualization can improve your life and attract success and prosperity to you. It
is a power that can alter your environment and circumstances, cause events to happen, and attract money,
possessions, work, people and love into your life.

FIVE STAGES OF CREATIVITY


1. PREPARATION
The first stage is PREPARATION: the idea that you are immersing yourself in the domain. If you are a
musician, you are absorbing a lot of the music that is inspiring you to create this new piece. If you are a writer,
you are reading other writers in this area. If you are an artist, you are looking at other artists' work in the area
in which you are looking to create something. If you are a scientist, you are looking at all the background
research. And if you are an entrepreneur or marketer, you are looking at all the previous market research and
what other companies have done before.
This stage is normally best carried out in a quiet environment. In this stage you are trying to absorb as much
information as possible, because this information will go into your subconscious, where it is very important for
the second stage.
2. INCUBATION
The second stage is what we call the INCUBATION stage. In incubation, all the information that you have
gathered in the PREPARATION stage goes to the back of your mind and starts to churn in the subconscious.
This is an extremely important stage, because sometimes it can take days, or weeks, or months, or even years.
You may think about writing a book or a piece of music, work on it for a while, leave it to one side, and then
come back to it later. The interesting thing about the incubation stage is that, to a certain extent, how long it
takes is not really under your control. It is something you cannot really rush, because of what it leads to: the
third stage.
3. INSIGHT
The third stage is what most of the public think of as the classic sign of a creative person: the INSIGHT stage.
Insight is the idea of the 'Aha' moment, the 'Eureka' moment. Although it is probably the smallest part of the
five steps, it is possibly one of the most important. Insights most often happen when you are doing some kind of
low-level physical activity: taking a shower, driving a car, going for a walk. This is because the subconscious,
bubbling away from the previous stages, is allowed to keep working while the conscious mind is occupied with
something else, and it then brings these ideas to the forefront of your mind. So that is the third stage, the insight
stage. And now we go on to the fourth stage.
4. EVALUATION
The fourth stage, EVALUATION, is an area that a lot of creative people struggle with, because often you have
many ideas and a limited amount of time. The evaluation stage is important because it requires self-criticism
and reflection. It means asking yourself questions like:
“Is this a novel or new idea or is it one that is just re-hashed and has been done before?”
It’s the idea of going out to a small group of trusted friends and saying:
“I’ve had this idea, what do you think about this?”

It is a very important part because we only have a limited amount of time to do certain things. Often you find
that the people who are called the most 'creative' are very good at this stage, the evaluation stage. They have all
these ideas, but they can use self-criticism and reflection to say "these are the ones that have the most merit and
that I am going to work on".

5. ELABORATION
And then we have the final stage, called ELABORATION. This is where Edison's saying applies: genius is "1%
inspiration and 99% perspiration". The elaboration stage is the 99% perspiration stage. This is where you are
actually doing the work. Many people think that the creative process is that insight, that 'Aha' moment, or the
preparation part. But a creative individual's work is not complete, and will rarely produce anything that lasts,
unless they go through elaboration and actually put in the hard work: testing the idea, working on the idea, the
late nights in the studio, the hours at your desk or in the laboratory if you are a scientist, the days of testing and
micro-testing products. This is the elaboration stage.

3.0 TEXT IN MULTIMEDIA
Words and symbols in any form, spoken or written, are the most common system of communication. They
deliver the most widely understood meaning to the greatest number of people. Most academic texts, such as
journals and e-magazines, are available in web-browser-readable form.

ABOUT FONTS AND FACES


A typeface is a family of graphic characters that usually includes many type sizes and styles. A font is a
collection of characters of a single size and style belonging to a particular typeface family. Typical font styles
are boldface and italic. Other style attributes, such as underlining and outlining of characters, may be added at
the user's choice. The size of text is usually measured in points. One point is 1/72 of an inch, i.e. approximately
0.0139 inch. The size of a font does not exactly describe the height or width of its characters. This is because
the x-height (the height of the lowercase character x) of two fonts may differ.

Typefaces can be described in many ways, but the most common characterization of a typeface is serif versus
sans serif. The serif is the little decoration at the end of a letter stroke. Times, Times New Roman, and
Bookman are fonts that come under the serif category. Arial, Optima, and Verdana are examples of sans serif
fonts. Serif fonts are generally used for the body of the text, for better readability, and sans serif fonts are
generally used for headings.

(Figure: the letter F shown in a serif font and in a sans serif font)
Selecting Text fonts
It is a very difficult process to choose the fonts to be used in a multimedia presentation. Following are a few
guidelines which help to choose a font in a multimedia presentation.

i. Use as few typefaces as possible in a single presentation; using many fonts on the same page produces what is
called ransom-note typography.
ii. For small type, it is advisable to use the most legible font.
iii. In large-size headlines, the kerning (spacing between the letters) can be adjusted.
iv. In text blocks, the leading can be adjusted for the most pleasing line spacing.
v. Drop caps and initial caps can be used to accent words.
vi. Different effects and colors of a font can be chosen to make the text look distinct.
vii. Anti-aliasing can be used to make text look gentle and blended.
viii. For special attention to the text, words can be wrapped onto a sphere or bent like a wave.
ix. Meaningful words and phrases can be used for links and menu items.
x. In the case of text links (anchors) on web pages, the messages can be accented.

The most important text on a web page, such as a menu, can be put in the top 320 pixels.

COMPUTERS AND TEXT
FONTS:
PostScript fonts are a method of describing an image in terms of mathematical constructs (Bezier curves), so
they are used not only to describe the individual characters of a font but also to describe illustrations and whole
pages of text. Since PostScript makes use of mathematical formulas, its fonts can easily be scaled bigger or
smaller. Apple and Microsoft announced a joint effort to develop a better and faster outline font methodology
based on quadratic curves, called TrueType. In addition to printing smooth characters on printers, TrueType
can draw characters on a low-resolution (72 dpi or 96 dpi) monitor.

CHARACTER SET AND ALPHABETS:

 ASCII Character set


The American Standard Code for Information Interchange (ASCII) is the 7-bit character coding system most
commonly used by computer systems in the United States and abroad. ASCII assigns numeric values to 128
characters, including both lower- and uppercase letters, punctuation marks, Arabic numerals, and math symbols.
32 control characters are also included. These control characters are used for device control messages, such as
carriage return, line feed, tab, and form feed.

 The Extended Character set


A byte, which consists of 8 bits, is the most commonly used building block for computer processing. ASCII
uses only 7 bits to code its 128 characters; the 8th bit of the byte is unused. This extra bit allows another 128
characters to be coded, and computer systems today use these extra 128 values for an extended character set.
The extended character set is commonly filled with ANSI (American National Standards Institute) standard
characters, including frequently used symbols.

 Unicode
Unicode makes use of a 16-bit architecture for multilingual text and character encoding. Unicode can represent
about 65,000 characters from all known languages and alphabets in the world. Where several languages share a
set of symbols that have a historically related derivation, the shared symbols of each language are unified into
collections of symbols (called scripts). A single script can work for tens or even hundreds of languages.
Microsoft, Apple, Sun, Netscape, IBM, Xerox, and Novell participated in the development of this standard, and
Microsoft and Apple have incorporated Unicode into their operating systems.

FONT EDITING AND DESIGN TOOLS


There are several software packages that can be used to create customized fonts. These tools help a multimedia
developer to communicate an idea or a graphic feeling. Using this software, different typefaces can be created.
In some multimedia projects it may be required to create special characters. Using font editing tools it is
possible to create special symbols and use them in the entire text.
Following is the list of software that can be used for editing and creating fonts:
1. Fontographer
2. Fontmonger
3. Cool 3D text

Special font editing tools can be used to make your own type so you can communicate an idea or graphic
feeling exactly. With these tools professional typographers create distinct text and display faces.

1. Fontographer:
Fontographer is a Macromedia product; it is a specialized graphics editor for both the Macintosh and Windows
platforms. You can use it to create PostScript, TrueType, and bitmapped fonts for Macintosh and Windows.

2. Making Pretty Text:


To make your text look pretty you need a toolbox full of fonts and special graphics applications that can stretch,
shade, color and anti-alias your words into real artwork. Pretty text can be found in bitmapped drawings where
characters have been tweaked, manipulated and blended into a graphic image.

3. Hypermedia and Hypertext:


Multimedia (the combination of text, graphic, and audio elements into a single collection or presentation)
becomes interactive multimedia when you give the user some control over what information is viewed and
when it is viewed. When a hypermedia project includes large amounts of text or symbolic content, this content
can be indexed and its elements linked together to afford rapid electronic retrieval of the associated
information. When text is stored in a computer instead of on printed pages, the computer's powerful processing
capabilities can be applied to make the text more accessible and meaningful. Such text is called hypertext.

4. Hypermedia Structures:
Two buzzwords used often in hypertext are link and node. Links are connections between conceptual
elements, that is, between the nodes, which may consist of text, graphics, sounds, or related information in the
knowledge base.
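A node-and-link knowledge base can be modeled with a very simple data structure. The following Python
sketch is purely illustrative (all names and contents are hypothetical):

    # Hypertext: nodes hold content; links connect nodes.
    nodes = {
        "intro":  {"type": "text",  "content": "What is multimedia?"},
        "sound":  {"type": "audio", "content": "narration.wav"},
        "guitar": {"type": "image", "content": "guitar.png"},
    }

    links = [
        ("intro", "sound"),   # following this link plays the narration
        ("intro", "guitar"),  # this one shows the image
    ]

    def neighbors(node_id):
        # All nodes reachable from node_id in one hop.
        return [dst for src, dst in links if src == node_id]

    print(neighbors("intro"))  # ['sound', 'guitar']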

5. Searching for words:


The following are typical methods for word searching in hypermedia systems: categories, word relationships,
adjacency, alternates, association, negation, truncation, intermediate words, and frequency.

4.0 SOUND
Voice is the predominant method by which human beings communicate. We are so accustomed to speaking and
listening that we take sound for granted. But sound exists in many different forms and each form has its own
purpose and characteristics. Here are some things you can do:

• Listen to a song on the radio. Try to tune to a clear station on an AM frequency.
• Watch a music video. (Imagine, today you can "watch" a song!)
• Listen to a song from a CD player or a good-quality audiotape.
• Speak to a friend on the phone.
How do the sounds from the above exercises differ from each other?

The Nature of Sound


Sound is a key component in communication. Imagine what you would experience if the television program you
are watching became very noisy, or if the sound system stopped working in the middle of a gripping film! The
presence of sound greatly enhances the effect of a mostly graphic presentation, especially in a video or with
animation. In a multimedia project, you can use sound in many different ways, some of which will be discussed
shortly. Sound is the brain's interpretation of electrical impulses being sent by the inner ear through the nervous
system. There are some sounds the human ear cannot perceive—those which have a very high or low frequency.
Your dog can help you with those, because dogs can hear these very high or low frequency sounds and their
ears are very sensitive to sound variations.

How Do We Hear?
If a tree falls in the forest, and there is no one to hear it, will there be a sound? This is a very old philosophical
dilemma, which relies on using the word sound for two different purposes. One use is as a description of a
particular type of physical disturbance—sound is an organized movement of molecules caused by a vibrating
body in some medium—water, air, rock, etc.

The other use is as a description of a sensation—sound is the auditory sensation produced through the ear by the
alteration in pressure, particle displacement, or particle velocity which is propagated in an elastic medium. Both
these definitions are correct. They differ only in the first being a cause and the second being an effect. When an
object moves back and forth (vibrates), it pushes the air immediately next to it a bit to one side and, when
coming back, creates a slight vacuum. This process of oscillation creates a wave similar to the ripples that are
created when you throw a stone into still water. The air particles that move in waves make the eardrum oscillate.
This movement is registered by a series of small bones—the hammer, the anvil, and the stirrup—that transmit
these vibrations to the inner ear nerve endings. These, in turn, send impulses to the brain, which perceives them
as sounds.

For example, consider what happens when you pluck a guitar string. The plucked string vibrates, generating
waves—periodic compressions and decompressions—in the air surrounding the vibrating string. These sound
waves now move through the air. When these sound waves reach the ear, they cause the eardrum to vibrate,
which in turn results in signals being sent to the brain.

Hearing Your Own Voice


The way you sound to yourself is a very personal thing. Only you hear your voice the way you do. Everyone
else hears your voice differently. For example, you will find that your voice sounds different in a tape recording
than you sound to yourself. This is because sound waves inside your body travel through the bones, cartilage,
and muscles between your voice box and your inner ear. Sounds from tape recorders (and other people) travel
through the air and reach your eardrum, and thus sound different.

Use of Audio in Multimedia


You can use sound in a multimedia project in two ways. In fact, all sounds fall into two broad categories:
1. Content sound
2. Ambient sound

Content sound provides information to audiences, for example, dialogs in movies or theater. Some examples of
content sound used in multimedia are:
• Narration: Narration provides information about an animation that is playing on the screen.
• Testimonials: These could be auditory or video sound tracks used in presentations or movies.
• Voice-overs: These are used for short instructions, for example, to navigate the multimedia application.
• Music: Music may be used to communicate (as in a song).

Ambient sound consists of an array of background and sound effects. These include:
• Message reinforcement: The background sounds you hear in real life, such as the crowds at a ball game, can
be used to reinforce the message that you wish to communicate.
• Background music: Set the mood for the audience to receive and process information by starting and ending
a presentation with music.
• Sound effects: Sound effects are used in presentations to liven up the mood and add effects to your
presentations, such as sound attached to bulleted lists.

SOME PHYSICS BACKGROUND

Properties of Sound

Many of the terms that we learned in our high school physics class are used by audio experts. In this section we
review some of these terms. As we have seen, sound waves are disturbances in the air (or other media of
transmission). The wave consists of compressions and rarefactions of air and is a longitudinal wave. However,
all waves can be represented by a standard waveform depicting the compressions and rarefactions: in Figure 1,
which depicts a typical waveform, the compressions map to the crests and the rarefactions to the troughs. A
waveform gives a measurement of the speed of the air particles and the distance that they travel for a given
sound in a given medium. The amplitude measures the relative loudness of the sound; it is the height of a crest
above the rest position, as shown in Figure 1. The amplitude determines the volume of the sound. The unit of
measurement of volume is the decibel (dB). Have you ever stood on the tarmac when an airplane takes off?
The sound produced is of such a high decibel value that you want to shut your ears because they hurt.
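Decibels compare two amplitudes on a logarithmic scale; the relation dB = 20 * log10(A / A_ref) is standard
physics, though not given above. A quick Python check:

    import math

    # Decibels from an amplitude ratio: doubling the amplitude adds about 6 dB.
    def amplitude_ratio_to_db(a, a_ref):
        return 20 * math.log10(a / a_ref)

    print(amplitude_ratio_to_db(2.0, 1.0))   # ~6.02
    print(amplitude_ratio_to_db(10.0, 1.0))  # 20.0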

1. Frequency
The difference in time between the formation of two crests is termed the period. It is measured in seconds
(see Figure 1). A number of crests (peaks) may occur within a second. The number of peaks that occur in one
second is the frequency. Another term associated with frequency is pitch. If an object oscillates rapidly, it
creates a "high-pitched" sound. A low-frequency sound, on the other hand, is produced by an object that
vibrates slowly, such as the thicker strings of a piano or guitar. Frequency is measured by the number of cycles
(vibrations) per second, and the unit of frequency is the hertz (Hz). Frequency may also be defined as the
number of waves passing a point in one second. The human ear can perceive a range of frequencies from 20 to
20,000 Hz (20 kHz). However, it is most sensitive to sounds in the range of 2-4 kHz.

Fig. 1: Frequency

2. Wavelength
Wavelength is the distance from the midpoint of one crest to the midpoint of the next crest. It is represented by
the symbol λ (refer to Figure 2).
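Wavelength and frequency are tied together by the speed of sound: λ = v/f. This relation is not stated above but
is standard physics; a quick sketch, assuming a speed of sound in air of about 343 m/s:

    SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

    def wavelength(frequency_hz):
        # lambda = v / f: higher frequencies mean shorter wavelengths.
        return SPEED_OF_SOUND / frequency_hz

    print(wavelength(20))     # ~17.2 m, the lowest audible frequency
    print(wavelength(20000))  # ~0.017 m, the highest audible frequency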

3. Doppler Effect
Sound waves, as we said earlier, are compressions and rarefactions of air. When the object making the sound is
moving toward you, the frequency goes up because the waves get pushed more tightly together. The opposite
happens when the object moves away from you, and the pitch goes down. This is called the Doppler effect. Why
does the horn of an approaching car sound high-pitched when it is coming close to you, yet suddenly becomes
low when it moves away? As a car and its horn move toward you, the pushes of sound—the sound waves—get
crammed together, which makes them higher pitched. On the other hand, when the car and the horn move away
from you, the sound waves are spread out further apart. That makes a lower pitched sound. This is depicted in
Figure 3.
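For a stationary listener and a moving source, the standard Doppler formula (not given above) is
f' = f * v / (v - v_source), where v is the speed of sound. A sketch with assumed values for the car-horn example:

    SPEED_OF_SOUND = 343.0  # m/s

    def doppler_frequency(f_source, v_source):
        # Source approaching at v_source m/s raises the observed pitch;
        # pass a negative v_source for a receding source.
        return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND - v_source)

    print(doppler_frequency(440.0, 20.0))   # ~467 Hz: the horn sounds higher approaching
    print(doppler_frequency(440.0, -20.0))  # ~416 Hz: lower as it moves away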

4. Bandwidth
Bandwidth is defined as the difference between the highest and the lowest frequency contained in a signal.

Fig. 2: Wavelength

5. Harmonics
Few objects produce sound of a single frequency. Most musical instruments, for example, generate multiple
frequencies for each note. That is really the way one can tell the difference between musical instruments, for
example, a violin and a flute, even though both produce notes of precisely the same pitch. The combination of
frequencies generated by an instrument is known as its timbre. The sounds that we hear from vibrating objects
are complex in the sense that they contain many different frequencies. This is due to the complex way the
objects vibrate. A "note" (say, Middle C) played on a piano sounds different from the same "note" played on a
saxophone. In both cases, different frequencies above the common fundamental note sounded are present. These
different frequencies along with the difference in timbre enable you to distinguish between different
instruments. The harmonic series is a series of frequencies that are whole number multiples of a fundamental
frequency. For example, taking the tone Middle C, whose fundamental frequency is approximately 261 Hz, the
harmonic series (HS) on this frequency is:
261 Hz, 522 Hz, 783 Hz, 1044 Hz, 1305 Hz, and so on (Figure 4).
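Since the harmonic series is just whole-number multiples of the fundamental, it is one line of Python:

    # Harmonic series: whole-number multiples of a fundamental frequency.
    def harmonic_series(fundamental_hz, count):
        return [n * fundamental_hz for n in range(1, count + 1)]

    print(harmonic_series(261.0, 5))  # [261.0, 522.0, 783.0, 1044.0, 1305.0]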

DIGITAL AUDIO
The sound heard by the ear (also called audio) is analog in nature and is a continuous waveform. Acoustic
instruments produce analog sounds. A computer needs to transform the analog sound wave into its digital
representation, consisting of discrete numbers. In this section, we will try to understand the basic principles of
digital audio that are critical in understanding the storage, transmission, and applications of audio data. With the
Internet providing an unrestricted medium for audio transmission, a large amount of research is focused on
compression techniques, speed of transmission, and audio quality. A microphone converts the sound waves into
electrical signals. This signal is then amplified, filtered, and sent to an analog-to-digital converter. This
information can then be retrieved and edited using a computer. If you want to output this data as sound, the
stream of data is sent to the speakers via a digital-to-analog converter, a reconstruction filter, and the audio is
amplified. This produces the analog sound wave that we hear.

SAMPLING
The audio input from a source is sampled several thousand times per second. Each sample is a snapshot of the
original signal at a particular time. Let us make an analogy. Consider the making of a motion picture. A
dynamic scene is captured on film or videotape 24-30 times a second. The eye perceives a rapid succession of
individual photographic frames as movement on the screen. Due to the speed of display of the frames, the eye
perceives it as a continuum. Similarly, sound sampling transfers a continuous sound wave into discrete
numbers.

SAMPLING RATE
When sampling a sound, the computer processes snapshots of the waveform. The frequency of these snapshots
is called the sampling rate. The rate can vary typically from 5000-90,000 samples per second. Sampling rate is
an important (though not the only) factor in determining how accurately the digitized sound represents the
original analog sound. Let us take an example. Your mother is scolding you for breaking her precious vase kept
in the living room. Your sister hears only bits and pieces of the conversation because she is not interested in the
matter.

Later you ask your sister if the scolding was justified, and your sister replies that she did not listen to the whole
conversation. In sampling terms, she sampled the conversation at too low a rate, so most of it was lost.

DIGITIZATION
Digitization is the process of assigning a discrete value to each of the sampled values. It is performed by an
integrated circuit (IC) called an A-to-D converter. In the case of 8-bit digitization, this value is between 0 and
255 (or -128 and 127). In 16-bit digitization, this value is between 0 and 65,535 (or -32,768 and 32,767). An
essential thing to remember is that a digitized signal can take only certain (discrete) values. The process of
digitization introduces noise in the signal, which is related to the number of bits per sample.
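A minimal Python sketch of sampling and 8-bit digitization (numpy is assumed available; the specific numbers
are illustrative): sample a 440 Hz sine wave at 8 kHz and round each sample to one of 256 discrete levels.

    import numpy as np

    SAMPLE_RATE = 8000  # samples per second
    FREQ = 440.0        # tone frequency in Hz

    # Sampling: take discrete snapshots of the continuous waveform.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second of sample times
    analog = np.sin(2 * np.pi * FREQ * t)     # idealized analog signal, -1..1

    # Digitization: map each sample to a discrete 8-bit value in -128..127.
    digital = np.clip(np.round(analog * 127), -128, 127).astype(np.int8)

    # The rounding error is the quantization noise introduced by digitization.
    error = analog - digital / 127.0
    print(float(np.max(np.abs(error))))  # small, bounded by the quantization step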

FIDELITY
Fidelity is defined as the closeness of the recorded version to the original sound. In the case of digital speech, it
depends upon the number of bits per sample and the sampling rate. A really high-fidelity (hi-fi) recording takes
up a lot of memory space (176.4 KB for every second of stereo audio sampled at 16 bits, 44.1 kHz per channel).
Fortunately, for most computer multimedia applications it is not necessary to have very high fidelity sound.

QUALITY OF SOUND

Quality of Sound in a Telephone


The telephone until very recently was considered an independent office or home appliance. The advent of voice
mail systems was the first step in changing the role of the telephone. Voice mail servers convert the analog
voice and store it in digital form. With the standards for voice mail file formats and digital storage of sound for
computer systems coming closer together, use of a computer system to manage the phone system is a natural
extension of the user's desktop. The bandwidth of a telephone conversation is 3,300 Hz: the frequency ranges
from 200 to 3,500 Hz. The signal, of course, is inherently analog.

Quality of Sound in a CD
CD-ROMs have become the media choice for the music industry in a very short period of time. The reasons are
as follows:
 Ease of use and durability of the media
 Random access capability as compared to audiotapes
 Very high quality sound
 Large storage volumes
CD-ROMs are becoming important media for multimedia applications. The sampling rate is typically 44.1 kHz
for each channel (left and right). For example, take an audiocassette and listen to a song by your favorite singer.
Then listen to the same song on a CD. Do you hear the difference? This difference in audio quality is because
of the difference in how the song is recorded on the two media.

COMPRESSION
An important aspect of communication is transfer of data from the creator to the recipient. Transfer of data in
the Internet age is very time-dependent. Take for instance speech, which is nothing but changes in the intensity
of sound over a fixed period. This speech is transferred across networks in the form of sound files. If the size of
the sound files is too large, the time taken to transfer the files increases. This increase in the transfer time
deteriorates the quality of the sound at the receiver's end. The time taken to transfer a file can be decreased
using compression.
Compression in computer terms means reducing the physical size of data such that it occupies less storage space
and memory. Compressed files are, therefore, easier to transfer because there is a sizable amount of reduction in
the size of data to be transferred. This results in a reduction in the time needed for file transfer as well as a
reduction in the bandwidth utilization thus providing good sound quality even over a slow network. The
following examples of digital media show the amount of storage space required for one second of playback of
an audio file:
• An uncompressed audio signal of telephone quality (8-bit sampled at 8 kHz) leads to a bandwidth requirement
of 64 Kbps and storage requirement of 8 KB to store one second of playback.
• An uncompressed stereo audio signal of CD quality (16-bit sampled at 44.1 kHz) leads to a bandwidth
requirement of 44.1 kHz x 16 bits = 705.6 Kbps per channel and a storage requirement of 88.2 KB per channel
for one second of playback. The arithmetic behind both examples is shown in the sketch below.
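A tiny Python helper confirming these figures (and the 176.4 KB-per-second stereo figure from the Fidelity
section above):

    # Bandwidth (Kbps) and storage (KB) for one second of uncompressed audio.
    def audio_rates(sample_rate_hz, bits_per_sample, channels=1):
        bits_per_second = sample_rate_hz * bits_per_sample * channels
        return bits_per_second / 1000, bits_per_second / 8 / 1000

    print(audio_rates(8000, 8))       # (64.0, 8.0)     telephone quality
    print(audio_rates(44100, 16))     # (705.6, 88.2)   CD quality, per channel
    print(audio_rates(44100, 16, 2))  # (1411.2, 176.4) CD stereo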

Compression Requirements
In the case of audio, processing data in a multimedia system leads to storage requirements in the range of
several megabytes. Compression in multimedia systems is subject to certain constraints. These constraints
are:
• The quality of the reproduced data should be adequate for the application.
• The complexity of the technique used should be minimal, to make the compression technique cost-effective.
• The processing of the algorithm should not take too long.
• Various audio data rates should be supported, so that, depending on specific system conditions, the data rate
can be adjusted.
• It should be possible to generate data on one multimedia system and reproduce data on another system. The
compression technique should be compatible with various reproduction systems.

As many applications exchange multimedia data over communication networks, compatibility of compression
techniques is required. Standards like CCITT (International Consultative Committee for Telephone and
Telegraph), ISO (International Organization for Standardization), and MPEG (Moving Picture Experts Group)
are used to achieve this compatibility.

Common Compression Methods


An array of compression techniques has been set by the CCITT—an international organization that develops
communication standards, known as "Recommendations", for all digitally controlled forms of
communication.
There are two types of compression:
a. Lossless Compression
b. Lossy Compression

Lossless Compression
In lossless compression, data are not altered or lost in the process of compression or decompression.
Decompression produces an exact replica of the original object. This compression technique is used for text
documents, databases, and text-related objects. The following are some of the commonly used lossless
standards:
• Packbits encoding (run-length encoding)
• CCITT Group 3 1-D (compression standard based on a run-length encoding scheme)
• CCITT Group 3 2-D (compression standard based on a run-length encoding scheme modified by two-
dimensional encoding)
• CCITT Group 4 (compression standard based on two-dimensional compression)
• Lempel-Ziv-Welch (LZW) algorithm (the technique used by ARJ/PKZIP)
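Run-length encoding, the basis of the Packbits and CCITT Group 3 schemes above, is simple to sketch in
Python (an illustration of the idea, not the actual Packbits byte format):

    # Run-length encoding: replace runs of repeated values with (value, count) pairs.
    def rle_encode(data):
        runs = []
        for b in data:
            if runs and runs[-1][0] == b:
                runs[-1] = (b, runs[-1][1] + 1)
            else:
                runs.append((b, 1))
        return runs

    def rle_decode(runs):
        return bytes(b for b, count in runs for _ in range(count))

    data = b"\x00\x00\x00\x00\xff\xff\x07"
    encoded = rle_encode(data)
    print(encoded)                      # [(0, 4), (255, 2), (7, 1)]
    assert rle_decode(encoded) == data  # lossless: the original is restored exactly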

Lossy Compression
There is loss of some information when lossy compression is used. The loss is such that the reconstructed
object looks more or less like the original. This method is used where absolute data accuracy is not essential.
Lossy compression is the most commonly used form of compression for media data such as audio, images, and
video.

AUDIO FILE FORMATS


Common Audio File Formats
Until the early 1990s, PC applications were only visual, without any audio output capability. Occasionally, an
application would use the internal speaker of the PC to produce a sound, such as an alert in case of error. Game
applications, on the other hand, made use of the PC speaker to produce sounds of varying frequencies to create
good sound effects. Numerous audio file formats have been introduced over the years, and each has made its
own mark in the field of multimedia across various platforms. Audio is one of the most important components
of any multimedia system. Can you imagine watching a movie without sound? It would be so lifeless. Some
commonly used audio file formats, with brief descriptions, are listed in the table below.

Table: Audio File Formats

AUDIO EDITING
One can record or manipulate audio files using various audio editors. You must have a sound card installed on
your PC to edit stored or recorded audio data. Recording sound for multimedia applications is only the first step
in the process of sound processing. After the audio has been recorded and stored, it has to be modified to
improve its quality. Unwanted sound or silences have to be removed. Mistakes in recording can be erased or
modified. Sounds can be mixed to get a better effect. Adding effects to the audio file also gives that extra touch
for the listener. Some common audio editing software packages for Windows are:
• Cool Edit
• Sound Forge XP
• Wave Flow
Using these wave editors, one can perform functions like copy and paste, just as one would in any text editor.
You can also concatenate, append, or mix two or more audio files. We assume that the audio files are saved in
the WAV format, so the files have a .wav extension. This is a popular format on the Windows platform.
However, many editors will allow editing in other formats as well. We have used the Wave Flow editor
(packaged with the Sound Blaster card) to illustrate some common editing options. You can also experiment
with these effects using other editors.

Special Effects
Sound and music effects are a part of our daily lives. Every environment, whether it be a highway, an office, or
a home, has its own set of sounds to characterize it. These include a car being driven, office equipment in
operation, or household appliances. Sounds not only provide a realistic sense of the scene but can also provide
important input for sensing a scene change. Sound effects can be incorporated in audio files using audio editors.
Sound effects are generated by simply manipulating the amplitude or wavelength of the audio waveform. There
is a variety of special effects built into audio editors. The most commonly used effects are echo, reverb, fade-in,
fade-out, and amplify.

a. Reverb Special Effect


A person receives sound directly from the sound source as well as reflected from other objects around him.
Consider the room shown in Figure 1: LS is the sound source and A is the listener, who receives both the direct
sound from the source and the reflected sound. The sound persists even after the sound source has stopped
producing sound waves, and gradually fades away after some time. This gradual fading is called reverberation.
Reverberation is different from echo: in the case of an echo, there is a time gap between the finish of the
original sound and the arrival of the reflection. Echo sometimes causes irritation and undesirable effects, while
reverberation is pleasing to hear and is deliberately considered in the design of any room.
The steps to add the Reverb effect to an audio file in Wave Flow are as follows:
1. Activate the wave editor, Wave Flow.
2. Open a .wav file. The .wav file is displayed.
3. Click on the Edit option from the main menu toolbar, and select the Select All option.
4. The entire waveform will be selected.
5. Click on the Tools option from the main menu toolbar and select the Reverb option. The Reverb screen is
displayed.
6. Select the Big Empty Room preset reverb setting from the Reverb list box.
7. Play the changed .wav file and notice the difference.
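Under the hood, echo and simple reverb amount to mixing delayed, attenuated copies of the signal back into
itself. A rough numpy sketch of the idea (illustrative only; this is not Wave Flow's actual algorithm, and the
delay/decay values are assumptions):

    import numpy as np

    def add_echo(samples, sample_rate, delay_s=0.3, decay=0.5):
        # Mix in one delayed, attenuated copy: y[n] = x[n] + decay * x[n - d].
        d = int(delay_s * sample_rate)
        out = samples.astype(np.float64)
        out[d:] += decay * samples[:-d]
        return out

    def add_reverb(samples, sample_rate):
        # A crude reverb: stack several short, progressively weaker echoes.
        out = samples.astype(np.float64)
        for delay_s, decay in [(0.03, 0.6), (0.05, 0.4), (0.08, 0.25)]:
            out = add_echo(out, sample_rate, delay_s, decay)
        return out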

b. The Fade-in Effect


Imagine that you are about to give a multimedia presentation before an audience. The projector is on and the
speakers are set to a high volume to give realistic sound effects. As soon as you start your multimedia
application, the first screen of the presentation is projected. The audio starts with a bang, which may cause some
damage to the speakers and leave the audience stunned. This is because of the abrupt beginning of the audio,
which is sometimes not desirable. To avoid these audio problems we make use of the fade-in effect, which
performs a progressive increase of the volume of the waveform making the initial volume lower than the final
volume. This effect is also used in CD or tape audio. Even if the volume of the speakers is set to the maximum
value, the audio file will gradually increase in volume until it reaches its peak value. Listen to a music cassette
or a CD and notice that the beginning of each song has a fade-in effect. The gradual increase in the sound of an
approaching train is another example of the fade-in effect.
The steps to add the fade-in effect in Wave Flow are as follows:
1. Activate the wave editor, Wave Flow.
2. Open a .wav file.

3. Select the first 25% of the waveform by holding the left mouse button down and dragging the mouse until the
selection is made.
4. Click on the Tools option from the main menu toolbar and select the Fade-in option.
The fade-in dialog box is displayed (Figure 4).
5. Set the Initial Percent to 0% and select the Progression type as Linear. Click on the OK button.
6. Click on the waveform to remove the selection and play the changed wave file.
7. Note the steady increase in volume of the wave file over a period of time.
Note also the change in amplitude of the selected waveform; amplitude is linked to the volume of
the sound.
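
The sketch below (Python with NumPy) shows the arithmetic behind a linear fade-in over the first 25% of a
mono signal; the function name and parameters are illustrative, not part of Wave Flow.

import numpy as np

def fade_in(samples, fraction=0.25, initial_percent=0.0):
    n = int(len(samples) * fraction)             # length of the faded region
    ramp = np.linspace(initial_percent, 1.0, n)  # linear progression from 0% up to 100%
    faded = samples.astype(np.float64).copy()
    faded[:n] *= ramp                            # scale the amplitude gradually upward
    return faded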

c. The Fade-Out Effect


The fade-out effect is the exact opposite of the fade-in effect. Audio that stops abruptly is not very pleasing
to the ear, so when audio ends you may want it to fade out and die instead. The effect can also be used to
cross from one piece of audio into another: medley recordings on a cassette or CD, in which many songs are
joined together to form one continuous track, rely on fade-outs and fade-ins at the joins. The train that arrived
at the station platform in the fade-in example is now ready to leave the station. The sound of the train is at its
peak and gradually decreases as it leaves the station; this is the fade-out effect.
The steps to add the Fade-Out effect in Wave Flow are as follows:
1. Activate the wave editor, Wave Flow.
2. Open a .wav file.
3. Select the last 25% of the waveform by holding the left mouse button down and dragging the mouse until the
selection is made.
4. Click on the Tools option from the main menu toolbar and select the Fade-Out option.
5. Set the final volume to 0% and select the Progression type as Linear. Click on the OK button.
6. Note the change in amplitude of the selected waveform.
7. Click on the waveform to remove the selection and play the changed wave file.
8. Note the steady decrease in volume of the wave file over a period of time. If time permits, try out the other
fade-out options on the wave file.
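
In code, the fade-out is the mirror image of the fade-in sketched earlier: a linear ramp from full volume down
to silence over the last 25% of the signal (same illustrative assumptions as before).

import numpy as np

def fade_out(samples, fraction=0.25, final_percent=0.0):
    n = int(len(samples) * fraction)           # length of the faded region
    ramp = np.linspace(1.0, final_percent, n)  # linear progression from 100% down to 0%
    faded = samples.astype(np.float64).copy()
    faded[-n:] *= ramp                         # scale the tail gradually downward
    return faded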

Common Editing Packages


An audio editing package is essential because one rarely uses an audio file exactly as it was recorded, whether
in the studio or otherwise, in its original raw form. Generally some changes have to be made to the audio file
before it can be used. Some of the most commonly used audio editing packages for Windows are:

• COOL EDIT—This is shareware software used to edit audio files. This package supports a variety of sound
formats. It has a built-in CD player through which one can convert Red-Book audio (CD audio) into a
waveform and save it to any available format. Although this software is shareware, it has enough functions to be
used professionally. Its ease of use has made it one of the most popular shareware audio editing
packages. It is freely downloadable from the Internet.
• SOUND FORGE XP—It is as powerful as Cool Edit. It is not shareware.
• WAVE STUDIO—It is packaged along with the Sound Blaster Multimedia Kit. It is a powerful aid for
editing audio files. This editor supports most of the audio file formats. The only drawback is that this editor
works only if you have a Sound Blaster soundcard.

5.0 IMAGE
Still images are often the most important element of a multimedia project or a web site. To make a multimedia
presentation look elegant and complete, it is necessary to spend ample time designing the graphics and the
layouts. Competent, computer-literate skills in graphic art and design are vital to the success of a
multimedia project.

Digital Image
A digital image is represented by a matrix of numeric values each representing a quantized intensity value.
When I is a two-dimensional matrix, then I(r,c) is the intensity value at the position corresponding to row r and
column c of the matrix. The points at which an image is sampled are known as picture elements, commonly
abbreviated as pixels. The pixel values of intensity images are called gray scale levels (we encode here the
“color” of the image). The intensity at each pixel is represented by an integer and is determined from the
continuous image by averaging over a small neighborhood around the pixel location. If there are just two
intensity values, for example black and white, they are represented by the numbers 0 and 1; such images are
called binary-valued images. If 8-bit integers are used to store each pixel value, the gray levels range from 0
(black) to 255 (white).
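
The matrix view of an image can be made concrete with a short sketch in Python using NumPy (the values
are hypothetical):

import numpy as np

# A binary-valued image: only two intensity levels, 0 (black) and 1 (white).
binary = np.array([[0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 0]], dtype=np.uint8)

# An 8-bit grayscale image: levels range from 0 (black) to 255 (white).
gray = np.zeros((480, 640), dtype=np.uint8)  # 480 rows x 640 columns, all black
gray[100, 200] = 255                         # I(r, c): set row 100, column 200 to white
print(gray[100, 200])                        # -> 255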

Digital Image Format


There are different kinds of image formats in the literature. We shall consider the image format that comes out
of an image frame grabber, i.e., the captured image format, and the format when images are stored, i.e., the
stored image format.

Captured Image Format
The captured image format is specified by two main parameters: spatial resolution, which is specified in
pixels (e.g., 640x480), and color encoding, which is specified in bits per pixel.
Both parameter values depend on hardware and software for input/output of images.

Multiple Monitors
When developing multimedia, it is helpful to have more than one monitor, or a single high-resolution monitor
with lots of screen real estate, hooked up to your computer. In this way, you can display the full-screen working
area of your project or presentation and still have space to put your tools and other menus. This is particularly
important in an authoring system such as Macromedia Director, where the edits and changes you make in one
window are immediately visible in the presentation window, provided the presentation window is not obscured
by your editing tools.

Making Still Images


Still images may be small or large, or even full screen. Whatever their form, still images are generated by the
computer in two ways: as bitmap (or paint graphics) and as vector drawn (or just plain drawn) graphics.
Bitmaps are used for photo-realistic images and for complex drawings requiring fine detail. Vector-drawn
objects are used for lines, boxes, circles, polygons, and other graphic shapes that can be mathematically
expressed in angles, coordinates, and distances. A drawn object can be filled with color and patterns, and you
can select it as a single object. Typically, image files are compressed to save memory and disk space; many
image formats already use compression within the file itself – for example, GIF, JPEG, and PNG. Still images
may be the most important element of your multimedia project. If you are designing multimedia by yourself,
put yourself in the role of graphic artist and layout designer.

Bitmap Software
The abilities and features of image-editing programs for both the Macintosh and Windows range from simple to
complex. The Macintosh does not ship with a painting tool, and Windows provides only the rudimentary Paint
(see following figure), so you will need to acquire this very important software separately – often bitmap editing
or painting programs come as part of a bundle when you purchase your computer, monitor, or scanner.

Capturing and Editing Images
The image that is seen on a computer monitor is a digital bitmap stored in video memory, updated about every
1/60 second or faster, depending upon the monitor's scan rate. When images are assembled for a multimedia
project, it may often be necessary to capture and store an image directly from the screen. It is possible to use
the PrtScr key on the keyboard to capture an image.

Scanning Images
If, after scanning through countless clip art collections, you still cannot find the unusual background you
want for a screen about gardening, try scanning materials of your own. Sometimes when you search for
something too hard, you don't realize that it's right in front of your face. Open the scan in an image-editing
program and experiment with different filters,
the contrast, and various special effects. Be creative, and don’t be afraid to try strange combinations –
sometimes mistakes yield the most intriguing results.

Vector Drawing
Most multimedia authoring systems provide for use of vector-drawn objects such as lines, rectangles, ovals,
polygons, and text. Computer-aided design (CAD) programs have traditionally used vector-drawn object
systems for creating the highly complex and geometric renderings needed by architects and engineers. Graphic
artists designing for print media use vector-drawn objects because the same mathematics that put a rectangle on
your screen can also place that rectangle on paper without jaggies. This requires the higher resolution of the
printer, using a page description language such as PostScript. Programs for 3-D animation also use vector-
drawn graphics. For example, the various changes of position, rotation, and shading of light required to spin
an extruded object are all computed mathematically from vector descriptions.

How Vector Drawing Works


Vector-drawn objects are described and drawn to the computer screen using a fraction of the memory space
required to describe and store the same object in bitmap form. A vector is a line that is described by the location
of its two endpoints. A simple rectangle, for example, might be defined as follows:
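
The exact syntax varies from drawing tool to drawing tool, but the essence is a handful of coordinates and
attributes rather than a grid of pixels. An illustrative sketch (the field names are hypothetical):

rectangle = {
    "type": "rect",
    "corner1": (0, 0),      # upper-left corner (x, y)
    "corner2": (200, 100),  # lower-right corner (x, y)
    "pen": "red",           # outline color
    "fill": "blue",         # interior color
}

A bitmap of the same 200 x 100 rectangle would need 20,000 pixel values; the vector description needs only
the two corner coordinates and the drawing attributes.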

Color
Color is a vital component of multimedia. Management of color is both a subjective and a technical exercise.
Picking the right colors and combinations of colors for your project can involve many tries until you feel the
result is right.

Understanding Natural Light and Color


The letters of the mnemonic ROY G. BIV, learned by many of us to remember the colors of the rainbow, are the
ascending frequencies of the visible light spectrum: red, orange, yellow, green, blue, indigo, and violet.
Ultraviolet light, on the other hand, is beyond the higher end of the visible spectrum and can be damaging to
humans. The color white is a noisy mixture of all the color frequencies in the visible spectrum. The cornea of
the eye acts as a lens to focus light rays onto the retina. The light rays stimulate many thousands of specialized
nerves called rods and cones that cover the surface of the retina. The eye can differentiate among millions of
colors, or hues, consisting of combinations of red, green, and blue.

Additive Color
In the additive color model, a color is created by combining colored light sources in three primary colors: red,
green, and blue (RGB). This is the process used by a TV or computer monitor.
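
A small illustration (a Python sketch; the helper function is made up): each channel is an intensity from 0 to
255, and combining full-strength primaries yields white.

red   = (255, 0, 0)
green = (0, 255, 0)
blue  = (0, 0, 255)

def add_light(*colors):
    # Sum the channel intensities, capping each at the 255 maximum.
    return tuple(min(sum(channel), 255) for channel in zip(*colors))

print(add_light(red, green))        # (255, 255, 0) -> yellow
print(add_light(red, green, blue))  # (255, 255, 255) -> white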

Subtractive Color
In the subtractive color method, a new color is created by combining colored media such as paints or inks that
absorb (or subtract) some parts of the color spectrum of light and reflect the others back to the eye. Subtractive
color is the process used to create color in printing. The printed page is made up of tiny halftone dots of three
primary colors, cyan, magenta and yellow (CMY).
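
The additive and subtractive models are complements of each other. A minimal sketch (assuming 8-bit
channel values): converting an RGB light value into the CMY ink value that reproduces it on paper.

def rgb_to_cmy(r, g, b):
    # Cyan absorbs red, magenta absorbs green, yellow absorbs blue.
    return (255 - r, 255 - g, 255 - b)

print(rgb_to_cmy(255, 0, 0))      # red -> (0, 255, 255): magenta plus yellow ink
print(rgb_to_cmy(255, 255, 255))  # white -> (0, 0, 0): no ink, bare paper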

Image Authoring tools


Image authoring tools are also known as authorware: programs that help you write hypertext or multimedia
applications. Authoring tools usually enable you to create a final application merely by linking together objects,
such as a paragraph of text, an illustration, or a song. By defining the objects' relationships to each other, and by
sequencing them in an appropriate order, authors (those who use authoring tools) can produce attractive and
useful graphics applications. Most authoring systems also support a scripting language for more sophisticated
applications. The distinction between authoring tools and programming tools is not clear-cut. Typically, though,
authoring tools require less technical knowledge to master and are used exclusively for applications that present
a mixture of textual, graphical, and audio data.

Multimedia Authoring Tools


Multimedia authoring tools provide the important framework you need for organizing and editing the elements
of multimedia like graphics, sounds, animations and video clips. Authoring tools are used for designing
interactivity and the user interface, for presenting your project on screen, and for assembling multimedia elements
into a single cohesive project. Authoring software provides an integrated environment for binding together the
content and functions of your project. Authoring systems typically include the ability to create, edit and import
specific types of data; assemble raw data into a playback sequence or cue sheet; and provide a structured method
or language for responding to user input.

Features of Authoring Tools


The features of multimedia authoring tools are as mentioned below:
• Editing features
• Organizing features
• Programming features
• Interactive features
• Performance tuning features
• Playback features
• Delivery features
• Cross-Platform features
• Internet Playability
Now let us discuss each of them in detail.

Editing features
The elements of multimedia – images, animation, text, digital audio, MIDI music, and video clips – need to be
created, edited and converted to standard file formats and the specialized applications provide these capabilities.
Editing tools for these elements, particularly text and still images are often included in your authoring system.

Organizing features
The organization, design and production process for multimedia involves storyboarding and flowcharting. Some
authoring tools provide a visual flowcharting system or overview facility for illustrating your project’s structure
at a macro level. Storyboards or navigation diagrams too can help organize a project. Because designing the
interactivity and navigation flow of your project often requires a great deal of planning and programming effort,
your storyboard should describe not just the graphics of each screen but the interactive elements as well. Features
that help organize your material, such as those provided by Super Edit, Authorware, Icon Author and other
authoring systems, are a plus.

Programming features
Authoring tools that offer a very high level language or interpreted scripting environment for navigation control
and for enabling user inputs – such as Macromedia Director, Macromedia Flash, HyperCard, Meta Card and
Tool Book are more powerful. The more commands and functions provided in the scripting language, the more
powerful the authoring system. As with traditional programming tools, look for an authoring package with good
debugging facilities, robust text editing, and an online syntax reference. Other scripting augmentation facilities
are advantageous as well. In complex projects you may need to program custom extensions of the scripting
language for direct access to the computer's operating system. Some authoring tools offer direct importing of
preformatted text, indexing facilities, complex text search mechanisms, and hyperlinkage tools. These
authoring systems are useful for developing CD-ROM information products, online documentation and help
systems, and sophisticated multimedia-enhanced publications. With scripts you can perform computational
tasks; sense and respond to user input; create character, icon, and motion animation; launch other applications;
and control external multimedia devices.

Interactivity features
Interactivity empowers the end users of your project by letting them control the content and flow of
information. Authoring tools should provide one or more levels of interactivity:
• Simple branching, which offers the ability to go to another section of the multimedia production.
• Conditional branching, which supports a go-to based on the result of an IF-THEN decision or event.
• A structured language that supports complex programming logic, such as nested IF-THENs, subroutines,
event tracking, and message passing among objects and elements.

Performance tuning features


Complex multimedia projects require extra synchronization of events. Accomplishing synchronization is
difficult because performance varies widely among the different computers used for multimedia development
and delivery. Some authoring tools allow you to lock a production’s playback speed to specified computer
platform, but other provides no ability what so ever to control performance on various systems.

Playback features
When you are developing a multimedia project, you will continually be assembling elements and testing to see how
the assembly looks and performs. Your authoring system should let you build a segment or part of your project
and then quickly test it as if the user were actually using it.

Delivery features
Delivering your project may require building a run-time version of the project using the multimedia authoring
software. A run-time version allows your project to play back without requiring the full authoring software and
all its tools and editors. Many times the run-time version does not allow users to access or change the content,
structure and programming of the project. If you are going to distribute your project widely, you should
distribute it in the run-time version.

Cross-Platform features
It is also increasingly important to use tools that make transfer across platforms easy. For many developers, the
Macintosh remains the multimedia authoring platform of choice, but 80% of that developer’s target market may
be Windows platforms. If you develop on a Macintosh, look for tools that provide a compatible authoring
system for Windows or offer a run-time player for the other platform.

Internet Playability
Because the Web has become a significant delivery medium for multimedia, authoring systems typically provide
a means to convert their output so that it can be delivered within the context of HTML or DHTML, either with
special plug-ins or by embedding Java, JavaScript, or other code structures in the HTML document.

Image file formats
Image file formats are standardized means of organizing and storing digital images. Image files are composed
of digital data in one of these formats that can be rasterized for use on a computer display or printer. An image
file format may store data in uncompressed, compressed, or vector formats. Once rasterized, an image becomes
a grid of pixels, each of which has a number of bits to designate its color equal to the color depth of the device
displaying it.

Image File Size


Generally speaking, in raster images, image file size is positively correlated with the number of pixels in an image
and the color depth, or bits per pixel, of the image. Images can be compressed in various ways, however.
Compression uses an algorithm that stores an exact representation or an approximation of the original image in
a smaller number of bytes that can be expanded back to its uncompressed form with a corresponding
decompression algorithm. Considering different compressions, it is common for two images of the same
number of pixels and color depth to have a very different compressed file size. Considering exactly the same
compression, number of pixels, and color depth for two images, different graphical complexity of the original
images may also result in very different file sizes after compression due to the nature of compression
algorithms. With some compression formats, images that are less complex may result in smaller compressed file
sizes. This characteristic sometimes results in a smaller file size for some lossless formats than lossy formats.
For example, graphically simple images (i.e., images with large continuous regions like line art or animation
sequences) may be losslessly compressed into a GIF or PNG format and result in a smaller file size than a lossy
JPEG format. Vector images, unlike raster images, can be of any dimension independent of file size. File size
increases only with the addition of more vectors.
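
The pixel-count and color-depth relationship is easy to verify for an uncompressed raster image (a worked
sketch; compression would reduce these figures):

width, height = 640, 480   # spatial resolution in pixels
bits_per_pixel = 24        # 8 bits each for red, green, and blue

size_bytes = width * height * bits_per_pixel // 8
print(size_bytes)          # 921600 bytes, i.e. 900 KB before compression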

Image file compression


There are two types of image file compression algorithms: lossless and lossy.

Lossless compression
Lossless compression algorithms reduce file size while preserving a perfect copy of the original uncompressed
image. Lossless compression generally, but not exclusively, results in larger files than lossy compression.
Lossless compression should be used to avoid accumulating stages of re-compression when editing images.

Lossy compression
Lossy compression algorithms preserve a representation of the original uncompressed image that may appear to
be a perfect copy, but it is not a perfect copy. Often lossy compression is able to achieve smaller file sizes than
lossless compression. Most lossy compression algorithms allow for variable compression that trades image
quality for file size.
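
As a quick illustration using the Pillow imaging library (assumed installed; the file names are hypothetical),
the same picture can be saved losslessly or lossily, trading quality for size:

from PIL import Image

img = Image.open("photo.png")     # hypothetical source image
img.save("copy.png")              # lossless PNG: every pixel survives exactly
img.save("copy.jpg", quality=75)  # lossy JPEG: smaller file, some detail discarded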

Major Graphic File Formats


Including proprietary types, there are hundreds of image file types. The PNG, JPEG, and GIF formats are most
often used to display images on the Internet. These graphic formats are listed and briefly described below,
separated into the two main families of graphics: raster and vector.

In addition to straight image formats, metafile formats are portable formats which can include both raster and
vector information. Examples are application-independent formats such as WMF and EMF. The metafile format
is an intermediate format. Most Windows applications open metafiles and then save them in their own native
format. Page description language refers to formats used to describe the layout of a printed page containing
text, objects and images. Examples are PostScript, PDF and PCL.

RASTER FORMATS

JPEG/JFIF
JPEG (Joint Photographic Experts Group) is a compression method; JPEG-compressed images are usually
stored in the JFIF (JPEG File Interchange Format) file format. JPEG compression is (in most cases) lossy
compression. The JPEG/JFIF filename extension is JPG or JPEG. Nearly every digital camera can save images
in the JPEG/JFIF format, which supports 8-bit grayscale images and 24-bit color images (8 bits each for red,
green, and blue). JPEG applies lossy compression to images, which can result in a significant reduction of the
file size. The amount of compression can be specified, and the amount of compression affects the visual quality
of the result. When not too great, the compression does not noticeably detract from the image's quality, but
JPEG files suffer generational degradation when repeatedly edited and saved. (JPEG also provides lossless
image storage, but the lossless version is not widely supported.)

JPEG 2000
JPEG 2000 is a compression standard enabling both lossless and lossy storage. The compression methods used
are different from the ones in standard JFIF/JPEG; they improve quality and compression ratios, but also
require more computational power to process. JPEG 2000 also adds features that are missing in JPEG. It is not
nearly as common as JPEG, but it is used currently in professional movie editing and distribution (some digital
cinemas, for example, use JPEG 2000 for individual movie frames).

Exif
The Exif (Exchangeable image file format) format is a file standard similar to the JFIF format with TIFF
extensions; it is incorporated in the JPEG-writing software used in most cameras. Its purpose is to record and to
standardize the exchange of images with image metadata between digital cameras and editing and viewing
software. The metadata are recorded for individual images and include such things as camera settings, time and
date, shutter speed, exposure, image size, compression, name of the camera, and color information. When images are
viewed or edited by image editing software, all of this image information can be displayed. The actual Exif
metadata as such may be carried within different host formats, e.g. TIFF, JFIF (JPEG) or PNG. IFF-META is
another example.

TIFF
The TIFF (Tagged Image File Format) format is a flexible format that normally saves 8 bits or 16 bits per color
(red, green, blue) for 24-bit and 48-bit totals, respectively, usually using either the TIFF or TIF filename
extension. TIFF's flexibility can be both an advantage and disadvantage, since a reader that reads every type of
TIFF file does not exist. TIFFs can be lossy or lossless; some offer relatively good lossless compression for bi-
level (black & white) images. Some digital cameras can save in TIFF format, using the LZW compression
algorithm for lossless storage. TIFF image format is not widely supported by web browsers. TIFF remains
widely accepted as a photograph file standard in the printing business. TIFF can handle device-specific color
spaces, such as the CMYK defined by a particular set of printing press inks. OCR (Optical Character
Recognition) software packages commonly generate some (often monochromatic) form of TIFF image for
scanned text pages.

RAW
RAW refers to raw image formats that are available on some digital cameras, rather than to a specific format.
These formats usually use a lossless or nearly lossless compression, and produce file sizes smaller than the TIFF
formats. Although there is a standard raw image format, (ISO 12234-2, TIFF/EP), the raw formats used by most
cameras are not standardized or documented, and differ among camera manufacturers. Most camera
manufacturers have their own software for decoding or developing their raw file format, but there are also many
third-party raw file converter applications available that accept raw files from most digital cameras. Some
graphic programs and image editors may not accept some or all raw file formats, and some older ones have been
effectively orphaned already.

Adobe's Digital Negative (DNG) specification is an attempt at standardizing a raw image format to be used by
cameras, or for archival storage of image data converted from undocumented raw image formats, and is used by
several niche and minority camera manufacturers including Pentax, Leica, and Samsung. The raw image
formats of more than 230 camera models, including those from manufacturers with the largest market shares
such as Canon, Nikon, Phase One, Sony, and Olympus, can be converted to DNG. DNG was based on ISO
12234-2, TIFF/EP, and ISO's revision of TIFF/EP is reported to be adding Adobe's modifications and
developments made for DNG into profile 2 of the new version of the standard. As far as video cameras are
concerned, ARRI's Arriflex D-20 and D-21 cameras provide raw 3K-resolution sensor data with Bayer pattern
as still images (one per frame) in a proprietary format (.ari file extension). Red Digital Cinema Camera
Company, with its Mysterium sensor family of still and video cameras, uses its proprietary raw format called
REDCODE (.R3D extension), which stores still as well as audio + video information in one lossy-compressed
file.

GIF
GIF (Graphics Interchange Format) is limited to an 8-bit palette, or 256 colors. This makes the GIF format
suitable for storing graphics with relatively few colors such as simple diagrams, shapes, logos and cartoon style
images. The GIF format supports animation and is still widely used to provide image animation effects. It also
uses a lossless compression that is more effective when large areas have a single color, and ineffective for
detailed images or dithered images.

BMP
The BMP file format (Windows bitmap) handles graphics files within the Microsoft Windows OS. Typically,
BMP files are uncompressed, hence they are large; the advantage is their simplicity and wide acceptance in
Windows programs.

PNG
The PNG (Portable Network Graphics) file format was created as the free, open-source successor to GIF. The
PNG file format supports 8 bit palette images (with optional transparency for all palette colors) and 24 bit true
color (16 million colors) or 48 bit true color with and without alpha channel - while GIF supports only 256
colors and a single transparent color. Compared to JPEG, PNG excels when the image has large, uniformly
colored areas. Thus the lossless PNG format is best suited for pictures still being edited, and lossy formats,
like JPEG, are best for the final distribution of photographic images, because in this case JPG files are usually
smaller than PNG files. The Adam7-interlacing allows an early preview, even when only a small percentage of
the image data has been transmitted. PNG provides a patent-free replacement for GIF and can also replace many
common uses of TIFF. Indexed-color, grayscale, and true color images are supported, plus an optional alpha
channel. PNG is designed to work well in online viewing applications like web browsers, so it is fully streamable
with a progressive display option. PNG is robust, providing both full file integrity checking and simple
detection of common transmission errors. Also, PNG can store gamma and chromaticity data for improved
color matching on heterogeneous platforms. Some programs do not handle PNG gamma correctly, which can
cause the images to be saved or displayed darker than they should be. Animated formats derived from PNG are
MNG and APNG. The latter is supported by Mozilla Firefox and Opera and is backwards
compatible with PNG.

PPM, PGM, PBM, PNM and PFM


The Netpbm format is a family including the portable pixmap file format (PPM), the portable graymap file
format (PGM) and the portable bitmap file format (PBM). These are either pure ASCII files or raw binary files
with an ASCII header that provide very basic functionality and serve as a lowest common denominator for
converting pixmap, graymap, or bitmap files between different platforms. Several applications refer to them
collectively as the PNM format (Portable Any Map). PFM was invented later in order to carry floating
point-based pixel information (as used in HDR).

PAM
A late addition to the PNM family is the PAM format (Portable Arbitrary Map).

WEBP
WebP is a new open image format that uses both lossless and lossy compression. It was designed by Google to
reduce image file size to speed up web page loading: its principal purpose is to supersede JPEG as the primary
format for photographs on the web. WebP now supports animated images and alpha channel (transparency) in
lossy images. WebP is based on VP8's intra-frame coding and uses a container based on RIFF.

HDR Raster formats


Most typical raster formats cannot store HDR data (32 bit floating point values per pixel component), which is
why some relatively old or complex formats are still predominant here, and worth mentioning separately.
Newer alternatives are showing up, though.

RGBE (Radiance HDR)


The classical representation format for HDR images, originating from Radiance and also supported by Adobe
Photoshop.

IFF-RGFX
IFF-RGFX, the native format of SView5, provides a straightforward IFF-style representation of any kind of
image data ranging from 1-128 bit (LDR and HDR), including common metadata like ICC profiles, XMP,
IPTC, or EXIF.

6.0 ANIMATION
Animation makes static presentations come alive. It is visual change over time and can add great power to our
multimedia projects. Carefully planned, well-executed video clips can make a dramatic difference in a
multimedia project. Animation is created from drawn pictures, while video is created from real-time visuals.

PRINCIPLES OF ANIMATION
Animation is the rapid display of a sequence of images of 2-D artwork or model positions in order to create an
illusion of movement. It is an optical illusion of motion due to the phenomenon of persistence of vision, and can
be created and demonstrated in a number of ways. The most common method of presenting animation is as a
motion picture or video program, although several other forms of presenting animation also exist. Animation is
possible because of a biological phenomenon known as persistence of vision and a psychological phenomenon
called phi.
An object seen by the human eye remains chemically mapped on the eye’s retina for a brief time after viewing.
Combined with the human mind’s need to conceptually complete a perceived action, this makes it possible for a
series of images that are changed very slightly and very rapidly, one after the other, to seemingly blend together
into a visual illusion of movement. The following shows a few cels, or frames, of a rotating logo. When the
images are progressively and rapidly changed, the arrow of the compass is perceived to be spinning. Television
video builds 30 entire frames or pictures every second; the speed with which each frame is replaced by the next
one makes the images appear to blend smoothly into movement. To make an object travel across the screen
while it changes its shape, just change the shape and also move or translate it a few pixels for each frame.
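
A sketch of this idea in Python (all values illustrative): each frame records the object's slightly changed
position, and rapid playback blends the positions into apparent motion.

frames = []
x = 0
for frame_number in range(30):  # one second of frames at 30 frames per second
    x += 4                      # translate the object 4 pixels per frame
    frames.append({"frame": frame_number, "x": x, "y": 100})
# Displayed quickly enough, persistence of vision fuses these discrete
# positions into the illusion of smooth movement across the screen.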

ANIMATION TECHNIQUES
When you create an animation, organize its execution into a series of logical steps. First, gather up in your mind
all the activities you wish to provide in the animation; if it is complicated, you may wish to create a written
script with a list of activities and required objects. Choose the animation tool best suited for the job. Then build
and tweak your sequences; experiment with lighting effects. Allow plenty of time for this phase when you are
experimenting and testing. Finally, post-process your animation, doing any special rendering and adding sound
effects.

Cel Animation
The term cel derives from the clear celluloid sheets that were used for drawing each frame, which have been
replaced today by acetate or plastic. Cels of famous animated cartoons have become sought-after, suitable-for-
framing collector’s items. Cel animation artwork begins with key frames (the first and last frame of an action).
For example, when an animated figure of a man walks across the screen, he balances the weight of his entire
body on one foot and then the other in a series of falls and recoveries, with the opposite foot and leg catching up
to support the body.
The animation techniques made famous by Disney use a series of progressively different drawings on each
frame of movie film, which plays at 24 frames per second. A minute of animation may thus require as many as
1,440 separate frames.

Computer Animation
Computer animation programs typically employ the same logic and procedural concepts as cel animation, using
layer, keyframe, and tweening techniques, and even borrowing from the vocabulary of classic animators. On
the computer, paint is most often filled or drawn with tools using features such as gradients and anti-aliasing.
The word inks, in computer animation terminology, usually means special methods for computing RGB pixel
values, providing edge detection, and layering so that images can blend or otherwise mix their colors to produce
special transparencies, inversions, and effects. The primary difference among animation software programs is in
how much must be drawn by the animator and how much is automatically generated by the software. In 2-D
animation, the animator creates an object and describes a path for the object to follow; the software takes over,
actually creating the animation on the fly as the program is being viewed by the user. In 3-D animation, the
animator puts his effort into creating the models of individual objects and designing the characteristics of their
shapes and surfaces.
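
A minimal sketch of tweening in Python (names illustrative): the animator supplies only the two key frames,
and the software computes the in-between positions.

def tween(start, end, n_frames):
    # Yield n_frames positions stepping evenly from start to end.
    for i in range(n_frames):
        t = i / (n_frames - 1)  # 0.0 at the first key frame, 1.0 at the last
        yield (start[0] + t * (end[0] - start[0]),
               start[1] + t * (end[1] - start[1]))

key1, key2 = (0, 0), (100, 50)  # first and last frames of the action
print(list(tween(key1, key2, 5)))
# [(0.0, 0.0), (25.0, 12.5), (50.0, 25.0), (75.0, 37.5), (100.0, 50.0)]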

Kinematics
It is the study of the movement and motion of structures that have joints, such as a walking man. Inverse
kinematics, available in high-end 3-D programs, is the process by which you link objects such as hands to arms and
define their relationships and limits. Once those relationships are set you can drag these parts around and let the
computer calculate the result.

Morphing
Morphing is a popular effect in which one image transforms into another. Morphing applications and other
modeling tools that offer this effect can perform transitions not only between still images but often between
moving images as well. The morphed images were built at a rate of 8 frames per second, with each transition
taking a total of 4 seconds.
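
True morphing also warps matching feature points as it blends, but its simplest ingredient can be sketched as
a cross-dissolve between two equal-sized images (Python with NumPy; 32 frames corresponds to the 8 frames
per second over 4 seconds mentioned above).

import numpy as np

def cross_dissolve(img_a, img_b, n_frames=32):
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)               # blend factor from 0.0 to 1.0
        blend = (1 - t) * img_a + t * img_b  # weighted average of corresponding pixels
        frames.append(blend.astype(np.uint8))
    return frames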

Animation File Formats


Some file formats are designed specifically to contain animations, and they can be ported among applications
and platforms with the proper translators.
Director *.dir, *.dcr
AnimationPro *.fli, *.flc
3D Studio Max *.max
SuperCard and Director *.pics
CompuServe *.gif
Flash *.fla, *.swf

Following is a list of a few software packages used for computerized animation:


3D Studio Max
Flash
AnimationPro

7.0 VIDEO

ANALOG VERSUS DIGITAL


Digital video has supplanted analog video as the method of choice for making video for multimedia use. While
broadcast stations and professional production and postproduction houses remain greatly invested in analog
video hardware (according to Sony, there are more than 350,000 Betacam SP devices in use today), digital
video gear produces excellent finished products at a fraction of the cost of analog. A digital camcorder directly
connected to a computer workstation eliminates the image-degrading analog-to-digital conversion step typically
performed by expensive video capture cards, and brings the power of nonlinear video editing and production to
everyday users.

Broadcast Video Standards


Four broadcast and video standards and recording formats are commonly in use around the world: NTSC, PAL,
SECAM, and HDTV. Because these standards and formats are not easily interchangeable, it is important to
know where your multimedia project will be used.

PAL
The Phase Alternating Line (PAL) system is used in the United Kingdom, Europe, Australia, and South Africa.
PAL is an integrated method of adding color to a black-and-white television signal that paints 625 lines at a
frame rate of 25 frames per second.

SECAM
The Sequential Color and Memory (SECAM) system is used in France, Russia, and a few other countries.
Although SECAM is a 625-line, 50 Hz system, it differs greatly from both the NTSC and the PAL color
systems in its basic technology and broadcast method.

Shooting and Editing Video


To add full-screen, full-motion video to your multimedia project, you will need to invest in specialized
hardware and software or purchase the services of a professional video production studio. In many cases, a
professional studio will also provide editing tools and post-production capabilities that you cannot duplicate
with your Macintosh or PC.

Video Tips
A useful tool easily implemented in most digital video editing applications is “blue screen,” “Ultimatte,” or
“chroma key” editing. Blue screen is a popular technique for making multimedia titles because expensive sets
are not required. Incredible backgrounds can be generated using 3-D modeling and graphics software, and one
or more actors, vehicles, or other objects can be neatly layered onto that background. Applications such as
VideoShop, Premiere, Final Cut Pro, and iMovie provide this capability.

Recording Formats
S-VHS video. In S-VHS video, color and luminance information are kept on two separate tracks. The result is a
definite improvement in picture quality. This standard is also used in Hi-8. Still, if your ultimate goal is to have
your project accepted by broadcast stations, this would not be the best choice.

Component (YUV)
In the early 1980s, Sony began to experiment with a new portable professional video format based on Betamax.
Panasonic then developed its own standard based on a similar technology, called “MII.” Betacam SP has
become the industry standard for professional video field recording. This format may soon be eclipsed by a new
digital version called “Digital Betacam.”
Digital Video
Full integration of motion video on computers eliminates the analog television form of video from the
multimedia delivery platform. If a video clip is stored as data on a hard disk, CD-ROM, or other mass-storage
device, that clip can be played back on the computer’s monitor without overlay boards, videodisk players, or
second monitors. This playback of digital video is accomplished using software architecture such as QuickTime
or AVI, a multimedia producer or developer; you may need to convert video source material from its still
common analog form (videotape) to a digital form manageable by the end user’s computer system. So an
understanding of analog video and some special hardware must remain in your multimedia toolbox. Analog to
digital conversion of video can be accomplished using the video overlay hardware described above, or it can be
delivered direct to disk using FireWire cables. To repetitively digitize a full-screen color video image every
1/30 second and store it to disk or RAM severely taxes both Macintosh and PC processing capabilities–special
hardware, compression firmware, and massive amounts of digital storage space are required.

Video Compression
To digitize and store a 10-second clip of full-motion video in your computer requires transfer of an enormous
amount of data in a very short amount of time. Reproducing just one frame of component digital video at
24 bits requires almost 1MB of computer data; 30 seconds of video will fill a gigabyte hard disk. Full-size, full-
motion video requires that the computer deliver data at about 30MB per second. This overwhelming
technological bottleneck is overcome using digital video compression schemes or codecs (coders/decoders). A
codec is the algorithm used to compress a video for delivery and then decode it in real-time for fast playback.
Real-time video compression algorithms such as MPEG, P*64, DVI/Indeo, JPEG, Cinepak, Sorenson,
ClearVideo, RealVideo, and VDOwave are available to compress digital video information. Compression
schemes use Discrete Cosine Transform (DCT), an encoding algorithm that quantifies the human eye’s ability
to detect color and image distortion. All of these codecs employ lossy compression algorithms. In addition to
compressing video data, streaming technologies are being implemented to provide reasonable quality low-
bandwidth video on the Web. Microsoft, RealNetworks, VXtreme, VDOnet, Xing, Precept, Cubic, Motorola,
Viva, Vosaic, and Oracle are actively pursuing the commercialization of streaming technology on the Web.
QuickTime, Apple’s software-based architecture for seamlessly integrating sound, animation, text, and video
(data that changes over time), is often thought of as a compression standard, but it is really much more than that.
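
The arithmetic behind these figures can be checked directly (a sketch assuming 640 x 480 frames at 24 bits
per pixel and 30 frames per second):

bytes_per_frame = 640 * 480 * 24 // 8    # 921,600 bytes: almost 1 MB per frame
bytes_per_second = bytes_per_frame * 30  # about 27.6 MB of data every second
thirty_seconds = bytes_per_second * 30   # about 830 MB: roughly a gigabyte disk filled
print(bytes_per_frame, bytes_per_second, thirty_seconds)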

MPEG
The MPEG standard has been developed by the Moving Picture Experts Group, a working group convened by
the International Standards Organization (ISO) and the International Electro-Technical Commission (IEC) to
create standards for digital representation of moving pictures and associated audio and other
data. MPEG-1 and MPEG-2 are the current standards. Using MPEG-1, you can deliver 1.2 Mbps of video and
250 Kbps of two-channel stereo audio using CD-ROM technology. MPEG-2, a completely different system from
MPEG-1, requires higher data rates (3 to 15 Mbps) but delivers higher image resolution, picture quality,
interlaced video formats, multiresolution scalability, and multichannel audio features.

DVI/Indeo
DVI is a proprietary, programmable compression/decompression technology based on the Intel i750 chip set.
This hardware consists of two VLSI (Very Large Scale Integration) chips to separate the image processing and
display functions.

Two levels of compression and decompression are provided by DVI: Production Level Video (PLV) and Real
Time Video (RTV). PLV and RTV both use variable compression rates. DVI’s algorithms can compress video
images at ratios between 80:1 and 160:1. DVI will play back video in full-frame size and in full color at 30
frames per second.
