CG Module3
MODULE-3
Logical Classification of Input Devices
A standard organization for input procedures in a graphics package is to classify the functions
according to the type of data that is to be processed by each function. This scheme allows any
physical device, such as a keyboard or a mouse, to input any data class.
1. Locator Devices
Interactive selection of coordinate points is typically done by positioning a screen cursor within a
displayed scene.
Computer Graphics and Fundamentals of Image Processing (21CS63)
Input methods include a variety of devices: mouse, touchpad, joystick, trackball, spaceball,
thumbwheel, dial, hand cursor, or digitizer stylus.
Selection of coordinates can also be achieved through buttons, keys, or switches indicating
processing options.
Keyboards are often used for cursor control, with dedicated keys for moving the cursor up,
down, left, and right, and additional keys for diagonal movement.
Rapid cursor movement is possible by holding down the cursor control keys.
Some keyboards integrate additional devices like touchpads or joysticks for cursor positioning.
Keyboards can also be used for entering numerical values or codes to specify coordinate
positions.
Specialized devices like light pens have been used for detecting screen positions via light from
phosphors, requiring specific implementation procedures.
2. Stroke Devices
Physical devices such as mice, trackballs, joysticks, and hand cursors serve as both locator and stroke
devices.
Activating "continuous" mode on a tablet generates a stream of coordinate values as the cursor moves.
Commonly used in paintbrush systems for creating drawings with different brush strokes.
3. String Devices
Keyboards are the primary physical device for string input.
Character strings in computer graphics are often used for labeling pictures or graphs.
Other devices can generate character patterns for specific applications.
Characters can be sketched on screen using stroke or locator-type devices.
Pattern recognition programs interpret characters using predefined pattern dictionaries.
4. Valuator Devices
Valuator input sets scalar values for geometric transformations, viewing parameters, and physical
parameters like temperature or voltage.
Control dials are a common physical device for valuator input, calibrated to produce numerical values
within a predefined range.
Rotary potentiometers convert dial rotation into a voltage, which is translated into a scalar value
within a range such as −10.5 to 25.5.
Slide potentiometers convert linear movements into scalar values.
Keyboards with numeric keys can function as valuator devices.
Joysticks, trackballs, tablets, and other devices interpret pressure or movement for valuator input.
Graphical representations such as sliders, buttons, rotating scales, and menus on monitors provide
valuator input.
Cursor positioning with devices like mice or joysticks selects values on these graphical
representations.
5. Choice Devices
Menus in graphics programs select processing options, parameter values, and object shapes
for constructing pictures.
Choice devices for menu selection include cursor-positioning tools like mice, trackballs,
keyboards, touch panels, or button boxes.
Function keys on keyboards or separate button boxes often enter menu selections.
Each button or function key is programmed for specific operations or values, sometimes
including preset options on the input device.
When a screen-cursor position (x, y) is selected, it is compared to the coordinate extents of each
listed menu item. A menu item with vertical and horizontal boundaries at the coordinate values
xmin, xmax, ymin, and ymax is selected if the input coordinates satisfy the inequalities

xmin ≤ x ≤ xmax,    ymin ≤ y ≤ ymax
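This extent test can be sketched in C; the record layout and function name below are illustrative, not from any particular graphics package.

```c
/* Illustrative menu-item record: coordinate extents plus an option ID. */
typedef struct { int xmin, xmax, ymin, ymax, itemID; } MenuItem;

/* Return the ID of the first menu item whose coordinate extents contain
   the selected screen-cursor position (x, y), or -1 if no item is hit. */
int pickMenuItem (const MenuItem *items, int nItems, int x, int y)
{
    int k;
    for (k = 0; k < nItems; k++)
        if (x >= items[k].xmin && x <= items[k].xmax &&
            y >= items[k].ymin && y <= items[k].ymax)
            return items[k].itemID;
    return -1;
}
```

A real menu routine would also account for overlapping or hierarchical menus; this sketch simply returns the first hit.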
For larger menus with relatively few options displayed, a touch panel is commonly used. A
selected screen position is compared to the coordinate extents of the individual menu options to
determine what process is to be performed.
Alternate methods for choice input include keyboard and voice entry.
Standard keyboards allow typing commands or menu options, often using abbreviated formats.
Menu listings can be numbered or labeled with short identifiers for quick selection.
Voice input systems employ a similar encoding scheme, ideal for small sets of options (20 or
fewer).
6. Pick Devices
A pick device is used to select a part of a scene that is to be transformed or edited; the simplest
method compares a cursor-selected pick position with the coordinate extents (bounding
rectangles) of displayed objects. When coordinate-extent tests do not uniquely identify a pick
object, the distances from the pick position to individual line segments could be computed.
Figure 3.1 illustrates a pick position that is within the coordinate extents of two line segments.
For a two-dimensional line segment with pixel endpoint coordinates (x1, y1) and (x2, y2), the
perpendicular distance squared from a pick position (x, y) to the line is calculated as

d² = [Δx (y − y1) − Δy (x − x1)]² / (Δx² + Δy²)

where Δx = x2 − x1 and Δy = y2 − y1.
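The same computation can be written as a small C helper (names are illustrative); working with the squared distance avoids a square-root call when pick candidates are merely being compared.

```c
/* Perpendicular distance squared from pick position (x, y) to the
   infinite line through endpoints (x1, y1) and (x2, y2).
   With dx = x2 - x1 and dy = y2 - y1, the squared distance is
   (dx*(y - y1) - dy*(x - x1))^2 / (dx*dx + dy*dy). */
double distSqToLine (double x, double y,
                     double x1, double y1, double x2, double y2)
{
    double dx = x2 - x1, dy = y2 - y1;
    double cross = dx * (y - y1) - dy * (x - x1);
    return (cross * cross) / (dx * dx + dy * dy);
}
```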
Another Picking technique involves associating a pick window with a cursor position.
The pick window is centered on the cursor, utilizing clipping procedures to find intersecting
objects.
For line picking, pick-window dimensions (w, h) can be set very small to isolate a single line
segment.
This method ensures precise selection of specific elements within a scene.
Figure 3.2: A pick window with center coordinates (xp, yp), width w, and height h.
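One way to perform the pick-window test is a Liang-Barsky style parametric clip of each line segment against the window; the sketch below, with illustrative names, reports whether any part of the segment survives the clip.

```c
/* Test whether the segment (x1,y1)-(x2,y2) intersects a pick window
   centered at (xp, yp) with width w and height h.  Returns 1 if any
   part of the segment lies inside the window, 0 otherwise. */
int segmentInPickWindow (double x1, double y1, double x2, double y2,
                         double xp, double yp, double w, double h)
{
    double xmin = xp - w / 2, xmax = xp + w / 2;
    double ymin = yp - h / 2, ymax = yp + h / 2;
    double dx = x2 - x1, dy = y2 - y1;
    double p[4], q[4], u1 = 0.0, u2 = 1.0;
    int k;

    p[0] = -dx;  q[0] = x1 - xmin;        /* left edge   */
    p[1] =  dx;  q[1] = xmax - x1;        /* right edge  */
    p[2] = -dy;  q[2] = y1 - ymin;        /* bottom edge */
    p[3] =  dy;  q[3] = ymax - y1;        /* top edge    */

    for (k = 0; k < 4; k++) {
        if (p[k] == 0.0) {                /* parallel to this edge */
            if (q[k] < 0.0) return 0;     /* and entirely outside  */
        } else {
            double r = q[k] / p[k];
            if (p[k] < 0.0) { if (r > u2) return 0; if (r > u1) u1 = r; }
            else            { if (r < u1) return 0; if (r < u2) u2 = r; }
        }
    }
    return 1;
}
```

Shrinking w and h narrows the set of segments that pass this test, which is how a single line can be isolated.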
Highlighting facilitates picking by sequentially marking objects overlapping a pick position.
Users can accept or reject highlighted objects using keyboard commands.
Picking can proceed without selecting a specific cursor position by highlighting all scene objects.
Naming picture components enables object selection via keyboard input.
This method is straightforward but less interactive compared to screen-based picking.
Naming components down to individual primitives aids in selection but may require prompts due to
potential complexity.
Input functions in a graphics package can be structured to give control over the following:
1. The input interaction mode for the graphics program and the input devices. Either the
program or the devices can initiate data entry, or both can operate simultaneously.
2. Selection of a physical device that is to provide input within a particular logical
classification (for example, a tablet used as a stroke device).
3. Selection of the input time and device for a particular set of data values.
Input Modes
Some input functions in an interactive graphics system are used to specify how the program and
input devices should interact. A program could request input at a particular time in the
processing (request mode), or an input device could independently provide updated input
(sample mode), or the device could independently store all collected data (event mode).
1) Request Mode
In request mode, the application program initiates data entry: processing is suspended at the
point of the request until the required values are received from an input device.
2) Sample Mode
In sample mode, the application program and input devices operate independently.
Input devices may be operating at the same time that the program is processing other
data.
New values obtained from the input devices replace previously input data values.
When the program requires new data, it samples the current values that have been stored
from the device input.
3) Event Mode
In event mode, the input devices initiate data input to the application program.
The program and the input devices again operate concurrently, but now the input devices
deliver data to an input queue, also called an event queue. All input data is saved.
When the program requires new data, it goes to the data queue.
Typically, any number of devices can be operating at the same time in sample and event modes.
Some can be operating in sample mode, while others are operating in event mode. But only one
device at a time can deliver input in request mode.
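The difference between sample and event mode can be sketched with a minimal event queue in C (all names illustrative): in event mode every record is queued and retrieved in order, whereas sample mode would keep only the most recent value.

```c
#define QSIZE 32

/* One input record: which device produced it and the value delivered. */
typedef struct { int device, value; } InputEvent;

/* Ring-buffer event queue: devices append, the program removes in
   FIFO order, so no input is lost. */
typedef struct { InputEvent q[QSIZE]; int head, tail; } EventQueue;

void initQueue (EventQueue *eq) { eq->head = eq->tail = 0; }

int addEvent (EventQueue *eq, InputEvent e)     /* device side */
{
    int next = (eq->tail + 1) % QSIZE;
    if (next == eq->head) return 0;             /* queue full */
    eq->q[eq->tail] = e;
    eq->tail = next;
    return 1;
}

int getEvent (EventQueue *eq, InputEvent *e)    /* program side */
{
    if (eq->head == eq->tail) return 0;         /* queue empty */
    *e = eq->q[eq->head];
    eq->head = (eq->head + 1) % QSIZE;
    return 1;
}
```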
Echo Feedback
Requests can usually be made in an interactive input program for an echo of input data and
associated parameters. When an echo of the input data is requested, it is displayed within a
specified screen area.
Callback Functions
For a particular input device and input mode, a procedure called a callback function can be
specified; it is invoked, and its input data made available to the program, whenever an event
occurs on that device.
Interactive Picture-Construction Techniques
1) Basic Positioning Methods
A coordinate position selected with the screen cursor can be used in several ways:
It can define an endpoint for a new line segment.
It can position an object (e.g., the center of a sphere).
It can specify a starting point or center for a text string.
Numeric values of selected positions can be echoed on-screen for accuracy.
Users can make precise adjustments using dials, arrow keys, or other interactive devices
based on echoed coordinates.
2) Dragging
Another interactive positioning technique is to select an object and drag it to a new location.
Using a mouse, position the cursor at the object's original position, press and hold a mouse
button.
Move the cursor to a new position.
Release the mouse button to display the object at the new cursor location.
The object typically updates and displays at intermediate positions as the cursor moves.
3) Constraints
Any procedure for altering input coordinate values to obtain a particular orientation or
alignment of an object is called a constraint. For example, an input line segment can be
constrained to be horizontal or vertical, as illustrated in Figures 3.3 and 3.4.
To implement this type of constraint, we compare the input coordinate values at the two
endpoints.
If the difference in the y values of the two endpoints is smaller than the difference in the x
values, a horizontal line is displayed. Otherwise, a vertical line is drawn.
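The horizontal/vertical constraint rule can be sketched as a small C helper (names illustrative):

```c
/* Given two input endpoints, compute the second endpoint of the
   constrained segment: horizontal if the y difference is smaller than
   the x difference, vertical otherwise. */
void constrainEndpoint (int x1, int y1, int x2, int y2,
                        int *xOut, int *yOut)
{
    int dx = x2 - x1, dy = y2 - y1;
    if ((dy < 0 ? -dy : dy) < (dx < 0 ? -dx : dx)) {
        *xOut = x2;  *yOut = y1;   /* horizontal line from (x1, y1) */
    } else {
        *xOut = x1;  *yOut = y2;   /* vertical line from (x1, y1) */
    }
}
```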
Other kinds of constraints can be applied to input coordinates to produce a variety of alignments.
Lines could be constrained to have a particular slant, such as 45°, and input coordinates could be
constrained to lie along predefined paths, such as circular arcs.
4) Grids
Another kind of constraint is a rectangular grid displayed in some part of the screen area.
With an activated grid constraint, input coordinates are rounded to the nearest grid
intersection. Figure 3.5 illustrates line drawing using a grid. Each of the cursor positions
in this example is shifted to the nearest grid intersection point, and a line is drawn
between these two grid positions.
Grids facilitate object constructions, because a new line can be joined easily to a
previously drawn line by selecting any position near the endpoint grid intersection of one
end of the displayed line.
Spacing between grid lines is often an option, and partial grids or grids with different
spacing could be used in different screen areas.
Figure 3.5: Construction of a line segment with endpoints constrained to grid intersection
positions.
5) Rubber-Band Methods
Line segments and other basic shapes can be constructed and positioned using rubber-
band methods that allow the sizes of objects to be interactively stretched or contracted.
Figure 3.6 demonstrates a rubber-band method for interactively specifying a line
segment.
First, a fixed screen position is selected for one endpoint of the line. Then, as the cursor
moves around, the line is displayed from the start position to the current position of the
cursor.
The second endpoint of the line is input when a button or key is pressed.
Using a mouse, a rubber-band line is constructed while pressing a mouse key. When
the mouse key is released, the line display is completed.
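The press-drag-release sequence can be captured in a small state record; a C sketch with illustrative names (in a real program, each update would trigger a redraw of the line from the fixed endpoint to the current cursor position):

```c
/* Minimal rubber-band line state.  A button press fixes the first
   endpoint, cursor motion updates the second, and the release
   finalizes it. */
typedef struct { int active; int x0, y0, x1, y1; } RubberBand;

void beginRubberBand (RubberBand *rb, int x, int y)   /* button press */
{
    rb->active = 1;
    rb->x0 = x;  rb->y0 = y;
    rb->x1 = x;  rb->y1 = y;
}

void updateRubberBand (RubberBand *rb, int x, int y)  /* cursor moves */
{
    if (rb->active) { rb->x1 = x; rb->y1 = y; }
}

void endRubberBand (RubberBand *rb, int x, int y)     /* button release */
{
    updateRubberBand (rb, x, y);
    rb->active = 0;
}
```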
Similar rubber-band methods can be used to construct rectangles, circles, and other objects.
Figure 3.7 demonstrates rubber-band construction of a rectangle, and Figure 3.8 shows a rubber-
band circle construction.
6) Gravity Field
Graphics packages can aid in connecting lines at positions that are not grid intersections by employing a gravity field.
Input positions near a line segment are automatically adjusted to the nearest position on the line.
This process, known as "gravitating," ensures precise alignment without requiring exact cursor
positioning.
The gravity field area around the line facilitates seamless connection of line segments in
figures.
A gravity field area around a line is illustrated with the shaded region shown in Figure 3.9.
Gravity fields around line endpoints are enlarged to facilitate easier connection of lines.
Selected positions within these circular gravity areas are drawn toward the corresponding
endpoint.
Gravity field sizes are chosen to assist positioning without overlapping with other lines.
When many lines are present, gravity areas can overlap, complicating precise point specification.
Typically, the boundary of the gravity field is not visibly displayed on the screen.
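Gravitating an input point can be sketched as a projection onto the segment, clamped to the endpoints and accepted only within the field radius (C, names illustrative):

```c
/* If (x, y) lies within 'radius' of the segment (x1,y1)-(x2,y2),
   shift it to the nearest position on the segment and return 1;
   otherwise leave it unchanged and return 0. */
int gravitate (double *x, double *y, double x1, double y1,
               double x2, double y2, double radius)
{
    double dx = x2 - x1, dy = y2 - y1;
    double t = ((*x - x1) * dx + (*y - y1) * dy) / (dx * dx + dy * dy);
    double px, py, d2;
    if (t < 0.0) t = 0.0;            /* clamp to the endpoints */
    if (t > 1.0) t = 1.0;
    px = x1 + t * dx;
    py = y1 + t * dy;
    d2 = (*x - px) * (*x - px) + (*y - py) * (*y - py);
    if (d2 > radius * radius) return 0;
    *x = px;  *y = py;
    return 1;
}
```

Using a larger radius near t = 0 and t = 1 would model the enlarged endpoint fields described above.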
Figure 3.9: A gravity field around a line. Any selected point in the shaded area is shifted to
a position on the line.
7) Interactive Painting and Drawing Methods
Drawing options include standard curve shapes like circular arcs and splines, as well as freehand
sketching.
Splines are constructed interactively by specifying control points or through freehand sketching.
The system fits a polynomial curve to the set of points provided.
Freehand drawing utilizes a stylus on a graphics tablet or cursor path on a monitor to create
curves.
Designers can adjust curve shapes by manipulating selected points along the curve path.
Painting and drawing packages offer choices like line widths, styles, and other attributes.
Artists’ workstations provide various brush styles, patterns, color combinations, object shapes,
and surface textures.
Some systems adjust line width and brush strokes based on the pressure applied by the artist’s
hand on the stylus.
Virtual-Reality Environments
Virtual reality environments use a data glove for interactive input, allowing users to grasp and
move objects within a displayed scene.
Scenes are viewed through a head-mounted display system, showing computer-generated
imagery in stereographic projection.
Tracking devices monitor the headset and data glove positions and orientations relative to
objects in the scene.
Users navigate and rearrange objects within the virtual environment using the data glove.
Alternatively, stereographic projections on a raster monitor display two views on alternate
refresh cycles.
Stereographic glasses enable viewers to see the 3D scene, with interactive object manipulations
facilitated by a data glove and tracking device.
This method allows for immersive interaction and manipulation of virtual objects in real-time.
Figure 3.10: Using a head-tracking stereo display called the BOOM, together with a
Dataglove, a researcher interactively manipulates exploratory probes in the unsteady flow
around a Harrier jet airplane.
OpenGL Interactive Input-Device Functions
OpenGL programs manage interactive device input through routines in the OpenGL Utility
Toolkit (GLUT).
GLUT interfaces with window systems to handle input from standard devices like mice,
keyboards, tablets, space balls, button boxes, and dials.
Each device is associated with a callback function in GLUT, which executes when an input event
occurs.
These GLUT commands, including callback functions, are integrated into the main procedure
alongside other GLUT statements.
This setup enables OpenGL programs to respond dynamically to user input from various
devices during execution.
The following function is used to specify (“register”) a procedure that is to be called when the mouse
pointer is in a display window and a mouse button is pressed or released:
glutMouseFunc (mouseFcn);
The registered procedure has four arguments:
void mouseFcn (GLint button, GLint action, GLint xMouse, GLint yMouse);
Parameter button is assigned a GLUT symbolic constant that denotes one of the three mouse
buttons; allowable values are GLUT_LEFT_BUTTON, GLUT_MIDDLE_BUTTON, and
GLUT_RIGHT_BUTTON. Parameter action is assigned either GLUT_DOWN or GLUT_UP,
to specify which button action triggers the mouse event, and (xMouse, yMouse) gives the cursor
location in the display window.
By activating a mouse button while the screen cursor is within the display window, we can select
a position for displaying a primitive such as a single point, a line segment, or a fill area.
A callback for processing mouse-motion events is registered with
glutMotionFunc (fcnDoSomething);
This routine invokes fcnDoSomething when the mouse is moved within the display window
with one or more buttons activated. The function that is invoked has two arguments:
void fcnDoSomething (GLint xMouse, GLint yMouse);
where (xMouse, yMouse) is the mouse location in the display window relative to the top-left
corner, when the mouse is moved with a button pressed.
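Because the reported position is measured from the top-left corner while OpenGL screen coordinates are measured from the bottom-left, callbacks usually flip the y value before plotting; a minimal helper (name illustrative):

```c
/* Convert a mouse y value measured from the top of a window of the
   given height to a screen coordinate measured from the bottom. */
int flipY (int yMouse, int winHeight)
{
    return winHeight - yMouse;
}
```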
Some action can be performed when we move the mouse within the display window without
pressing a button:
glutPassiveMotionFunc(fcnDoSomethingElse);
With keyboard input, the following function is used to specify a procedure that
is to be invoked when a key is pressed:
glutKeyboardFunc (keyFcn);
For function keys, arrow keys, and other special-purpose keys, the following command can be used:
glutSpecialFunc (specialKeyFcn);
Usually, tablet activation occurs only when the mouse cursor is in the display
window. A button event for tablet input is recorded with
glutTabletButtonFunc (tabletFcn);
and the arguments for the invoked function are similar to those for a mouse:
void tabletFcn (GLint tabletButton, GLint action, GLint xTablet, GLint yTablet);
We designate a tablet button with an integer identifier such as 1, 2, 3, and so on, and the button
action is specified with either GLUT_UP or GLUT_DOWN. The returned values xTablet and
yTablet are the tablet coordinates. The number of available tablet buttons can be determined
with the following command:
glutDeviceGet (GLUT_NUM_TABLET_BUTTONS);
Motion of a tablet stylus or hand cursor is processed with
glutTabletMotionFunc (tabletMotionFcn);
The returned values xTablet and yTablet give the coordinates on the tablet surface.
Spaceball button input for a display window is registered with
glutSpaceballButtonFunc (spaceballFcn);
Spaceball buttons are identified with the same integer values as a tablet, and parameter action is
assigned either the value GLUT_UP or the value GLUT_DOWN. The number of available
spaceball buttons can be determined with a call to glutDeviceGet using the argument
GLUT_NUM_SPACEBALL_BUTTONS.
Translational motion of a spaceball is processed with
glutSpaceballMotionFunc (spaceballTranlFcn);
The three-dimensional translation distances are passed to the invoked function as, for example:
void spaceballTranlFcn (GLint tx, GLint ty, GLint tz);
Spaceball rotation is registered with
glutSpaceballRotateFunc (spaceballRotFcn);
The three-dimensional rotation angles are then available to the callback function, as follows:
void spaceballRotFcn (GLint thetaX, GLint thetaY, GLint thetaZ);
Input from a button box is registered with
glutButtonBoxFunc (buttonBoxFcn);
and the invoked function has the form
void buttonBoxFcn (GLint button, GLint action);
The buttons are identified with integer values, and the button action is specified as GLUT_UP or
GLUT_DOWN.
Dial rotation is processed with
glutDialsFunc (dialsFcn);
The following callback function is used to identify the dial and obtain the angular amount of
rotation:
void dialsFcn (GLint dial, GLint degreeValue);
Dials are designated with integer values, and the dial rotation is returned as an integer degree
value.
OpenGL Picking Operations
Picking is performed with a sequence of steps: a pick window is defined, a pick buffer is
created, selection mode is activated, and the ID name stack is initialized; then:
o Assign identifiers to objects and reprocess the scene using the revised
view volume. (Pick information is then stored in the pick buffer.)
o Restore the original viewing and geometric-transformation matrix.
o Determine the number of objects that have been picked, and return to
the normal rendering mode.
o Process the pick information.
Each record in the pick buffer contains the following information:
1. The stack position of the object, which is the number of identifiers in the name stack, up
to and including the position of the picked object.
2. The minimum depth of the picked object.
3. The maximum depth of the picked object.
4. The list of the identifiers in the name stack from the first (bottom) identifier to the
identifier for the picked object.
The integer depth values stored in the pick buffer are the original values in the
range from 0 to 1.0, multiplied by 2³² − 1.
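The depth scaling can be sketched as a pair of C helpers (names illustrative; assumes a 32-bit unsigned int, as on common platforms):

```c
/* Depth values in the range 0 to 1.0 are stored in the pick buffer as
   unsigned integers after multiplication by 2^32 - 1 (= 4294967295). */
unsigned int depthToPickValue (double z)
{
    return (unsigned int)(z * 4294967295.0);
}

double pickValueToDepth (unsigned int stored)
{
    return stored / 4294967295.0;
}
```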
The routine call glRenderMode (GL_SELECT) switches to selection mode. A scene is processed
through the viewing pipeline but not stored in the frame buffer. A record of information for each
object that would have been displayed in the normal rendering mode is placed in the pick buffer.
In addition, this
command returns the number of picked objects, which is equal to the number of information
records in the pick buffer. To return to the normal rendering mode (the default), glRenderMode
routine is invoked using the argument GL_RENDER. A third option is the argument
GL_FEEDBACK, which stores object coordinates and other information in a feedback buffer
without displaying the objects.
The following statement is used to activate the integer-ID name stack for the picking operations:
glInitNames ( );
The ID stack is initially empty, and this stack can be used only in selection mode.
To place an unsigned integer value on the stack, the following function can be invoked:
glPushName (ID);
This places the value for parameter ID on the top of the stack and pushes the
previous top name down to the next position in the stack.
The top of the stack can be replaced using
glLoadName (ID);
To eliminate the top of the ID stack, the following command is used:
glPopName ( );
A pick window within a selected viewport is defined using the following
GLU function:
gluPickMatrix (xPick, yPick, widthPick, heightPick, vpArray);
GLUT Menus
A pop-up menu is created, together with the callback that processes its selections, using
glutCreateMenu (menuFcn);
where menuFcn is invoked with the integer identifier of the option selected by the user.
Options are added to the current menu with
glutAddMenuEntry ("First Menu Item", 1);
...
The menu is then attached to a mouse button with a command such as
glutAttachMenu (GLUT_RIGHT_BUTTON). The glutAddSubMenu function is used to add a
submenu to the current menu:
glutAddSubMenu ("Submenu Option", submenuID);
Modifying GLUT Menus
To change the mouse button that is used to select a menu option, first
cancel the current button attachment and then attach the new button. A button
attachment is cancelled for the current menu with
glutDetachMenu (mouseButton);
where parameter mouseButton is assigned the GLUT constant identifying the
button (left, middle, or right) that was previously attached to the menu.
After detaching the menu from the button, glutAttachMenu is used to attach
it to a different button.
Options within an existing menu can also be changed.
For example, an option in the current menu can be deleted with the function
glutRemoveMenuItem (itemNumber);
where parameter itemNumber is assigned the integer value of the menu option
that is to be deleted.
Designing a Graphical User Interface
GUI Components: Include display windows, icons, menus, and more for user interaction.
Specialized Dialogues: Tailored for specific fields to select options using familiar terms.
User Skill Levels: Accommodated with different levels of functionality and modes.
Consistency: Maintained in layout, design elements, and interaction patterns.
Error Handling: Includes validation, informative messages, and prompts.
Feedback: Provides visual, auditory, and textual responses for user actions.
The User Dialogue
For a particular application, the user's model serves as the basis for the design of the dialogue.
It states the type of objects that can be displayed and how the objects can be manipulated. For
example, if the system is to be used as a tool for architectural design, the model describes how
the package can be used to construct and display views of buildings by positioning walls, doors,
windows, and other building components.
A circuit-design program provides electrical or logic symbols and the positioning operations for
adding or deleting elements within a layout. All information in the user dialogue is presented in
the language of the application.
Windows and Icons
Typical GUIs provide visual representations both for the objects that are to be manipulated in an
application and for the actions to be performed on the application objects. In addition to the
standard display-window operations, such as opening, closing, positioning, and resizing, other
operations are needed for working with the sliders, buttons, icons, and menus. Some systems are
capable of supporting multiple window managers so that different window styles can be
accommodated, each with its own window manager, which could be structured for a particular
application. Icons representing objects such as walls, doors, windows, and circuit elements are often
referred to as application icons. The icons representing actions, such as rotate, magnify, scale,
clip, or paste, are called control icons, or command icons.
Accommodating Multiple Skill Levels
Usually, interactive GUIs provide several methods for selecting actions. For example, an option
could be specified by pointing to an icon, accessing a pulldown or pop-up menu, or by typing a
keyboard command. This allows a package to accommodate users that have different skill levels.
A less experienced user may find an interface with a large, comprehensive set of operations to be
difficult to use, so a smaller interface with fewer but more easily understood operations and
detailed prompting may be preferable. A simplified set of menus and options is easy to learn and
remember, and the user can concentrate on the application instead of on the details of the
interface. Simple point-and-click operations are often easiest for an inexperienced user of an
applications package.
Experienced users typically want speed. This means fewer prompts and more input from the
keyboard or with multiple mouse-button clicks. Actions are selected with function keys or with
simultaneous combinations of keyboard keys, because experienced users will remember these
shortcuts for commonly used actions.
Help facilities can be designed on several levels so that beginners can carry on a detailed
dialogue, while more experienced users can reduce or eliminate prompts and messages. Help
facilities can also include one or more tutorial applications, which provide users with an
introduction to the capabilities and use of the system.
Consistency
An important design consideration in an interface is consistency. An icon shape should always
have a single meaning, rather than serving to represent different actions or objects depending on
the context.
Examples of consistency:
Always placing menus in the same relative positions so that a user does not have to hunt
for a particular option.
Always using the same combination of keyboard keys for an action.
Always using the same color encoding so that a color does not have different meanings in
different situations.
Minimizing Memorization
Operations in an interface should also be structured so that they are easy to understand and to
remember. Obscure, complicated, inconsistent, and abbreviated command formats lead to
confusion and reduction in the effective application of the software. One key or button used for
all delete operations, for example, is easier to remember than a number of different keys for
different kinds of delete procedures.
Icons and window systems can also be organized to minimize memorization. Different kinds of
information can be separated into different windows so that a user can identify and select items
easily. Icons should be designed as easily recognizable shapes that are related to application
objects and actions. To select a particular action, a user should be able to select an icon that
resembles that action.
Backup and Error Handling
A mechanism for backing up, or aborting, during a sequence of operations allows a user to
explore the capabilities of the system and to recover from mistakes without severe penalty. Good
error handling also provides diagnostics and informative messages so that the user can determine
the cause of an error.
Feedback
Responding to user actions is another important feature of an interface, particularly for an
inexperienced user. As each action is entered, some response should be given. Otherwise, a user
might begin to wonder what the system is doing and whether the input should be reentered.
Feedback can be given in many forms, such as highlighting an object, displaying an icon or
message, and displaying a selected menu option in a different color.
When the processing of a requested action is lengthy, the display of a flashing message,
clock, hourglass, or other progress indicator is important. It may also be possible for the system
to display partial results as they are completed, so that the final display is built up a piece at a
time.
Standard symbol designs are used for typical kinds of feedback. A cross, a frowning face, or a
thumbs-down symbol is often used to indicate an error, and some kind of time symbol or a blinking “at
work” sign is used to indicate that an action is being processed.
This type of feedback can be very effective with a more experienced user, but the beginner
may need more detailed feedback that not only clearly indicates what the system is doing but
also what the user should input next.
Clarity is another important feature of feedback. A response should be easily understood,
but not so overpowering that the user’s concentration is interrupted. With function keys, feedback
can be given as an audible click or by lighting up the key that has been pressed. Audio feedback
has the advantage that it does not use up screen space, and it does not divert the user’s attention
from the work area. A fixed message area can be used so that a user always knows where to
look for messages, but it may be advantageous in some cases to place feedback messages in the
work area near the cursor.
Echo feedback is often useful, particularly for keyboard input, so that errors can be detected
quickly. Selection of coordinate points can be echoed with a cursor or other symbol that appears
at the selected position.
Computer Animation
Computer animation generally refers to any time sequence of visual changes in a picture. A
typical animation sequence is designed with the following steps:
1. Storyboard layout
2. Object definitions
3. Key-frame specifications
4. Generation of in-between frames
1. Storyboard Layout
The storyboard is an outline of the action. It defines the motion sequence as a set of basic events
that are to take place. Depending on the type of animation to be produced, the storyboard could
consist of a set of rough sketches, along with a brief description of the movements, or it could
just be a list of the basic ideas for the action. Originally, the set of motion sketches was attached
to a large board that was used to present an overall view of the animation project. Hence, the
name “storyboard.”
2. Object Definitions
An object definition is given for each participant in the action. Objects can be defined in terms
of basic shapes, such as polygons or spline surfaces. In addition, a description is often given of
the movements that are to be performed by each character or object in the story.
3. Key-Frame Specifications
A key frame is a detailed drawing of the scene at a certain time in the animation sequence.
Within each key frame, each object (or character) is positioned according to the time for that
frame. Some key frames are chosen at extreme positions in the action; others are spaced so that
the time interval between key frames is not too great. More key frames are specified for intricate
motions than for simple, slowly varying motions. Development of the key frames is generally the
responsibility of the senior animators, and often a separate animator is assigned to each character
in the animation.
4. Generation of In-Between Frames
In-betweens are the intermediate frames between the key frames. The total number of frames,
and hence the total number of in-betweens, needed for an animation is determined by the display
media that is to be used. Film requires 24 frames per second, and graphics terminals are
refreshed at the rate of 60 or more frames per second. Typically, time intervals for the motion are
set up so that there are from three to five in-betweens for each pair of key frames. Depending on
the speed specified for the motion, some key frames could be duplicated.
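The simplest in-between generation is linear interpolation of each animated parameter between its key-frame values; a C sketch (names illustrative):

```c
/* Linear interpolation of one coordinate between two key frames.
   frame = 0 gives the first key frame; frame = nInbetweens + 1 gives
   the second; intermediate frames are evenly spaced in between. */
double inbetween (double keyA, double keyB, int frame, int nInbetweens)
{
    double t = (double) frame / (double)(nInbetweens + 1);
    return keyA + t * (keyB - keyA);
}
```

Production systems use easing curves or spline interpolation rather than straight linear interpolation, but the framing is the same.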
There are several other tasks that may be required, depending on the application. These
additional tasks include motion verification, editing, and the production and synchronization
of a soundtrack. Many of the functions needed to produce general animations are now
computer-generated.
Figure 3.11: One frame from the award-winning computer-animated short film Luxo Jr.
The film was designed using a key-frame animation system and cartoon animation
techniques to provide lifelike actions of the lamps. Final images were rendered with
multiple light sources and procedural texturing techniques.
Figure 3.12: One frame from the short film Tin Toy, the first computer-animated film to
win an Oscar. Designed using a key-frame animation system, the film also required
extensive facial-expression modeling. Final images were rendered using procedural
shading, self-shadowing techniques, motion blur, and texture mapping.
One of the most important techniques for simulating acceleration effects, particularly for
nonrigid objects, is squash and stretch.
Figure 3.13 shows how the squash-and-stretch technique is used to emphasize the acceleration and
deceleration of a bouncing ball. As the ball accelerates, it begins to stretch. When the ball hits the
floor and stops, it is first compressed (squashed) and then stretched again as it accelerates and
bounces upwards.
Figure 3.13: A bouncing-ball illustration of the “squash and stretch” technique for
emphasizing object acceleration.
Another technique used by film animators is timing. Timing refers to the spacing between
motion frames. A slower moving object is represented with more closely spaced frames, and a
faster moving object is displayed with fewer frames over the path of the motion. This effect is
illustrated in Figure 3.14, where the position changes between frames increase as a bouncing ball
moves faster.
Figure 3.14: The position changes between motion frames for a bouncing ball increase as
the speed of the ball increases.
Object movements can also be emphasized by creating preliminary actions that indicate an
anticipation of a coming motion. For example, a cartoon character might lean forward and rotate
its body before starting to run; or a character might perform a “windup” before throwing a ball.
Follow-through actions can be used to emphasize a previous motion. After throwing a ball, a
character can continue the arm swing back to its body; or a hat can fly off a character that is
stopped abruptly. An action also can be emphasized with staging. Staging refers to any method
for focusing on an important part of a scene, such as a character hiding something.
Some animation packages, such as Wavefront, provide special functions for both the overall
animation design and the processing of individual objects. Others are special-purpose packages
for particular features of an animation, such as a system for generating in-between frames or a
system for figure animation.
A set of routines is often provided in a general animation package for storing and managing the
object database. Object shapes and associated parameters are stored and updated in the database.
Other object functions include those for generating the object motion and those for rendering the
object surfaces. Movements can be generated according to specified constraints using
two-dimensional or three-dimensional transformations. Standard functions can then be applied to
identify visible surfaces and apply the rendering algorithms.
Another typical function set simulates camera movements. Standard camera motions are
zooming, panning, and tilting. Finally, given the specification for the key frames, the in-
betweens can be generated automatically.
Computer-Animation Languages
Routines to design and control animation sequences can be developed within a general-purpose
programming language, such as C, C++, Lisp, or Fortran, but several specialized animation
languages have also been developed. A typical animation language provides:
A graphics editor
A key-frame generator
An in-between generator
Standard graphics routines
The graphics editor allows an animator to design and modify object shapes, using spline
surfaces, constructive solid geometry methods, or other representation schemes.
Scene description is another standard function: it covers positioning the objects and light
sources, defining the photometric parameters (light-source intensities and surface illumination
properties), and setting the camera parameters (position, orientation, and lens characteristics).
Another standard function is action specification. Action specification involves the layout of
motion paths for the objects and camera. Usual graphics routines are needed for viewing and
perspective transformations, geometric transformations to generate object movements as a
function of accelerations or kinematic path specifications, visible-surface identification, and the
surface-rendering operations.
Key-frame systems were originally designed as a separate set of animation routines for
generating the in-betweens from the user-specified key frames. Now, these routines are often a
component in a more general animation package. In the simplest case, each object in a scene is
defined as a set of rigid bodies connected at the joints and with a limited number of degrees of
freedom.
Example:
The single-armed robot in Figure 3.15 has 6 degrees of freedom, which are referred to as arm
sweep, shoulder swivel, elbow extension, pitch, yaw, and roll. The number of degrees of
freedom for this robot arm can be extended to 9 by allowing three-dimensional translations for
the base (Figure 3.16). If base rotations are allowed, the robot arm can have a total of 12 degrees
of freedom. The human body, in comparison, has more than 200 degrees of freedom.
Figure 3.16: Translational and rotational degrees of freedom for the base of the robot arm
Parameterized systems allow object motion characteristics to be specified as part of the object
definitions. The adjustable parameters control such object characteristics as degrees of freedom,
motion limitations, and allowable shape changes.
Scripting systems allow object specifications and animation sequences to be defined with a
user-input script. From the script, a library of various objects and motions can be constructed.
Character Animation
Animation of simple objects is relatively straightforward. It becomes much more difficult to
create realistic animation of more complex figures such as humans or animals. Consider the
animation of walking or running human (or humanoid) characters. Based on their own everyday
observations of people walking or running, viewers will expect to see animated characters
move in particular ways. If an animated character’s movement doesn’t match this expectation,
the believability of the character may suffer. Thus, much of the work involved in character
animation is focused on creating believable movements.
The connecting points, or hinges, for an articulated figure are placed at the shoulders, hips,
knees, and other skeletal joints, which travel along specified motion paths as the body moves.
For example, when a motion is specified for an object, the shoulder automatically moves in a
certain way and, as the shoulder moves, the arms move. Different types of movement, such as
walking, running, or jumping, are defined and associated with particular motions for the joints
and connecting links.
Figure 3.17: A simple articulated figure with nine joints and twelve connecting links, not
counting the oval head
A series of walking leg motions, for instance, might be defined as in Figure 3.18. The hip joint is
translated forward along a horizontal line, while the connecting links perform a series of
movements about the hip, knee, and ankle joints. Starting with a straight leg [Figure 3.18(a)], the
first motion is a knee bend as the hip moves forward [Figure 3.18(b)]. Then the leg swings
forward, returns to the vertical position, and swings back, as shown in Figures 3.18(c), (d), and
(e). The final motions are a wide swing back and a return to the straight vertical position, as in
Figures 3.18(f) and (g). This motion cycle is repeated for the duration of the animation as the
figure moves over a specified distance or time interval.
Figure 3.18: Possible motions for a set of connected links representing a walking leg.
As a figure moves, other movements are incorporated into the various joints. A sinusoidal
motion, often with varying amplitude, can be applied to the hips so that they move about on the
torso. Similarly, a rolling or rocking motion can be imparted to the shoulders, and the head can
bob up and down.
Motion Capture
In motion capture, the movements of a live actor are digitally recorded and used to drive an
animated character. The technique can be used when the movement of the character is
predetermined (as in a scripted scene). The animated character will perform the same series of
movements as the live actor.
The classic motion capture technique involves placing a set of markers at strategic positions on
the actor’s body, such as the arms, legs, hands, feet, and joints. It is possible to place the markers
directly on the actor, but more commonly they are affixed to a special skintight body suit worn
by the actor. The actor is then filmed performing the scene. Image processing techniques are
then used to identify the positions of the markers in each frame of the film, and their positions
are translated to coordinates. These coordinates are used to determine the positioning of the body
of the animated character. The movement of each marker from frame to frame in the film is
tracked and used to control the corresponding movement of the animated character.
To accurately determine the positions of the markers, the scene must be filmed by multiple
cameras placed at fixed positions. The digitized marker data from each recording can then be
used to triangulate the position of each marker in three dimensions. Typical motion capture
systems will use up to two dozen cameras.
Optical motion capture systems rely on the reflection of light from a marker into the camera.
These can be relatively simple passive systems using photoreflective markers that reflect
illumination from special lights placed near the cameras, or more advanced active systems in
which the markers are powered and emit light.
Non-optical systems rely on the direct transmission of position information from the markers to a
recording device. Some non-optical systems use inertial sensors that provide gyroscope-based
position and orientation information.
Some motion capture systems record more than just the gross movements of the parts of the
actor’s body. It is possible to record even the actor’s facial movements. Often called
performance capture systems, these typically use a camera trained on the actor’s face and small
light-emitting diode (LED) lights that illuminate the face. Small photoreflective markers attached
to the face reflect the light from the LEDs and allow the camera to capture the small movements
of the muscles of the face, which can then be used to create realistic facial animation
on a computer-generated character.
Periodic Motions
When animation is constructed with repeated motion patterns, such as a rotating object, the
motion should be sampled frequently enough to represent the movements correctly. The motion
must be synchronized with the frame-generation rate so that enough frames are displayed per
cycle to show the true motion. Otherwise, the animation may be displayed incorrectly. A typical
example of an undersampled periodic-motion display is the wagon wheel in a Western movie
that appears to be turning in the wrong direction. Figure 3.19 illustrates one complete cycle in the
rotation of a wagon wheel with one red spoke that makes 18 clockwise revolutions per second. If
this motion is recorded on film at the standard motion-picture projection rate of 24 frames per
second, then the first five frames depicting this motion would be as shown in Figure 3.20.
Because the wheel completes 3/4 of a turn every 1/24 of a second, the red spoke advances 3/4 of
a revolution between successive frames, which is indistinguishable from a 1/4-revolution
movement in the reverse direction; the wheel thus appears to be rotating in the opposite
(counterclockwise) direction.
Figure 3.19: Five positions for a red spoke during one cycle of a wheel motion that is turning
at the rate of 18 revolutions per second.
Figure 3.20: The first five film frames of the rotating wheel in Figure 3.19, produced at the
rate of 24 frames per second
OpenGL Animation Procedures
Double-buffering operations, if available, are activated using the following GLUT command:
glutInitDisplayMode (GLUT_DOUBLE);
This provides two buffers, called the front buffer and the back buffer, that we can use
alternately to refresh the screen display. While one buffer is acting as the refresh buffer for the
current display window, the next frame of an animation can be constructed in the other buffer.
We specify when the roles of the two buffers are to be interchanged using
glutSwapBuffers ( );
To determine whether double-buffering operations are available on a system, we can issue the
following query:
glGetBooleanv (GL_DOUBLEBUFFER, status);
A value of GL_TRUE is returned in array parameter status if both front and back
buffers are available on a system. Otherwise, the returned value is GL_FALSE.
An animation can be controlled by specifying an idle-callback function:
glutIdleFunc (animationFcn);
where parameter animationFcn can be assigned the name of a procedure that is to perform the
operations for incrementing the animation parameters. This procedure is executed continuously
whenever there are no display-window events that must be processed. To disable the idle
function, we set its argument to the value NULL or the value 0.
Question Bank
1. Explain in detail about logical classification of input devices.
2. Explain request mode, sample mode and event mode.
3. Explain in detail about interactive picture construction techniques.
4. Write a note on virtual reality environment.
5. Explain different OpenGL interactive Input-Device functions.
6. Explain OpenGL menu functions in detail.
7. Explain about designing a graphical user interface.
8. Write a note on OpenGL Animation Procedures.
9. Explain character animation in detail.
10. Write a note on computer animation languages.
11. Explain briefly about general computer animation functions.
12. Explain in detail about traditional animation techniques.
13. Explain in detail about different stages involved in design of animation sequences.
14. Write a note on periodic motion.