Computer Graphics - III CSE & IT - Unit IV: 2-D Viewing
views of a picture on an output device. Typically, a graphics package allows a user to specify which part of a defined picture is to be displayed and where that part is to be placed on the display device. Any convenient Cartesian coordinate system, referred to as the world-coordinate reference frame, can be used to define the picture. A user can select a single area for display, or several areas could be selected for simultaneous display or for an animated panning sequence across a scene. The picture parts within the selected areas are then mapped onto specified areas of the device coordinates.

THE VIEWING PIPELINE:

A world-coordinate area selected for display is called a window. An area on a display device to which a window is mapped is called a viewport. The window defines what is to be viewed; the viewport defines where it is to be displayed. Often, windows and viewports are rectangles in standard position, with the rectangle edges parallel to the coordinate axes. In general, the mapping of a part of a world-coordinate scene to device coordinates is referred to as a viewing transformation. Sometimes the two-dimensional viewing transformation is simply referred to as the window-to-viewport transformation or the windowing transformation.
Figure illustrates the mapping of a picture section that falls within a rectangular window onto a designated rectangular viewport. We will only use the term window to refer to an area of a world-coordinate scene that has been selected for display. We carry out the viewing transformation in several steps, as indicated in Fig.
First, we construct the scene in world coordinates using the output primitives and attributes. Next, to obtain a particular orientation for the window, we can set up a two-dimensional viewing-coordinate system in the world-coordinate plane, and define a window in the viewing-coordinate system. Once the viewing reference frame is established, we can transform descriptions in world coordinates to viewing coordinates. We then define a viewport in normalized coordinates (in the range from 0 to 1) and map the viewing-coordinate description of the scene to normalized coordinates. At the final step, all parts of the picture that are outside the viewport are clipped, and the contents of the viewport are transferred to device coordinates. Following Figure illustrates a rotated viewing-coordinate reference frame and the mapping to normalized coordinates.
By changing the position of the viewport, we can view objects at different positions on the display area of an output device. Also, by varying the size of viewports, we can change the size and proportions of displayed objects. We achieve zooming effects by successively mapping different-sized windows onto a fixed-size viewport. As the windows are made smaller, we zoom in on some part of a scene to view details that are not shown with larger windows. Similarly, more overview is obtained by zooming out from a section of a scene with successively larger windows. When all coordinate transformations are completed, viewport clipping can be performed in normalized coordinates or in device coordinates.

VIEWING COORDINATE REFERENCE FRAME:

This coordinate system provides the reference frame for specifying the world-coordinate window. First, a viewing-coordinate origin is selected at some world position: P0 = (x0, y0). Then we need to establish the orientation, or rotation, of this reference frame. One way to do this is to specify a world vector V that defines the viewing yv direction. Vector V is called the view up vector. Given V, we can calculate the components of unit vectors v = (vx, vy) and u = (ux, uy) for the viewing yv and xv axes, respectively.
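Given the view up vector V, the two unit axis vectors can be computed by normalizing V and rotating the result. A minimal Python sketch (the function name and the right-handed convention for u are my own choices, not from the text):

```python
import math

def viewing_axes(vx, vy):
    """Given a world-space view up vector V = (vx, vy), return unit
    vectors u (viewing xv axis) and v (viewing yv axis).
    u is obtained by rotating v clockwise 90 degrees so that (u, v)
    form a right-handed 2D frame."""
    mag = math.hypot(vx, vy)
    v = (vx / mag, vy / mag)   # unit vector along yv
    u = (v[1], -v[0])          # unit vector along xv
    return u, v

# A view up vector along world y gives back the standard axes:
# viewing_axes(0.0, 2.0) -> u = (1.0, 0.0), v = (0.0, 1.0)
```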
We obtain the matrix for converting world-coordinate positions to viewing coordinates as a two-step composite transformation: first, we translate the viewing origin to the world origin, and then we rotate to align the two coordinate reference frames. The composite two-dimensional transformation to convert world coordinates to viewing coordinates is

MWC,VC = R · T

where T is the translation matrix that takes the viewing origin point P0 to the world origin, and R is the rotation matrix.

WINDOW-TO-VIEWPORT COORDINATE TRANSFORMATION:

Once object descriptions have been transferred to the viewing reference frame, we choose the window extents in viewing coordinates and select the viewport limits in normalized coordinates. Object descriptions are then transferred to normalized device coordinates. We do this using a transformation that maintains the same relative placement of objects in normalized space as they had in viewing coordinates. If a coordinate position is at the center of the viewing window, for instance, it will be displayed at the center of the viewport. Figure illustrates the window-to-viewport mapping. A point at position (xw, yw) in the window is mapped into position (xv, yv) in the associated viewport.
To maintain the same relative placement in the viewport as in the window, we require that

(xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin)
(yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (ywmax - ywmin)

Solving these expressions for the viewport position (xv, yv), we have

xv = xvmin + (xw - xwmin) · sx
yv = yvmin + (yw - ywmin) · sy

where the scaling factors are

sx = (xvmax - xvmin) / (xwmax - xwmin)
sy = (yvmax - yvmin) / (ywmax - ywmin)
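The window-to-viewport mapping can be written directly from these relations. A Python sketch (function and parameter names are illustrative):

```python
def window_to_viewport(xw, yw, win, vp):
    """Map point (xw, yw) from window win = (xwmin, xwmax, ywmin, ywmax)
    to viewport vp = (xvmin, xvmax, yvmin, yvmax), preserving the
    point's relative placement."""
    xwmin, xwmax, ywmin, ywmax = win
    xvmin, xvmax, yvmin, yvmax = vp
    sx = (xvmax - xvmin) / (xwmax - xwmin)   # horizontal scaling factor
    sy = (yvmax - yvmin) / (ywmax - ywmin)   # vertical scaling factor
    xv = xvmin + (xw - xwmin) * sx
    yv = yvmin + (yw - ywmin) * sy
    return xv, yv

# The window center maps to the viewport center:
# window_to_viewport(5, 5, (0, 10, 0, 10), (0, 100, 0, 100)) -> (50.0, 50.0)
```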
This conversion is performed with the following sequence of transformations:

1. Perform a scaling transformation using a fixed-point position of (xwmin, ywmin) that scales the window area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport.

Character strings can be handled in two ways when they are mapped to a viewport. The simplest mapping maintains a constant character size, even though the viewport area may be enlarged or reduced relative to the window. This method would be employed when text is formed with standard character fonts that cannot be changed. In systems that allow for changes in character size, string definitions can be windowed the same as other primitives.

Workstation Transformation:

Any number of output devices can be open in a particular application, and a separate window-to-viewport transformation can be performed for each open output device. This workstation transformation is accomplished by selecting a window area in normalized space and a viewport area in the coordinates of the display device. With the workstation transformation, we gain some additional control over the positioning of parts of a scene on individual output devices.
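Steps 1 and 2 of the window-to-viewport conversion can be combined into a single composite matrix. A Python sketch, assuming 3x3 row-major homogeneous matrices (names are mine):

```python
def window_to_viewport_matrix(win, vp):
    """Build the composite window-to-viewport matrix: scale about the
    fixed point (xwmin, ywmin) by (sx, sy), then translate that corner
    to (xvmin, yvmin).  Returns a 3x3 row-major homogeneous matrix."""
    xwmin, xwmax, ywmin, ywmax = win
    xvmin, xvmax, yvmin, yvmax = vp
    sx = (xvmax - xvmin) / (xwmax - xwmin)
    sy = (yvmax - yvmin) / (ywmax - ywmin)
    # Combined effect on a point (x, y):
    #   x -> xvmin + (x - xwmin) * sx
    #   y -> yvmin + (y - ywmin) * sy
    return [[sx, 0.0, xvmin - xwmin * sx],
            [0.0, sy, yvmin - ywmin * sy],
            [0.0, 0.0, 1.0]]
```

Applying the matrix to a homogeneous point (x, y, 1) reproduces the mapping equations derived earlier.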
TWO-DIMENSIONAL VIEWING FUNCTIONS:

We define a viewing reference system in an application program with the following function:

evaluateViewOrientationMatrix(x0, y0, xV, yV, error, viewMatrix)

where parameters x0 and y0 are the coordinates of the viewing origin, and parameters xV and yV are the world-coordinate positions for the view up vector. An integer error code is generated if the input parameters are in error; otherwise, viewMatrix for the world-to-viewing transformation is calculated.

To set up the elements of a window-to-viewport mapping matrix, we invoke the function

evaluateViewMappingMatrix(xwmin, xwmax, ywmin, ywmax, xvmin, xvmax, yvmin, yvmax, error, viewMappingMatrix)

Here, the window limits in viewing coordinates are chosen with parameters xwmin, xwmax, ywmin, ywmax, and the viewport limits are set with the normalized coordinate positions xvmin, xvmax, yvmin, yvmax.

Next, we can store combinations of viewing and window-viewport mappings for various workstations in a viewing table with

setViewRepresentation(ws, viewIndex, viewMatrix, viewMappingMatrix, xclipmin, xclipmax, yclipmin, yclipmax, clipxy)

where parameter ws designates the output device (workstation), and parameter viewIndex sets an integer identifier for this particular window-viewport pair. The matrices viewMatrix and viewMappingMatrix can be concatenated and referenced by viewIndex. Additional clipping limits can also be specified here, but they are usually set to coincide with the viewport boundaries. Parameter clipxy is assigned either the value noclip or the value clip; noclip is used when we know that all of the scene is included within the viewport limits. The function setViewIndex(viewIndex) selects a particular set of viewing options from the viewing table.
We apply a workstation transformation by selecting a workstation window-viewport pair:

setWorkstationWindow(ws, xwswindmin, xwswindmax, ywswindmin, ywswindmax)
setWorkstationViewport(ws, xwsvportmin, xwsvportmax, ywsvportmin, ywsvportmax)

where parameter ws gives the workstation number. Window-coordinate extents are specified in the range from 0 to 1 (normalized space), and viewport limits are in integer device coordinates.
CLIPPING OPERATIONS:
The procedure that identifies those portions of a picture that are either inside or outside of a specified region of space is referred to as a clipping algorithm, or simply clipping. The region against which an object is to be clipped is called a clip window. For the viewing transformation, we want to display only those picture parts that are within the window area (assuming that the clipping flags have not been set to noclip). Everything outside the window is discarded. Clipping algorithms can be applied in world coordinates, so that only the contents of the window interior are mapped to device coordinates. In the following sections, we consider algorithms for clipping the following primitive types:

* Point Clipping
* Line Clipping (straight-line segments)
* Area Clipping (polygons)
* Text Clipping

POINT CLIPPING:

Assuming that the clip window is a rectangle in standard position, we save a point P = (x, y) for display if the following inequalities are satisfied:

xwmin ≤ x ≤ xwmax
ywmin ≤ y ≤ ywmax

where the edges of the clip window (xwmin, xwmax, ywmin, ywmax) can be either the world-coordinate window boundaries or viewport boundaries. If any one of these four inequalities is not satisfied, the point is clipped (not saved for display). For example, point clipping can be applied to scenes involving explosions or sea foam that are modeled with particles (points) distributed in some region of the scene.

LINE CLIPPING:

Figure illustrates possible relationships between line positions and a standard rectangular clipping region. A line-clipping procedure involves several parts. First, we can test a given line segment to determine whether it lies completely inside the clipping window. If it does not, we try to determine whether it lies completely outside the window. Finally, if we cannot identify a line as completely inside or completely outside, we must perform intersection calculations with one or more clipping boundaries.
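The point-clipping test reduces to four comparisons. A minimal Python sketch (names are illustrative):

```python
def clip_point(x, y, xwmin, xwmax, ywmin, ywmax):
    """Return True if point (x, y) lies inside the clip window,
    i.e. all four inequalities are satisfied; False if it is clipped."""
    return xwmin <= x <= xwmax and ywmin <= y <= ywmax

# clip_point(2, 3, 0, 10, 0, 10)  -> True  (saved for display)
# clip_point(-1, 3, 0, 10, 0, 10) -> False (clipped)
```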
A line with both endpoints inside all clipping boundaries, such as the line from P1 to P2, is saved. A line with both endpoints outside any one of the clip boundaries (line P3P4 in Fig.) is outside the window. All other lines cross one or more clipping boundaries, and may require calculation of multiple intersection points.
For a line segment with endpoints (x1, y1) and (x2, y2), with one or both endpoints outside the clipping rectangle, the parametric representation is

x = x1 + u (x2 - x1)
y = y1 + u (y2 - y1) ,  0 ≤ u ≤ 1

If the value of u for an intersection with a rectangle boundary edge is outside the range 0 to 1, the line does not enter the interior of the window at that boundary. If the value of u is within the range from 0 to 1, the line segment does indeed cross into the clipping area.

COHEN-SUTHERLAND LINE CLIPPING:

This is one of the oldest and most popular line-clipping procedures. Every line endpoint in a picture is assigned a four-digit binary code, called a region code, that identifies the location of the point relative to the boundaries of the clipping rectangle. Regions are set up in reference to the boundaries as shown in Fig.
Each bit position in the region code is used to indicate one of the four relative coordinate positions of the point with respect to the clip window: to the left, right, top, or bottom. Numbering the bit positions in the region code as 1 through 4 from right to left, the coordinate regions are correlated with the bit positions as

bit 1: left
bit 2: right
bit 3: below
bit 4: above
A value of 1 in any bit position indicates that the point is in that relative position; otherwise, the bit position is set to 0. If a point is within the clipping rectangle, the region code is 0000. A point that is below and to the left of the rectangle has a region code of 0101. Bit values in the region code are determined by comparing endpoint coordinate values (x, y) to the clip boundaries:

Bit 1 is set to 1 if x < xwmin
Bit 2 is set to 1 if x > xwmax
Bit 3 is set to 1 if y < ywmin
Bit 4 is set to 1 if y > ywmax

Region-code bit values can be determined with the following two steps:

1. Calculate differences between endpoint coordinates and clipping boundaries.
2. Use the resultant sign bit of each difference calculation to set the corresponding value in the region code.

Bit 1 is the sign bit of x - xwmin, bit 2 is the sign bit of xwmax - x, bit 3 is the sign bit of y - ywmin, and bit 4 is the sign bit of ywmax - y.

Once we have established region codes for all line endpoints, we can quickly determine which lines are completely inside the clip window and which are clearly outside. Any line that is completely contained within the window boundaries has a region code of 0000 for both endpoints, and we trivially accept it. Any line that has a 1 in the same bit position in the region codes for both endpoints is completely outside the clipping rectangle, and we trivially reject it. For example, we would discard a line that has a region code of 1001 for one endpoint and a code of 0101 for the other endpoint: both endpoints of this line are left of the clipping rectangle, as indicated by the 1 in the first bit position of each region code. A convenient way to test lines for total rejection is to perform the logical AND operation on the two region codes.
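Putting the region codes, the trivial accept/reject tests, and the boundary intersections together, a complete Cohen-Sutherland clipper might look like the following Python sketch (names and structure are my own; the algorithm is the one described in the text):

```python
# Region-code bits, numbered 1-4 from right to left as in the text.
LEFT, RIGHT, BELOW, ABOVE = 1, 2, 4, 8

def region_code(x, y, xwmin, xwmax, ywmin, ywmax):
    """Compute the four-bit region code of point (x, y)."""
    code = 0
    if x < xwmin: code |= LEFT
    if x > xwmax: code |= RIGHT
    if y < ywmin: code |= BELOW
    if y > ywmax: code |= ABOVE
    return code

def cohen_sutherland(x1, y1, x2, y2, xwmin, xwmax, ywmin, ywmax):
    """Return the clipped segment as (x1, y1, x2, y2),
    or None if the line is completely rejected."""
    c1 = region_code(x1, y1, xwmin, xwmax, ywmin, ywmax)
    c2 = region_code(x2, y2, xwmin, xwmax, ywmin, ywmax)
    while True:
        if c1 == 0 and c2 == 0:      # trivial accept: both inside
            return (x1, y1, x2, y2)
        if c1 & c2:                  # trivial reject: shared outside bit
            return None
        # Pick an endpoint that is outside and move it to the boundary.
        c = c1 or c2
        if c & LEFT:
            x, y = xwmin, y1 + (y2 - y1) * (xwmin - x1) / (x2 - x1)
        elif c & RIGHT:
            x, y = xwmax, y1 + (y2 - y1) * (xwmax - x1) / (x2 - x1)
        elif c & BELOW:
            x, y = x1 + (x2 - x1) * (ywmin - y1) / (y2 - y1), ywmin
        else:                        # ABOVE
            x, y = x1 + (x2 - x1) * (ywmax - y1) / (y2 - y1), ywmax
        if c == c1:
            x1, y1 = x, y
            c1 = region_code(x1, y1, xwmin, xwmax, ywmin, ywmax)
        else:
            x2, y2 = x, y
            c2 = region_code(x2, y2, xwmin, xwmax, ywmin, ywmax)
```

Each pass through the loop clips one endpoint against one boundary and recomputes its region code, so the loop terminates after at most four intersection calculations.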
If the result is not 0000, the line is completely outside the clipping region.

To illustrate the specific steps in clipping lines against rectangular boundaries using the Cohen-Sutherland algorithm, we show how the lines in Fig. could be processed. Starting with the bottom endpoint of the line from P1 to P2, we check P1 against the left, right, and bottom boundaries in turn and find that this point is below the clipping rectangle. We then find the intersection point P1' with the bottom boundary and discard the line section from P1 to P1'. The line has now been reduced to the section from P1' to P2. Since P2 is outside the clip window, we check this endpoint against the boundaries and find that it is to the left of the window. Intersection point P2' is calculated, but this point is above the window. So the final intersection calculation yields P2'', and the line from P1' to P2'' is saved. Point P3 in the next line is to the left of the clipping rectangle, so we determine the intersection P3' and eliminate the line section from P3 to P3'. By checking region codes for the line section from P3' to P4, we find that the remainder of the line is below the clip window and can be discarded as well.

For a line with endpoint coordinates (x1, y1) and (x2, y2), the y coordinate of the intersection point with a vertical boundary can be obtained with the calculation

y = y1 + m (x - x1)

where the x value is set either to xwmin or to xwmax, and the slope of the line is calculated as

m = (y2 - y1) / (x2 - x1)

Similarly, if we are looking for the intersection with a horizontal boundary, the x coordinate can be calculated as

x = x1 + (y - y1) / m

with y set either to ywmin or to ywmax.

CYRUS-BECK LINE CLIPPING ALGORITHM:

The Cyrus-Beck technique for line clipping is a parametric method used to clip a 2D line against an arbitrary convex polygon. The parametric representation of a line segment from P0 to P1 is given as

P(t) = P0 + (P1 - P0) t ,  0 ≤ t ≤ 1  ------ (1)

This parametric representation is advantageous because it does not depend on any coordinate system. The parameter t can be obtained from equation (1) as

P(t) = P0 + (P1 - P0) t
P(t) - P0 = (P1 - P0) t
t = (P(t) - P0) / (P1 - P0)

The values of t at the intersections of a line with the four window edges are given by

Left edge:   t = (xL - x1) / (x2 - x1)
Right edge:  t = (xR - x1) / (x2 - x1)
Top edge:    t = (yT - y1) / (y2 - y1)
Bottom edge: t = (yB - y1) / (y2 - y1)

where t in all the above equations lies between 0 and 1 (0 ≤ t ≤ 1), and xL, xR, yB, yT are the left, right, bottom, and top window coordinates, respectively.

Consider a convex polygon Cp with bi a boundary point and Ni the inner normal to the corresponding polygon boundary; then for any value of t the dot product is given as

Ni · [P(t) - bi]  ------ (2)
[Figure: a convex clip polygon with inner normal Ni at a boundary point, and a line P0P1 crossing the polygon.]
where i = 1, 2, 3, ...

i)   Ni · [P(t) - bi] > 0 , if the point is inside the polygon boundary
ii)  Ni · [P(t) - bi] = 0 , if the point is on the polygon boundary
iii) Ni · [P(t) - bi] < 0 , if the point is outside the polygon boundary

Substituting equation (1) into equation (2) gives, for a point on the polygon boundary,

Ni · [P0 + (P1 - P0) t - bi] = 0
Ni · [P0 - bi] + Ni · [(P1 - P0) t] = 0  ------ (3)

Let P1 - P0 = D (the direction of the line) and P0 - bi = Wi (the weighting factor). Equation (3) then reduces to

Ni · Wi + (Ni · D) t = 0
t (Ni · D) = - Ni · Wi
t = - (Ni · Wi) / (Ni · D)
where D must not be 0 (D ≠ 0); D = 0 would mean P1 = P0, i.e. the segment degenerates to a single point. If P1 = P0, the following conditions hold:

i)   Wi · Ni < 0 : the point is outside the boundary
ii)  Wi · Ni = 0 : the point is on the boundary
iii) Wi · Ni > 0 : the point is inside the boundary
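The Cyrus-Beck computation can be sketched as follows, clipping against a convex polygon given as a list of (boundary point, inner normal) pairs. The usual entering/leaving classification by the sign of Ni · D is used to combine the t values (the function and parameter names are mine):

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def cyrus_beck(p0, p1, edges):
    """Clip segment p0 -> p1 against a convex polygon.
    edges is a list of (b, N) pairs: a boundary point b on each edge
    and the INNER normal N of that edge.  Returns the clipped segment
    as a pair of points, or None if nothing lies inside."""
    D = (p1[0] - p0[0], p1[1] - p0[1])   # direction of the line
    t_enter, t_leave = 0.0, 1.0
    for b, N in edges:
        W = (p0[0] - b[0], p0[1] - b[1])  # weighting factor Wi
        nd = dot(N, D)
        nw = dot(N, W)
        if nd == 0:                # segment parallel to this edge
            if nw < 0:
                return None        # parallel and entirely outside
            continue
        t = -nw / nd               # t = -(Ni . Wi) / (Ni . D)
        if nd > 0:                 # moving toward the inside: entering
            t_enter = max(t_enter, t)
        else:                      # moving toward the outside: leaving
            t_leave = min(t_leave, t)
    if t_enter > t_leave:
        return None                # the segment misses the polygon
    def point(t):
        return (p0[0] + D[0] * t, p0[1] + D[1] * t)
    return point(t_enter), point(t_leave)
```

For a rectangle this reproduces the four edge equations given above; unlike Cohen-Sutherland, the same code handles any convex clip polygon.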