CG Program Explanation

Explanation of CG programs


PGM-1 Explanation (Bresenham's Line Algorithm)

1. The bresenham_line function takes four arguments: x1, y1, x2, and
y2, which represent the starting and ending points of the line segment.

2. Inside the function, it calculates the deltas (dx and dy) between the
starting and ending points, and determines the step direction (x_step
and y_step) for each axis.

3. It initializes the error term (error) and an empty list (line_points) to store the coordinates of the line points.

4. The function then enters a loop that iterates dx + 1 times, where in each iteration:

5. It appends the current point (x, y) to the line_points list.

6. It updates the error term (error) and adjusts the x and y coordinates according to Bresenham's line algorithm.

7. After the loop, the function returns the line_points list containing
the coordinates of the line points.

8. In the example usage section, the code sets up a turtle graphics window, defines the starting and ending points of the line segment (x1, y1, x2, y2), calls the bresenham_line function to get the line points, and then draws the line segment by moving the turtle to each point in the line_points list.

9. The turtle.exitonclick() function keeps the graphics window open until the user clicks on it, allowing the user to view the drawn line.
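The steps above can be sketched as a self-contained function. Note this sketch uses the symmetric-error variant of Bresenham's algorithm, which loops until the endpoint is reached rather than exactly dx + 1 times, so the loop structure may differ slightly from the program being described:

```python
def bresenham_line(x1, y1, x2, y2):
    """Return the list of integer points on the segment from (x1, y1) to (x2, y2)."""
    dx = abs(x2 - x1)
    dy = abs(y2 - y1)
    x_step = 1 if x2 >= x1 else -1
    y_step = 1 if y2 >= y1 else -1
    error = dx - dy                  # decision variable
    line_points = []
    x, y = x1, y1
    while True:
        line_points.append((x, y))
        if x == x2 and y == y2:
            break
        e2 = 2 * error
        if e2 > -dy:                 # step along x
            error -= dy
            x += x_step
        if e2 < dx:                  # step along y
            error += dx
            y += y_step
    return line_points
```

The drawing part of the program then only has to move the turtle through the returned points.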
PGM-2 Explanation (2D object geometric operations)

1. The code starts by importing the necessary modules: turtle for drawing graphics and math for mathematical operations.

2. A turtle screen is set up with a white background color.

3. A turtle instance t is created, and its speed and pen size are set.

4. Two helper functions, draw_rectangle and draw_circle, are defined to draw rectangles and circles, respectively. These functions take the coordinates, dimensions (width, height, or radius), and color as arguments.

5. Three transformation functions are defined: translate, rotate, and scale. These functions take the coordinates and transformation parameters (translation distances, rotation angle, or scaling factors) as arguments and move the turtle's position and orientation accordingly.

6. The code then demonstrates the use of these functions by drawing and transforming a rectangle and a circle.

• A rectangle is drawn at (-200, 0) with a width of 100 and a height of 50 in blue color.

• The rectangle is translated 200 units to the right, and a new rectangle
is drawn at (0, 0).

• The rectangle is rotated by 45 degrees, and a new rectangle is drawn.

• The rectangle is scaled by a factor of 2 in both dimensions, and a new rectangle is drawn.
• A circle is drawn at (100, 100) with a radius of 50 in red color.

• The circle is translated 200 units to the right, and a new circle is
drawn at (300,100).

• The circle is rotated by 45 degrees, and a new circle is drawn.

• The circle is scaled by a factor of 2 in both dimensions, and a new circle is drawn at (600, 200).

7. Finally, the turtle.done() function is called to keep the window open until it's closed by the user.
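The three transformation functions can be sketched as operations on point coordinates. The actual program moves the turtle itself; the function bodies below are an illustrative guess that matches the names translate, rotate, and scale in the description:

```python
import math

def translate(x, y, dx, dy):
    """Shift a point by (dx, dy)."""
    return x + dx, y + dy

def rotate(x, y, angle_deg, cx=0.0, cy=0.0):
    """Rotate a point by angle_deg degrees around center (cx, cy)."""
    a = math.radians(angle_deg)
    rx = cx + (x - cx) * math.cos(a) - (y - cy) * math.sin(a)
    ry = cy + (x - cx) * math.sin(a) + (y - cy) * math.cos(a)
    return rx, ry

def scale(x, y, sx, sy):
    """Scale a point about the origin by factors (sx, sy)."""
    return x * sx, y * sy
```

For example, translating (-200, 0) by 200 units to the right yields (0, 0), matching the rectangle step above.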
PGM-3 Explanation (3D object geometric operations)

1. Imports the necessary modules from the vpython library.

2. Creates a 3D canvas with a white background and dimensions of 800x600 pixels.

3. Defines a function draw_cuboid that creates a 3D cuboid (box) at a specified position, with given length, width, height, and color. It returns the created cuboid object.

4. Defines a function draw_cylinder that creates a 3D cylinder at a specified position, with given radius, height, and color. It returns the created cylinder object.

5. Defines a function translate that translates a 3D object by given dx, dy, and dz values along the x, y, and z axes, respectively.

6. Defines a function rotate that rotates a 3D object by a given angle around a specified axis.

7. Defines a function scale that scales a 3D object by given sx, sy, and
sz scale factors along the x, y, and z axes, respectively.

8. Creates a blue cuboid at position (-2, 0, 0) with dimensions 2x2x2.

9. Translates the cuboid by (4, 0, 0) using the translate function.

10. Rotates the cuboid by 45 degrees around the y-axis using the rotate function.

11. Scales the cuboid by a factor of 1.5 along all axes using the scale function.

12. Creates a red cylinder at position (2, 2, 0) with radius 1 and height 10.

13. Translates the cylinder by (0, -2, 0) using the translate function.

14. Rotates the cylinder by 30 degrees around the x-axis using the rotate function.

15. Scales the cylinder by a factor of 1.5 along all axes using the scale function.

16. Enters an infinite loop to keep the 3D scene interactive, with a frame rate of 30 frames per second, using the rate function.
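The translate, rotate, and scale operations in steps 5–7 boil down to the vector math below. This sketch uses plain tuples rather than VPython objects (in VPython the same effects go through an object's pos attribute, its rotate() method, and its size), so it illustrates the math, not the program itself:

```python
import math

def translate(p, dx, dy, dz):
    """Move a 3D point by (dx, dy, dz); VPython does obj.pos += vector(dx, dy, dz)."""
    x, y, z = p
    return (x + dx, y + dy, z + dz)

def rotate_y(p, angle_deg):
    """Rotate a 3D point around the y-axis (VPython: obj.rotate(angle=..., axis=vector(0, 1, 0)))."""
    a = math.radians(angle_deg)
    x, y, z = p
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

def scale(p, sx, sy, sz):
    """Scale a 3D point component-wise; VPython scales a box through its size attribute."""
    x, y, z = p
    return (x * sx, y * sy, z * sz)
```

Translating the cuboid at (-2, 0, 0) by (4, 0, 0), as in step 9, moves it to (2, 0, 0).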


PGM-4 Explanation (2D transformation of basic objects)

1. The code starts by importing the necessary libraries: cv2 for OpenCV and numpy for numerical operations.

2. It defines the dimensions of the canvas (canvas_width and canvas_height) and creates a blank white canvas using NumPy.

3. The initial object (a square) is defined as an array of four points (obj_points) representing the vertices of the square.

4. The transformation matrices are defined:

• translation_matrix: A 2x3 matrix for translation.

• rotation_matrix: A rotation matrix obtained using cv2.getRotationMatrix2D for rotating around a specified center point by a given angle.

• scaling_matrix: A 2x3 matrix for scaling.

5. The transformations are applied to the initial object by performing matrix multiplication with the transformation matrices:

• translated_obj: The object is translated by applying the translation_matrix.

• rotated_obj: The translated object is rotated by applying the rotation_matrix.

• scaled_obj: The rotated object is scaled by applying the scaling_matrix.
6. The original object and the transformed objects (translated, rotated,
and scaled) are drawn on the canvas using cv2.polylines.

7. The canvas with the drawn objects is displayed using cv2.imshow, and the code waits for a key press (cv2.waitKey(0)) before closing the window.

8. Finally, all windows are closed using cv2.destroyAllWindows().

9. The resulting output is a window displaying the following:

• The original square (black)

• The translated square (green)

• The rotated square (red)

• The scaled square (blue)
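The "matrix multiplication" in step 5 can be sketched by converting the points to homogeneous coordinates and multiplying by a 2x3 affine matrix (the same shape cv2.getRotationMatrix2D produces). The square's coordinates below are assumed for illustration only:

```python
import numpy as np

def apply_affine(points, m):
    """Apply a 2x3 affine matrix to an (N, 2) array of points."""
    pts = np.asarray(points, dtype=np.float64)
    ones = np.ones((pts.shape[0], 1))
    homogeneous = np.hstack([pts, ones])                     # (N, 3): append 1 to each point
    return homogeneous @ np.asarray(m, dtype=np.float64).T  # (N, 2)

# A 2x3 translation matrix shifting by (100, 50):
translation_matrix = np.float32([[1, 0, 100],
                                 [0, 1, 50]])

# Vertices of a square (assumed coordinates, for illustration):
obj_points = [[0, 0], [50, 0], [50, 50], [0, 50]]
translated_obj = apply_affine(obj_points, translation_matrix)
```

The rotation and scaling matrices are applied the same way, feeding each stage's output into the next.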


PGM-5 Explanation (3D transformation basic objects)

1. Imports the necessary modules from pygame, OpenGL.GL, OpenGL.GLU, and numpy.

2. Initializes Pygame and sets up the display with a width of 800 pixels and a height of 600 pixels.

3. Sets up OpenGL by clearing the color buffer, enabling depth testing, setting up the projection matrix using gluPerspective, and switching to the ModelView matrix mode.

4. Defines the vertices of a 3D cube using a NumPy array.

5. Defines the edges of the cube as pairs of vertex indices using a NumPy array.

6. Sets up the transformation matrices for translation, rotation, and scaling:

7. translation_matrix translates the object along the negative z-axis by 5 units.

8. rotation_matrix is initially set to the identity matrix (no rotation).

9. scaling_matrix scales the object by a factor of 1.5 along all axes.

10. Enters the main loop, which runs until the user closes the window.

11. Inside the main loop:

• Handles the Pygame event queue, checking for the QUIT event to
exit the loop.
• Clears the color and depth buffers using glClear.

• Applies the transformations:

• Loads the identity matrix using glLoadIdentity.

• Applies the translation matrix using glMultMatrixf.

• Rotates the object around the vector (1, 1, 0) by an angle that increases with each iteration.

• Applies the rotation matrix using glMultMatrixf.

• Applies the scaling matrix using glMultMatrixf.

• Draws the 3D cube by iterating over the edges and vertices, using glBegin(GL_LINES) and glVertex3fv.

• Increments the rotation angle for the next iteration.

• Swaps the front and back buffers using pygame.display.flip() to display the rendered scene.

12. After the main loop ends, the code quits Pygame.
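The translation and scaling matrices in steps 7 and 9 can be sketched as 4x4 homogeneous matrices. This illustrates the math only; the program hands such matrices to glMultMatrixf, which expects column-major data, so the actual layout in the code may be transposed:

```python
import numpy as np

def translation_matrix(dx, dy, dz):
    """4x4 homogeneous translation matrix (row-major)."""
    m = np.identity(4, dtype=np.float32)
    m[:3, 3] = (dx, dy, dz)
    return m

def scaling_matrix(sx, sy, sz):
    """4x4 homogeneous scaling matrix."""
    return np.diag([sx, sy, sz, 1.0]).astype(np.float32)

# Translate along -z by 5 units and scale by 1.5, as in the description:
T = translation_matrix(0, 0, -5)
S = scaling_matrix(1.5, 1.5, 1.5)
v = np.array([1.0, 1.0, 1.0, 1.0])   # a homogeneous cube vertex
transformed = (T @ S @ v)[:3]         # scale first, then translate
```

Multiplication order matters: T @ S scales the vertex before pushing it back along the z-axis.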


PGM-6 Explanation (Animation effects on simple objects)

1. Imports the required modules: pygame for creating the graphical window and handling events, and random for generating random values.

2. Initializes Pygame and sets up a window with a width of 800 pixels and a height of 600 pixels.

3. Defines some colors (BLACK, WHITE, RED, GREEN, BLUE) as RGB tuples.

4. Initializes a list called objects to store the properties of each circle object. The properties include the x and y coordinates, radius, color, and velocities (speed_x and speed_y).

5. Generates num_objects (set to 10) circles with random positions, radii, colors, and velocities, and appends them to the objects list.

6. Enters the main loop, which runs until the user closes the window.

7. Inside the main loop:

8. Handles the Pygame event queue, checking for the QUIT event to
exit the loop.

9. Clears the screen by filling it with the WHITE color.

10. Iterates over each object in the objects list:

11. Updates the x and y coordinates of the object based on its velocities.

12. Checks if the object has collided with the edges of the screen. If so, it reverses the corresponding velocity component (x or y) to make the object bounce off the edge.

13. Draws the object (circle) on the screen using pygame.draw.circle with the object's color, position, and radius.

14. Updates the display using pygame.display.flip().

15. Limits the frame rate to 60 frames per second (FPS) using clock.tick(60).

16. After the main loop ends, the code quits Pygame.
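The per-object update in steps 11–12 can be sketched as follows. The dictionary keys and the 800x600 window size mirror the description; the real program's data layout may differ:

```python
WIDTH, HEIGHT = 800, 600

def update_object(obj):
    """Move a circle by its velocity and bounce it off the window edges.

    obj is a dict with keys x, y, radius, speed_x, speed_y (names assumed).
    """
    obj["x"] += obj["speed_x"]
    obj["y"] += obj["speed_y"]
    # Reverse a velocity component when the circle touches that edge.
    if obj["x"] - obj["radius"] <= 0 or obj["x"] + obj["radius"] >= WIDTH:
        obj["speed_x"] = -obj["speed_x"]
    if obj["y"] - obj["radius"] <= 0 or obj["y"] + obj["radius"] >= HEIGHT:
        obj["speed_y"] = -obj["speed_y"]
    return obj
```

Running this once per frame for every object, then drawing, produces the bouncing animation.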


PGM-7 Explanation (display an image in 4 quadrants)

1. The necessary libraries, cv2 (OpenCV) and numpy, are imported.

2. The path to the input image file is specified (image_path). In this case, it's set to "image/atc.jpg", assuming the image file named "atc.jpg" is located in a directory named "image" relative to the script's location.

3. The image is loaded using cv2.imread().

4. The height and width of the image are obtained from the shape
attribute of the image.

5. The image is split into four quadrants using NumPy array slicing:

6. up_left: Top-left quadrant

7. up_right: Top-right quadrant

8. down_left: Bottom-left quadrant

9. down_right: Bottom-right quadrant

10. A blank canvas with the same dimensions as the original image is created using np.zeros(). This canvas will be used to display the four quadrants.

11. The four quadrants are placed on the canvas using NumPy array assignment.

12. The canvas containing the four quadrants is displayed using cv2.imshow().

13. The script waits for a key press (cv2.waitKey(0)) before closing the window.

14. Finally, all windows are closed using cv2.destroyAllWindows().
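The slicing in steps 5–11 can be sketched on a plain NumPy array; OpenCV is only needed for loading and displaying, not for the indexing itself:

```python
import numpy as np

def split_quadrants(img):
    """Split an image array into four quadrants via NumPy slicing."""
    h, w = img.shape[:2]
    up_left    = img[:h // 2, :w // 2]
    up_right   = img[:h // 2, w // 2:]
    down_left  = img[h // 2:, :w // 2]
    down_right = img[h // 2:, w // 2:]
    return up_left, up_right, down_left, down_right

def assemble_on_canvas(img):
    """Place the four quadrants back on a blank canvas of the same size."""
    h, w = img.shape[:2]
    up_left, up_right, down_left, down_right = split_quadrants(img)
    canvas = np.zeros_like(img)
    canvas[:h // 2, :w // 2] = up_left
    canvas[:h // 2, w // 2:] = up_right
    canvas[h // 2:, :w // 2] = down_left
    canvas[h // 2:, w // 2:] = down_right
    return canvas
```

Because the quadrants are placed at their original offsets, the assembled canvas reproduces the input; the actual program can offset them to separate the quadrants visually.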


PGM-8 Explanation (rotation, scaling, and translation on an image)

1. The necessary libraries, cv2 (OpenCV) and numpy, are imported.

2. The path to the input image file is specified (image_path). In this case, it's set to "image/atc.jpg", assuming the image file named "atc.jpg" is located in a directory named "image" relative to the script's location.

3. The image is loaded using cv2.imread().

4. The height and width of the image are obtained from the shape
attribute of the image.

5. The transformation matrices for rotation, scaling, and translation are defined:

6. rotation_matrix: Obtained using cv2.getRotationMatrix2D() to rotate the image by 45 degrees around its center.

7. scaling_matrix: A 2x3 NumPy matrix to scale the image by a factor of 1.5 along both axes.

8. translation_matrix: A 2x3 NumPy matrix to translate the image by (100, 50) pixels.

9. The transformations are applied to the original image using cv2.warpAffine():

10. rotated_img: The image is rotated using the rotation_matrix.

11. scaled_img: The image is scaled using the scaling_matrix.

12. translated_img: The image is translated using the translation_matrix.

13. The original image and the transformed images (rotated, scaled, and translated) are displayed using cv2.imshow().

14. The script waits for a key press (cv2.waitKey(0)) before closing the windows.

15. Finally, all windows are closed using cv2.destroyAllWindows().
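The three 2x3 matrices in steps 6–8 can be sketched without OpenCV; cv2.getRotationMatrix2D follows a documented closed form, reproduced here for illustration:

```python
import math
import numpy as np

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    """2x3 affine matrix equivalent to cv2.getRotationMatrix2D(center, angle, scale)."""
    cx, cy = center
    a = scale * math.cos(math.radians(angle_deg))
    b = scale * math.sin(math.radians(angle_deg))
    return np.float32([[a,  b, (1 - a) * cx - b * cy],
                       [-b, a, b * cx + (1 - a) * cy]])

# Scaling by 1.5 and translating by (100, 50), as described:
scaling_matrix = np.float32([[1.5, 0, 0],
                             [0, 1.5, 0]])
translation_matrix = np.float32([[1, 0, 100],
                                 [0, 1, 50]])
```

cv2.warpAffine then applies any of these matrices to every pixel coordinate of the image.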


PGM-9 Explanation (Filtering techniques)

1. The necessary libraries, cv2 (OpenCV) and numpy, are imported.

2. The path to the input image file is specified (image_path). In this case, it's set to "image/atc.jpg", assuming the image file named "atc.jpg" is located in a directory named "image" relative to the script's location.

3. The image is loaded using cv2.imread().

4. The image is converted to grayscale using cv2.cvtColor(img, cv2.COLOR_BGR2GRAY). This step is necessary for edge detection and texture extraction, as these operations are typically performed on grayscale images.

5. Edge detection is performed on the grayscale image using the Canny edge detector (cv2.Canny(gray, 100, 200)). The Canny edge detector is a popular algorithm for edge detection, and the two arguments (100 and 200) are the lower and upper thresholds for hysteresis.

6. Texture extraction is performed using a simple averaging filter (cv2.filter2D(gray, -1, kernel)). A 5x5 averaging kernel (kernel = np.ones((5, 5), np.float32) / 25) is defined, where each element is set to 1/25 (so the sum of the kernel elements is 1). This kernel is applied to the grayscale image using cv2.filter2D(), which performs a 2D convolution between the image and the kernel. The resulting image (texture) captures the texture information of the original image.

7. The original image (img), the detected edges (edges), and the extracted texture (texture) are displayed using cv2.imshow().

8. The script waits for a key press (cv2.waitKey(0)) before closing the
windows.

9. Finally, all windows are closed using cv2.destroyAllWindows().
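The averaging filter in step 6 can be sketched as a naive sliding-window convolution. cv2.filter2D computes the same weighted sum but also pads the image borders, which this "valid"-region-only sketch skips:

```python
import numpy as np

kernel = np.ones((5, 5), np.float32) / 25   # each tap is 1/25, so the taps sum to 1

def average_filter_valid(gray):
    """Naive 'valid' 2D convolution with the 5x5 averaging kernel."""
    h, w = gray.shape
    out = np.zeros((h - 4, w - 4), np.float32)
    for i in range(h - 4):
        for j in range(w - 4):
            # Weighted sum of the 5x5 neighborhood = local average.
            out[i, j] = (gray[i:i + 5, j:j + 5] * kernel).sum()
    return out
```

Because the taps sum to 1, a region of constant brightness passes through unchanged, while fine detail is smoothed away.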


PGM-10 Explanation (blurring and smoothing an image)

1. The cv2 library is imported from OpenCV.

2. The image is loaded using cv2.imread('image/atc.jpg'). Make sure to replace 'image/atc.jpg' with the correct path to your image file.

3. Three different types of blurring/smoothing filters are applied to the image:

4. Gaussian Blur: gaussian_blur = cv2.GaussianBlur(image, (5, 5), 0) applies a Gaussian blur filter to the image. The parameters (5, 5) specify the size of the Gaussian kernel, and 0 is the standard deviation value in the X and Y directions (which is automatically calculated from the kernel size).

5. Median Blur: median_blur = cv2.medianBlur(image, 5) applies a median blur filter to the image. The parameter 5 specifies the size of the median filter kernel.

6. Bilateral Filter: bilateral_filter = cv2.bilateralFilter(image, 9, 75, 75) applies a bilateral filter to the image. The parameters 9, 75, and 75 represent the diameter of the pixel neighborhood, the filter sigma in the color space, and the filter sigma in the coordinate space, respectively.

7. The original image and the filtered images are displayed using
cv2.imshow().

8. The script waits for a key press (cv2.waitKey(0)) before closing the
windows.

9. Finally, all windows are closed using cv2.destroyAllWindows().
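The median blur in step 5 can be sketched naively to show why it removes impulse (salt-and-pepper) noise; cv2.medianBlur computes the same neighborhood median, with border handling omitted in this sketch:

```python
import numpy as np

def median_blur_valid(img, k=5):
    """Naive median filter over the image interior; borders are left unchanged."""
    pad = k // 2
    h, w = img.shape
    out = img.astype(np.float64).copy()
    for i in range(pad, h - pad):
        for j in range(pad, w - pad):
            # Replace each pixel with the median of its k x k neighborhood.
            out[i, j] = np.median(img[i - pad:i + pad + 1, j - pad:j + pad + 1])
    return out
```

A single bright outlier pixel cannot survive the median of its neighborhood, which is why the median filter suppresses impulse noise better than averaging.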

PGM-11 Explanation (contouring an image)

1. The cv2 library is imported from OpenCV, and the numpy library is
imported for numerical operations.

2. The image is loaded using cv2.imread('image/atc.jpg'). Make sure to replace 'image/atc.jpg' with the correct path to your image file.

3. The image is converted to grayscale using cv2.cvtColor(image, cv2.COLOR_BGR2GRAY). This step is necessary because contour detection is often performed on grayscale images.

4. Binary thresholding is applied to the grayscale image using cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU). This operation converts the grayscale image into a binary image, where pixels are either black or white, based on a threshold value. The cv2.THRESH_OTSU flag automatically determines the optimal threshold value using Otsu's method. The cv2.THRESH_BINARY_INV flag inverts the binary image, so that foreground objects become white and the background becomes black.

5. The contours are found in the binary image using cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE). The cv2.RETR_EXTERNAL flag retrieves only the extreme outer contours, and cv2.CHAIN_APPROX_SIMPLE compresses the contour data by approximating it with a simplified polygon.

6. A copy of the original image is created using contour_image = image.copy(). This copy will be used to draw the contours on.

7. The contours are drawn on the contour image using cv2.drawContours(contour_image, contours, -1, (0, 255, 0), 2). The -1 argument indicates that all contours should be drawn, the (0, 255, 0) argument specifies the color (green in this case), and the 2 argument specifies the thickness of the contour lines.

8. The original image and the contour image are displayed using
cv2.imshow().

9. The script waits for a key press (cv2.waitKey(0)) before closing the
windows.

10. Finally, all windows are closed using cv2.destroyAllWindows().
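The THRESH_BINARY_INV operation in step 4 can be sketched directly. Note that Otsu's automatic threshold selection is not reproduced here; a fixed threshold stands in for it:

```python
import numpy as np

def threshold_binary_inv(gray, thresh, maxval=255):
    """Equivalent of cv2.threshold with THRESH_BINARY_INV: pixels above thresh
    become 0, all others become maxval (foreground turns white)."""
    return np.where(gray > thresh, 0, maxval).astype(np.uint8)
```

The resulting white-on-black image is what cv2.findContours traces in the next step.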


PGM-12 Explanation (detect face(s) in an image)

1. This code demonstrates how to perform face detection in an image using OpenCV in Python. Here's a breakdown of what the code does:

2. The cv2 library is imported from OpenCV.

3. The Haar cascade classifier for face detection is loaded using cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'). This classifier is a pre-trained model that can detect frontal faces in images.

4. The image is loaded using cv2.imread('image/face.jpeg'). Make sure to replace 'image/face.jpeg' with the correct path to your image file containing faces.

5. The image is converted to grayscale using cv2.cvtColor(image, cv2.COLOR_BGR2GRAY). Face detection is typically performed on grayscale images.

6. The face_cascade.detectMultiScale method is used to detect faces in the grayscale image. The parameters scaleFactor=1.1, minNeighbors=5, and minSize=(30, 30) control the detection process:

7. scaleFactor=1.1 specifies the scale factor used to resize the input image for different scales.

8. minNeighbors=5 specifies the minimum number of neighboring rectangles that should overlap to consider a face detection as valid.

9. minSize=(30, 30) specifies the minimum size of the face to be detected.

10. For each detected face, a rectangle is drawn around it using cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2). The rectangle coordinates are obtained from the faces list returned by detectMultiScale. The (0, 255, 0) argument specifies the color (green in this case), and the 2 argument specifies the thickness of the rectangle lines.

11. The image with the detected faces and rectangles is displayed using cv2.imshow('Face Detection', image).

12. The script waits for a key press (cv2.waitKey(0)) before closing the window.

13. Finally, the window is closed using cv2.destroyAllWindows().
