DeepLearningExp4.ipynb - Colab

The document outlines an experiment to implement an object detection model using SSD MobileNet v2 in Google Colab, focusing on detecting and classifying objects in images. It includes prerequisites, key terms, experimental setup, and a step-by-step procedure for loading the model, processing images, and visualizing results. It also discusses precautions, potential sources of error, and poses short questions related to the experiment.

Experiment 4
Objective:

To implement an object detection model using SSD MobileNet v2 in Google Colab.
To detect and classify objects within an image.
To visualize the detected objects with bounding boxes and labels.

Prerequisites:

Basic understanding of Python programming.
Familiarity with the Google Colab environment.
Knowledge of deep learning concepts, especially object detection.
Basic understanding of image processing libraries such as OpenCV and Pillow.

Key Terms:

Object Detection: identifying and localizing objects within an image or video.
SSD (Single Shot MultiBox Detector): a popular object detection algorithm known for its speed and accuracy.
MobileNet: a lightweight convolutional neural network architecture designed for mobile and embedded vision applications.
TensorFlow Hub: a library for publishing and discovering reusable machine learning models.
Bounding Box: a rectangular box that encloses a detected object in an image.
COCO Dataset: a large-scale object detection, segmentation, and captioning dataset.

Experimental Setup:

Environment: Google Colab notebook.
Libraries: TensorFlow, OpenCV, TensorFlow Hub, Pillow, Matplotlib.
Model: SSD MobileNet v2 pre-trained model from TensorFlow Hub.
Data: user-uploaded image.

Theory:

The SSD MobileNet v2 object detection model uses a deep convolutional neural network to
analyze an input image and predict the presence and location of objects within it. The model is
pre-trained on the COCO dataset, which contains a wide variety of object categories. The model
outputs bounding boxes, class labels, and confidence scores for each detected object.
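
For orientation, the detector's bounding boxes come back as normalized coordinates in [0, 1] in the order [ymin, xmin, ymax, xmax], so they are scaled by the image width and height before drawing (the procedure below relies on this convention). A minimal worked example with hypothetical values:

# Hypothetical detection: normalized box [ymin, xmin, ymax, xmax] and image size.
ymin, xmin, ymax, xmax = 0.25, 0.10, 0.75, 0.60
h, w = 480, 640  # image height and width in pixels

# Scale the normalized coordinates to pixel coordinates.
left, top, right, bottom = xmin * w, ymin * h, xmax * w, ymax * h
print(left, top, right, bottom)  # 64.0 120.0 384.0 360.0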

Procedure:

Step 1: Install Libraries

!pip install tensorflow opencv-python-headless tensorflow-hub

Requirement already satisfied: tensorflow, opencv-python-headless, tensorflow-hub, and their dependencies (output truncated).

Step 2: Import Libraries

 
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
from PIL import Image

Step 3: Load the model

# Load SSD with MobileNet from TensorFlow Hub


detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")
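
As an optional sanity check (a sketch, assuming the model downloaded successfully), the loaded detector can be called on a dummy batched uint8 image to confirm the output keys used later in this experiment:

# Optional sanity check: call the detector on a dummy image and list its output keys.
dummy = tf.zeros([1, 320, 320, 3], dtype=tf.uint8)  # batch of one 320x320 RGB image
outputs = detector(dummy)
print(sorted(outputs.keys()))  # expect detection_boxes, detection_classes, detection_scores, ...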

Step 4: Upload Image


from google.colab import files


uploaded = files.upload()

Saving dogandmontague2.jpg to dogandmontague2.jpg
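
If the upload widget is inconvenient (for example, when re-running the notebook non-interactively), a hedged alternative is to download a test image instead; the URL below is a placeholder and must be replaced with a real image address. In that case, skip the list(uploaded.keys())[0] line in the next step and use this filename directly:

# Alternative to the upload widget: fetch a test image from a URL (placeholder shown).
filename = tf.keras.utils.get_file(
    "test_image.jpg",
    origin="https://example.com/test_image.jpg"  # replace with a real image URL
)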

Step 5: Preprocess the Image and Run Detection

filename = list(uploaded.keys())[0]

# Open the image


image = Image.open(filename).convert('RGB') # Convert the image to RGB
image_np = np.array(image)

# Convert to TensorFlow format


input_tensor = tf.convert_to_tensor(image_np, dtype=tf.uint8)
input_tensor = input_tensor[tf.newaxis, ...]

# Run object detection


detections = detector(input_tensor)

# Extract info from detection output


boxes = detections["detection_boxes"][0].numpy()
class_ids = detections["detection_classes"][0].numpy().astype(int)
scores = detections["detection_scores"][0].numpy()
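
Before drawing, it can help to inspect the raw outputs; this optional sketch assumes the variables defined above and the normalized box convention described earlier:

# Inspect the raw detection outputs (shapes depend on the model).
print("boxes shape:", boxes.shape)  # (N, 4): [ymin, xmin, ymax, xmax], normalized
print("detections with score > 0.5:", int((scores > 0.5).sum()))
top = scores.argmax()
print("top score:", scores[top], "class id:", class_ids[top])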

Step 6: Define the COCO Class Labels

# List of COCO dataset class labels


labels = [
'???', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
'train', 'truck', 'boat', 'traffic light', 'fire hydrant', '???',
'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse',
'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', '???',
'backpack', 'umbrella', '???', '???', 'handbag', 'tie', 'suitcase',
'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat',
'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle',
'???', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana',
'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', '???',
'dining table', '???', '???', 'toilet', '???', 'tv', 'laptop', 'mouse',
'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
'sink', 'refrigerator', '???', 'book', 'clock', 'vase', 'scissors',
'teddy bear', 'hair drier', 'toothbrush'
]
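
The '???' entries are placeholders for COCO class IDs that the model never returns, and valid IDs start at 1 ('person'), which is why index 0 is also a placeholder. A small guarded lookup (a convenience sketch, not part of the original notebook) avoids index errors for unexpected IDs:

# Guarded label lookup: fall back to 'unknown' for IDs outside the list.
def class_name(class_id):
    return labels[class_id] if 0 <= class_id < len(labels) else 'unknown'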

Step 7: Draw Bounding Boxes and Labels


image_with_boxes = image_np.copy()
h, w, _ = image_with_boxes.shape

for i in range(len(scores)):
    if scores[i] > 0.5:
        # Boxes are normalized [ymin, xmin, ymax, xmax]; scale to pixel coordinates.
        ymin, xmin, ymax, xmax = boxes[i]
        (left, top, right, bottom) = (xmin * w, ymin * h, xmax * w, ymax * h)
        label = labels[class_ids[i]]

        # Draw the box (color and thickness assumed to match the label text below)
        cv2.rectangle(image_with_boxes, (int(left), int(top)), (int(right), int(bottom)),
                      (255, 0, 0), 2)

        # Draw the label
        cv2.putText(image_with_boxes, label, (int(left), int(top) - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)
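
Optionally, the same detections can also be printed as text, which makes it easier to check the labels and confidence scores alongside the drawn boxes:

# Print each confident detection as "label: score".
for i in range(len(scores)):
    if scores[i] > 0.5:
        print(f"{labels[class_ids[i]]}: {scores[i]:.2f}")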

Step 8: Visualize Results

plt.figure(figsize=(12, 8))
plt.imshow(image_with_boxes)
plt.axis('off')
plt.title("Detected Objects with Bounding Boxes")
plt.show()
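
To keep a copy of the annotated image (an optional addition), it can be written to disk with OpenCV; note that OpenCV expects BGR channel order, so the RGB array is converted first:

# Save the annotated image; convert RGB (NumPy/Pillow order) to BGR for OpenCV.
cv2.imwrite("detections.jpg", cv2.cvtColor(image_with_boxes, cv2.COLOR_RGB2BGR))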


Precautions:

Ensure that the uploaded image is in a compatible format (e.g., JPG, PNG).
Check the internet connection for downloading the model and dependencies.
Be cautious when using external libraries and ensure they are from trusted sources.

Sources of Error:

Low-quality images can lead to inaccurate object detection.
Occlusion or overlapping objects can affect detection performance.
The model cannot detect object categories that are not part of the COCO dataset.

Results:

Running the notebook displays the uploaded image with bounding boxes drawn around the detected objects and labels indicating their categories.

Short Questions:

1. What is the purpose of the SSD MobileNet v2 model?
2. How are objects detected and classified in the code?
3. What are some potential sources of error in object detection?
4. How can you improve the accuracy of object detection?
5. What are some real-world applications of object detection?


