
Changes for TensorFlow 2.5.0 #90


Closed
wants to merge 1 commit into from
README.md (2 changes: 1 addition & 1 deletion)
@@ -7,7 +7,7 @@ There currently exist several versions of the tutorial, corresponding to the var

## TensorFlow 2 Object Detection API tutorial

- [![TensorFlow 2.2](https://img.shields.io/badge/TensorFlow-2.2-FF6F00?logo=tensorflow)](https://github.com/tensorflow/tensorflow/releases/tag/v2.2.0) [![Documentation Status](https://readthedocs.org/projects/tensorflow-object-detection-api-tutorial/badge/?version=latest)](http://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/?badge=latest)
+ [![TensorFlow 2.5](https://img.shields.io/badge/TensorFlow-2.5-FF6F00?logo=tensorflow)](https://github.com/tensorflow/tensorflow/releases/tag/v2.5.0) [![Documentation Status](https://readthedocs.org/projects/tensorflow-object-detection-api-tutorial/badge/?version=latest)](http://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/?badge=latest)

On July 10, 2020, TensorFlow [announced that the Object Detection API officially supports TensorFlow 2](https://blog.tensorflow.org/2020/07/tensorflow-2-meets-object-detection-api.html). Therefore, an updated version of the tutorial was created to cover TensorFlow 2.
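
Readers following the updated tutorial may want to confirm their environment matches the new badge. A minimal sketch (an editorial illustration, not part of the README diff):

```python
# Confirm the installed TensorFlow matches the version the tutorial now targets.
import tensorflow as tf

assert tf.__version__.startswith("2.5"), f"expected 2.5.x, got {tf.__version__}"
print(tf.__version__)
```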

docs/source/auto_examples/object_detection_camera.ipynb (16 changes: 8 additions & 8 deletions)
@@ -15,7 +15,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\nDetect Objects Using Your Webcam\n================================\n"
"\n# Detect Objects Using Your Webcam\n"
]
},
{
@@ -29,7 +29,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Create the data directory\n~~~~~~~~~~~~~~~~~~~~~~~~~\nThe snippet shown below will create the ``data`` directory where all our data will be stored. The\ncode will create a directory structure as shown bellow:\n\n.. code-block:: bash\n\n data\n \u2514\u2500\u2500 models\n\nwhere the ``models`` folder will will contain the downloaded models.\n\n"
"## Create the data directory\nThe snippet shown below will create the ``data`` directory where all our data will be stored. The\ncode will create a directory structure as shown bellow:\n\n.. code-block:: bash\n\n data\n \u2514\u2500\u2500 models\n\nwhere the ``models`` folder will will contain the downloaded models.\n\n"
]
},
{
@@ -47,7 +47,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Download the model\n~~~~~~~~~~~~~~~~~~\nThe code snippet shown below is used to download the object detection model checkpoint file,\nas well as the labels file (.pbtxt) which contains a list of strings used to add the correct\nlabel to each detection (e.g. person).\n\nThe particular detection algorithm we will use is the `SSD ResNet101 V1 FPN 640x640`. More\nmodels can be found in the `TensorFlow 2 Detection Model Zoo <https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md>`_.\nTo use a different model you will need the URL name of the specific model. This can be done as\nfollows:\n\n1. Right click on the `Model name` of the model you would like to use;\n2. Click on `Copy link address` to copy the download link of the model;\n3. Paste the link in a text editor of your choice. You should observe a link similar to ``download.tensorflow.org/models/object_detection/tf2/YYYYYYYY/XXXXXXXXX.tar.gz``;\n4. Copy the ``XXXXXXXXX`` part of the link and use it to replace the value of the ``MODEL_NAME`` variable in the code shown below;\n5. Copy the ``YYYYYYYY`` part of the link and use it to replace the value of the ``MODEL_DATE`` variable in the code shown below.\n\nFor example, the download link for the model used below is: ``download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet101_v1_fpn_640x640_coco17_tpu-8.tar.gz``\n\n"
"## Download the model\nThe code snippet shown below is used to download the object detection model checkpoint file,\nas well as the labels file (.pbtxt) which contains a list of strings used to add the correct\nlabel to each detection (e.g. person).\n\nThe particular detection algorithm we will use is the `SSD ResNet101 V1 FPN 640x640`. More\nmodels can be found in the `TensorFlow 2 Detection Model Zoo <https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md>`_.\nTo use a different model you will need the URL name of the specific model. This can be done as\nfollows:\n\n1. Right click on the `Model name` of the model you would like to use;\n2. Click on `Copy link address` to copy the download link of the model;\n3. Paste the link in a text editor of your choice. You should observe a link similar to ``download.tensorflow.org/models/object_detection/tf2/YYYYYYYY/XXXXXXXXX.tar.gz``;\n4. Copy the ``XXXXXXXXX`` part of the link and use it to replace the value of the ``MODEL_NAME`` variable in the code shown below;\n5. Copy the ``YYYYYYYY`` part of the link and use it to replace the value of the ``MODEL_DATE`` variable in the code shown below.\n\nFor example, the download link for the model used below is: ``download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet101_v1_fpn_640x640_coco17_tpu-8.tar.gz``\n\n"
]
},
{
@@ -65,7 +65,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Load the model\n~~~~~~~~~~~~~~\nNext we load the downloaded model\n\n"
"## Load the model\nNext we load the downloaded model\n\n"
]
},
{
@@ -83,7 +83,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Load label map data (for plotting)\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nLabel maps correspond index numbers to category names, so that when our convolution network\npredicts `5`, we know that this corresponds to `airplane`. Here we use internal utility\nfunctions, but anything that returns a dictionary mapping integers to appropriate string labels\nwould be fine.\n\n"
"## Load label map data (for plotting)\nLabel maps correspond index numbers to category names, so that when our convolution network\npredicts `5`, we know that this corresponds to `airplane`. Here we use internal utility\nfunctions, but anything that returns a dictionary mapping integers to appropriate string labels\nwould be fine.\n\n"
]
},
{
@@ -101,7 +101,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Define the video stream\n~~~~~~~~~~~~~~~~~~~~~~~\nWe will use `OpenCV <https://pypi.org/project/opencv-python/>`_ to capture the video stream\ngenerated by our webcam. For more information you can refer to the `OpenCV-Python Tutorials <https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera>`_\n\n"
"## Define the video stream\nWe will use `OpenCV <https://pypi.org/project/opencv-python/>`_ to capture the video stream\ngenerated by our webcam. For more information you can refer to the `OpenCV-Python Tutorials <https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera>`_\n\n"
]
},
{
@@ -119,7 +119,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Putting everything together\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe code shown below loads an image, runs it through the detection model and visualizes the\ndetection results, including the keypoints.\n\nNote that this will take a long time (several minutes) the first time you run this code due to\ntf.function's trace-compilation --- on subsequent runs (e.g. on new images), things will be\nfaster.\n\nHere are some simple things to try out if you are curious:\n\n* Modify some of the input images and see if detection still works. Some simple things to try out here (just uncomment the relevant portions of code) include flipping the image horizontally, or converting to grayscale (note that we still expect the input image to have 3 channels).\n* Print out `detections['detection_boxes']` and try to match the box locations to the boxes in the image. Notice that coordinates are given in normalized form (i.e., in the interval [0, 1]).\n* Set ``min_score_thresh`` to other values (between 0 and 1) to allow more detections in or to filter out more detections.\n\n"
"## Putting everything together\nThe code shown below loads an image, runs it through the detection model and visualizes the\ndetection results, including the keypoints.\n\nNote that this will take a long time (several minutes) the first time you run this code due to\ntf.function's trace-compilation --- on subsequent runs (e.g. on new images), things will be\nfaster.\n\nHere are some simple things to try out if you are curious:\n\n* Modify some of the input images and see if detection still works. Some simple things to try out here (just uncomment the relevant portions of code) include flipping the image horizontally, or converting to grayscale (note that we still expect the input image to have 3 channels).\n* Print out `detections['detection_boxes']` and try to match the box locations to the boxes in the image. Notice that coordinates are given in normalized form (i.e., in the interval [0, 1]).\n* Set ``min_score_thresh`` to other values (between 0 and 1) to allow more detections in or to filter out more detections.\n\n"
]
},
{
@@ -150,7 +150,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.8"
"version": "3.9.5"
}
},
"nbformat": 4,
docs/source/auto_examples/object_detection_camera.rst (35 changes: 32 additions & 3 deletions)
@@ -1,20 +1,33 @@

+ .. DO NOT EDIT.
+ .. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
+ .. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
+ .. "auto_examples\object_detection_camera.py"
+ .. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

.. note::
:class: sphx-glr-download-link-note

- Click :ref:`here <sphx_glr_download_auto_examples_object_detection_camera.py>` to download the full example code
- .. rst-class:: sphx-glr-example-title
+ Click :ref:`here <sphx_glr_download_auto_examples_object_detection_camera.py>`
+ to download the full example code

+ .. rst-class:: sphx-glr-example-title

- .. _sphx_glr_auto_examples_object_detection_camera.py:
+ .. _sphx_glr_auto_examples_object_detection_camera.py:


Detect Objects Using Your Webcam
================================

+ .. GENERATED FROM PYTHON SOURCE LINES 9-11

This demo will take you through the steps of running an "out-of-the-box" detection model to
detect objects in the video stream extracted from your camera.

+ .. GENERATED FROM PYTHON SOURCE LINES 13-24

Create the data directory
~~~~~~~~~~~~~~~~~~~~~~~~~
The snippet shown below will create the ``data`` directory where all our data will be stored. The
@@ -27,6 +40,7 @@ code will create a directory structure as shown below:

where the ``models`` folder will contain the downloaded models.

+ .. GENERATED FROM PYTHON SOURCE LINES 24-32

.. code-block:: default

@@ -39,6 +53,8 @@ where the ``models`` folder will contain the downloaded models.
os.mkdir(dir)
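
A sketch of the collapsed portion of this snippet, assuming the directory names used elsewhere in the tutorial (an editorial illustration, not the hidden diff lines):

.. code-block:: python

    import os

    DATA_DIR = os.path.join(os.getcwd(), 'data')
    MODELS_DIR = os.path.join(DATA_DIR, 'models')
    # Create data/ and data/models/ if they do not exist yet.
    for dir in [DATA_DIR, MODELS_DIR]:
        if not os.path.exists(dir):
            os.mkdir(dir)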


+ .. GENERATED FROM PYTHON SOURCE LINES 33-51

Download the model
~~~~~~~~~~~~~~~~~~
The code snippet shown below is used to download the object detection model checkpoint file,
@@ -58,6 +74,7 @@ follows:

For example, the download link for the model used below is: ``download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet101_v1_fpn_640x640_coco17_tpu-8.tar.gz``

+ .. GENERATED FROM PYTHON SOURCE LINES 51-82

.. code-block:: default

@@ -93,10 +110,13 @@ For example, the download link for the model used below is: ``download.tensorflo
print('Done')
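
The collapsed download code resolves the archive URL from ``MODEL_DATE`` and ``MODEL_NAME``; a minimal sketch of that step (``tf.keras.utils.get_file`` is a real utility, but treat the exact variable names here as assumptions):

.. code-block:: python

    import tensorflow as tf

    MODEL_DATE = '20200711'
    MODEL_NAME = 'ssd_resnet101_v1_fpn_640x640_coco17_tpu-8'
    MODELS_DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/tf2/'
    MODEL_DOWNLOAD_LINK = MODELS_DOWNLOAD_BASE + MODEL_DATE + '/' + MODEL_NAME + '.tar.gz'

    # Download the archive into the Keras cache and extract it,
    # returning the path to the extracted model directory.
    PATH_TO_MODEL_DIR = tf.keras.utils.get_file(
        fname=MODEL_NAME, origin=MODEL_DOWNLOAD_LINK, untar=True)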


+ .. GENERATED FROM PYTHON SOURCE LINES 83-86

Load the model
~~~~~~~~~~~~~~
Next we load the downloaded model.

+ .. GENERATED FROM PYTHON SOURCE LINES 86-121

.. code-block:: default

@@ -136,25 +156,31 @@
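
The hidden body of this hunk builds the detection model from its pipeline config and restores the checkpoint. A sketch under those assumptions, with ``PATH_TO_CFG`` and ``PATH_TO_CKPT`` assumed to point into the downloaded model directory:

.. code-block:: python

    import os
    import tensorflow as tf
    from object_detection.utils import config_util
    from object_detection.builders import model_builder

    # Build the architecture described by the pipeline config ...
    configs = config_util.get_configs_from_pipeline_file(PATH_TO_CFG)
    detection_model = model_builder.build(
        model_config=configs['model'], is_training=False)

    # ... and restore the downloaded weights into it.
    ckpt = tf.train.Checkpoint(model=detection_model)
    ckpt.restore(os.path.join(PATH_TO_CKPT, 'ckpt-0')).expect_partial()

    @tf.function
    def detect_fn(image):
        """Run preprocessing, inference and postprocessing on one image tensor."""
        image, shapes = detection_model.preprocess(image)
        prediction_dict = detection_model.predict(image, shapes)
        return detection_model.postprocess(prediction_dict, shapes)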



+ .. GENERATED FROM PYTHON SOURCE LINES 122-128

Load label map data (for plotting)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Label maps associate index numbers with category names, so that when our convolutional network
predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility
functions, but anything that returns a dictionary mapping integers to appropriate string labels
would be fine.

+ .. GENERATED FROM PYTHON SOURCE LINES 128-131

.. code-block:: default

category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS,
                                                                    use_display_name=True)
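
For intuition, the resulting dictionary maps class ids to display metadata; an illustrative entry for the COCO label map:

.. code-block:: python

    print(category_index[5])   # -> {'id': 5, 'name': 'airplane'}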


+ .. GENERATED FROM PYTHON SOURCE LINES 132-136

Define the video stream
~~~~~~~~~~~~~~~~~~~~~~~
We will use `OpenCV <https://pypi.org/project/opencv-python/>`_ to capture the video stream
generated by our webcam. For more information you can refer to the `OpenCV-Python Tutorials <https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera>`_.

+ .. GENERATED FROM PYTHON SOURCE LINES 136-140

.. code-block:: default

@@ -163,6 +189,8 @@ generated by our webcam. For more information you can refer to the `OpenCV-Pytho
cap = cv2.VideoCapture(0)
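
A brief usage note (a sketch; the failure handling and alternative camera indices are editorial, not part of the tutorial snippet):

.. code-block:: python

    import cv2

    cap = cv2.VideoCapture(0)   # 0 selects the default camera; try 1, 2, ... for others
    if not cap.isOpened():
        raise RuntimeError('Could not open the webcam')
    ret, frame = cap.read()     # frame is a BGR numpy array when ret is True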


+ .. GENERATED FROM PYTHON SOURCE LINES 141-155

Putting everything together
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The code shown below loads an image, runs it through the detection model and visualizes the
@@ -178,6 +206,7 @@ Here are some simple things to try out if you are curious:
* Print out `detections['detection_boxes']` and try to match the box locations to the boxes in the image. Notice that coordinates are given in normalized form (i.e., in the interval [0, 1]).
* Set ``min_score_thresh`` to other values (between 0 and 1) to allow more detections in or to filter out more detections.

+ .. GENERATED FROM PYTHON SOURCE LINES 155-196

.. code-block:: default
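
    # The body of this code block is collapsed in the diff view; the sketch
    # below reconstructs the detection loop it builds toward, assuming the
    # detect_fn, category_index and cap objects from the previous sections
    # (an editorial illustration, not the verbatim hidden lines).
    import numpy as np
    import tensorflow as tf
    import cv2
    from object_detection.utils import visualization_utils as viz_utils

    while True:
        ret, image_np = cap.read()            # one BGR frame from the webcam
        if not ret:
            break

        # The model expects a float tensor of shape [1, height, width, 3].
        input_tensor = tf.convert_to_tensor(
            np.expand_dims(image_np, 0), dtype=tf.float32)
        detections = detect_fn(input_tensor)

        # Model classes are 0-based while the label map is 1-based.
        label_id_offset = 1
        image_np_with_detections = image_np.copy()
        viz_utils.visualize_boxes_and_labels_on_image_array(
            image_np_with_detections,
            detections['detection_boxes'][0].numpy(),
            (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
            detections['detection_scores'][0].numpy(),
            category_index,
            use_normalized_coordinates=True,
            max_boxes_to_draw=200,
            min_score_thresh=.30,
            agnostic_mode=False)

        cv2.imshow('object detection', cv2.resize(image_np_with_detections, (800, 600)))
        if cv2.waitKey(25) & 0xFF == ord('q'):   # press 'q' to quit
            break

    cap.release()
    cv2.destroyAllWindows()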
