---
layout: page
title: "OpenCV"
description: "Instructions on how to set up OpenCV within Home Assistant."
date: 2017-04-01 22:36
sidebar: true
comments: false
sharing: true
footer: true
logo: OpenCV_Logo.png
ha_category: Hub
ha_release: 0.44
ha_iot_class: "Local Push"
---

[OpenCV](https://www.opencv.org) is an open-source computer vision library for image and video processing.

Some pre-trained classifier files (Haar and LBP cascades) can be found in the [OpenCV repository](https://github.com/opencv/opencv/tree/master/data).

### {% linkable_title Configuration %}

To set up OpenCV with Home Assistant, add the following section to your `configuration.yaml` file:

```yaml
# Example configuration.yaml entry
opencv:
  classifier_group:
    - name: Family
      add_camera: true
      entity_id:
        - camera.front_door
        - camera.living_room
      classifier:
        - file_path: /path/to/classifier/face.xml
          name: Bob
        - file_path: /path/to/classifier/face_profile.xml
          name: Jill
          min_size: (20, 20)
          color: (255, 0, 0)
          scale: 1.6
          neighbors: 5
        - file_path: /path/to/classifier/kid_face.xml
          name: Little Jimmy
```

Configuration variables:

- **name** (*Required*): The name of the OpenCV image processor.
- **entity_id** (*Required*): The camera entity or list of camera entities that this classifier group will be applied to.
- **classifier** (*Required*): The classifier configurations to be applied (a sketch with the defaults spelled out follows this list):
  - **file_path** (*Required*): The path to the Haar or LBP classifier file (XML).
  - **name** (*Optional*): The classification name. Defaults to `Face`.
  - **min_size** (*Optional*): The minimum size for detection as a `(width, height)` tuple. Defaults to `(30, 30)`.
  - **color** (*Optional*): The color, as a `(Blue, Green, Red)` tuple, used to draw the rectangle when linked to a dispatcher camera. Defaults to `(255, 255, 0)`.
  - **scale** (*Optional*): The scale to apply when processing; a `float` value that must be greater than or equal to `1.0`. Defaults to `1.1`.
  - **neighbors** (*Optional*): The minimum number of neighbors required for a match. Defaults to `4`. The higher the number, the pickier the matching will be; the lower the number, the more false positives you may experience.
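
Leaving the optional classifier keys out is equivalent to using the defaults listed above. As a sketch, the entry below spells those defaults out explicitly (the file path is only a placeholder):

```yaml
# Classifier entry with every optional key set to its documented default.
classifier:
  - file_path: /path/to/classifier/face.xml  # placeholder path
    name: Face             # default classification name
    min_size: (30, 30)     # default minimum detection size (width, height)
    color: (255, 255, 0)   # default rectangle color as (Blue, Green, Red)
    scale: 1.1             # default processing scale, must be >= 1.0
    neighbors: 4           # default minimum neighbors required for a match
```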

Once OpenCV is configured, it will create an `image_processing` entity for each classifier group/camera entity combination, as well as a camera so you can see what Home Assistant sees.

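As a sketch of how the generated entity could be used, the automation below sends a notification whenever at least one match is reported. The entity ID is hypothetical (it depends on your group and camera names), and the trigger assumes the entity's state reflects the current number of matches:

```yaml
# Automation sketch; image_processing.opencv_family_front_door is a
# hypothetical entity ID, and notify.notify assumes a notifier is configured.
automation:
  - alias: Notify when OpenCV reports a match on the front door camera
    trigger:
      - platform: numeric_state
        entity_id: image_processing.opencv_family_front_door
        above: 0
    action:
      - service: notify.notify
        data:
          message: "OpenCV reported a match on the front door camera."
```
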
The attributes on the `image_processing` entity will be:

```json
'matches': {
    'Bob': [
        (x, y, w, h)
    ],
    'Jill': [
        (x, y, w, h)
    ],
    'Little Jimmy': [
        (x, y, w, h)
    ]
}
```
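
Each entry lists the matched regions for that classifier as `(x, y, w, h)` tuples. As a sketch, a template sensor along these lines could expose how many regions one classifier matched; the entity ID is again hypothetical:

```yaml
# Template sensor sketch: counts the regions matched for the "Bob" classifier.
# Replace the entity ID with the one created in your own setup.
sensor:
  - platform: template
    sensors:
      bob_face_matches:
        friendly_name: "Bob face matches"
        value_template: >-
          {{ (states.image_processing.opencv_family_front_door.attributes.matches or {}).get('Bob', []) | length }}
```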