|
* **[Face Detection](#usage-face-detection)**
* **[Face Recognition](#usage-face-recognition)**
* **[Face Landmark Detection](#usage-face-landmark-detection)**
* **[Full Face Detection and Recognition Pipeline](#usage-full-face-detection-and-recognition-pipeline)**

## Examples

Or simply obtain the tensor (tensor has to be disposed manually):

``` javascript
const t = recognitionNet.forward('myImg')
```
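Since the returned tensor's memory is not freed automatically, make sure to dispose it once you have read its values. A minimal sketch, assuming `t` is a regular TensorFlow.js tensor:

``` javascript
// read the descriptor values from the tensor
const descriptorData = await t.data()

// free the tensor's memory once the values are no longer needed
t.dispose()
```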
|
<a name="usage-face-landmark-detection"></a>

### Face Landmark Detection
|
``` javascript
const landmarksByFace = await Promise.all(faceTensors.map(t => faceLandmarkNet.detectLandmarks(t)))

// free memory for face image tensors after we detected their landmarks
faceTensors.forEach(t => t.dispose())
```
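If you want to work with the raw landmark points directly, you can read them from each result. A minimal sketch, assuming each result is a `FaceLandmarks` instance exposing a `getPositions` method that returns an array of points with `x` and `y` coordinates:

``` javascript
landmarksByFace.forEach((landmarks, i) => {
  // each detection result holds the 68 landmark points of one face
  const positions = landmarks.getPositions()
  console.log(`face ${i}: ${positions.length} landmark points`)
  console.log(`first point at x=${positions[0].x}, y=${positions[0].y}`)
})
```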
|
<a name="usage-full-face-detection-and-recognition-pipeline"></a>

### Full Face Detection and Recognition Pipeline

After face detection has been performed, I would recommend aligning the bounding boxes of the detected faces before passing them to the face recognition net, which makes the computed face descriptors much more accurate. You can easily align the faces from their face landmark positions, as shown in the following example:

``` javascript
// first detect the face locations
const detections = await detectionNet.locateFaces(input)

// get the face tensors from the image (have to be disposed manually)
const faceTensors = await faceapi.extractFaceTensors(input, detections)

// detect landmarks and get the aligned face image bounding boxes
const alignedFaceBoxes = await Promise.all(faceTensors.map(
  async (faceTensor, i) => {
    const faceLandmarks = await faceLandmarkNet.detectLandmarks(faceTensor)
    return faceLandmarks.align(detections[i])
  }
))

// free memory for face image tensors after we detected the face landmarks
faceTensors.forEach(t => t.dispose())

// get the face tensors for the aligned face images from the image (have to be disposed manually)
const alignedFaceTensors = await faceapi.extractFaceTensors(input, alignedFaceBoxes)

// compute the face descriptors from the aligned face images
const descriptors = await Promise.all(alignedFaceTensors.map(
  faceTensor => recognitionNet.computeFaceDescriptor(faceTensor)
))

// free memory for face image tensors after we computed their descriptors
alignedFaceTensors.forEach(t => t.dispose())
```
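The computed descriptors can then be used to match faces by comparing their distance. A minimal sketch, assuming the descriptors come back as plain arrays or Float32Arrays and using 0.6 as an illustrative distance threshold:

``` javascript
// plain euclidean distance between two descriptor vectors
function euclideanDistance(d1, d2) {
  let sum = 0
  for (let i = 0; i < d1.length; i++) {
    const diff = d1[i] - d2[i]
    sum += diff * diff
  }
  return Math.sqrt(sum)
}

// two descriptors with a small distance likely belong to the same person
const distance = euclideanDistance(descriptors[0], descriptors[1])
const isSamePerson = distance < 0.6
```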