Commit 12b439f

Merge pull request justadudewhohacks#20 from justadudewhohacks/all-faces
faceapi.allFaces + global api
2 parents: 9012be3 + fe036c9

49 files changed: +17187 / -2131 lines

README.md

Lines changed: 34 additions & 19 deletions
````diff
@@ -104,6 +104,15 @@ To load a model, you have provide the corresponding manifest.json file as well a
 
 Assuming the models reside in **public/models**:
 
+``` javascript
+await faceapi.loadFaceDetectionModel('/models')
+// accordingly for the other models:
+// await faceapi.loadFaceLandmarkModel('/models')
+// await faceapi.loadFaceRecognitionModel('/models')
+```
+
+As an alternative, you can also create instances of the neural nets:
+
 ``` javascript
 const net = new faceapi.FaceDetectionNet()
 // accordingly for the other models:
````
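The new global loaders make setup a one-liner per model. As a quick illustration (not part of the diff), all three models can be loaded up front in parallel; the `/models` path follows the **public/models** assumption above:

``` javascript
// load all three models in parallel before calling the global api
await Promise.all([
  faceapi.loadFaceDetectionModel('/models'),
  faceapi.loadFaceLandmarkModel('/models'),
  faceapi.loadFaceRecognitionModel('/models')
])
```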
````diff
@@ -118,7 +127,7 @@ await net.load('/models/face_detection_model-weights_manifest.json')
 await net.load('/models')
 ```
 
-Alternatively you can load the weights as a Float32Array (in case you want to use the uncompressed models):
+Using instances, you can also load the weights as a Float32Array (in case you want to use the uncompressed models):
 
 ``` javascript
 // using fetch
````
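For context, a minimal sketch of the fetch-based loading the hunk's trailing comment refers to; the exact `.weights` filename is an assumption for illustration:

``` javascript
// fetch the uncompressed weights and hand them to a net instance
// (the weights filename is assumed here)
const res = await fetch('/models/face_detection_model.weights')
const weights = new Float32Array(await res.arrayBuffer())
const net = new faceapi.FaceDetectionNet()
await net.load(weights)
```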
````diff
@@ -145,7 +154,7 @@ const maxResults = 10
 
 // inputs can be html canvas, img or video element or their ids ...
 const myImg = document.getElementById('myImg')
-const detections = await detectionNet.locateFaces(myImg, minConfidence, maxResults)
+const detections = await faceapi.locateFaces(myImg, minConfidence, maxResults)
 ```
 
 Draw the detected faces to a canvas:
````
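The next hunk's header shows `faceapi.drawDetection` in action; here is a sketch of the drawing step it announces, with the canvas id and the `forSize` rescaling assumed from the surrounding README:

``` javascript
// rescale detections to the canvas dimensions before drawing
// ('overlay' is an assumed canvas id)
const canvas = document.getElementById('overlay')
const detectionsForSize = detections.map(det => det.forSize(canvas.width, canvas.height))
faceapi.drawDetection(canvas, detectionsForSize, { withScore: true })
```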
````diff
@@ -162,7 +171,7 @@ faceapi.drawDetection(canvas, detectionsForSize, { withScore: false })
 You can also obtain the tensors of the unfiltered bounding boxes and scores for each image in the batch (tensors have to be disposed manually):
 
 ``` javascript
-const { boxes, scores } = detectionNet.forward('myImg')
+const { boxes, scores } = net.forward('myImg')
 ```
 
 <a name="usage-face-recognition"></a>
````
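Since the text stresses manual disposal, a hedged sketch of the cleanup (assuming `boxes` and `scores` are plain tensors, as the destructuring suggests):

``` javascript
const { boxes, scores } = net.forward('myImg')
// ... read whatever you need from the tensors ...
// then free the memory manually
boxes.dispose()
scores.dispose()
```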
````diff
@@ -173,8 +182,8 @@ Compute and compare the descriptors of two face images:
 
 ``` javascript
 // inputs can be html canvas, img or video element or their ids ...
-const descriptor1 = await recognitionNet.computeFaceDescriptor('myImg')
-const descriptor2 = await recognitionNet.computeFaceDescriptor(document.getElementById('myCanvas'))
+const descriptor1 = await faceapi.computeFaceDescriptor('myImg')
+const descriptor2 = await faceapi.computeFaceDescriptor(document.getElementById('myCanvas'))
 const distance = faceapi.euclidianDistance(descriptor1, descriptor2)
 
 if (distance < 0.6)
````
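Building on the 0.6 threshold above, a sketch of matching one query descriptor against a set of labeled reference descriptors; `referenceDescriptors` and its `{ label, descriptor }` shape are assumptions for illustration:

``` javascript
// pick the closest labeled reference, treating distances >= 0.6 as unknown
function findBestMatch(queryDescriptor, referenceDescriptors) {
  let best = { label: 'unknown', distance: Infinity }
  for (const ref of referenceDescriptors) {
    const distance = faceapi.euclidianDistance(queryDescriptor, ref.descriptor)
    if (distance < best.distance) {
      best = { label: ref.label, distance }
    }
  }
  return best.distance < 0.6 ? best : { label: 'unknown', distance: best.distance }
}
```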
````diff
@@ -183,16 +192,10 @@ else
   console.log('no match')
 ```
 
-You can also get the face descriptor data synchronously:
-
-``` javascript
-const desc = recognitionNet.computeFaceDescriptorSync('myImg')
-```
-
 Or simply obtain the tensor (tensor has to be disposed manually):
 
 ``` javascript
-const t = recognitionNet.forward('myImg')
+const t = net.forward('myImg')
 ```
 
 <a name="usage-face-landmark-detection"></a>
````
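With the synchronous descriptor API removed in this commit, reading raw values goes through the tensor instead; a minimal sketch using the standard tfjs-core tensor methods:

``` javascript
const t = net.forward('myImg')
const data = await t.data() // copies the tensor values into a TypedArray
t.dispose()                 // then free the tensor manually
```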
````diff
@@ -204,7 +207,7 @@ Detect face landmarks:
 ``` javascript
 // inputs can be html canvas, img or video element or their ids ...
 const myImg = document.getElementById('myImg')
-const landmarks = await faceLandmarkNet.detectLandmarks(myImg)
+const landmarks = await faceapi.detectLandmarks(myImg)
 ```
 
 Draw the detected face landmarks to a canvas:
````
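A sketch of the drawing step the last context line announces; `faceapi.drawLandmarks`, its `drawLines` option, and the canvas id are assumptions taken from the surrounding README:

``` javascript
// rescale the landmarks to the canvas and draw them
const canvas = document.getElementById('overlay') // assumed canvas id
faceapi.drawLandmarks(canvas, landmarks.forSize(canvas.width, canvas.height), { drawLines: true })
```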
````diff
@@ -236,11 +239,11 @@ const rightEyeBrow = landmarks.getRightEyeBrow()
 Compute the Face Landmarks for Detected Faces:
 
 ``` javascript
-const detections = await detectionNet.locateFaces(input)
+const detections = await faceapi.locateFaces(input)
 
 // get the face tensors from the image (have to be disposed manually)
 const faceTensors = await faceapi.extractFaceTensors(input, detections)
-const landmarksByFace = await Promise.all(faceTensors.map(t => faceLandmarkNet.detectLandmarks(t)))
+const landmarksByFace = await Promise.all(faceTensors.map(t => faceapi.detectLandmarks(t)))
 
 // free memory for face image tensors after we computed their descriptors
 faceTensors.forEach(t => t.dispose())
````
````diff
@@ -250,19 +253,31 @@ faceTensors.forEach(t => t.dispose())
 
 ### Full Face Detection and Recognition Pipeline
 
-After face detection has been performed, I would recommend to align the bounding boxes of the detected faces before passing them to the face recognition net, which will make the computed face descriptor much more accurate. You can easily align the faces from their face landmark positions as shown in the following example:
+After face detection has been performed, I would recommend aligning the bounding boxes of the detected faces before passing them to the face recognition net, which will make the computed face descriptors much more accurate. Fortunately, the api can do this for you under the hood. You can obtain the full face descriptions (location, landmarks and descriptor) of each face in an input image as follows:
+
+``` javascript
+const fullFaceDescriptions = await faceapi.allFaces(input, minConfidence)
+
+const fullFaceDescription0 = fullFaceDescriptions[0]
+console.log(fullFaceDescription0.detection) // bounding box & score
+console.log(fullFaceDescription0.landmarks) // 68 point face landmarks
+console.log(fullFaceDescription0.descriptor) // face descriptor
+
+```
+
+You can also do everything manually as shown in the following:
 
 ``` javascript
 // first detect the face locations
-const detections = await detectionNet.locateFaces(input)
+const detections = await faceapi.locateFaces(input, minConfidence)
 
 // get the face tensors from the image (have to be disposed manually)
 const faceTensors = (await faceapi.extractFaceTensors(input, detections))
 
 // detect landmarks and get the aligned face image bounding boxes
 const alignedFaceBoxes = await Promise.all(faceTensors.map(
   async (faceTensor, i) => {
-    const faceLandmarks = await landmarkNet.detectLandmarks(faceTensor)
+    const faceLandmarks = await faceapi.detectLandmarks(faceTensor)
     return faceLandmarks.align(detections[i])
   }
 ))
````
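To illustrate what the new `allFaces` helper enables end to end, a hedged sketch comparing every face in a query image against every face in a reference image; the element ids and the 0.8 confidence value are assumptions:

``` javascript
const queryFaces = await faceapi.allFaces(document.getElementById('queryImg'), 0.8)
const refFaces = await faceapi.allFaces(document.getElementById('refImg'), 0.8)

// for each query face, find the closest reference face by descriptor distance
queryFaces.forEach(query => {
  const distances = refFaces.map(ref =>
    faceapi.euclidianDistance(query.descriptor, ref.descriptor)
  )
  const minDistance = Math.min(...distances)
  console.log(minDistance < 0.6 ? 'match' : 'no match', minDistance)
})
```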
````diff
@@ -275,7 +290,7 @@ const alignedFaceTensors = (await faceapi.extractFaceTensors(input, alignedFaceB
 
 // compute the face descriptors from the aligned face images
 const descriptors = await Promise.all(alignedFaceTensors.map(
-  faceTensor => recognitionNet.computeFaceDescriptor(faceTensor)
+  faceTensor => faceapi.computeFaceDescriptor(faceTensor)
 ))
 
 // free memory for face image tensors after we computed their descriptors
````
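For completeness, the cleanup the trailing comment refers to; a short sketch consistent with the variables used in the manual pipeline above:

``` javascript
// dispose both sets of face tensors once the descriptors are computed
faceTensors.forEach(t => t.dispose())
alignedFaceTensors.forEach(t => t.dispose())
```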

build/FullFaceDescription.d.ts

Lines changed: 12 additions & 0 deletions
````diff
@@ -0,0 +1,12 @@
+import { FaceDetection } from './faceDetectionNet/FaceDetection';
+import { FaceLandmarks } from './faceLandmarkNet/FaceLandmarks';
+export declare class FullFaceDescription {
+    private _detection;
+    private _landmarks;
+    private _descriptor;
+    constructor(_detection: FaceDetection, _landmarks: FaceLandmarks, _descriptor: Float32Array);
+    readonly detection: FaceDetection;
+    readonly landmarks: FaceLandmarks;
+    readonly descriptor: Float32Array;
+    forSize(width: number, height: number): FullFaceDescription;
+}
````
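The declaration above exposes `forSize`, which rescales a description to given dimensions. A hedged usage sketch; the canvas id is assumed, and `drawDetection` is taken from the README examples:

``` javascript
// rescale one full face description to the display size before drawing
const canvas = document.getElementById('overlay') // assumed canvas id
const resized = fullFaceDescription0.forSize(canvas.width, canvas.height)
faceapi.drawDetection(canvas, [resized.detection], { withScore: true })
```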

build/FullFaceDescription.js

Lines changed: 36 additions & 0 deletions
(generated file, diff not rendered)

build/FullFaceDescription.js.map

Lines changed: 1 addition & 0 deletions
(generated file, diff not rendered)

build/NetInput.js.map

Lines changed: 1 addition & 1 deletion
(generated file, diff not rendered)

build/allFacesFactory.d.ts

Lines changed: 7 additions & 0 deletions
````diff
@@ -0,0 +1,7 @@
+import * as tf from '@tensorflow/tfjs-core';
+import { FaceDetectionNet } from './faceDetectionNet/FaceDetectionNet';
+import { FaceLandmarkNet } from './faceLandmarkNet/FaceLandmarkNet';
+import { FaceRecognitionNet } from './faceRecognitionNet/FaceRecognitionNet';
+import { FullFaceDescription } from './FullFaceDescription';
+import { NetInput } from './NetInput';
+export declare function allFacesFactory(detectionNet: FaceDetectionNet, landmarkNet: FaceLandmarkNet, recognitionNet: FaceRecognitionNet): (input: string | HTMLCanvasElement | HTMLImageElement | HTMLVideoElement | (string | HTMLCanvasElement | HTMLImageElement | HTMLVideoElement)[] | tf.Tensor<tf.Rank> | NetInput, minConfidence: number) => Promise<FullFaceDescription[]>;
````
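The declared signature suggests the factory closes over the three nets and returns the composed `allFaces` pipeline. A speculative sketch of that shape, mirroring the manual README pipeline above; it is not the actual implementation, and reaching `FullFaceDescription` via the `faceapi` namespace is an assumption:

``` javascript
// speculative sketch: compose detection, landmarks and recognition
// into an allFaces-style function
function allFacesFactory(detectionNet, landmarkNet, recognitionNet) {
  return async function allFaces(input, minConfidence) {
    const detections = await detectionNet.locateFaces(input, minConfidence)

    const faceTensors = await faceapi.extractFaceTensors(input, detections)
    const landmarksByFace = await Promise.all(
      faceTensors.map(t => landmarkNet.detectLandmarks(t))
    )
    faceTensors.forEach(t => t.dispose())

    const alignedFaceBoxes = landmarksByFace.map(
      (landmarks, i) => landmarks.align(detections[i])
    )
    const alignedFaceTensors = await faceapi.extractFaceTensors(input, alignedFaceBoxes)
    const descriptors = await Promise.all(
      alignedFaceTensors.map(t => recognitionNet.computeFaceDescriptor(t))
    )
    alignedFaceTensors.forEach(t => t.dispose())

    return detections.map(
      (detection, i) => new faceapi.FullFaceDescription(detection, landmarksByFace[i], descriptors[i])
    )
  }
}
```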

build/allFacesFactory.js

Lines changed: 41 additions & 0 deletions
(generated file, diff not rendered)

build/allFacesFactory.js.map

Lines changed: 1 addition & 0 deletions
(generated file, diff not rendered)

build/faceDetectionNet/index.d.ts

Lines changed: 1 addition & 0 deletions
````diff
@@ -1,3 +1,4 @@
 import { FaceDetectionNet } from './FaceDetectionNet';
 export * from './FaceDetectionNet';
+export * from './FaceDetection';
 export declare function faceDetectionNet(weights: Float32Array): FaceDetectionNet;
````
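With `FaceDetection` now re-exported, the class is usable as a value, e.g. for type checks; a small sketch, assuming the top-level package re-exports it as well and that detections expose a `score`:

``` javascript
// keep only high-confidence results, using the now-exported class
const highConfidence = detections.filter(
  det => det instanceof faceapi.FaceDetection && det.score > 0.9
)
```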

build/faceDetectionNet/index.js

Lines changed: 1 addition & 0 deletions
(generated file, diff not rendered)
