
Commit 7e766eb: use resizeResults in examples + adjust readme
1 parent 17b72bd

5 files changed: 51 additions, 35 deletions

README.md (33 additions, 22 deletions)
````diff
@@ -319,13 +319,13 @@ You can tune the options of each face detector as shown [here](#usage-face-detec
 
 **After face detection, we can furthermore predict the facial landmarks for each detected face as follows:**
 
-Detect all faces in an image + computes 68 Point Face Landmarks for each detected face. Returns **Array<[FaceDetectionWithLandmarks](#interface-face-detection-with-landmarks)>**:
+Detect all faces in an image + computes 68 Point Face Landmarks for each detected face. Returns **Array<[WithFaceLandmarks<WithFaceDetection<{}>>](#usage-utility-classes)>**:
 
 ``` javascript
 const detectionsWithLandmarks = await faceapi.detectAllFaces(input).withFaceLandmarks()
 ```
 
-Detect the face with the highest confidence score in an image + computes 68 Point Face Landmarks for that face. Returns **[FaceDetectionWithLandmarks](#interface-face-detection-with-landmarks) | undefined**:
+Detect the face with the highest confidence score in an image + computes 68 Point Face Landmarks for that face. Returns **[WithFaceLandmarks<WithFaceDetection<{}>>](#usage-utility-classes) | undefined**:
 
 ``` javascript
 const detectionWithLandmarks = await faceapi.detectSingleFace(input).withFaceLandmarks()
@@ -342,16 +342,16 @@ const detectionsWithLandmarks = await faceapi.detectAllFaces(input).withFaceLand
 
 **After face detection and facial landmark prediction the face descriptors for each face can be computed as follows:**
 
-Detect all faces in an image + computes 68 Point Face Landmarks for each detected face. Returns **Array<[FullFaceDescription](#interface-full-face-description)>**:
+Detect all faces in an image + computes 68 Point Face Landmarks for each detected face. Returns **Array<[WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>](#usage-utility-classes)>**:
 
 ``` javascript
-const fullFaceDescriptions = await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceDescriptors()
+const results = await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceDescriptors()
 ```
 
-Detect the face with the highest confidence score in an image + computes 68 Point Face Landmarks and face descriptor for that face. Returns **[FullFaceDescription](#interface-full-face-description) | undefined**:
+Detect the face with the highest confidence score in an image + computes 68 Point Face Landmarks and face descriptor for that face. Returns **[WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>](#usage-utility-classes) | undefined**:
 
 ``` javascript
-const fullFaceDescription = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceDescriptor()
+const result = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceDescriptor()
 ```
 
 ### Face Recognition by Matching Descriptors
@@ -361,43 +361,43 @@ To perform face recognition, one can use faceapi.FaceMatcher to compare referenc
 First, we initialize the FaceMatcher with the reference data, for example we can simply detect faces in a **referenceImage** and match the descriptors of the detected faces to faces of subsquent images:
 
 ``` javascript
-const fullFaceDescriptions = await faceapi
+const results = await faceapi
   .detectAllFaces(referenceImage)
   .withFaceLandmarks()
   .withFaceDescriptors()
 
-if (!fullFaceDescriptions.length) {
+if (!results.length) {
   return
 }
 
 // create FaceMatcher with automatically assigned labels
 // from the detection results for the reference image
-const faceMatcher = new faceapi.FaceMatcher(fullFaceDescriptions)
+const faceMatcher = new faceapi.FaceMatcher(results)
 ```
 
 Now we can recognize a persons face shown in **queryImage1**:
 
 ``` javascript
-const singleFullFaceDescription = await faceapi
+const singleResult = await faceapi
   .detectSingleFace(queryImage1)
   .withFaceLandmarks()
   .withFaceDescriptor()
 
-if (singleFullFaceDescription) {
-  const bestMatch = faceMatcher.findBestMatch(singleFullFaceDescription.descriptor)
+if (singleResult) {
+  const bestMatch = faceMatcher.findBestMatch(singleResult.descriptor)
   console.log(bestMatch.toString())
 }
 ```
 
 Or we can recognize all faces shown in **queryImage2**:
 
 ``` javascript
-const fullFaceDescriptions = await faceapi
+const results = await faceapi
   .detectAllFaces(queryImage2)
   .withFaceLandmarks()
   .withFaceDescriptors()
 
-fullFaceDescriptions.forEach(fd => {
+results.forEach(fd => {
   const bestMatch = faceMatcher.findBestMatch(fd.descriptor)
   console.log(bestMatch.toString())
 })
@@ -430,7 +430,7 @@ Drawing the detected faces into a canvas:
 const detections = await faceapi.detectAllFaces(input)
 
 // resize the detected boxes in case your displayed image has a different size then the original
-const detectionsForSize = detections.map(det => det.forSize(input.width, input.height))
+const detectionsForSize = faceapi.resizeResults(detections, { width: input.width, height: input.height })
 // draw them into a canvas
 const canvas = document.getElementById('overlay')
 canvas.width = input.width
@@ -446,7 +446,7 @@ const detectionsWithLandmarks = await faceapi
   .withFaceLandmarks()
 
 // resize the detected boxes and landmarks in case your displayed image has a different size then the original
-const detectionsWithLandmarksForSize = detectionsWithLandmarks.map(det => det.forSize(input.width, input.height))
+const detectionsWithLandmarksForSize = faceapi.resizeResults(detectionsWithLandmarks, { width: input.width, height: input.height })
 // draw them into a canvas
 const canvas = document.getElementById('overlay')
 canvas.width = input.width
@@ -579,23 +579,34 @@ export interface IFaceLandmarks {
 }
 ```
 
-<a name="interface-face-detection-with-landmarks"></a>
+<a name="with-face-detection"></a>
 
-### IFaceDetectionWithLandmarks
+### WithFaceDetection
 
 ``` javascript
-export interface IFaceDetectionWithLandmarks {
+export type WithFaceDetection<TSource> = TSource & {
   detection: FaceDetection
+}
+```
+
+<a name="with-face-landmarks"></a>
+
+### WithFaceLandmarks
+
+``` javascript
+export type WithFaceLandmarks<TSource> = TSource & {
+  unshiftedLandmarks: FaceLandmarks
   landmarks: FaceLandmarks
+  alignedRect: FaceDetection
 }
 ```
 
-<a name="interface-full-face-description"></a>
+<a name="with-face-descriptor"></a>
 
-### IFullFaceDescription
+### WithFaceDescriptor
 
 ``` javascript
-export interface IFullFaceDescription extends IFaceDetectionWithLandmarks {
+export type WithFaceDescriptor<TSource> = TSource & {
   descriptor: Float32Array
 }
 ```
````
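The renamed result types compose: each `with*` step wraps the previous result, so the full pipeline yields `WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>`. A minimal sketch of reading the accumulated fields, assuming `input` is any supported media element and the required models are already loaded:

``` javascript
const result = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceDescriptor()
if (result) {
  console.log(result.detection.box)  // bounding box from the FaceDetection step
  console.log(result.landmarks)      // FaceLandmarks (68 points)
  console.log(result.descriptor)     // Float32Array face embedding
}
```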

examples/examples-browser/public/js/drawing.js (1 addition, 1 deletion)

```diff
@@ -7,7 +7,7 @@ function resizeCanvasAndResults(dimensions, canvas, results) {
 
   // resize detections (and landmarks) in case displayed image is smaller than
   // original size
-  return results.map(res => res.forSize(width, height))
+  return faceapi.resizeResults(results, { width, height })
 }
 
 function drawDetections(dimensions, canvas, detections) {
```
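The replacement collapses the per-result mapping into a single call; a before/after sketch, assuming `results` is an array of detection results and `width`/`height` are the display dimensions destructured from `dimensions`:

``` javascript
// before: each result resized individually via its own forSize method
const resizedOld = results.map(res => res.forSize(width, height))
// after: one call, which now also accepts plain arrays (see src/resizeResults.ts below)
const resizedNew = faceapi.resizeResults(results, { width, height })
```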

examples/examples-nodejs/faceLandmarkDetection.ts (1 addition, 1 deletion)

```diff
@@ -11,7 +11,7 @@ async function run() {
 
   const out = faceapi.createCanvasFromMedia(img) as any
   faceapi.drawDetection(out, results.map(res => res.detection))
-  faceapi.drawLandmarks(out, results.map(res => res.faceLandmarks), { drawLines: true, color: 'red' })
+  faceapi.drawLandmarks(out, results.map(res => res.landmarks), { drawLines: true, color: 'red' })
 
   saveFile('faceLandmarkDetection.jpg', out.toBuffer('image/jpeg'))
 }
```
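The rename follows from the `WithFaceLandmarks` type in the README diff above, which exposes the landmarks as `landmarks` rather than `faceLandmarks`. Iterating the individual points would presumably look like:

``` javascript
results.forEach(res => {
  // positions holds the 68 landmark points of a FaceLandmarks instance
  res.landmarks.positions.forEach(pt => console.log(pt.x, pt.y))
})
```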

src/index.ts (2 additions, 1 deletion)

```diff
@@ -17,4 +17,5 @@ export * from './ssdMobilenetv1/index';
 export * from './tinyFaceDetector/index';
 export * from './tinyYolov2/index';
 
-export * from './euclideanDistance';
+export * from './euclideanDistance';
+export * from './resizeResults';
```

src/resizeResults.ts (14 additions, 10 deletions)

```diff
@@ -5,25 +5,29 @@ import { FaceLandmarks } from './classes/FaceLandmarks';
 import { extendWithFaceDetection } from './factories/WithFaceDetection';
 import { extendWithFaceLandmarks } from './factories/WithFaceLandmarks';
 
-export function resizeResults<T>(obj: T, { width, height }: IDimensions): T {
+export function resizeResults<T>(results: T, { width, height }: IDimensions): T {
 
-  const hasLandmarks = obj['unshiftedLandmarks'] && obj['unshiftedLandmarks'] instanceof FaceLandmarks
-  const hasDetection = obj['detection'] && obj['detection'] instanceof FaceDetection
+  if (Array.isArray(results)) {
+    return results.map(obj => resizeResults(obj, { width, height })) as any as T
+  }
+
+  const hasLandmarks = results['unshiftedLandmarks'] && results['unshiftedLandmarks'] instanceof FaceLandmarks
+  const hasDetection = results['detection'] && results['detection'] instanceof FaceDetection
 
   if (hasLandmarks) {
-    const resizedDetection = obj['detection'].forSize(width, height)
-    const resizedLandmarks = obj['unshiftedLandmarks'].forSize(resizedDetection.box.width, resizedDetection.box.height)
+    const resizedDetection = results['detection'].forSize(width, height)
+    const resizedLandmarks = results['unshiftedLandmarks'].forSize(resizedDetection.box.width, resizedDetection.box.height)
 
-    return extendWithFaceLandmarks(extendWithFaceDetection(obj as any, resizedDetection), resizedLandmarks)
+    return extendWithFaceLandmarks(extendWithFaceDetection(results as any, resizedDetection), resizedLandmarks)
   }
 
   if (hasDetection) {
-    return extendWithFaceDetection(obj as any, obj['detection'].forSize(width, height))
+    return extendWithFaceDetection(results as any, results['detection'].forSize(width, height))
   }
 
-  if (obj instanceof FaceLandmarks || obj instanceof FaceDetection) {
-    return (obj as any).forSize(width, height)
+  if (results instanceof FaceLandmarks || results instanceof FaceDetection) {
+    return (results as any).forSize(width, height)
   }
 
-  return obj
+  return results
 }
```
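With the `Array.isArray` branch added above, callers no longer need to map over detection results themselves; arrays and single results go through the same entry point. A short usage sketch, assuming `input` is a media element and `overlay` is the target canvas:

``` javascript
const displaySize = { width: overlay.width, height: overlay.height }

// an array of results is resized element-wise via the recursive call
const detections = await faceapi.detectAllFaces(input)
const resizedAll = faceapi.resizeResults(detections, displaySize)

// a single result (or a bare FaceDetection / FaceLandmarks) works too
const single = await faceapi.detectSingleFace(input)
const resizedOne = single && faceapi.resizeResults(single, displaySize)
```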
