
Commit 5aa3329: fixed some typos

1 parent: 907ca4b

1 file changed: README.md (10 additions & 9 deletions)
@@ -170,6 +170,7 @@ await faceapi.nets.ssdMobilenetv1.loadFromUri('/models')
 // accordingly for the other models:
 // await faceapi.nets.faceLandmark68Net.loadFromUri('/models')
 // await faceapi.nets.faceRecognitionNet.loadFromUri('/models')
+// ...
 ```
 
 In a nodejs environment you can furthermore load the models directly from disk:
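The hunk ends just before the README's disk-loading snippet. As orientation only (not part of this commit), a minimal sketch of what loading from disk looks like in a Node.js script, mirroring the browser calls above; the `./models` path and the `loadFromDisk` calls are assumptions here:

``` javascript
// sketch only: assumes face-api.js is installed and the model weight files
// have been downloaded to a local ./models folder
const faceapi = require('face-api.js')

async function loadModelsFromDisk () {
  await faceapi.nets.ssdMobilenetv1.loadFromDisk('./models')
  // accordingly for the other models:
  // await faceapi.nets.faceLandmark68Net.loadFromDisk('./models')
  // await faceapi.nets.faceRecognitionNet.loadFromDisk('./models')
  // ...
}
```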
@@ -274,13 +275,13 @@ const detectionsWithLandmarks = await faceapi.detectAllFaces(input).withFaceLandmarks()
 
 **After face detection and facial landmark prediction the face descriptors for each face can be computed as follows:**
 
-Detect all faces in an image + computes 68 Point Face Landmarks for each detected face. Returns **Array<[WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>](#getting-started-utility-classes)>**:
+Detect all faces in an image + compute 68 Point Face Landmarks for each detected face. Returns **Array<[WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>](#getting-started-utility-classes)>**:
 
 ``` javascript
 const results = await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceDescriptors()
 ```
 
-Detect the face with the highest confidence score in an image + computes 68 Point Face Landmarks and face descriptor for that face. Returns **[WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>](#getting-started-utility-classes) | undefined**:
+Detect the face with the highest confidence score in an image + compute 68 Point Face Landmarks and face descriptor for that face. Returns **[WithFaceDescriptor<WithFaceLandmarks<WithFaceDetection<{}>>>](#getting-started-utility-classes) | undefined**:
 
 ``` javascript
 const result = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceDescriptor()
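Outside this commit, but as a pointer to why the descriptors matter: a sketch of comparing the descriptors computed above using the library's FaceMatcher. `referenceImage` and `queryImage` are hypothetical inputs:

``` javascript
// sketch only: referenceImage and queryImage are assumed image/canvas/video inputs
const referenceResults = await faceapi
  .detectAllFaces(referenceImage)
  .withFaceLandmarks()
  .withFaceDescriptors()

// build a matcher from the reference descriptors
const faceMatcher = new faceapi.FaceMatcher(referenceResults)

const queryResult = await faceapi
  .detectSingleFace(queryImage)
  .withFaceLandmarks()
  .withFaceDescriptor()

if (queryResult) {
  // best matching reference face by euclidean distance (or 'unknown' above the threshold)
  const bestMatch = faceMatcher.findBestMatch(queryResult.descriptor)
  console.log(bestMatch.toString())
}
```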
@@ -290,21 +291,21 @@ const result = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceDescriptor()
 
 **Face expression recognition can be performed for detected faces as follows:**
 
-Detect all faces in an image + recognize face expressions. Returns **Array<[WithFaceExpressions<WithFaceLandmarks<WithFaceDetection<{}>>>](#getting-started-utility-classes)>**:
+Detect all faces in an image + recognize face expressions of each face. Returns **Array<[WithFaceExpressions<WithFaceLandmarks<WithFaceDetection<{}>>>](#getting-started-utility-classes)>**:
 
 ``` javascript
 const detectionsWithExpressions = await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceExpressions()
 ```
 
-Detect the face with the highest confidence score in an image + recognize the face expression for that face. Returns **[WithFaceExpressions<WithFaceLandmarks<WithFaceDetection<{}>>>](#getting-started-utility-classes) | undefined**:
+Detect the face with the highest confidence score in an image + recognize the face expressions for that face. Returns **[WithFaceExpressions<WithFaceLandmarks<WithFaceDetection<{}>>>](#getting-started-utility-classes) | undefined**:
 
 ``` javascript
 const detectionWithExpressions = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceExpressions()
 ```
 
 **You can also skip .withFaceLandmarks(), which will skip the face alignment step (less stable accuracy):**
 
-Detect all faces without face alignment + recognize face expressions. Returns **Array<[WithFaceExpressions<WithFaceDetection<{}>>](#getting-started-utility-classes)>**:
+Detect all faces without face alignment + recognize face expressions of each face. Returns **Array<[WithFaceExpressions<WithFaceDetection<{}>>](#getting-started-utility-classes)>**:
 
 ``` javascript
 const detectionsWithExpressions = await faceapi.detectAllFaces(input).withFaceExpressions()
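Again outside this commit's scope, a short sketch of reading the expression probabilities that the calls above attach to each result; `input` is an assumed image, canvas or video element:

``` javascript
// sketch only: each result carries an `expressions` object mapping
// expression names (neutral, happy, sad, ...) to probabilities
const detectionsWithExpressions = await faceapi
  .detectAllFaces(input)
  .withFaceLandmarks()
  .withFaceExpressions()

detectionsWithExpressions.forEach(({ expressions }) => {
  // pick the most probable expression for this face
  const [expression, probability] = Object.entries(expressions)
    .sort(([, a], [, b]) => b - a)[0]
  console.log(`${expression}: ${probability.toFixed(2)}`)
})
```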
@@ -320,27 +321,27 @@ const detectionWithExpressions = await faceapi.detectSingleFace(input).withFaceExpressions()
 
 **Age estimation and gender recognition from detected faces can be done as follows:**
 
-Detect all faces in an image + estimate age and recognize gender. Returns **Array<[WithAge<WithGender<WithFaceLandmarks<WithFaceDetection<{}>>>>](#getting-started-utility-classes)>**:
+Detect all faces in an image + estimate age and recognize gender of each face. Returns **Array<[WithAge<WithGender<WithFaceLandmarks<WithFaceDetection<{}>>>>](#getting-started-utility-classes)>**:
 
 ``` javascript
 const detectionsWithAgeAndGender = await faceapi.detectAllFaces(input).withFaceLandmarks().withAgeAndGender()
 ```
 
-Detect the face with the highest confidence score in an image + recognize the face expression for that face. Returns **[WithAge<WithGender<WithFaceLandmarks<WithFaceDetection<{}>>>>](#getting-started-utility-classes) | undefined**:
+Detect the face with the highest confidence score in an image + estimate age and recognize gender for that face. Returns **[WithAge<WithGender<WithFaceLandmarks<WithFaceDetection<{}>>>>](#getting-started-utility-classes) | undefined**:
 
 ``` javascript
 const detectionWithAgeAndGender = await faceapi.detectSingleFace(input).withFaceLandmarks().withAgeAndGender()
 ```
 
 **You can also skip .withFaceLandmarks(), which will skip the face alignment step (less stable accuracy):**
 
-Detect all faces without face alignment + recognize face expressions. Returns **Array<[WithAge<WithGender<WithFaceDetection<{}>>>](#getting-started-utility-classes)>**:
+Detect all faces without face alignment + estimate age and recognize gender of each face. Returns **Array<[WithAge<WithGender<WithFaceDetection<{}>>>](#getting-started-utility-classes)>**:
 
 ``` javascript
 const detectionsWithAgeAndGender = await faceapi.detectAllFaces(input).withAgeAndGender()
 ```
 
-Detect the face with the highest confidence score without face alignment + recognize the face expression for that face. Returns **[WithAge<WithGender<WithFaceDetection<{}>>>](#getting-started-utility-classes) | undefined**:
+Detect the face with the highest confidence score without face alignment + estimate age and recognize gender for that face. Returns **[WithAge<WithGender<WithFaceDetection<{}>>>](#getting-started-utility-classes) | undefined**:
 
 ``` javascript
 const detectionWithAgeAndGender = await faceapi.detectSingleFace(input).withAgeAndGender()
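As a final illustration outside this commit, reading the estimated age and gender off the results returned above; `input` is again an assumed image, canvas or video element:

``` javascript
// sketch only: each result carries `age`, `gender` and `genderProbability`
const detectionsWithAgeAndGender = await faceapi
  .detectAllFaces(input)
  .withFaceLandmarks()
  .withAgeAndGender()

detectionsWithAgeAndGender.forEach(({ age, gender, genderProbability }) => {
  console.log(`~${Math.round(age)} years, ${gender} (${genderProbability.toFixed(2)})`)
})
```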
