
Commit 19e27a1

Merge pull request justadudewhohacks#132 from justadudewhohacks/nodejs
nodejs support
2 parents 5057640 + 12981b6 commit 19e27a1

File tree

129 files changed (+2155 additions, -714 deletions)

Some content is hidden: large commits have some content hidden by default.

.gitignore

Lines changed: 2 additions & 1 deletion

``` diff
@@ -5,4 +5,5 @@ tmp
 proto
 weights_uncompressed
 weights_unused
-docs
+docs
+out
```

.travis.yml

Lines changed: 9 additions & 4 deletions

``` diff
@@ -4,16 +4,21 @@ node_js:
 - "node"
 - "10"
 - "8"
-- "6"
+# node 6 is not compatible with tfjs-node
+# - "6"
 env:
-  - BACKEND_CPU=true EXCLUDE_UNCOMPRESSED=true
+  global:
+    - BACKEND_CPU=true EXCLUDE_UNCOMPRESSED=true
+  matrix:
+    - ENV=browser
+    - ENV=node
 addons:
   chrome: stable
 install: npm install
 before_install:
 - export DISPLAY=:99.0
 - sh -e /etc/init.d/xvfb start
-- sleep 3 # give xvfb some time to start
 script:
-- npm run test-travis
+- if [ $ENV == 'browser' ]; then npm run test-browser; fi
+- if [ $ENV == 'node' ]; then npm run test-node; fi
 - npm run build
```

README.md

Lines changed: 107 additions & 10 deletions

```` diff
@@ -16,6 +16,9 @@ Table of Contents:
 * **[Face Detection Models](#models-face-detection)**
 * **[68 Point Face Landmark Detection Models](#models-face-landmark-detection)**
 * **[Face Recognition Model](#models-face-recognition)**
+* **[Getting Started](#getting-started)**
+* **[face-api.js for the Browser](#getting-started-browser)**
+* **[face-api.js for Nodejs](#getting-started-nodejs)**
 * **[Usage](#usage)**
 * **[Loading the Models](#usage-loading-models)**
 * **[High Level API](#usage-high-level-api)**
@@ -75,15 +78,42 @@ Check out my face-api.js tutorials:
 
 ## Running the Examples
 
+Clone the repository:
+
 ``` bash
 git clone https://github.com/justadudewhohacks/face-api.js.git
-cd face-api.js/examples
+```
+
+### Running the Browser Examples
+
+``` bash
+cd face-api.js/examples/examples-browser
 npm i
 npm start
 ```
 
 Browse to http://localhost:3000/.
 
+### Running the Nodejs Examples
+
+``` bash
+cd face-api.js/examples/examples-nodejs
+npm i
+```
+
+Now run one of the examples using ts-node:
+
+``` bash
+ts-node faceDetection.ts
+```
+
+Or simply compile and run them with node:
+
+``` bash
+tsc faceDetection.ts
+node faceDetection.js
+```
+
 <a name="models"></a>
 
 # Available Models
@@ -130,6 +160,55 @@ The neural net is equivalent to the **FaceRecognizerNet** used in [face-recognit
 
 The size of the quantized model is roughly 6.2 MB (**face_recognition_model**).
 
+<a name="getting-started"></a>
+
+# Getting Started
+
+<a name="getting-started-browser"></a>
+
+## face-api.js for the Browser
+
+Simply include the latest script from [dist/face-api.js](https://github.com/justadudewhohacks/face-api.js/tree/master/dist).
+
+Or install it via npm:
+
+``` bash
+npm i face-api.js
+```
+
+<a name="getting-started-nodejs"></a>
+
+## face-api.js for Nodejs
+
+We can use the equivalent API in a nodejs environment by polyfilling some browser specifics, such as HTMLImageElement, HTMLCanvasElement and ImageData. The easiest way to do so is by installing the node-canvas package.
+
+Alternatively you can simply construct your own tensors from image data and pass tensors as inputs to the API.
+
+Furthermore you want to install @tensorflow/tfjs-node (not required, but highly recommended), which speeds things up drastically by compiling and binding to the native Tensorflow C++ library:
+
+``` bash
+npm i face-api.js canvas @tensorflow/tfjs-node
+```
+
+Now we simply monkey patch the environment to use the polyfills:
+
+``` javascript
+// import nodejs bindings to native tensorflow,
+// not required, but will speed up things drastically (python required)
+import '@tensorflow/tfjs-node';
+
+// implements nodejs wrappers for HTMLCanvasElement, HTMLImageElement, ImageData
+import * as canvas from 'canvas';
+
+import * as faceapi from 'face-api.js';
+
+// patch nodejs environment, we need to provide an implementation of
+// HTMLCanvasElement and HTMLImageElement, additionally an implementation
+// of ImageData is required, in case you want to use the MTCNN
+const { Canvas, Image, ImageData } = canvas
+faceapi.env.monkeyPatch({ Canvas, Image, ImageData })
+```
+
 # Usage
 
 <a name="usage-loading-models"></a>
@@ -150,14 +229,38 @@ await faceapi.loadSsdMobilenetv1Model('/models')
 // await faceapi.loadFaceRecognitionModel('/models')
 ```
 
-Alternatively, you can also create instance of the neural nets:
+All global neural network instances are exported via faceapi.nets:
+
+``` javascript
+console.log(faceapi.nets)
+```
+
+The following is equivalent to `await faceapi.loadSsdMobilenetv1Model('/models')`:
+
+``` javascript
+await faceapi.nets.ssdMobilenetv1.loadFromUri('/models')
+```
+
+In a nodejs environment you can furthermore load the models directly from disk:
+
+``` javascript
+await faceapi.nets.ssdMobilenetv1.loadFromDisk('./models')
+```
+
+You can also load the model from a tf.NamedTensorMap:
+
+``` javascript
+await faceapi.nets.ssdMobilenetv1.loadFromWeightMap(weightMap)
+```
+
+Alternatively, you can also create own instances of the neural nets:
 
 ``` javascript
 const net = new faceapi.SsdMobilenetv1()
 await net.load('/models')
 ```
 
-Using instances, you can also load the weights as a Float32Array (in case you want to use the uncompressed models):
+You can also load the weights as a Float32Array (in case you want to use the uncompressed models):
 
 ``` javascript
 // using fetch
@@ -205,7 +308,7 @@ By default **detectAllFaces** and **detectSingleFace** utilize the SSD Mobilenet
 
 ``` javascript
 const detections1 = await faceapi.detectAllFaces(input, new faceapi.SsdMobilenetv1Options())
-const detections2 = await faceapi.detectAllFaces(input, new faceapi.inyFaceDetectorOptions())
+const detections2 = await faceapi.detectAllFaces(input, new faceapi.TinyFaceDetectorOptions())
 const detections3 = await faceapi.detectAllFaces(input, new faceapi.MtcnnOptions())
 ```
 
@@ -513,12 +616,6 @@ const landmarks2 = await faceapi.detectFaceLandmarksTiny(faceImage)
 const descriptor = await faceapi.computeFaceDescriptor(alignedFaceImage)
 ```
 
-All global neural network instances are exported via faceapi.nets:
-
-``` javascript
-console.log(faceapi.nets)
-```
-
 ### Extracting a Canvas for an Image Region
 
 ``` javascript
````
File renamed without changes.

examples/public/js/bbt.js renamed to examples/examples-browser/public/js/bbt.js

Lines changed: 1 addition & 1 deletion

``` diff
@@ -1,7 +1,7 @@
 const classes = ['amy', 'bernadette', 'howard', 'leonard', 'penny', 'raj', 'sheldon', 'stuart']
 
 function getFaceImageUri(className, idx) {
-  return `images/${className}/${className}${idx}.png`
+  return `${className}/${className}${idx}.png`
 }
 
 function renderFaceImageSelectList(selectListId, onChange, initialValue) {
```

examples/public/js/commons.js renamed to examples/examples-browser/public/js/commons.js

Lines changed: 0 additions & 4 deletions

``` diff
@@ -1,7 +1,3 @@
-function getImageUri(imageName) {
-  return `images/${imageName}`
-}
-
 async function requestExternalImage(imageUrl) {
   const res = await fetch('fetch_external_image', {
     method: 'post',
```

examples/public/js/imageSelectionControls.js renamed to examples/examples-browser/public/js/imageSelectionControls.js

Lines changed: 2 additions & 2 deletions

``` diff
@@ -17,15 +17,15 @@ function renderImageSelectList(selectListId, onChange, initialValue) {
       renderOption(
         select,
         imageName,
-        getImageUri(imageName)
+        imageName
       )
     )
   }
 
   renderSelectList(
     selectListId,
     onChange,
-    getImageUri(initialValue),
+    initialValue,
     renderChildren
   )
 }
```

examples/server.js renamed to examples/examples-browser/server.js

Lines changed: 4 additions & 4 deletions

``` diff
@@ -10,10 +10,10 @@ app.use(express.urlencoded({ extended: true }))
 const viewsDir = path.join(__dirname, 'views')
 app.use(express.static(viewsDir))
 app.use(express.static(path.join(__dirname, './public')))
-app.use(express.static(path.join(__dirname, '../weights')))
-app.use(express.static(path.join(__dirname, '../weights_uncompressed')))
-app.use(express.static(path.join(__dirname, '../dist')))
-app.use(express.static(path.join(__dirname, './node_modules/axios/dist')))
+app.use(express.static(path.join(__dirname, '../images')))
+app.use(express.static(path.join(__dirname, '../media')))
+app.use(express.static(path.join(__dirname, '../../weights')))
+app.use(express.static(path.join(__dirname, '../../dist')))
 
 app.get('/', (req, res) => res.redirect('/face_and_landmark_detection'))
 app.get('/face_and_landmark_detection', (req, res) => res.sendFile(path.join(viewsDir, 'faceAndLandmarkDetection.html')))
```

examples/views/videoFaceTracking.html renamed to examples/examples-browser/views/videoFaceTracking.html

Lines changed: 1 addition & 1 deletion

``` diff
@@ -18,7 +18,7 @@
       <div class="indeterminate"></div>
     </div>
     <div style="position: relative" class="margin">
-      <video src="media/bbt.mp4" id="inputVideo" autoplay muted loop></video>
+      <video src="bbt.mp4" id="inputVideo" autoplay muted loop></video>
       <canvas id="overlay" />
     </div>
 
```

Lines changed: 16 additions & 0 deletions

``` diff
@@ -0,0 +1,16 @@
+// import nodejs bindings to native tensorflow,
+// not required, but will speed up things drastically (python required)
+import '@tensorflow/tfjs-node';
+
+// implements nodejs wrappers for HTMLCanvasElement, HTMLImageElement, ImageData
+const canvas = require('canvas')
+
+import * as faceapi from '../../../src';
+
+// patch nodejs environment, we need to provide an implementation of
+// HTMLCanvasElement and HTMLImageElement, additionally an implementation
+// of ImageData is required, in case you want to use the MTCNN
+const { Canvas, Image, ImageData } = canvas
+faceapi.env.monkeyPatch({ Canvas, Image, ImageData })
+
+export { canvas, faceapi }
```
Lines changed: 29 additions & 0 deletions

``` diff
@@ -0,0 +1,29 @@
+import { NeuralNetwork } from 'tfjs-image-recognition-base';
+
+import { faceapi } from './env';
+
+export const faceDetectionNet = faceapi.nets.ssdMobilenetv1
+// export const faceDetectionNet = tinyFaceDetector
+// export const faceDetectionNet = mtcnn
+
+// SsdMobilenetv1Options
+const minConfidence = 0.5
+
+// TinyFaceDetectorOptions
+const inputSize = 408
+const scoreThreshold = 0.5
+
+// MtcnnOptions
+const minFaceSize = 50
+const scaleFactor = 0.8
+
+function getFaceDetectorOptions(net: NeuralNetwork<any>) {
+  return net === faceapi.nets.ssdMobilenetv1
+    ? new faceapi.SsdMobilenetv1Options({ minConfidence })
+    : (net === faceapi.nets.tinyFaceDetector
+      ? new faceapi.TinyFaceDetectorOptions({ inputSize, scoreThreshold })
+      : new faceapi.MtcnnOptions({ minFaceSize, scaleFactor })
+    )
+}
+
+export const faceDetectionOptions = getFaceDetectorOptions(faceDetectionNet)
```
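The getFaceDetectorOptions helper above is a nested ternary that picks an options object for whichever detector net is active. The control flow can be exercised standalone, with plain objects standing in for the faceapi nets and option classes (the stand-in names are hypothetical, for illustration only):

``` javascript
// stand-ins for the three detector nets (hypothetical plain objects,
// only used to exercise the dispatch logic, not real faceapi instances)
const ssdMobilenetv1 = { name: 'ssdMobilenetv1' };
const tinyFaceDetector = { name: 'tinyFaceDetector' };
const mtcnn = { name: 'mtcnn' };

// same nested-ternary dispatch as the diff above, returning plain option
// objects with the same parameter values instead of option class instances
function getFaceDetectorOptions(net) {
  return net === ssdMobilenetv1
    ? { minConfidence: 0.5 }
    : (net === tinyFaceDetector
      ? { inputSize: 408, scoreThreshold: 0.5 }
      : { minFaceSize: 50, scaleFactor: 0.8 }
    );
}

console.log(getFaceDetectorOptions(tinyFaceDetector));
// { inputSize: 408, scoreThreshold: 0.5 }
```

Any net that is neither the SSD nor the tiny detector falls through to the MTCNN options, mirroring the commented-out alternatives at the top of the file.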
Lines changed: 3 additions & 0 deletions

``` diff
@@ -0,0 +1,3 @@
+export { canvas, faceapi } from './env';
+export { faceDetectionNet, faceDetectionOptions } from './faceDetection';
+export { saveFile } from './saveFile';
```
Lines changed: 12 additions & 0 deletions

``` diff
@@ -0,0 +1,12 @@
+import * as fs from 'fs';
+import * as path from 'path';
+
+const baseDir = path.resolve(__dirname, '../out')
+
+export function saveFile(fileName: string, buf: Buffer) {
+  if (!fs.existsSync(baseDir)) {
+    fs.mkdirSync(baseDir)
+  }
+
+  fs.writeFileSync(path.resolve(baseDir, fileName), buf)
+}
```
Lines changed: 16 additions & 0 deletions

``` diff
@@ -0,0 +1,16 @@
+import { canvas, faceapi, faceDetectionNet, faceDetectionOptions, saveFile } from './commons';
+
+async function run() {
+
+  await faceDetectionNet.loadFromDisk('../../weights')
+
+  const img = await canvas.loadImage('../images/bbt1.jpg')
+  const detections = await faceapi.detectAllFaces(img, faceDetectionOptions)
+
+  const out = faceapi.createCanvasFromMedia(img) as any
+  faceapi.drawDetection(out, detections)
+
+  saveFile('faceDetection.jpg', out.toBuffer('image/jpeg'))
+}
+
+run()
```
Lines changed: 19 additions & 0 deletions

``` diff
@@ -0,0 +1,19 @@
+import { canvas, faceapi, faceDetectionNet, faceDetectionOptions, saveFile } from './commons';
+
+async function run() {
+
+  await faceDetectionNet.loadFromDisk('../../weights')
+  await faceapi.nets.faceLandmark68Net.loadFromDisk('../../weights')
+
+  const img = await canvas.loadImage('../images/bbt1.jpg')
+  const results = await faceapi.detectAllFaces(img, faceDetectionOptions)
+    .withFaceLandmarks()
+
+  const out = faceapi.createCanvasFromMedia(img) as any
+  faceapi.drawDetection(out, results.map(res => res.detection))
+  faceapi.drawLandmarks(out, results.map(res => res.faceLandmarks), { drawLines: true, color: 'red' })
+
+  saveFile('faceLandmarkDetection.jpg', out.toBuffer('image/jpeg'))
+}
+
+run()
```
