diff --git a/.babelrc b/.babelrc
deleted file mode 100644
index bb605ecfd..000000000
--- a/.babelrc
+++ /dev/null
@@ -1,4 +0,0 @@
-{
- "plugins": ["transform-object-rest-spread"],
- "presets": ["es2015"]
-}
\ No newline at end of file
diff --git a/.editorconfig b/.editorconfig
new file mode 100644
index 000000000..beffa3084
--- /dev/null
+++ b/.editorconfig
@@ -0,0 +1,11 @@
+root = true
+
+[*]
+indent_style = space
+indent_size = 2
+charset = utf-8
+trim_trailing_whitespace = true
+insert_final_newline = true
+
+[*.md]
+trim_trailing_whitespace = false
diff --git a/.eslintignore b/.eslintignore
new file mode 100644
index 000000000..92c8257df
--- /dev/null
+++ b/.eslintignore
@@ -0,0 +1,6 @@
+test/unit/coverage/**
+test/unit/*.js
+test/e2e/*.js
+**.min.js
+dist/
+__coverage__/
diff --git a/.eslintrc.json b/.eslintrc.json
new file mode 100644
index 000000000..744dd9355
--- /dev/null
+++ b/.eslintrc.json
@@ -0,0 +1,21 @@
+{
+ "env": {},
+ "extends": ["airbnb", "prettier"],
+ "globals": {},
+ "parserOptions": {
+ "ecmaVersion": 8,
+ "sourceType": "module"
+ },
+ "root": true,
+ "rules": {
+ "class-methods-use-this": "off",
+ "linebreak-style": 0,
+ "no-continue": 0,
+ "no-multi-assign": "off",
+ "no-param-reassign": 0,
+ "no-plusplus": 0,
+ "no-prototype-builtins": 0,
+ "no-underscore-dangle": 0,
+ "semi": 1
+ }
+}
diff --git a/.gitignore b/.gitignore
index d6b19336c..033efbb45 100644
--- a/.gitignore
+++ b/.gitignore
@@ -6,4 +6,7 @@ node_modules
npm-debug.log
# Yarn
-yarn.lock
\ No newline at end of file
+yarn.lock
+
+# parcel bundler cache
+.cache
diff --git a/.npmignore b/.npmignore
index 29c550623..335c350b4 100644
--- a/.npmignore
+++ b/.npmignore
@@ -8,3 +8,8 @@
test/
coverage/
.github/
+.cache/
+__tests__
+__coverage__
+.cache
+.dist/
diff --git a/.prettierrc b/.prettierrc
new file mode 100644
index 000000000..36301bc5c
--- /dev/null
+++ b/.prettierrc
@@ -0,0 +1,5 @@
+{
+ "semi": false,
+ "singleQuote": true,
+ "trailingComma": "es5"
+}
diff --git a/.travis.yml b/.travis.yml
index 44beff74d..02c04d595 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,13 +1,12 @@
language: node_js
node_js:
- - '4'
+ - '8'
+ - '10'
+cache:
+ directories:
+ - node_modules
install:
- npm install
script:
- npm run test
-env:
- - CXX=g++-4.9
-before_install:
- - if [[ $TRAVIS_NODE_VERSION == 0.8 ]]; then npm install -g npm@1.4.28; fi
- - npm explore npm -g -- npm install node-gyp@latest
-sudo: false
\ No newline at end of file
+sudo: false
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index c10b4e9fd..b7dfcf84f 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,7 +1,20 @@
Thanks for taking the time to contribute to brain.js. Follow these guidelines to make the process smoother:
-1. One feature per pull request. Each PR should have one focus, and all the code changes should be supporting that one feature or bug fix. Using a [separate branch](https://guides.github.com/introduction/flow/index.html) for each feature should help you manage developing multiple features at once.
+1. One feature per pull request. Each PR should have one focus, and all the code changes should be supporting that one feature or bug fix. Using a [separate branch](https://guides.github.com/introduction/flow/index.html) for each feature should help you manage developing multiple features at once.
-2. Follow the style of the file when it comes to syntax like curly braces and indents.
+2. This repository uses `.editorconfig`, `eslint` (`airbnb`) and `prettier` for linting and formatting to keep the coding style consistent throughout the repository. For this purpose, some helpful scripts are also defined in the project:
-3. Add a test for the feature or fix, if possible. See the `test` directory for existing tests and README describing how to run these tests.
+```bash
+npm run eslint # validate eslint rules
+npm run eslint:fix # validates and fix any fixable issues
+npm run prettier # format files
+```
+
+3. Add/update a test for the feature or fix, if possible. See the `__tests__` directory for existing tests. To run these tests:
+
+```bash
+npm run test # run tests and generate coverage docs
+npm run test:watch # run jest in watch mode
+```
+
+4. Please do not run the build/dist scripts and do not bump the version number. These things will be handled by the maintainers when necessary.
diff --git a/README.md b/README.md
index 5754f37b8..d7195bf0c 100644
--- a/README.md
+++ b/README.md
@@ -4,13 +4,20 @@
[](https://npmjs.com/package/brain.js)
+[](#backers) [](#sponsors)
-[](https://gitter.im/brain-js/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
+[](https://gitter.im/brain-js/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [](https://slack.bri.im)
+
+## About
`brain.js` is a library of [Neural Networks](http://en.wikipedia.org/wiki/Artificial_neural_network) written in JavaScript.
+**NEW!** [A fun and practical introduction to Brain.js](https://scrimba.com/g/gneuralnetworks)
+
:bulb: **Note**: This is a continuation of the [**harthur/brain**](https://github.com/harthur/brain) repository (which is not maintained anymore). For more details, check out [this issue](https://github.com/harthur/brain/issues/72).
+## Table of Contents
+
- [Examples](#examples)
+ [More Examples](#more-examples)
- [Usage](#usage)
@@ -18,12 +25,20 @@
+ [Browser](#browser)
- [Training](#training)
+ [Data format](#data-format)
+ + [For training with NeuralNetwork](#for-training-with-neuralnetwork)
+ + [For training with `RNNTimeStep`, `LSTMTimeStep` and `GRUTimeStep`](#for-training-with-rnntimestep-lstmtimestep-and-grutimestep)
+ + [For training with `RNN`, `LSTM` and `GRU`](#for-training-with-rnn-lstm-and-gru)
+ [Training Options](#training-options)
+ [Async Training](#async-training)
+ + [Cross Validation](#cross-validation)
+ + [Train Stream](#train-stream)
- [Methods](#methods)
- + [train](#train)
+ + [train](#traintrainingdata---trainingstatus)
+ + [run](#runinput---prediction)
+ + [forecast](#forecastinput-count---predictions)
- [Failing](#failing)
- [JSON](#json)
+- [Standalone Function](#standalone-function)
- [Options](#options)
+ [activation](#activation)
+ [hiddenLayers](#hiddenlayers)
@@ -33,46 +48,73 @@
+ [Transform](#transform)
- [Utilities](#utilities)
+ [`likely`](#likely)
+ + [`toSVG`](#toSVG)
- [Neural Network Types](#neural-network-types)
+ [Why different Neural Network Types?](#why-different-neural-network-types)
-
+
# Examples
Here's an example showcasing how to approximate the XOR function using `brain.js`:
+More info on the config options can be found [here](https://github.com/BrainJS/brain.js/blob/develop/src/neural-network.js#L31).
```javascript
-//create a simple feed forward neural network with backpropagation
-var net = new brain.NeuralNetwork();
+// provide optional config object (or undefined). Defaults shown.
+const config = {
+ binaryThresh: 0.5,
+ hiddenLayers: [3], // array of ints for the sizes of the hidden layers in the network
+ activation: 'sigmoid', // supported activation types: ['sigmoid', 'relu', 'leaky-relu', 'tanh'],
+ leakyReluAlpha: 0.01 // supported for activation type 'leaky-relu'
+};
+
+// create a simple feed forward neural network with backpropagation
+const net = new brain.NeuralNetwork(config);
net.train([{input: [0, 0], output: [0]},
{input: [0, 1], output: [1]},
{input: [1, 0], output: [1]},
{input: [1, 1], output: [0]}]);
-var output = net.run([1, 0]); // [0.987]
+const output = net.run([1, 0]); // [0.987]
```
or
+More info on the config options can be found [here](https://github.com/BrainJS/brain.js/blob/develop/src/recurrent/rnn.js#L726).
+
```javascript
-//create a simple recurrent neural network
-var net = new brain.recurrent.RNN();
+// provide optional config object, defaults shown.
+const config = {
+ inputSize: 20,
+ inputRange: 20,
+ hiddenLayers: [20,20],
+ outputSize: 20,
+ learningRate: 0.01,
+ decayRate: 0.999,
+};
+
+// create a simple recurrent neural network
+const net = new brain.recurrent.RNN(config);
net.train([{input: [0, 0], output: [0]},
{input: [0, 1], output: [1]},
{input: [1, 0], output: [1]},
{input: [1, 1], output: [0]}]);
-var output = net.run([0, 0]); // [0]
-output = net.run([0, 1]); // [1]
-output = net.run([1, 0]); // [1]
-output = net.run([1, 1]); // [0]
+const output = net.run([0, 0]); // [0]
+output = net.run([0, 1]); // [1]
+output = net.run([1, 0]); // [1]
+output = net.run([1, 1]); // [0]
```
-However, There's no reason to use a neural network to figure out XOR. (-: So, here's a more involved, realistic example:
+However, there is no reason to use a neural network to figure out XOR. (-: So, here is a more involved, realistic example:
[Demo: training a neural network to recognize color contrast](https://brain.js.org/).
## More Examples
-You check out this fantastic screencast, which explains how to train a simple neural network using a real world dataset: [How to create a neural network in the browser using Brain.js](https://scrimba.com/c/c36zkcb).
-* [writing a children's book using a recurrent neural neural network](./examples/childrens-book.js)
-* [simple letter detection](./examples/which-letter-simple.js)
+You can check out this fantastic screencast, which explains how to train a simple neural network using a real world dataset: [How to create a neural network in the browser using Brain.js](https://scrimba.com/c/c36zkcb).
+* [writing a children's book using a recurrent neural network](./examples/childrens-book.js) & [typescript version](./examples-typescript/childrens-book.ts)
+* [using cross validation with a feed forward net](./examples/cross-validate.js) & [typescript version](./examples-typescript/cross-validate.ts)
+* experimental (NeuralNetwork only, but more to come!) [using the gpu in a browser](./examples/gpu.html) or [using node gpu fallback to cpu](./examples/gpu-fallback.js) & [typescript version](./examples-typescript/gpu-fallback.ts)
+* [learning math using a recurrent neural network](./examples/learn-math.js) & [typescript version](./examples-typescript/learn-math.ts)
+* [predict next number, and forecast numbers](./examples/predict-numbers.js) & [typescript version](./examples-typescript/predict-numbers.ts)
+* [using node streams](./examples/stream-example.js) & [typescript version](./examples-typescript/stream-example.ts)
+* [simple letter detection](./examples/which-letter-simple.js) & [typescript version](./examples-typescript/which-letter-simple.ts)
# Usage
@@ -88,31 +130,28 @@ Or if you prefer yarn:
yarn add brain.js
```
-Alternatively, you can install `brain.js` with [bower](https://bower.io/):
-```
-bower install brain.js
-```
-
-At present, the npm version of brain.js is approximately 1.0.0, featuring only Feed forward NN. All other models are beta and are being jazzed up and battle hardened.
+At present, the published version of brain.js is approximately 1.0.0, featuring only Feed-forward NN. All other models are beta and are being jazzed up and battle hardened.
You can still download the latest, though. They are cool!
### Browser
-Download the latest [brain.js for browser](https://raw.githubusercontent.com/harthur-org/brain.js/master/browser.js). Training is computationally expensive, so you should try to train the network offline (or on a Worker) and use the `toFunction()` or `toJSON()` options to plug the pre-trained network into your website.
+Download the latest [brain.js for browser](https://cdn.rawgit.com/BrainJS/brain.js/master/browser.js). Training is computationally expensive, so you should try to train the network offline (or on a Worker) and use the `toFunction()` or `toJSON()` options to plug the pre-trained network into your website.
# Training
-Use `train()` to train the network with an array of training data. The network has to be trained with all the data in bulk in one call to `train()`. The more training patterns, the longer it will probably take to train, but the better the network will be at classifying new patterns.
+Use `train()` to train the network with an array of training data. The network has to be trained with all the data in bulk in one call to `train()`. More training patterns will probably take longer to train, but will usually result in a network better
+at classifying new patterns.
### Data format
+#### For training with `NeuralNetwork`
Each training pattern should have an `input` and an `output`, both of which can be either an array of numbers from `0` to `1` or a hash of numbers from `0` to `1`. For the [color contrast demo](https://brain.js.org/) it looks something like this:
```javascript
-var net = new brain.NeuralNetwork();
+const net = new brain.NeuralNetwork();
net.train([{input: { r: 0.03, g: 0.7, b: 0.5 }, output: { black: 1 }},
{input: { r: 0.16, g: 0.09, b: 0.2 }, output: { white: 1 }},
{input: { r: 0.5, g: 0.5, b: 1.0 }, output: { white: 1 }}]);
-var output = net.run({ r: 1, g: 0.4, b: 0 }); // { white: 0.99, black: 0.002 }
+const output = net.run({ r: 1, g: 0.4, b: 0 }); // { white: 0.99, black: 0.002 }
```
Here's another variation of the above example. (_Note_ that input objects do not need to be similar.)
```javascript
@@ -120,7 +159,74 @@ net.train([{input: { r: 0.03, g: 0.7 }, output: { black: 1 }},
{input: { r: 0.16, b: 0.2 }, output: { white: 1 }},
{input: { r: 0.5, g: 0.5, b: 1.0 }, output: { white: 1 }}]);
-var output = net.run({ r: 1, g: 0.4, b: 0 }); // { white: 0.81, black: 0.18 }
+const output = net.run({ r: 1, g: 0.4, b: 0 }); // { white: 0.81, black: 0.18 }
+```
+
+#### For training with `RNNTimeStep`, `LSTMTimeStep` and `GRUTimeStep`
+Each training pattern can either:
+* Be an array of numbers
+* Be an array of arrays of numbers
+
+Example using an array of numbers:
+```javascript
+const net = new brain.recurrent.LSTMTimeStep();
+
+net.train([
+ [1, 2, 3]
+]);
+
+const output = net.run([1, 2]); // 3
+```
+
+Example using an array of arrays of numbers:
+```javascript
+const net = new brain.recurrent.LSTMTimeStep({
+ inputSize: 2,
+ hiddenLayers: [10],
+ outputSize: 2
+});
+
+net.train([
+ [1, 3],
+ [2, 2],
+ [3, 1],
+]);
+
+const output = net.run([[1, 3], [2, 2]]); // [3, 1]
+```
+
+#### For training with `RNN`, `LSTM` and `GRU`
+Each training pattern can either:
+* Be an array of values
+* Be a string
+* Have an `input` and an `output`
+ * Either of which can have an array of values or a string
+
+CAUTION: When using an array of values, you can use ANY value; however, each distinct value is represented in the neural network by a single input. So the more _distinct values_ you have, _the larger your input layer_ becomes. If you have hundreds, thousands, or millions of floating point values, _THIS IS NOT THE RIGHT CLASS FOR THE JOB_. Also, when deviating from strings, this gets into beta territory.
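To make the caution above concrete, here is a small illustrative sketch in plain JavaScript (an assumption about the mapping, not brain.js internals) of how the number of distinct training values drives the size of the input layer:

```javascript
// Illustrative sketch (assumption): each distinct value seen in training maps to
// its own input unit, so the input layer grows with the number of distinct values.
function countDistinctValues(trainingData) {
  const values = new Set();
  for (const pattern of trainingData) {
    const items = Array.isArray(pattern)
      ? pattern
      : [...(pattern.input || []), ...(pattern.output || [])];
    for (const v of items) values.add(v);
  }
  return values.size; // a proxy for the input layer size
}

// three distinct values across two patterns -> three input units
console.log(countDistinctValues([[1, 2], [2, 3]])); // 3
```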
+
+Example using direct strings:
+```javascript
+const net = new brain.recurrent.LSTM();
+
+net.train([
+ 'doe, a deer, a female deer',
+ 'ray, a drop of golden sun',
+ 'me, a name I call myself',
+]);
+
+const output = net.run('doe'); // ', a deer, a female deer'
+```
+
+Example using strings with inputs and outputs:
+```javascript
+const net = new brain.recurrent.LSTM();
+
+net.train([
+ { input: 'I feel great about the world!', output: 'happy' },
+ { input: 'The world is a terrible place!', output: 'sad' },
+]);
+
+const output = net.run('I feel great about the world!'); // 'happy'
```
@@ -142,23 +248,23 @@ net.train(data, {
});
```
-The network will train until the training error has gone below the threshold (default `0.005`) or the max number of iterations (default `20000`) has been reached, whichever comes first.
+The network will stop training whenever one of the two criteria is met: the training error has gone below the threshold (default `0.005`), or the max number of iterations (default `20000`) has been reached.
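The two stopping criteria can be sketched in plain JavaScript (an illustrative model of the behavior described above, not the actual brain.js implementation):

```javascript
// Illustrative model of the two stopping criteria (not brain.js source):
// stop when the error drops below errorThresh OR the iteration cap is reached.
function trainLoop(stepFn, { errorThresh = 0.005, iterations = 20000 } = {}) {
  let error = Infinity;
  let count = 0;
  while (count < iterations && error > errorThresh) {
    error = stepFn(count); // one full pass over the training data
    count++;
  }
  return { error, iterations: count };
}
```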
-By default training won't let you know how its doing until the end, but set `log` to `true` to get periodic updates on the current training error of the network. The training error should decrease every time. The updates will be printed to console. If you set `log` to a function, this function will be called with the updates instead of printing to the console.
+By default training will not let you know how it's doing until the end, but set `log` to `true` to get periodic updates on the current training error of the network. The training error should decrease every time. The updates will be printed to console. If you set `log` to a function, this function will be called with the updates instead of printing to the console.
-The learning rate is a parameter that influences how quickly the network trains. It's a number from `0` to `1`. If the learning rate is close to `0` it will take longer to train. If the learning rate is closer to `1` it will train faster but it's in danger of training to a local minimum and performing badly on new data.(_Overfitting_) The default learning rate is `0.3`.
+The learning rate is a parameter that influences how quickly the network trains. It's a number from `0` to `1`. If the learning rate is close to `0`, it will take longer to train. If the learning rate is closer to `1`, it will train faster, but training results may be constrained to a local minimum and perform badly on new data (_overfitting_). The default learning rate is `0.3`.
-The momentum is similar to learning rate, expecting a value from `0` to `1` as well but it is multiplied against the next level's change value. The default value is `0.1`
+The momentum is similar to the learning rate, expecting a value from `0` to `1` as well, but it is multiplied against the next level's change value. The default value is `0.1`.
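As a sketch of the standard gradient-descent-with-momentum update this describes (illustrative, using the defaults above; not brain.js internals):

```javascript
// Illustrative momentum update: the new change blends the current gradient step
// with a fraction (momentum) of the previous change.
function momentumStep(weight, gradient, prevChange, learningRate = 0.3, momentum = 0.1) {
  const change = learningRate * gradient + momentum * prevChange;
  return { weight: weight + change, change };
}
```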
-Any of these training options can be passed into the constructor or passed into the `updateTrainingOptions(opts)` method and they will be saved on the network and used any time you trian. If you save your network to json, these training options are saved and restored as well (except for callback and log, callback will be forgoten and log will be restored using console.log).
+Any of these training options can be passed into the constructor or into the `updateTrainingOptions(opts)` method, and they will be saved on the network and used during training. If you save your network to JSON, these training options are saved and restored as well (except for `callback` and `log`; `callback` will be forgotten and `log` will be restored using `console.log`).
-There is a boolean property called `invalidTrainOptsShouldThrow` that by default is set to true. While true if you enter a training option that is outside the normal range an error will be thrown with a message about the option you sent. When set to false no error is sent but a message is still sent to `console.warn` with the information.
+A boolean property called `invalidTrainOptsShouldThrow` is set to `true` by default. While the option is `true`, if you enter a training option that is outside the normal range, an error will be thrown with a message about the abnormal option. When the option is set to `false`, no error will be sent, but a message will still be sent to `console.warn` with the related information.
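A minimal sketch of that throw-versus-warn behavior, shown for one hypothetical option (illustrative only; the real validation lives inside brain.js):

```javascript
// Illustrative sketch: validate a training option, throwing or warning
// depending on invalidTrainOptsShouldThrow.
function validateLearningRate(rate, invalidTrainOptsShouldThrow = true) {
  if (typeof rate === 'number' && rate > 0 && rate < 1) return true;
  const msg = `Invalid learningRate: ${rate} (expected a number between 0 and 1)`;
  if (invalidTrainOptsShouldThrow) throw new Error(msg);
  console.warn(msg);
  return false;
}
```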
### Async Training
-`trainAsync()` takes the same arguments as train (data and options). Instead of returning the results object from training it returns a promise that when resolved will return the training results object.
+`trainAsync()` takes the same arguments as train (data and options). Instead of returning the results object from training, it returns a promise that when resolved will return the training results object.
```javascript
- let net = new brain.NeuralNetwork();
+ const net = new brain.NeuralNetwork();
net
.trainAsync(data, options)
.then(res => {
@@ -169,25 +275,75 @@ There is a boolean property called `invalidTrainOptsShouldThrow` that by default
With multiple networks you can train in parallel like this:
```javascript
- var net = new brain.NeuralNetwork();
- var net2 = new brain.NeuralNetwork();
+ const net = new brain.NeuralNetwork();
+ const net2 = new brain.NeuralNetwork();
- var p1 = net.trainAsync(data, options);
- var p2 = net2.trainAsync(data, options);
+ const p1 = net.trainAsync(data, options);
+ const p2 = net2.trainAsync(data, options);
Promise
.all([p1, p2])
.then(values => {
- var res = values[0];
- var res2 = values[1];
+ const res = values[0];
+ const res2 = values[1];
console.log(`net trained in ${res.iterations} and net2 trained in ${res2.iterations}`);
// do something super cool with my 2 trained networks
})
.catch(handleError);
```
+### Cross Validation
+[Cross Validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)) can provide a less fragile way of training on larger data sets. The brain.js API provides Cross Validation as in this example:
+```js
+const crossValidate = new brain.CrossValidate(brain.NeuralNetwork, networkOptions);
+crossValidate.train(data, trainingOptions, k); // note: k (or KFolds) is optional
+const json = crossValidate.toJSON(); // all stats in json as well as neural networks
+const net = crossValidate.toNeuralNetwork(); // get top performing net out of `crossValidate`
+
+// optionally later, restore a network from the serialized stats
+const restoredNet = crossValidate.fromJSON(json);
+
+Use `CrossValidate` with these classes:
+* `brain.NeuralNetwork`
+* `brain.RNNTimeStep`
+* `brain.LSTMTimeStep`
+* `brain.GRUTimeStep`
+
+An example of using cross validate can be found in [examples/cross-validate.js](examples/cross-validate.js)
+
+### Train Stream
+Streams are a very powerful tool in Node.js for handling massive data sets spread across processes, and they are provided via the brain.js API in the following way:
+```js
+const net = new brain.NeuralNetwork();
+const trainStream = new brain.TrainStream({
+ neuralNetwork: net,
+ floodCallback: function() {
+ readInputs(trainStream, data);
+ },
+ doneTrainingCallback: function(stats) {
+ // network is done training! What next?
+ }
+});
+
+// kick it off
+readInputs(trainStream, data);
+
+function readInputs(stream, data) {
+ for (let i = 0; i < data.length; i++) {
+ stream.write(data[i]);
+ }
+ // let it know we've reached the end of the inputs
+ stream.endInputs();
+}
+```
+
+An example of using train stream can be found in [examples/stream-example.js](examples/stream-example.js)
+
# Methods
-### train
+## `train(trainingData)` -> trainingStatus
The output of `train()` is a hash of information about how the training went:
```javascript
@@ -197,24 +353,89 @@ The output of `train()` is a hash of information about how the training went:
}
```
+## `run(input)` -> prediction
+Supported on classes:
+
+* `brain.NeuralNetwork`
+* `brain.NeuralNetworkGPU` -> All the functionality of `brain.NeuralNetwork`, but run on the GPU (via gpu.js in WebGL2, WebGL1, or fallback to CPU)
+* `brain.recurrent.RNN`
+* `brain.recurrent.LSTM`
+* `brain.recurrent.GRU`
+* `brain.recurrent.RNNTimeStep`
+* `brain.recurrent.LSTMTimeStep`
+* `brain.recurrent.GRUTimeStep`
+
+Example:
+```js
+// feed forward
+const net = new brain.NeuralNetwork();
+net.fromJSON(json);
+net.run(input);
+
+// time step
+const netTimeStep = new brain.recurrent.LSTMTimeStep();
+netTimeStep.fromJSON(json);
+netTimeStep.run(input);
+
+// recurrent
+const netRecurrent = new brain.recurrent.LSTM();
+netRecurrent.fromJSON(json);
+netRecurrent.run(input);
+```
+
+## `forecast(input, count)` -> predictions
+
+Available with the following classes. Outputs an array of predictions, where the predictions are a continuation of the input.
+
+* `brain.recurrent.RNNTimeStep`
+* `brain.recurrent.LSTMTimeStep`
+* `brain.recurrent.GRUTimeStep`
+
+Example:
+
+```js
+const net = new brain.recurrent.LSTMTimeStep();
+net.fromJSON(json);
+net.forecast(input, 3);
+```
+
+## `toJSON()` -> json
+Serializes the neural network to JSON.
+
+## `fromJSON(json)`
+Deserializes the neural network from JSON.
+
# Failing
-If the network failed to train, the error will be above the error threshold. This could happen because the training data is too noisy (most likely), the network doesn't have enough hidden layers or nodes to handle the complexity of the data, or it hasn't trained for enough iterations.
+If the network failed to train, the error will be above the error threshold. This could happen if the training data is too noisy (most likely), the network does not have enough hidden layers or nodes to handle the complexity of the data, or it has not been trained for enough iterations.
+
+If the training error is still something huge like `0.4` after 20000 iterations, it's a good sign that the network can't make sense of the given data.
+
+## RNN, LSTM, or GRU Output too short or too long
+The net's `maxPredictionLength` property (default `100`) can be set to adjust the length of the output:
+
+Example:
+```js
+const net = new brain.recurrent.LSTM();
-If the training error is still something huge like `0.4` after 20000 iterations, it's a good sign that the network can't make sense of the data you're giving it.
+// later in code, after training on a few novels, write me a new one!
+net.maxPredictionLength = 1000000000; // Be careful!
+net.run('Once upon a time');
+```
# JSON
Serialize or load in the state of a trained network with JSON:
```javascript
-var json = net.toJSON();
+const json = net.toJSON();
net.fromJSON(json);
```
+# Standalone Function
You can also get a custom standalone function from a trained network that acts just like `run()`:
```javascript
-var run = net.toFunction();
-var output = run({ r: 1, g: 0.4, b: 0 });
+const run = net.toFunction();
+const output = run({ r: 1, g: 0.4, b: 0 });
console.log(run.toString()); // copy and paste! no need to import brain.js
```
@@ -222,7 +443,7 @@ console.log(run.toString()); // copy and paste! no need to import brain.js
`NeuralNetwork()` takes a hash of options:
```javascript
-var net = new brain.NeuralNetwork({
+const net = new brain.NeuralNetwork({
activation: 'sigmoid', // activation function
hiddenLayers: [4],
learningRate: 0.6 // global learning rate, useful when training using streams
@@ -230,14 +451,15 @@ var net = new brain.NeuralNetwork({
```
### activation
-This parameter lets you specify which activation function your neural network should use. There are currently four supported activation functions, **sigmoid** being the default:
+This parameter lets you specify which activation function your neural network should use. There are currently four supported activation functions, **sigmoid** being the default:
- [sigmoid](https://www.wikiwand.com/en/Sigmoid_function)
- [relu](https://www.wikiwand.com/en/Rectifier_(neural_networks))
- [leaky-relu](https://www.wikiwand.com/en/Rectifier_(neural_networks))
+ * related option: `leakyReluAlpha` (optional number, defaults to `0.01`)
- [tanh](https://theclevermachine.wordpress.com/tag/tanh-function/)
-Here's a table (Thanks, Wikipedia!) summarizing a plethora of activation functions — [Activation Function](https://www.wikiwand.com/en/Activation_function)
+Here's a table (thanks, Wikipedia!) summarizing a plethora of activation functions — [Activation Function](https://www.wikiwand.com/en/Activation_function)
### hiddenLayers
You can use this to specify the number of hidden layers in the network and the size of each layer. For example, if you want two hidden layers - the first with 3 nodes and the second with 4 nodes, you'd give:
@@ -253,7 +475,7 @@ The network now has a [WriteStream](http://nodejs.org/api/stream.html#stream_cla
### Example
-Refer to [`stream-example.js`](./examples/cli/stream-example.js) for an example on how to train the network with a stream.
+Refer to [`stream-example.js`](examples/stream-example.js) for an example on how to train the network with a stream.
### Initialization
@@ -273,26 +495,79 @@ To train the network using a stream you must first create the stream by calling
Use a [Transform](http://nodejs.org/api/stream.html#stream_class_stream_transform) to coerce the data into the correct format. You might also use a Transform stream to normalize your data on the fly.
# Utilities
+
### `likely`
+
```js
-var likely = require('brain/likely');
-var key = likely(input, net);
+const likely = require('brain/likely');
+const key = likely(input, net);
```
+
Likely example see: [simple letter detection](./examples/which-letter-simple.js)
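A sketch of what `likely` computes, inferred from its usage in that example (an assumption, not the actual implementation): run the net and return the output key with the highest value.

```javascript
// Illustrative sketch of `likely` (assumed behavior, not brain.js source):
// return the output key with the highest activation.
function likely(input, net) {
  const output = net.run(input);
  let bestKey = null;
  for (const key of Object.keys(output)) {
    if (bestKey === null || output[key] > output[bestKey]) bestKey = key;
  }
  return bestKey;
}
```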
+### `toSVG`
+
+Renders the network topology of a feedforward network.
+```js
+document.getElementById('result').innerHTML = brain.utilities.toSVG(network, options)
+```
+
+toSVG example see: [network rendering](./examples/rendering-svg.html)
+
+The user interface used:
+
+
# Neural Network Types
* [`brain.NeuralNetwork`](src/neural-network.js) - [Feedforward Neural Network](https://en.wikipedia.org/wiki/Feedforward_neural_network) with backpropagation
+* [`brain.NeuralNetworkGPU`](src/neural-network-gpu.js) - [Feedforward Neural Network](https://en.wikipedia.org/wiki/Feedforward_neural_network) with backpropagation, GPU version
+* [`brain.recurrent.RNNTimeStep`](src/recurrent/rnn-time-step.js) - [Time Step Recurrent Neural Network or "RNN"](https://en.wikipedia.org/wiki/Recurrent_neural_network)
+* [`brain.recurrent.LSTMTimeStep`](src/recurrent/lstm-time-step.js) - [Time Step Long Short Term Memory Neural Network or "LSTM"](https://en.wikipedia.org/wiki/Long_short-term_memory)
+* [`brain.recurrent.GRUTimeStep`](src/recurrent/gru-time-step.js) - [Time Step Gated Recurrent Unit or "GRU"](https://en.wikipedia.org/wiki/Gated_recurrent_unit)
* [`brain.recurrent.RNN`](src/recurrent/rnn.js) - [Recurrent Neural Network or "RNN"](https://en.wikipedia.org/wiki/Recurrent_neural_network)
* [`brain.recurrent.LSTM`](src/recurrent/lstm.js) - [Long Short Term Memory Neural Network or "LSTM"](https://en.wikipedia.org/wiki/Long_short-term_memory)
* [`brain.recurrent.GRU`](src/recurrent/gru.js) - [Gated Recurrent Unit or "GRU"](https://en.wikipedia.org/wiki/Gated_recurrent_unit)
### Why different Neural Network Types?
-Different neural nets do different things well. For example:
+Different neural nets do different things well. For example:
* A Feedforward Neural Network can classify simple things very well, but it has no memory of previous actions and has infinite variation of results.
+* A Time Step Recurrent Neural Network _remembers_, and can predict future values.
* A Recurrent Neural Network _remembers_, and has a finite set of results.
# Get Involved!
+
### Issues
+
If you have an issue, either a bug or a feature you think would benefit your project let us know and we will do our best.
Create issues [here](https://github.com/BrainJS/brain.js/issues) and follow the template.
+
+### Contributors
+
+This project exists thanks to all the people who contribute. [[Contribute](CONTRIBUTING.md)].
+
+
+
+### Backers
+
+Thank you to all our backers! 🙏 [[Become a backer](https://opencollective.com/brainjs#backer)]
+
+
+
+
+### Sponsors
+
+Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [[Become a sponsor](https://opencollective.com/brainjs#sponsor)]
+
+
+
+
+
+
+
+
+
+
+
diff --git a/__coverage__/clover.xml b/__coverage__/clover.xml
new file mode 100644
index 000000000..640693147
--- /dev/null
+++ b/__coverage__/clover.xml
@@ -0,0 +1,7 @@