Encog 3.3 Quickstart

Jeff Heaton
Chapter 1
Using Encog for Java & C#
- Encog Java Examples
- Encog C# Examples
- Using an IDE
Encog is available for both Java and .Net. The next sections will show you
how to make use of the Encog examples, as well as create your own Encog
projects.
1.1 Encog Java Examples
Encog 3.3 requires Java 1.7 or higher. If you do not already have Java installed, you will need to install it. It is important that you install Java properly, ensuring that the java executable is on your path and that the JAVA_HOME environment variable is defined.
1.1.1 Installing Java
The exact procedure to install Java varies greatly across Windows, Macintosh
and Linux. Installing Java is beyond the scope of this document. For complete
installation instructions for Java, refer to the following URL:
You can easily verify that Java is installed properly by running java -version and echoing JAVA_HOME. Here I perform this test on Windows:
Microsoft Windows [Version 6.3.9600]
(c) 2013 Microsoft Corporation. All rights reserved.

C:\Users\Jeff>java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

C:\Users\Jeff>echo %JAVA_HOME%
C:\java\jdk1.7.0_45

C:\Users\Jeff>
Now that you are sure Java is installed, you are ready to download Encog.
1.1.2 Downloading Encog
All of the important Encog links can be found at the following URL.
http://www.encog.org
At the above link you will find instructions for downloading the latest
version of Encog.
It is also possible to obtain the Encog examples directly from GitHub. The
following command will pull the latest Encog examples:

git clone https://github.com/encog/encog-java-examples.git
Once you've obtained the Encog examples, you are ready to run them.
1.1.3 Running the Examples
All Encog examples can be run from the command line using the Gradle build
management system. It is not necessary to have Gradle installed to run the
examples. However, Gradle can be very useful when you choose to create your
own Encog projects. Gradle allows you to specify Encog as a dependency to
your project and download the correct version of Encog automatically. The
examples contain the Gradle wrapper. If you simply use the Gradle wrapper
you do not need to download and install Gradle. The following instructions
assume that you are using the Gradle wrapper.
If you are using a Linux/UNIX operating system, it may be necessary
to grant gradlew permission to execute. To do this, execute the following
command from the Encog examples directory.
chmod +x ./gradlew
You can use the following Gradle command to determine what examples you
can run.
gradlew tasks
This will list all of the Encog examples and the tasks to run them. For example,
to run the XOR neural network Hello World example, use the following
command in Windows:
gradlew runHelloWorld
:classes UP-TO-DATE
:runHelloWorld
Epoch #1 Error:0.32169908465997293
Epoch #2 Error:0.3001583911638903
Epoch #3 Error:0.27814800047830207
Epoch #4 Error:0.2591350408848929
Epoch #5 Error:0.24807257611353625
Epoch #6 Error:0.24623233964519337
Epoch #7 Error:0.2448993459247424
Epoch #8 Error:0.24054454230164823
Epoch #9 Error:0.2368200193886572
Epoch #10 Error:0.23219970754041114
...
Epoch #96 Error:0.017080335499927907
Epoch #97 Error:0.01248703123018649
Epoch #98 Error:0.00918572008572443
Neural Network Results:
0.0,0.0, actual=0.037434516460193114,ideal=0.0
1.0,0.0, actual=0.8642455025347225,ideal=1.0
0.0,1.0, actual=0.8950073477748369,ideal=1.0
1.0,1.0, actual=0.0844306876871185,ideal=0.0
BUILD SUCCESSFUL

Total time: 4.401 secs
[jheaton@jeffdev encog-java-examples]$
The XOR Hello World application shows how to train a neural network to
learn the XOR function.
1.1.4 Creating Your Own Project
        </execution>
      </executions>
      <configuration>
        <mainClass>HelloWorld</mainClass>
      </configuration>
    </plugin>
  </plugins>
</build>
<dependencies>
  <dependency>
    <groupId>org.encog</groupId>
    <artifactId>encog-core</artifactId>
    <version>3.3.0</version>
  </dependency>
</dependencies>
</project>
The Gradle and Maven project files both make use of Listing 1.3.
Listing 1.3: Sample Encog Application (HelloWorld.java)
import org.encog.Encog;
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLData;
import org.encog.ml.data.MLDataPair;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

public class HelloWorld {

    /** The input necessary for XOR. */
    public static double XOR_INPUT[][] = { { 0.0, 0.0 },
            { 1.0, 0.0 }, { 0.0, 1.0 }, { 1.0, 1.0 } };

    /** The ideal data necessary for XOR. */
    public static double XOR_IDEAL[][] = { { 0.0 },
            { 1.0 }, { 1.0 }, { 0.0 } };

    public static void main(final String args[]) {
        // Create a simple feedforward network: 2 inputs, 3 hidden, 1 output.
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 2));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
        network.getStructure().finalizeStructure();
        network.reset();

        // Create the training data.
        MLDataSet trainingSet = new BasicMLDataSet(XOR_INPUT, XOR_IDEAL);

        // Train the neural network with resilient propagation.
        final ResilientPropagation train = new ResilientPropagation(network, trainingSet);
        int epoch = 1;
        do {
            train.iteration();
            System.out.println("Epoch #" + epoch + " Error:" + train.getError());
            epoch++;
        } while (train.getError() > 0.01);
        train.finishTraining();

        // Test the neural network.
        System.out.println("Neural Network Results:");
        for (MLDataPair pair : trainingSet) {
            final MLData output = network.compute(pair.getInput());
            System.out.println(pair.getInput().getData(0) + "," + pair.getInput().getData(1)
                    + ", actual=" + output.getData(0) + ",ideal="
                    + pair.getIdeal().getData(0));
        }

        Encog.getInstance().shutdown();
    }
}
You can find this complete example on GitHub at the following URL.
https://github.com/encog/encog-sample-java
To run the project under Gradle, use the following command:
gradle runExample
1.1.5 Using an IDE
There are a number of different IDEs for the Java programming language.
Additionally, there are a number of different ways to make use of a third party
library, such as Encog, in each IDE. I make use of IntelliJ and simply import
the Gradle project. This allows my project to easily be used from either an
IDE or the command line. You might also be able to instruct your IDE to pull
the Encog JAR from Maven central:
http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22encog-core%22
1.2 Encog C# Examples
Encog 3.3 requires Microsoft .Net 3.5 or higher. This is normally installed
with Visual Studio. For more information about .Net visit the following URL:
http://www.microsoft.com/net
Encog can be used with any .Net programming language. The instructions in this guide pertain to using Encog with C#. With some adaptation, these instructions are also useful for other .Net languages.
1.2.1 Downloading Encog
All of the important Encog links can be found at the following URL.
http://www.encog.org
At the above link you will find instructions for downloading the latest
version of Encog.
It is also possible to obtain the Encog examples directly from GitHub. The
following command will pull the latest Encog examples and core:

git clone https://github.com/encog/encog-dotnet-core.git
Once you've obtained the Encog examples, you are ready to run them.
1.2.2 Running the Examples
The Encog C# examples and core are both contained in the encog-core-cs.sln solution file, as seen in Figure 1.1.
Figure 1.1: Encog C# Examples and Core
As you can see, we specified the xor example and requested that a pause occur before the program exits. You can also specify "?" to see all available examples. This will produce output similar to the following.
adaline-digits      : ADALINE Digits
analyst             : Encog Analyst
art1-classify       : Classify Patterns with ART1
bam                 : Bidirectional Associative Memory
bayesian-taxi       : The taxi cab problem with Bayesian networks.
benchmark           : Perform an Encog benchmark.
benchmark-elliott   : Perform a benchmark of the Elliott activation function.
benchmark-simple    : Perform a simple Encog benchmark.
cpn                 : Counter Propagation Neural Network (CPN)
csvmarket           : Simple Market Prediction
CSVPredict          : CSVPredict
encoder             : A Fahlman encoder.
epl-simple          : Simple EPL equation solve.
forest              : Forest Cover
Forex               : Predict Forex rates via CSV.
freeform-convert    : Freeform Network: convert flat network to freeform
freeform-elman      : Freeform Network: Elman SRN
freeform-online-xor : Freeform Network: Online XOR
freeform-skip       : Freeform Network: Skip network
...
xor                 : ... helper functions.
xor-elman           :
xor-factory         : ... network types.
xor-jordan          :
xor-neat            :
xor-online          : ... training.
xor-pso             : ... training.
1.2.3 Creating Your Own Project
Click the Install button, and Encog will be added to your project. You
should now modify your Program.cs file to look similar to the below example
in Listing 1.4. Note that I named my project encog-sample-csharp; your namespace line will match your project name.
Listing 1.4: Simple C# XOR Example
You can find this complete example at the following GitHub URL:
https://github.com/encog/encog-sample-csharp
Chapter 2
Encog Quick Start Examples
- Using Encog for Classification
- Using Encog for Regression
- Using Encog for Time Series
This chapter will take you through three non-trivial Encog examples. These
examples are designed to be starting points for your own projects. These
examples demonstrate classification, regression and time-series.
2.1 Using Encog for Classification
Classification problems seek to place data set elements into predefined classes. The dataset used for this example is Fisher's Iris dataset. This is a classic dataset that contains measurements for 150 different iris flowers; each flower has four measurements, and the species of iris is also provided. For this example we would like to train a machine-learning model
to classify the species of iris given the four measurements. This dataset can
be found at the following URL:
https://archive.ics.uci.edu/ml/datasets/Iris
A sampling of the dataset is shown here.
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
7.0,3.2,4.7,1.4,Iris-versicolor
6.4,3.2,4.5,1.5,Iris-versicolor
6.3,3.3,6.0,2.5,Iris-virginica
5.8,2.7,5.1,1.9,Iris-virginica
This dataset has no column headers and is comma-delimited. Each additional
line provides the measurements and species of a particular flower.
We will create a program that generates a model to predict the type of iris,
based on the four measurements. This program will allow us to easily change
the model type to any of the following:
- Feedforward Neural Network
- NEAT Neural Network
- Probabilistic Neural Network
- RBF Neural Network
- Support Vector Machine
When you change the model type, Encog will automatically change the way
that the data are normalized.
This program will split the training data into a training and validation set.
The validation set will be held until the end to see how well we can predict
data that the model was not trained on. Training will be performed using a
5-fold cross-validation.
This complete example can be found with the Encog examples. The Java
version contains this example here:
org.encog.examples.guide.classification.IrisClassification
The C# version can be executed with the argument guide-iris, and can be
found at the following location:
Encog.Examples.Guide.Classification.IrisClassification
2.1.1 Defining the Data
    "species", 4, ColumnType.Nominal);

data.Analyze();
The final step is to call the Analyze method. This reads the entire file and
determines the minimum, maximum, mean and standard deviations for each
column. These statistics will be useful for both normalization and interpolation
of missing values. Fortunately, the iris data set has no missing values.
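For context, the complete sequence of column definitions and analysis looks roughly like the following Java sketch. It assumes the dataset has been downloaded to a local file irisFile, and uses the Encog 3.3 versatile-dataset classes under org.encog.ml.data.versatile; the column names are illustrative.

// Wrap the CSV file (no headers, decimal-point format) as a data source.
VersatileDataSource source = new CSVDataSource(irisFile, false,
    CSVFormat.DECIMAL_POINT);
VersatileMLDataSet data = new VersatileMLDataSet(source);

// The four measurements are continuous; the species is nominal.
data.defineSourceColumn("sepal-length", 0, ColumnType.continuous);
data.defineSourceColumn("sepal-width", 1, ColumnType.continuous);
data.defineSourceColumn("petal-length", 2, ColumnType.continuous);
data.defineSourceColumn("petal-width", 3, ColumnType.continuous);
ColumnDefinition outputColumn = data.defineSourceColumn(
    "species", 4, ColumnType.nominal);

// Read the entire file and gather min/max/mean/sd for every column.
data.analyze();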
2.1.2 Normalizing the Data
Before we can normalize the data, we must choose our desired model type.
The model type often dictates how the data should be normalized. For this
example, I will use a feedforward neural network. We must also specify the
column that we are going to predict. In this case, we are predicting the iris
species. Because the iris species is non-numeric, this is a classification problem.
Performing a regression problem is simply a matter of choosing to predict a
numeric column.
We also choose to send all output to the console. Now that everything is
set, we can normalize. The normalization process will load the CSV file into
memory and normalize the data as it is loaded.
The following Java code accomplishes this.
// Map the prediction column to the output of the model, and all
// other columns to the input.
data.defineSingleOutputOthersInput(outputColumn);

EncogModel model = new EncogModel(data);
model.selectMethod(data, MLMethodFactory.TYPE_FEEDFORWARD);

// Send any output to the console.
model.setReport(new ConsoleStatusReportable());

data.normalize();
2.1.3 Fitting the Model
Before we fit the model, we hold back part of the data for a validation set; we choose to hold back 30%. We also randomize the data set with a fixed seed value. This fixed seed ensures that we get the same training and validation sets each time, which is a matter of preference; if you want a random sample each time, pass in the current time for the seed. Finally, we fit the model with a k-fold cross-validation of size 5.
The following Java code accomplishes this.
model.holdBackValidation(0.3, true, 1001);
model.selectTrainingType(data);
MLRegression bestMethod = (MLRegression) model.crossvalidate(5, true);
2.1.4 Displaying the Results
We can now display several of the errors. We can check the training error and
validation errors. We can also display the stats gathered on the data.
The following Java code accomplishes this.
System.out.println("Training error: "
    + EncogUtility.calculateRegressionError(bestMethod,
        model.getTrainingDataset()));
System.out.println("Validation error: "
    + EncogUtility.calculateRegressionError(bestMethod,
        model.getValidationDataset()));

NormalizationHelper helper = data.getNormHelper();
System.out.println(helper.toString());
System.out.println("Final model: " + bestMethod);
2.1.5 Using the Model
Once you've trained a model, you will likely want to use it. The best model can be saved using normal serialization. However, you will need a way to normalize data going into the model, and denormalize data coming out of the model. The normalization helper object, obtained in the previous section, can do this for you. You can also serialize the normalization helper.
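A minimal sketch of that save-and-restore step, using standard Java serialization; the file name iris-model.ser is illustrative.

// Save the trained model and the normalization helper together.
try (ObjectOutputStream out = new ObjectOutputStream(
        new FileOutputStream("iris-model.ser"))) {
    out.writeObject(bestMethod);
    out.writeObject(helper);
}

// Later, restore both before making predictions.
try (ObjectInputStream in = new ObjectInputStream(
        new FileInputStream("iris-model.ser"))) {
    MLRegression restoredMethod = (MLRegression) in.readObject();
    NormalizationHelper restoredHelper = (NormalizationHelper) in.readObject();
}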
The following Java code opens the CSV file and predicts each iris using the
best model and normalization helper.
ReadCSV csv = new ReadCSV(irisFile, false, CSVFormat.DECIMAL_POINT);
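The rest of the loop mirrors the regression loop shown in Section 2.2.5; a sketch, assuming the helper and bestMethod objects from the previous steps:

String[] line = new String[4];
MLData input = helper.allocateInputVector();

while (csv.next()) {
    StringBuilder result = new StringBuilder();

    // The four measurements are the model input.
    line[0] = csv.get(0);
    line[1] = csv.get(1);
    line[2] = csv.get(2);
    line[3] = csv.get(3);

    // The species column is the expected outcome.
    String correct = csv.get(4);

    helper.normalizeInputVector(line, input.getData(), false);
    MLData output = bestMethod.compute(input);
    String irisChosen = helper.denormalizeOutputVectorToString(output)[0];

    result.append(Arrays.toString(line));
    result.append(" -> predicted: ");
    result.append(irisChosen);
    result.append("(correct: ");
    result.append(correct);
    result.append(")");
    System.out.println(result.toString());
}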
The output from this program will look similar to the following. First the
program downloads the data set and begins training. Training occurs over 5
folds. Each fold uses a separate portion of the training data as validation.
The remaining portion of the training data is used to train the model for that
fold. Each fold gives us a different model; we choose the model with the best
validation score. We train until the validation score ceases to improve. This
helps to prevent over-fitting. The first fold trains for 48 iterations before it
stops:
Downloading Iris dataset to: /var/folders/m5/gbcvpwzj7gjdb41z1x9rzch0000gn/T/iris.csv
1/5 : Fold #1
1/5 : Fold #1/5: Iteration #1, Training Error: 1.34751708, Validation Error: 1.42040606
1/5 : Fold #1/5: Iteration #2, Training Error: 0.99412971, Validation Error: 1.42040606
...
1/5 : Fold #1/5: Iteration #47, Training Error: 0.03025748, Validation Error: 0.00397662
1/5 : Fold #1/5: Iteration #48, Training Error: 0.03007620, Validation Error: 0.00558196
The first fold had a very good validation error, and we move on to the second
fold.
2/5 : Fold #2
2/5 : Fold #2/5: Iteration #1, Training Error: 1.10153372, Validation Error: 1.22069520
2/5 : Fold #2/5: Iteration #2, Training Error: 0.58543151, Validation Error: 1.22069520
...
2/5 : Fold #2/5: Iteration #28, Training Error: 0.04351376, Validation Error: 0.15599265
The second fold did not have a very good validation error. It is important to note that the folds are independent of each other. Each fold starts with a new model.
3/5 : Fold #3
3/5 : Fold #3/5: Iteration #1, Training Error: 1.13685270, Validation Error: 1.09062392
3/5 : Fold #3/5: Iteration #2, Training Error: 0.78567165, Validation Error: 1.09062392
...
3/5 : Fold #3/5: Iteration #47, Training Error: 0.01850279, Validation Error: 0.04417794
3/5 : Fold #3/5: Iteration #48, Training Error: 0.01889085, Validation Error: 0.05261448
Fold 3 did somewhat better than fold 2, but not as well as fold 1. We now begin fold 4.
4/5 : Fold #4
4/5 : Fold #4/5: Iteration #1, Training Error: 1.15492772, Validation Error: 1.17098262
4/5 : Fold #4/5: Iteration #2, Training Error: 0.56095813, Validation Error: 1.17098262
...
4/5 : Fold #4/5: Iteration #41, Training Error: 0.01982776, Validation Error: 0.08958218
4/5 : Fold #4/5: Iteration #42, Training Error: 0.02225716, Validation Error: 0.09186468
After fold 5 is complete, we report the cross-validated score that is the average
of all 5 validation scores. This should give us a reasonable estimate of how
well the model might perform on data that it was not trained with. Using the
best model, from the 5 folds, we now evaluate it with the training data and
the true validation data that we set aside earlier.
Training error: 0.023942862952610295
Validation error: 0.061413317688009464
As you can see, the training error is lower than the validation error. This
is normal, as models always tend to perform better on data that they were
trained with. However, it is important to note that the validation error is close
to the cross-validated error. The cross-validated error will often give us a good
estimate of how our model will perform on untrained data.
Finally, we display the normalization data. This shows us the min, max,
mean and standard deviation for each column.
[NormalizationHelper:
[ColumnDefinition: sepal-length(continuous); low=4.300000, high=7.900000, mean=5.843333, sd=0.825301]
[ColumnDefinition: sepal-width(continuous); low=2.000000, high=4.400000, mean=3.054000, sd=0.432147]
[ColumnDefinition: petal-length(continuous); low=1.000000, high=6.900000, mean=3.758667, sd=1.758529]
[ColumnDefinition: petal-width(continuous); low=0.100000, high=2.500000, mean=1.198667, sd=0.760613]
[ColumnDefinition: species(nominal); [Iris-setosa, Iris-versicolor, Iris-virginica]]
]
Finally, we loop over the entire dataset and display predictions. This part
of the example shows you how to use the model with new data you might
acquire. However, for new data, you might not have the correct outcome, as
that is what you seek to predict.
Final model: [BasicNetwork: Layers=3]
[5.1, 3.5, 1.4, 0.2] -> predicted: Iris-setosa(correct: Iris-setosa)
[4.9, 3.0, 1.4, 0.2] -> predicted: Iris-setosa(correct: Iris-setosa)
[7.0, 3.2, 4.7, 1.4] -> predicted: Iris-versicolor(correct: Iris-versicolor)
[6.4, 3.2, 4.5, 1.5] -> predicted: Iris-versicolor(correct: Iris-versicolor)
[6.3, 3.3, 6.0, 2.5] -> predicted: Iris-virginica(correct: Iris-virginica)
...

2.2 Using Encog for Regression
Regression problems seek to produce a numeric outcome from the input data. In this section we will create a model that attempts to predict the miles per gallon (MPG) that a particular car will achieve. This example makes use of the UCI Auto MPG dataset, which can be found at the following URL:
https://archive.ics.uci.edu/ml/datasets/Auto+MPG
A sampling of the dataset is shown here.
18.0   8   307.0   130.0   3504.   12.0   70   1   chevrolet chevelle malibu
15.0   8   350.0   165.0   3693.   11.5   70   1   buick skylark 320
18.0   8   318.0   150.0   3436.   11.0   70   1   plymouth satellite
16.0   8   304.0   150.0   3433.   12.0   70   1   amc rebel sst
17.0   8   302.0   140.0   3449.   10.5   70   1   ford torino
As you can see from the data, there are no column headings and the data is space-separated. This must be considered when mapping the file to a dataset. The UCI database tells us that the columns represent the following:
1. mpg:           continuous
2. cylinders:     multi-valued discrete
3. displacement:  continuous
4. horsepower:    continuous
5. weight:        continuous
6. acceleration:  continuous
7. model year:    multi-valued discrete
8. origin:        multi-valued discrete
9. car name:      string (unique for each instance)
We will create a program that generates a model to predict the MPG for the
car, based on some of the other values. This program will allow us to easily
change the model type to any of the following:
- Feedforward Neural Network
- NEAT Neural Network
- Probabilistic Neural Network
- RBF Neural Network
- Support Vector Machine
When you change the model type, Encog will automatically change the way
that the data are normalized.
This program will split the training data into a training and validation set.
The validation set will be held until the end to see how well we can predict
data that the model was not trained on. Training will be performed using a
5-fold cross-validation.
This complete example can be found with the Encog examples. The Java
version contains this example here.
org.encog.examples.guide.regression.AutoMPGRegression
2.2.1 Defining the Data
ColumnDefinition columnHorsePower = data.DefineSourceColumn(
    "horsepower", 3, ColumnType.Continuous);
data.DefineSourceColumn("weight", 4, ColumnType.Continuous);
data.DefineSourceColumn("acceleration", 5, ColumnType.Continuous);
ColumnDefinition columnModelYear = data.DefineSourceColumn(
    "model year", 6, ColumnType.Ordinal);
columnModelYear.DefineClass(new[] {
    "70", "71", "72", "73", "74", "75", "76",
    "77", "78", "79", "80", "81", "82" });
data.DefineSourceColumn("origin", 7, ColumnType.Nominal);

// Define how missing values are represented.
data.NormHelper.DefineUnknownValue("?");
data.NormHelper.DefineMissingHandler(columnHorsePower,
    new MeanMissingHandler());

// Analyze the data, determine the min/max/mean/sd
// of every column.
data.Analyze();
The final step is to call the Analyze method. This reads the entire file and determines the minimum, maximum, mean and standard deviation for each column. These statistics are useful for both normalization and interpolation of missing values. Unlike the iris dataset, the auto MPG data does have missing values in the horsepower column, which is why a missing-value handler is defined for it above.
2.2.2 Normalizing the Data
Before we can normalize the data, we must choose our desired model type.
The model type often dictates how the data should be normalized. For this
example, I will use a feedforward neural network. We must also specify the
column that we are going to predict. In this case, we are predicting the
mpg value. Because the MPG value is numeric, this is a regression problem.
Performing a classification problem is simply a matter of choosing to predict
a non-numeric column, as we did in the last section.
We also choose to send all output to the console. Now that everything is
set we can normalize. The normalization process will load the CSV file into
memory and normalize the data as it is loaded.
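The Java code for this step mirrors the classification version in Section 2.1.2, with the MPG column mapped to the output; a sketch, assuming a columnMPG definition captured when the columns were defined:

// Map the prediction column (MPG) to the output of the model,
// and all other columns to the input.
data.defineSingleOutputOthersInput(columnMPG);

EncogModel model = new EncogModel(data);
model.selectMethod(data, MLMethodFactory.TYPE_FEEDFORWARD);

// Send any output to the console.
model.setReport(new ConsoleStatusReportable());

// Load the CSV into memory and normalize it as it is loaded.
data.normalize();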
2.2.3 Fitting the Model
Before we fit the model, we hold back part of the data for a validation set; we choose to hold back 30%. We also randomize the data set with a fixed seed value. This fixed seed ensures that we get the same training and validation sets each time, which is a matter of preference; if you want a random sample each time, pass in the current time for the seed. Finally, we fit the model with a k-fold cross-validation of size 5.
The following Java code accomplishes this.
model.holdBackValidation(0.3, true, 1001);
model.selectTrainingType(data);
MLRegression bestMethod = (MLRegression) model.crossvalidate(5, true);
2.2.4 Displaying the Results
We can now display several of the errors. We can check the training error and
validation errors. We can also display the stats gathered on the data.
The following Java code accomplishes this.
// Display the training and validation errors.
System.out.println("Training error: "
    + model.calculateError(bestMethod, model.getTrainingDataset()));
System.out.println("Validation error: "
    + model.calculateError(bestMethod, model.getValidationDataset()));

// Display our normalization parameters.
NormalizationHelper helper = data.getNormHelper();
System.out.println(helper.toString());

// Display the final model.
System.out.println("Final model: " + bestMethod);
The equivalent C# code:

// Display the training and validation errors.
Console.WriteLine(@"Training error: "
    + model.CalculateError(bestMethod, model.TrainingDataset));
Console.WriteLine(@"Validation error: "
    + model.CalculateError(bestMethod, model.ValidationDataset));

// Display our normalization parameters.
NormalizationHelper helper = data.NormHelper;
Console.WriteLine(helper.ToString());

// Display the final model.
Console.WriteLine("Final model: " + bestMethod);
2.2.5 Using the Model
Once you've trained a model, you will likely want to use it. The best model can be saved using normal serialization. However, you will need a way to normalize data going into the model, and denormalize data coming out of the model. The normalization helper object, obtained in the previous section, can do this for you. You can also serialize the normalization helper.

The following Java code opens the CSV file and predicts each car's MPG using the best model and normalization helper.
ReadCSV csv = new ReadCSV(filename, false, format);
String[] line = new String[7];
MLData input = helper.allocateInputVector();

while (csv.next()) {
    StringBuilder result = new StringBuilder();
    line[0] = csv.get(1);
    line[1] = csv.get(2);
    line[2] = csv.get(3);
    line[3] = csv.get(4);
    line[4] = csv.get(5);
    line[5] = csv.get(6);
    line[6] = csv.get(7);
    String correct = csv.get(0);
    helper.normalizeInputVector(line, input.getData(), false);
    MLData output = bestMethod.compute(input);
    String irisChosen = helper.denormalizeOutputVectorToString(output)[0];
    result.append(Arrays.toString(line));
    result.append(" -> predicted: ");
    result.append(irisChosen);
    result.append("(correct: ");
    result.append(correct);
    result.append(")");
    System.out.println(result.toString());
}
The output from this program will look similar to the following. First the
program downloads the data set and begins training. Training occurs over 5
folds. Each fold uses a separate portion of the training data as validation.
The remaining portion of the training data is used to train the model for that
fold. Each fold gives us a different model; we choose the model with the best
validation score. We train until the validation score ceases to improve. This
helps to prevent over-fitting. The first fold trains for 60 iterations before it
stops:
Downloading auto-mpg dataset to: /var/folders/m5/gbcvpwzj7gjdb41z1x9rzch0000gn/T/auto-mpg.data
1/5 : Fold #1
1/5 : Fold #1/5: Iteration #1, Training Error: 1.58741311, Validation Error: 1.38996414
1/5 : Fold #1/5: Iteration #2, Training Error: 1.48792340, Validation Error: 1.38996414
1/5 : Fold #1/5: Iteration #3, Training Error: 1.45292108, Validation Error: 1.38996414
1/5 : Fold #1/5: Iteration #4, Training Error: 1.25876413, Validation Error: 1.38996414
1/5 : Fold #1/5: Iteration #5, Training Error: 1.10317339, Validation Error: 1.38996414
...
1/5 : Fold #1/5: Iteration #60, Training Error: 0.01503148, Validation Error: 0.02394547
The first fold stopped with a validation error of 0.02. The second fold continues.

2/5 : Fold #2
2/5 : Fold #2/5: Iteration #1, Training Error: 0.41743768, Validation Error: 0.38868284
2/5 : Fold #2/5: Iteration #2, Training Error: 0.29303614, Validation Error: 0.38868284
2/5 : Fold #2/5: Iteration #3, Training Error: 0.23245726, Validation Error: 0.38868284
2/5 : Fold #2/5: Iteration #4, Training Error: 0.23780972, Validation Error: 0.38868284
2/5 : Fold #2/5: Iteration #5, Training Error: 0.12788026, Validation Error: 0.38868284
2/5 : Fold #2/5: Iteration #6, Training Error: 0.10327476, Validation Error: 0.06406355
2/5 : Fold #2/5: Iteration #7, Training Error: 0.06530528, Validation Error: 0.06406355
2/5 : Fold #2/5: Iteration #8, Training Error: 0.07534470, Validation Error: 0.06406355
...
The second fold stops with a validation error of 0.02. It is important to note that the folds are independent of each other. Each fold starts with a new model.
3/5 : Fold #3
3/5 : Fold #3/5: Iteration #1, Training Error: 0.51587682, Validation Error: 0.62952953
3/5 : Fold #3/5: Iteration #2, Training Error: 0.40655151, Validation Error: 0.62952953
3/5 : Fold #3/5: Iteration #3, Training Error: 0.39780736, Validation Error: 0.62952953
3/5 : Fold #3/5: Iteration #4, Training Error: 0.29733447, Validation Error: 0.62952953
3/5 : Fold #3/5: Iteration #5, Training Error: 0.29933895, Validation Error: 0.62952953
...
3/5 : Fold #3/5: Iteration #90, Training Error: 0.01364865, Validation Error: 0.02184541
4/5 : Fold #4
4/5 : Fold #4/5: Iteration #1, Training Error: 0.66926738, Validation Error: 0.71307852
4/5 : Fold #4/5: Iteration #2, Training Error: 0.44893095, Validation Error: 0.71307852
4/5 : Fold #4/5: Iteration #3, Training Error: 0.55186651, Validation Error: 0.71307852
4/5 : Fold #4/5: Iteration #4, Training Error: 0.53754145, Validation Error: 0.71307852
4/5 : Fold #4/5: Iteration #5, Training Error: 0.23648463, Validation Error: 0.71307852
...
4/5 : Fold #4/5: Iteration #108, Training Error: 0.01597952, Validation Error: 0.01835486
5/5 : Fold #5
5/5 : Fold #5/5: Iteration #1, Training Error: 1.43940573, Validation Error: 1.36648367
5/5 : Fold #5/5: Iteration #2, Training Error: 0.57334529, Validation Error: 1.36648367
5/5 : Fold #5/5: Iteration #3, Training Error: 0.65765025, Validation Error: 1.36648367
5/5 : Fold #5/5: Iteration #4, Training Error: 0.42384536, Validation Error: 1.36648367
...
After fold 5 is complete, we report the cross-validated score that is the average
of all 5 validation scores. This should give us a reasonable estimate of how
well the model might perform on data that it was not trained with. Using the
best model, from the 5 folds, we now evaluate it with the training data and
the true validation data that we set aside earlier.
5/5 : Cross-validated score: 0.02405311775325248
Training error: 0.016437770234365972
Validation error: 0.022529531723353303
As you can see, the training error is lower than the validation error. This
is normal, as models always tend to perform better on data that they were
trained with. However, it is important to note that the validation error is close
to the cross-validated error. The cross-validated error will often give us a good
estimate of how our model will perform on untrained data.
Finally, we display the normalization data. This shows us the min, max,
mean and standard deviation for each column.
[NormalizationHelper:
[ColumnDefinition: mpg(continuous); low=9.000000, high=46.600000, mean=23.514573, sd=7.806159]
[ColumnDefinition: cylinders(ordinal); [3, 4, 5, 6, 8]]
[ColumnDefinition: displacement(continuous); low=68.000000, high=455.000000, mean=193.425879, sd=104.138764]
[ColumnDefinition: horsepower(continuous); low=?, high=?, mean=?, sd=?]
[ColumnDefinition: weight(continuous); low=1,613.000000, high=5,140.000000, mean=2,970.424623, sd=845.777234]
[ColumnDefinition: acceleration(continuous); low=8.000000, high=24.800000, mean=15.568090, sd=2.754222]
[ColumnDefinition: model year(ordinal); [70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82]]
[ColumnDefinition: origin(nominal); [1, 3, 2]]
]
Final model: [BasicNetwork: Layers=3]
Finally, we loop over the entire dataset and display predictions. This part
of the example shows you how to use the model with new data you might
acquire. However, for new data, you might not have the correct outcome, as
that is what you seek to predict.
[8, 307.0, 130.0, 3504., 12.0, 70, 1] -> predicted: 14.435441733777008(correct: 18.0)
[8, 350.0, 165.0, 3693., 11.5, 70, 1] -> predicted: 13.454496578812098(correct: 15.0)
[8, 318.0, 150.0, 3436., 11.0, 70, 1] -> predicted: 14.388722851782898(correct: 18.0)
[8, 304.0, 150.0, 3433., 12.0, 70, 1] -> predicted: 14.72605875261915(correct: 16.0)
[8, 302.0, 140.0, 3449., 10.5, 70, 1] -> predicted: 14.418818543779944(correct: 17.0)
[8, 429.0, 198.0, 4341., 10.0, 70, 1] -> predicted: 12.399521136402008(correct: 15.0)
[8, 454.0, 220.0, 4354., 9.0, 70, 1] -> predicted: 12.518569151158149(correct: 14.0)
[8, 440.0, 215.0, 4312., 8.5, 70, 1] -> predicted: 12.555365172162254(correct: 14.0)
[8, 455.0, 225.0, 4425., 10.0, 70, 1] -> predicted: 12.388570799526281(correct: 14.0)
[8, 390.0, 190.0, 3850., 8.5, 70, 1] -> predicted: 12.969680895760376(correct: 15.0)
[8, 383.0, 170.0, 3563., 10.0, 70, 1] -> predicted: 13.504299010941919(correct: 15.0)
[8, 340.0, 160.0, 3609., 8.0, 70, 1] -> predicted: 13.47743472814497(correct: 14.0)
[8, 400.0, 150.0, 3761., 9.5, 70, 1] -> predicted: 13.076737534131402(correct: 15.0)
[8, 455.0, 225.0, 3086., 10.0, 70, 1] -> predicted: 14.54484159281664(correct: 14.0)
[4, 113.0, 95.00, 2372., 15.0, 70, 3] -> predicted: 24.169018638449415(correct: 24.0)
...
2.3 Using Encog for Time Series
A sampling of the dataset, which contains the monthly sunspot number (SSN) and its standard deviation (DEV), is shown here.

MON   SSN     DEV
1     58.0    24.1
2     62.6    25.1
3     70.0    26.6
4     55.7    23.6
5     85.0    29.4
6     83.5    29.2
7     94.8    31.1
8     66.3    25.9
9     75.9    27.7
10    75.5    27.7
11    158.6   40.6
12    85.2    29.5
1     73.3    27.3
2     75.9    27.7
3     89.2    30.2
4     88.3    30.0
We will create a program that generates a model to predict the sunspots for a
month, based on previous values. This program will allow us to easily change
the model type to any of the following:
- Feedforward Neural Network
- NEAT Neural Network
- Probabilistic Neural Network
- RBF Neural Network
- Support Vector Machine
When you change the model type, Encog will automatically change the way
that the data are normalized.
This program will split the training data into a training and validation set.
The validation set will be held until the end to see how well we can predict
data that the model was not trained on. Training will be performed using a
5-fold cross-validation.
This complete example can be found with the Encog examples. The Java
version contains this example here.
org.encog.examples.guide.timeseries.SunSpotTimeseries
2.3.1 Time-Boxing the Data
The VersatileMLDataSet allows you to specify a lead and a lag for time-boxing. We are using a lag of 3 and a lead of 1. This means that we will use the last three SSN and DEV values to predict the next one. It takes a few months to build up the lag; because of this, we cannot use the first two months to generate a prediction. Figure 2.2 shows how the time-box is built up.
You can also specify a lead value and predict further into the future than
just one unit. Not all model types support this. A model type must support
multiple outputs to predict further into the future than one unit. Neural
networks are a good choice for multiple outputs; however, models such as
support vector machines do not.
If you would like to predict further into the future than one unit, there are ways of doing this without multiple outputs. You can use your predicted value as part of the lag values and extrapolate as far into the future as you wish; a sketch of this recursive approach follows the listing below. The following shows how the numbers 1 through 10 would look with different lead and lag values.
Lag 0; Lead 0 [10 rows] 1->1 2->2 3->3 4->4 5->5 6->6 7->7 8->8 9->9 10->10
Lag 0; Lead 1 [9 rows] 1->2 2->3 3->4 4->5 5->6 6->7 7->8 8->9 9->10
Lag 1; Lead 0 [9 rows, not useful] 1,2->1 2,3->2 3,4->3 4,5->4 5,6->5 6,7->6 7,8->7 8,9->8 9,10->9
Lag 1; Lead 1 [8 rows] 1,2->3 2,3->4 3,4->5 4,5->6 5,6->7 6,7->8 7,8->9 8,9->10
Lag 1; Lead 2 [7 rows] 1,2->3,4 2,3->4,5 3,4->5,6 4,5->6,7 5,6->7,8 6,7->8,9 7,8->9,10
Lag 2; Lead 1 [7 rows] 1,2,3->4 2,3,4->5 3,4,5->6 4,5,6->7 5,6,7->8 6,7,8->9 7,8,9->10
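A minimal sketch of the recursive strategy described above, assuming a trained single-output regression model; the method name forecast and the parameter names are illustrative:

/**
 * Extrapolate several steps ahead by feeding each prediction
 * back into the lag window.
 */
public static double[] forecast(MLRegression model, double[] window, int steps) {
    double[] lag = window.clone();       // the most recent observed values
    double[] result = new double[steps];
    for (int i = 0; i < steps; i++) {
        double predicted = model.compute(new BasicMLData(lag)).getData(0);
        result[i] = predicted;
        // Shift the window left and append the prediction.
        System.arraycopy(lag, 1, lag, 0, lag.length - 1);
        lag[lag.length - 1] = predicted;
    }
    return result;
}

Keep in mind that each recursive step feeds a prediction, not an observation, back into the window, so errors compound the further ahead you extrapolate.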
2.3.2 Normalizing the Data
Before we can normalize the data, we must choose our desired model type.
The model type often dictates how the data should be normalized. For this
example, I will use a feedforward neural network. We must also specify the
column that we are going to predict. In this case, we are predicting the
SSN value. Because the SSN value is numeric, this is a regression problem.
Performing a classification problem is simply a matter of choosing to predict
a non-numeric column, as we did in the last section.
We also choose to send all output to the console. Now that everything is
set we can normalize. The normalization process will load the CSV file into
memory and normalize the data as it is loaded.
The following Java code accomplishes this.
// Map the prediction column to the output of the model, and all
// other columns to the input.
data.defineSingleOutputOthersInput(columnSSN);

EncogModel model = new EncogModel(data);
model.selectMethod(data, MLMethodFactory.TYPE_FEEDFORWARD);

// Send any output to the console.
model.setReport(new ConsoleStatusReportable());

// Now normalize the data. Encog will automatically
// determine the correct normalization type based
// on the model you chose in the last step.
data.normalize();
2.3.3 Fitting the Model
Before we fit the model, we hold back part of the data for a validation set; we choose to hold back 30%. Unlike the previous examples, we do not shuffle the data into a random order: a time series must remain in sequence. We also establish the lead and lag window sizes. Finally, we fit the model with a k-fold cross-validation of size 5.
The following Java code accomplishes this.
// Set time series.
data.setLeadWindowSize(1);
data.setLagWindowSize(WINDOW_SIZE);

// Hold back some data for a final validation.
// Do not shuffle the data into a random ordering.
// (never shuffle time series)
// Use a seed of 1001 so that we always use the same
// holdback and will get more consistent results.
model.holdBackValidation(0.3, false, 1001);

// Choose whatever is the default training type for this model.
model.selectTrainingType(data);

// Use a 5-fold cross-validated train. Return the
// best method found. (never shuffle time series)
MLRegression bestMethod = (MLRegression) model.crossvalidate(5, false);
2.3.4 Displaying the Results
We can now display several of the errors. We can check the training error and
validation errors. We can also display the stats gathered on the data.
The following Java code accomplishes this.
// Display the training and validation errors.
System.out.println("Training error: "
    + model.calculateError(bestMethod, model.getTrainingDataset()));
System.out.println("Validation error: "
    + model.calculateError(bestMethod, model.getValidationDataset()));

// Display our normalization parameters.
NormalizationHelper helper = data.getNormHelper();
System.out.println(helper.toString());

// Display the final model.
System.out.println("Final model: " + bestMethod);
The equivalent C# code:

// Display the training and validation errors.
Console.WriteLine(@"Training error: "
    + model.CalculateError(bestMethod, model.TrainingDataset));
Console.WriteLine(@"Validation error: "
    + model.CalculateError(bestMethod, model.ValidationDataset));

// Display our normalization parameters.
NormalizationHelper helper = data.NormHelper;
Console.WriteLine(helper.ToString());

// Display the final model.
Console.WriteLine("Final model: " + bestMethod);
2.3.5 Using the Model
Once you've trained a model, you will likely want to use it. The best model can be saved using normal serialization. However, you will need a way to normalize data going into the model, and denormalize data coming out of the model. The normalization helper object, obtained in the previous section, can do this for you. You can also serialize the normalization helper.

The following Java code opens the CSV file and predicts each month's sunspot number (SSN) using the best model and normalization helper.
ReadCSV csv = new ReadCSV(filename, true, format);
String[] line = new String[2];

// Create a vector to hold each time-slice, as we build them.
// These will be grouped together into windows.
double[] slice = new double[2];
VectorWindow window = new VectorWindow(WINDOW_SIZE + 1);

MLData input = helper.allocateInputVector(WINDOW_SIZE + 1);

// Only display the first 100
int stopAfter = 100;

while (csv.next() && stopAfter > 0) {
    StringBuilder result = new StringBuilder();

    line[0] = csv.get(2); // ssn
    line[1] = csv.get(3); // dev
    helper.normalizeInputVector(line, slice, false);

    // enough data to build a full window?
    if (window.isReady()) {
        window.copyWindow(input.getData(), 0);
        String correct = csv.get(2);
        MLData output = bestMethod.compute(input);
        String predicted = helper.denormalizeOutputVectorToString(output)[0];

        result.append(Arrays.toString(line));
        result.append(" -> predicted: ");
        result.append(predicted);
        result.append("(correct: ");
        result.append(correct);
        result.append(")");
        System.out.println(result.toString());
    }

    // Add the normalized slice to the window. We do this just after
    // checking to see if the window is ready so that the
    // window is always one behind the current row. This is because
    // we are trying to predict the next row.
    window.add(slice);

    stopAfter--;
}
The output from this program will look similar to the following. First the
program downloads the data set and begins training. Training occurs over 5
folds. Each fold uses a separate portion of the training data as validation.
The remaining portion of the training data is used to train the model for that
fold. Each fold gives us a different model; we choose the model with the best
validation score. We train until the validation score ceases to improve. This
helps to prevent over-fitting. The first fold trains for 24 iterations before it
stops:
Downloading sunspot dataset to: /var/folders/m5/gbcvpwzj7gjdb41z1x9rzch0000gn/T/auto-mpg.data
1/5 : Fold #1
1/5 : Fold #1/5: Iteration #1, Training Error: 1.09902944, Validation Error: 1.02673263
1/5 : Fold #1/5: Iteration #2, Training Error: 0.64352979, Validation Error: 1.02673263
1/5 : Fold #1/5: Iteration #3, Training Error: 0.22823721, Validation Error: 1.02673263
1/5 : Fold #1/5: Iteration #4, Training Error: 0.27106762, Validation Error: 1.02673263
...
1/5 : Fold #1/5: Iteration #24, Training Error: 0.08642049, Validation Error: 0.06355912
The first fold gets a validation error of 0.06 and continues into the second fold.
2/5 : Fold #2
2/5 : Fold #2/5: Iteration #1, Training Error: 0.81229781, Validation Error: 0.91492569
2/5 : Fold #2/5: Iteration #2, Training Error: 0.31978710, Validation Error: 0.91492569
...
2/5 : Fold #2/5: Iteration #30, Training Error: 0.11828392, Validation Error: 0.13355361
The second fold gets a validation error of 0.13 and continues on to the third fold. It is important to note that the folds are independent of each other. Each fold starts with a new model.
3/5 : Fold #3
3/5 : Fold #3/5: Iteration #1, Training Error: 1.42311914, Validation Error: 1.36189059
3/5 : Fold #3/5: Iteration #2, Training Error: 0.97598935, Validation Error: 1.36189059
3/5 : Fold #3/5: Iteration #3, Training Error: 0.26472233, Validation Error: 1.36189059
3/5 : Fold #3/5: Iteration #4, Training Error: 0.26861918, Validation Error: 1.36189059
3/5 : Fold #3/5: Iteration #5, Training Error: 0.26472233, Validation Error: 1.36189059
...
3/5 : Fold #3/5: Iteration #126, Training Error: 0.04777174, Validation Error: 0.04556459
The third fold gets a validation error of 0.045 and continues on to the fourth fold.

4/5 : Fold #4
4/5 : Fold #4/5: Iteration #1, Training Error: 0.43642221, Validation Error: 0.41741128
4/5 : Fold #4/5: Iteration #2, Training Error: 0.26367259, Validation Error: 0.41741128
4/5 : Fold #4/5: Iteration #3, Training Error: 0.25940789, Validation Error: 0.41741128
4/5 : Fold #4/5: Iteration #4, Training Error: 0.20787347, Validation Error: 0.41741128
4/5 : Fold #4/5: Iteration #5, Training Error: 0.18484274, Validation Error: 0.41741128
...
The fourth fold gets a validation error of 0.03 and continues on to the fifth
fold.
5/5 : Fold #5
5/5 : Fold #5/5: Iteration #1, Training Error: 1.03537886, Validation Error: 1.13457447
5/5 : Fold #5/5: Iteration #2, Training Error: 0.61248351, Validation Error: 1.13457447
5/5 : Fold #5/5: Iteration #3, Training Error: 0.35799763, Validation Error: 1.13457447
5/5 : Fold #5/5: Iteration #4, Training Error: 0.34937204, Validation Error: 1.13457447
5/5 : Fold #5/5: Iteration #5, Training Error: 0.32800730, Validation Error: 1.13457447
...
5/5 : Fold #5/5: Iteration #30, Training Error: 0.06560991, Validation Error: 0.07119405
5/5 : Cross-validated score: 0.0696582424947976
Training error: 0.1342019169847873
Validation error: 0.15649156756982546
[..., 29.4] -> predicted: 58.52699993534398(correct: 85.0)
[..., 29.2] -> predicted: 64.45005584765465(correct: 83.5)
[..., 31.1] -> predicted: 73.24597015866078(correct: 94.8)
[..., 25.9] -> predicted: 55.5113451251101(correct: 66.3)