Chaotic aspects of the dripping faucet
Citation for published version (APA):
Bonekamp, J. G., Friso, K., Pruis, G. W., & Mirle, A. (1993). Chaotic aspects of the dripping faucet. (Opleiding
wiskunde voor de industrie Eindhoven : student report; Vol. 9309). Eindhoven University of Technology.
Document status and date:
Published: 01/01/1993
Document Version:
Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers)
Opleiding Wiskunde voor de Industrie Eindhoven

STUDENT REPORT 93-09

CHAOTIC ASPECTS OF THE DRIPPING FAUCET

H. Bonekamp
K. Friso
G. Pruis
A. Mirle

April - July, 1993
Chaotic aspects of the dripping faucet

H. Bonekamp
K. Friso
G. Pruis
A. Mirle

supervisor: J. Molenaar

April - July, 1993
Contents

1 Introduction                                                 2
2 Definitions                                                  3
3 The dripping faucet                                          5
4 Calculation of the largest Lyapunov exponent                 7
  4.1 The tent map                                             7
  4.2 The Wolf package                                         8
  4.3 Direct calculation of λ1                                12
      4.3.1 Calculation with sorting of the points            12
      4.3.2 Properties of the attractor                       16
      4.3.3 Analytical methods to determine λ1                18
  4.4 Calculation of λ1 using the Wales method                27
      4.4.1 Application to the tent map                       28
      4.4.2 Application to the dripping faucet                29
5 Stochastic Prediction                                       32
  5.1 The Box-Jenkins method                                  32
      5.1.1 Stage 1: Model Identification                     36
      5.1.2 Stage 2: Model Estimation and Diagnostic Checking 36
      5.1.3 Stage 3: Model Forecasting                        37
  5.2 The Linpred predictor                                   37
  5.3 Results of Autobox and Linpred                          38
6 Towards chaotic behaviour of the dripping faucet            43
7 Conclusions and recommendations                             45
1 Introduction
This is a small study of the chaotic behaviour of a dripping faucet. The idea to
study this system stems from Shaw [1984]. It is easily shown in a measurement
that the system can be in both periodic and chaotic modes. A model for the
dripping faucet is given by Molenaar [1992]. A first analysis of this model is
reported by Noack [1992]. The model is based on a simple mass-spring system.
The governing equations are linear apart from the action of dripping, which is
formulated as an instantaneous, non-linear event.
In this project we analyze the chaotic behaviour of time series of the dripping
faucet by using different methods. The key issue is to find a reliable value
of the largest Lyapunov exponent, which is a measure of chaotic behaviour.
Some of the methods used are based on the idea of reconstruction. In section
2 we shortly explain some important terms from chaos theory. The model
of the dripping faucet is presented in section 3. For this model we calculated
the largest Lyapunov exponent by three different methods, i.e. the "Wolf
method", as described in Wolf [1985], a direct method, and a method based
on Wales [1991]. These methods can be found in section 4.
In all three methods it was explicitly used that the system is considered to
be chaotic. However, it could be that the system is not (or only weakly) chaotic.
Therefore we also considered a stochastic prediction method, which is described
in section 5.
Another aspect of the project is to investigate the behaviour of the model as a
function of the flow velocity. It is known that for small flow velocity the system
is not chaotic whereas for a higher velocity it is. In section 6 we consider what
happens in between.
Finally in section 7 we present our conclusions and give some recommendations
for future research.
2 Definitions
In this section we explain some terms which are frequently referred to
in this report, such as chaotic system, Lyapunov exponent, (chaotic) attractor
and reconstruction. The explanation will be kept short. For more details we
refer to Molenaar [1992].

The Lyapunov exponents (LE) λi, i = 1, ..., n, of an n-dimensional system give
an indication of the strength of the divergence (if any) on an object X in
Rⁿ. The sum of the i largest λi is a measure for the magnification of an
arbitrarily chosen i-dimensional volume while it evolves in time. For example,
if λ1 (the largest Lyapunov exponent) is positive, the length of intervals will on average
be stretched. The dynamics on X is then called chaotic and X is called a
strange attractor.
Consider a one-dimensional system. If ε0 is the length of an interval, the interval
length at time t has on average changed into

    ε(t) = ε0 e^(λ1 t).

From this it follows that the Lyapunov exponent is equal to

    λ1 = lim (t → ∞) (1/t) ln(ε(t)/ε0).
In many practical situations no appropriate model for the system under consideration
is known. One often does not know the phase space variables
needed to fully describe the dynamics, and even the number of relevant
degrees of freedom is often uncertain. Only measurements are available. One
would like to have at one's disposal measurements of the complete state vector
(x1, x2, ...). In practice only one or a few components
are measured. The measured quantity is referred to as the readout function.
We shall denote it by x(t). In general x is a function of all state variables,
so x(t) = x(x1(t), x2(t), ...). How could one determine properties of the
complete system, and in particular of the (possible) attractor, from these
incomplete data? Fortunately, there is a partial answer to this question. The
standard references are Packard et al. [1980] and Takens [1980]. We call this
approach reconstruction.
The key idea is to embed the scalar series x(ti) into a higher dimensional space
with dimension de, say. First we have to choose an embedding dimension de ∈ N
and a delay k ∈ N. Then we construct a series of de-dimensional vectors by
the procedure:

    y1 = (x1, x1+k, x1+2k, ..., x1+(de-1)k)
    y2 = (x2, x2+k, x2+2k, ..., x2+(de-1)k)        (1)
    ...

The points yi, i = 1, 2, ... lie on an artificial attractor Y in the artificial
de-dimensional phase space, and perform evolutions there. A very important
observation (see the standard references mentioned before) is that the dynamics
on Y has the same characteristics as the dynamics on X. For example,
X and Y have identical dimensions and Lyapunov exponents.
In practice de is not known in advance. To estimate a reliable value for de
one applies the embedding procedure for increasing de values: de = 1, 2, ...
The theoretical lower bound on de is given by de = 2d + 1, where d is the
(capacity or correlation) dimension of Y, and thus of X. For an explanation of the
term dimension we refer to Molenaar [1992]. However, a short impression can
be given as follows: if the attractor is, for instance, a point then the dimension
is 0; if the attractor is a line then the dimension is 1. For chaotic attractors
also a fractal (non-integer) dimension is possible.

The choice of k is not very critical. The value of the time interval kΔt, where
Δt is a small period of time, should not be too small, because then the components
of the yi are nearly identical. If kΔt is too large, i.e., much larger than
the information decay time determined by the largest Lyapunov exponent,
then there is no dynamical correlation between the points.
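The reconstruction procedure (1) is easy to automate. The sketch below is our own illustration, not part of the software used in the report; the function name and the test series are arbitrary choices.

```python
import numpy as np

def reconstruct(x, d_e, k):
    """Delay-embed a scalar series x into d_e-dimensional vectors
    y_i = (x_i, x_{i+k}, ..., x_{i+(d_e-1)k}), as in equation (1)."""
    n = len(x) - (d_e - 1) * k          # number of reconstructed points
    return np.array([x[i:i + (d_e - 1) * k + 1:k] for i in range(n)])

# Example: embed a sine readout in 3 dimensions with delay k = 2.
x = np.sin(0.4 * np.arange(100))
Y = reconstruct(x, d_e=3, k=2)          # Y has shape (96, 3)
```

Repeating this for increasing d_e is exactly the procedure described above for estimating a reliable embedding dimension.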
3 The dripping faucet
Shaw [1984] modelled the dripping faucet in terms of a vibrating mass point.
The mass point is attached to the faucet by means of a spring. Its mass m(t)
increases at a constant rate, so one of the model equations reads:

    ∂m/∂t = β.        (2)
Due to gravity the mass point will move downwards. When it reaches a certain
level, part of its mass is instantly cut off at a rate proportional to its velocity.
Due to the spring force the mass point will then jump upwards. After that the
process repeats itself more or less. The system is one-dimensional, because
the motion is only in the vertical direction. The position of the mass is y(t),
and its velocity v(t) = dy/dt. Newton's second law reads in this case:

    ∂(mv)/∂t = -mg - ky - γv.        (3)
We rewrite this as a first order ODE system:

    ∂y/∂t = v
    ∂v/∂t = -g - (k/m) y - ((β + γ)/m) v        (4)
    ∂m/∂t = β.
When y(t) passes y0 from above, the mass is instantly reduced:

    m(t) → m(t) (1 - e^(-α/|v(t)|)).        (5)

The parameters of the system are:

• k: spring constant;
• α: parameter determining the cutting rate;
• β: rate of mass increase, representing the faucet flow velocity;
• γ: friction coefficient.

In the following sections the parameters will be set to (k, α, β, γ) = (10, 0.1, 0.6, 1).
Only in section 6 we will change the value of β.
The ODE system (4), together with the jump condition (5), can be integrated,
yielding m, v and y as functions of time.
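As an illustration only, the integration of (4) with jump condition (5) can be sketched with a simple Euler scheme. The values of g, the drip level y0, the time step and the initial state below are our own assumptions, not taken from the report.

```python
import numpy as np

k, alpha, beta, gamma = 10.0, 0.1, 0.6, 1.0   # parameters as in the report
g, y0 = 1.0, -1.0                             # assumed values for illustration

def simulate(n_steps, dt=0.005, y=0.0, v=0.0, m=1.0):
    """Euler integration of system (4); when y passes y0 from above,
    the mass is reduced according to the jump condition (5)."""
    out = []
    for _ in range(n_steps):
        y_old = y
        y += v * dt
        v += (-g - (k / m) * y_old - ((beta + gamma) / m) * v) * dt
        m += beta * dt
        if y_old > y0 >= y:                   # drip: instantaneous mass loss
            m *= 1.0 - np.exp(-alpha / max(abs(v), 1e-12))  # guard v = 0
        out.append((y, v, m))
    return np.array(out)

traj = simulate(4000)                         # columns: y(t), v(t), m(t)
```

Because the mass-loss factor 1 - e^(-α/|v|) lies in (0, 1), the mass stays positive; a drip at very low speed removes almost no mass, while a fast passage removes a large fraction.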
4 Calculation of the largest Lyapunov exponent
In this section we describe three methods to estimate the largest Lyapunov
exponent. Some of these methods have been tested by applying them to
systems with known Lyapunov exponents. One of these systems is the tent
map, the use of which is presented in some detail.
4.1 The tent map
The tent map is a well-known discrete-time system which features chaotic
behaviour. Time series produced by the tent map are often used to demonstrate
chaotic behaviour. We also apply it as a test for the calculation of the first
Lyapunov exponent using a prediction method; see Wales [1991] and section
4.4.

    x_{n+1} = f(x_n) = z (1 - 2 |1/2 - x_n|)        (6)

The system is chaotic for z > 1/2 and it has a Lyapunov exponent of λ1 = ln 2z.
The name tent map becomes clear from Figure 1.
From the positive Lyapunov exponent of the tent map we know that we have
to face the loss of significant digits. Direct implementation of the described
model yielded only zeros after approximately 100 iterations due to the finite
computer precision. A number is written by a computer as a finite string of
bits. If this number is multiplied by 2 in binary code, the bits are shifted one
place to the left, and the computer writes a 0 behind the string. If the bit
string is n bits long, after n multiplications the whole string consists of bits
equal to 0, and every further multiplication returns zero. To overcome this
problem we use the following trick. We replace equation (6) by

    x_{n+1} = f(x_n) + A_n        (7)

where A_n > 0 has a small random value in every iteration. In fact this trick
Figure 1: Tent map for z = 1.0
means that the starting value x0 is written as a bit string with infinitely many,
arbitrarily chosen digits.
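The collapse and its repair can be illustrated as follows. This is our own sketch; the perturbation size 1e-12 and the clamp to [0, 1] are arbitrary implementation choices.

```python
import math, random

def tent(x):
    """Tent map (6) with z = 1."""
    return 2.0 * x if x <= 0.5 else 2.0 - 2.0 * x

# Direct iteration: every step shifts the bits of x one place to the
# left, so a double-precision start value collapses to exactly 0.
x = 0.3
for _ in range(200):
    x = tent(x)
# x is now exactly 0.0

# The trick of equation (7): add a small random A_n > 0 every step,
# so the orbit behaves as if x0 had infinitely many random digits.
random.seed(1)
x = 0.3
for _ in range(10000):
    x = min(1.0, tent(x) + 1e-12 * random.random())  # clamp to [0, 1]
# x is now a "generic" orbit point; the orbit no longer dies out
```

For z = 1 the known Lyapunov exponent is λ1 = ln 2, since |f'(x)| = 2 almost everywhere.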
4.2 The Wolf package
One way to determine the largest Lyapunov exponent is by following two
neighbouring trajectories for a certain number of time steps. This approach
is worked out in a software package by Wolf [1985]. The approach used in this
package is the following. In a reconstructed state space (see section 2) we choose
a point, called the reference point, and a point close to it. The orbits through
these points are followed, and after some time steps the distance between the
orbits is calculated. If the distance has become too large (much divergence),
then a new trajectory is sought in the neighbourhood of the reference trajectory
and the procedure is repeated. The average divergence yields a measure for the
Lyapunov exponent.
Several parameters have to be chosen to run the program. First the embedding
dimension and the time delay have to be specified. Furthermore, an evolution
time has to be set. This value says for how many time steps the two points will
be followed. If the distance between these points has become larger than the
parameter maximum separation, a new neighbour of the reference trajectory is
chosen. Otherwise the process continues with the two trajectories.
During a run the trajectories of the points are plotted on the screen for
evolution time time steps. From these pictures one can see whether the points
keep close to each other or not. This can be a bit confusing, because one sees a
two-dimensional plot of the trajectories while working in a three- or
four-dimensional state space. So, two points which seem to be close to each
other can be far apart in reality. But if orbital divergence is seen many times,
it is better to change the chosen parameters, for example by reducing the
evolution time or the maximum separation. While the program runs, the
current estimate of the Lyapunov exponent is printed at the top of the screen.
In this way it is easily checked whether this value converges or not.
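The renormalisation idea behind the package can be sketched in a few lines. The following is a simplified caricature of the Wolf scheme written by us, not the actual package; all parameter names are our own.

```python
import numpy as np

def wolf_like_estimate(Y, dt, evolve=3, min_sep=1e-8, window=10):
    """Crude sketch of the Wolf idea on an embedded orbit Y (one
    reconstructed state vector per row): follow a reference point and
    its nearest neighbour for `evolve` steps, log the growth of their
    distance, then pick a fresh neighbour and repeat."""
    n = len(Y)
    total, steps = 0.0, 0
    i = 0
    while i + evolve < n:
        d = np.linalg.norm(Y - Y[i], axis=1)
        d[max(0, i - window):i + window + 1] = np.inf  # skip temporal neighbours
        j = int(np.argmin(d))
        if j + evolve < n and d[j] > min_sep:
            d1 = np.linalg.norm(Y[i + evolve] - Y[j + evolve])
            if d1 > 0.0:
                total += np.log(d1 / d[j])
                steps += evolve
        i += evolve                                    # renormalisation step
    return total / (steps * dt) if steps else 0.0

# Sanity check on a non-chaotic signal: the estimate should be near zero.
x = np.sin(0.37 * np.arange(2000))
Y = np.column_stack([x[:-6], x[3:-3], x[6:]])          # d_e = 3, k = 3
lam = wolf_like_estimate(Y, dt=1.0)
```

On a periodic signal neighbouring orbits neither converge nor diverge on average, so the accumulated logarithms nearly cancel, which mirrors the test with the sine series described below.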
First we checked whether the Wolf package really does what we want. To this
end we made a time series for the following function:

    x(t) = 2 sin(t) + 5 sin(3t).

Of course there is no chaos present in this series, and the estimated Lyapunov
exponent was indeed almost zero. Another test was the tent map described
in section 4.1, for which it is known that the Lyapunov exponent is equal to
ln 2. The estimate from the Wolf program was 0.923 · ln 2. This was enough
evidence that the Wolf package can make reliable estimates.
From the time series of the dripping faucet we obtain four different series:

- position data y(t);
- velocity data v(t);
- mass data m(t);
- time between two drippings ΔT(t).
On average the time between two drippings is about 2 seconds. Because the
time step in the data series is 0.4 seconds, 5 or 6 points will be measured
between two drippings. For reconstruction it is best to use points which
lie between two drippings. Therefore the time delay should not be too
large. In the runs we used 1 and 3 for the time delay. If the time delay were
set larger, there would be no dynamical correlation between the points any more.
The embedding dimension was set to 3 or 4 for the several runs.

In theory every function should be applicable for estimating the Lyapunov
exponent. The function chosen is called the readout function. We used all four
Position data y(t)

  # data points   Lyapunov exponent (1/s)
  5000            0.20
  5000            0.23
  5000            0.21
  5000            0.21
  5000            0.29
  5000            0.27
  5000            0.20
  5000            0.21
  10000           0.34
  10000           0.33
  32000           0.21

Table 1: Estimates for the Lyapunov exponent with position as readout function
(runs with time delay 1 or 3 time steps, embedding dimension 3 or 4, and
evolution times of 1 to 6 time steps)
Velocity data v(t)

  # data points   time delay     embedding dim.   evolution time   Lyapunov exponent
                  (time steps)                    (time steps)     (1/s)
  5000            3              4                4                0.25
  5000            3              3                5                0.24
  5000            1              4                1                0.25
  5000            1              4                1                0.24
  10000           3              5                3                0.26
  32000           3              4                2                0.25

Table 2: Estimates for the Lyapunov exponent with velocity as readout function
Mass data m(t)

  # data points   time delay     embedding dim.   evolution time   Lyapunov exponent
                  (time steps)                    (time steps)     (1/s)
  5000            3              4                5                0.40
  5000            2              4                5                0.42
  32000           3              4                2                1.21

Table 3: Estimates for the Lyapunov exponent with mass as readout function
Time between drippings ΔT

  # data points   Lyapunov exponent (1/s)
  2450            0.52
  2450            0.52
  2450            0.50
  2450            0.48

Table 4: Estimates for the Lyapunov exponent with time between drippings
as readout function (time delay, embedding dimension and evolution time
varied between 1 and 4)
series mentioned above as a readout function. The results are shown in Tables
1 to 4, respectively.
We see that the estimates of the Lyapunov exponent using position data or
velocity data coincide with each other. But if the mass or the times between
drips are considered, the estimate for the Lyapunov exponent becomes much
bigger. An explanation could be the following argument. For two close points
in the reconstructed state space it is calculated how the distance between their
trajectories increases or decreases in a time interval. The smaller the starting
distance and the time interval, the better the estimate for the Lyapunov
exponent. The mass is discontinuous as a function of time, so at jumps the
divergence can be very big. Also, for the series of the times between jumps,
two close points can be far apart one iteration later. The position and velocity
are continuous in time and are therefore the best to use.
From each table it can be seen that the estimate is quite insensitive to the
parameter values. Also the length of the time series seems not to be of great
importance. Of course there is a transient period, but for this model 5000 data
points seem to be enough. This is remarkable compared with the results of the
other methods described further on in this section; in those methods longer
time series give more accurate estimates.

From Table 1 it follows that an embedding dimension equal to 3 is too small,
because the largest Lyapunov exponent for those runs shows a significant
difference with runs with a larger embedding dimension. This feature becomes
clearer the longer the time series are.
4.3 Direct calculation of λ1

4.3.1 Calculation with sorting of the points
The behaviour of the dripping faucet is described by a system of linear
differential equations and a sudden mass decrease in the plane y = y0. If we plot
the cross sections with the plane y = y0 just before and after the jump, the
points of the attractor seem to form two lines. These lines are shown in Figure 2
and Figure 3 for a test series with 10000 data points (2453 jumps).

The state vector of the faucet follows a trajectory described by the differential
equations until it reaches the plane y = y0. Then it jumps from one line to
the other and follows its trajectory again. The attractor is a kind of cut
Möbius band with a gap where the dripping takes place. Figures 2 and 3
Figure 2: Cross section of the strange attractor with the plane y = y0 (mass
versus velocity); section before the jump.
Figure 3: Same cross section as in Figure 2, but after the jump.
suggest that the band is completely flat, i.e., two-dimensional. Later on
this suggestion will be checked. In the first instance the assumption is made
that the attractor is a two-dimensional band, and the attention is directed to
the calculation of the average stretching during the jump.
Sorting of the points
If we want to determine the stretching factor in a point, we need another point
in its neighbourhood. Then we can determine the ratio of the distances between
these points before and after the jump. In general, calculations for finding a
neighbouring point are time consuming. Under the assumption that the cross
sections in Figures 2 and 3 are really lines, we can order the points on the line
before the jump and then consider two neighbouring points in this order.
Since the line is not the graph of a function, it is not possible to order all points
at the same time. But by splitting the graph, partial functions can be formed.
These can be ordered, after which they are put together again.
Calculation of the stretching factor during the jump
The program CHAOSA orders the points in Figures 2 and 3 and then computes
the stretching factor from the sorted list. As described in section 2,
the Lyapunov exponent can be numerically determined as the logarithm of
the stretching factor of two points starting very close together. Therefore the
program takes two neighbouring points from the list, computes the ratio of the
distances between these two points before and after the jump, and takes the
logarithm of this value. To average these values, it adds the logarithms for all
points in the list and divides by the number of points, giving a weighted average
for the Lyapunov exponent. The results are to be seen in Table 5.
The largest Lyapunov exponent decreases with increasing number of points.
But if the difference between neighbouring points is very small, the
results are not very exact, since the accuracy of the data is only 5 decimals.
So we can only conclude that the largest Lyapunov exponent is a very small
positive value or zero.
14
II # points I # points of jump I stretching factor II
1,000
5,000
10,000
244
979
2,453
0.07363
0.04698
0.03032
Table 5: Results of program CHAOSA
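The core of this computation fits in a few lines. The sketch below is our own reconstruction of the idea, not the CHAOSA source; it assumes matched arrays of cross-section points and orders a single branch simply by velocity.

```python
import numpy as np

def average_log_stretching(before, after):
    """Average logarithmic stretching over the jump.

    `before` and `after` are (n, 2) arrays of (velocity, mass) points
    in the cross section y = y0 just before and just after each drip,
    matched row by row.  Points are ordered along the line by velocity
    (one branch at a time), and each pair of neighbours contributes the
    log of the ratio of its distances after and before the jump."""
    order = np.argsort(before[:, 0])
    b, a = before[order], after[order]
    d_before = np.linalg.norm(np.diff(b, axis=0), axis=1)
    d_after = np.linalg.norm(np.diff(a, axis=0), axis=1)
    ok = d_before > 0                    # skip coinciding neighbours
    return float(np.mean(np.log(d_after[ok] / d_before[ok])))

# Sanity check: a map that doubles all distances gives exactly ln 2.
pts = np.column_stack([np.linspace(-1.4, -0.2, 50),
                       np.linspace(1.3, 1.8, 50)])
lam = average_log_stretching(pts, 2.0 * pts)
```

In the real computation the two branches of the folded line would be sorted separately, as described above.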
The stretching factor of the regular motion
The dynamics on the band is described by linear differential equations. This
does not imply that no stretching takes place. Since the stretching factor of
the jump is very small, we also compute the stretching during the regular motion.
We look at two neighbouring points just after one jump and at the same points
just before the next jump. The Pascal program CHAOS_5 computes, as in
the previous part, the logarithm of the ratio of the distances and averages
this value. The results of CHAOS_5 are summarized in Table 6.
  # points   # jumps   stretching factor
  1,000      244       0.33155
  5,000      979       0.33350
  10,000     2,453     0.35732

Table 6: Results of program CHAOS_5
These values are nearly the same and clearly greater than zero. The interesting
aspect of this result is that stretching during the regular motion is clearly
present. However, since we consider neither the influence of the jump nor the
dependence on time, the values are not an estimate for the Lyapunov exponent.
They only show that there is stretching.
The stretching factor of the jump and the regular motion
The Pascal program CHAOS_6 tries to determine an estimate for λ1 taking into
account the dynamics of both the band and the jump. Under the assumption of
a two-dimensional band, we observe two neighbouring points before one jump
and the same points before the next jump. So the trajectories are followed
just long enough that exactly one jump is taken into account. The results of
CHAOS_6 are shown in Table 7.
  # points   # jumps   stretching factor
  1,000      244       0.10771
  5,000      979       0.11010
  10,000     2,453     0.12652

Table 7: Results of program CHAOS_6
This value is much smaller than the results of the Wolf package (see
section 4.2). There are two possible reasons:

• Since the time between two considered successive points is nearly 2 seconds,
  they drift apart faster than in the Wolf package. Therefore the result is
  not very exact.

• The curves may look like lines, but they are not lines.

To find out whether the second reason is true, we analyse the structure of the
attractor more exactly in the next part.
4.3.2 Properties of the attractor
To check whether the cross sections shown in Figures 2 and 3 are really
one-dimensional lines, we can look at points after one jump and at the same points
before the next jump. Since the system of differential equations describing the
faucet is autonomous, the trajectories cannot cross. But if we look at the first
ten points of a series, we find that these points change their order. They not
only reverse their order, as on a Möbius band; it is also possible that at the
beginning of the regular motion a point lies between two other points but
afterwards does not. This is shown in Figure 4 and Figure 5. As for the points
with numbers 2, 6 and 9: point 6 starts between point 2 and point 9, but after
the regular motion point 9 lies in the middle.
Therefore we analysed the attractor more exactly. We split an ordered
output of DRIPS into very small parts of only ten points and looked at these
Figure 4: 10 points after the jump
Figure 5: 10 points before the next jump
points after the jump and at their position before the next jump, to find the
exact irregular behaviour. But these small parts showed regular behaviour:
before the next jump the points had the same order as after the previous
jump.
Then we considered 24 points nearly uniformly distributed on the line just
after one jump and determined their position just before the next jump. From
Figures 6 and 7 one can see that the two parts of the folded line after one
jump come together. This means that the trajectories do not cross, but the
pictures are not really lines: there are a lot of lines very close together.
If one looks at a very small part of the line before the jump, one can see that
there are two different lines very close together (see Figure 8). The jump brings
these lines so close together that after the jump only one line is observable
(see Figure 9). There are two lines, but they can only be seen in even
smaller parts of the whole line (see Figure 10). Before the next jump these
two lines are present too, and each is split into two lines as a result of the
folding during the jump. So we can say that there are a lot of lines very close
together, but this can only be seen by zooming in.
Finally we can conclude that the attractor has the dough-like structure which
is characteristic for chaotic attractors. It consists of infinitely many layers on
top of each other. The stretching takes place in these layers: the orbits in
each layer diverge from each other. The orbits mainly tend to visit the upper
part of the line just before the dripping. The whole structure can be thought
of as being constructed by stretching a sheet during the regular motion and
folding it during the dripping.
4.3.3 Analytical methods to determine λ1
From the above we know that the attractor band is three-dimensional, although
it is very thin. The dynamics of the jump is a discrete mapping from R² into R².
The jump occurs only once between two parts of regular motion, so we can
consider the Jacobian of this mapping in the points just before the jump only;
we do not have to follow the trajectories.

The regular motion is described by a system of differential equations, so we
can follow the motion along the trajectory and determine the greatest possible
stretching with the method described in section 4.1 of Molenaar [1992].
Figure 6: 24 points after one jump
Figure 7: 24 points before the next jump
Figure 8: Two lines before the jump
Figure 9: Only one line after the jump?
20
1.34265 ,------,1-----,-1---.-1---,.-1-:---1,------,1,....----,
-
1.3426
1.34255
1.3425
-
Maas34245
1.3424 -
-
1.34235
1.3423
1.34225 L -_ _L -_ _L--I_ _' _L-I_ _L--_ _L-.I_ _-.lL-._-----'
-0.58
-0.594 -0.592
-0.59
-0.588 -0.586 -0.584 -0.582
Velocity
Figure 10: Also two lines after the jump
Eigenvalues of the Jacobi matrix of the jump
The dripping is modelled by the mapping

    m_after = (1 - e^(-α/|v_before|)) m_before
    v_after = v_before.

This can be written as a function from R² into R². The Jacobi matrix of this
mapping, with coordinates ordered as (v, m), is

    J = ( 1                        0               )
        ( (αm/v²) e^(-α/|v|)       1 - e^(-α/|v|)  ).

This is a matrix with all zeros above the main diagonal, and therefore it is
easy to determine its eigenvalues: these are the elements on the main diagonal.
One of the eigenvalues is one and the other is smaller than one. The logarithms
of the eigenvalues are

    λ1 = 0,
    λ2 = ln(1 - e^(-α/|v|)).

Since the argument of the logarithm in λ2 is smaller than one, λ2 is a negative
value. The program CHAOS_2 computes the weighted average of λ2 for a test
series made by DRIPS. The result for a test series with 1000 measured points
(244 jumps) was

    λ2 = -1.83313.
Because the greatest eigenvalue of the Jacobian is one, there would be no
stretching during the jump at any point of the attractor just before the jump.
But one can easily see that this cannot be correct: there are points on the
attractor with stretching during the jump. For instance, two points with the
same mass but different velocities just before the jump do not change their
velocities, but have different masses after the jump. So there is stretching in
some points during the dripping. Therefore the greatest eigenvalue cannot
contain the whole information about the stretching factor. The reason for this
is that the eigenvalues only give the stretching in the direction of the
eigenvectors, whereas the greatest stretching can occur in any direction.
Operator norm of the Jacobi matrix of the jump
The whole information about the largest stretching of a mapping is contained
in the operator norm of the Jacobi matrix J, which is determined by the
greatest eigenvalue of JᵀJ. If we write the Jacobi matrix in the short notation

    J = ( 1  0 )
        ( a  b ),    a = (αm/v²) e^(-α/|v|),    b = 1 - e^(-α/|v|),

the structure of JᵀJ becomes clear:

    JᵀJ = ( 1 + a²   ab )
          ( ab       b² ).

The eigenvalues of this matrix are

    μ1,2 = (a² + b² + 1)/2 ± √( (a² + b² + 1)²/4 - b² ).
The greatest of these two values is

    μ1 = (a² + b² + 1)/2 + √( (a² + b² + 1)²/4 - b² ).

This value gives the greatest stretching factor of the considered mapping, and
λ1 = ln μ1 is the greatest Lyapunov exponent of the mapping. The program
CHAOS_3 computes the weighted average of λ1 for a test series made by DRIPS.
The results of this program are given in Table 8.
  # points   # jumps   stretching factor
  1,000      244       0.15734
  5,000      979       0.17524
  10,000     2,453     0.17582

Table 8: Results of program CHAOS_3
The results for 5000 and 10000 points are nearly the same, so we can say that
the real value is between 0.175 and 0.176.
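The pointwise quantity behind this computation can be sketched as follows. This is our own illustration, not the CHAOS_3 source; the entries a and b follow the short notation above, and the weighting over the measured series is omitted.

```python
import math

alpha = 0.1   # cutting-rate parameter as in section 3

def jump_log_stretch(m, v):
    """log mu_1 at a point (m, v) just before the jump, with mu_1 the
    greatest eigenvalue of J^T J for J = [[1, 0], [a, b]]."""
    e = math.exp(-alpha / abs(v))
    a = alpha * m / v**2 * e        # d m_after / d v_before
    b = 1.0 - e                     # d m_after / d m_before
    s = (a * a + b * b + 1.0) / 2.0
    mu1 = s + math.sqrt(s * s - b * b)
    return math.log(mu1)

# One sample point (mass, velocity) from the cross section:
contribution = jump_log_stretch(1.5, -0.6)
```

Since s² - b² ≥ ((1 - b²)/2)², one always has μ1 ≥ 1, so every point contributes a non-negative amount: the operator norm, unlike the largest eigenvalue of J itself, does detect stretching during the jump.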
Stretching during the regular motion
The differential equations describing the regular motion are

    dy/dt = v
    dv/dt = -g - (k/m) y - ((β + γ)/m) v
    dm/dt = β.

In a shorter way we can write this as

    ẋ = f(x).
In a discrete dynamical system x_{n+1} = f(x_n) we can consider the Jacobian
of the function f to get the stretching factor. This method is described in
Molenaar [1992]. But in the continuous case the function f determines the time
derivative and not the next point in a time series, so we cannot take the
eigenvalues of the Jacobi matrix of f to get the stretching factor. Therefore
we need a description of a mapping which attaches to a given point the next
point in a discrete time series resulting from the differential equations.
A point 5f. = (y, v, m) moves in the time dt to (y
can describe this motion by:
4>(5f.)
~
5f.
+ f (5f.)dt = (
v - (9
+ dy, v + dv, m + dm).
y + vdt
+ ! y + !3;? v) dt
m + (3dt
We
)
The Jacobi matrix of this mapping has the form

    DΦ = ( 1            dt                    0
           −(k/m) dt    1 − ((β+γ)/m) dt      ((k/m²) y + ((β+γ)/m²) v) dt
           0            0                     1 ).
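For concreteness, the right-hand side f, the Euler step Φ and its Jacobi matrix can be written down in a few lines of numpy. This is a sketch only; the parameter values below are illustrative (the report runs DRIPS with β = 0.6, the values of g, k and γ here are assumptions):

```python
import numpy as np

g, k, beta, gamma = 9.81, 1.0, 0.6, 0.1  # illustrative values, gamma assumed

def f(x):
    """Right-hand side of the regular-motion equations for x = (y, v, m)."""
    y, v, m = x
    return np.array([v, -g - (k / m) * y - ((beta + gamma) / m) * v, beta])

def phi(x, dt):
    """One Euler step Phi(x) = x + f(x) dt."""
    return x + f(x) * dt

def jacobian(x, dt):
    """Jacobi matrix D Phi = I + Df dt of the Euler step."""
    y, v, m = x
    Df = np.array([
        [0.0, 1.0, 0.0],
        [-k / m, -(beta + gamma) / m,
         (k / m ** 2) * y + ((beta + gamma) / m ** 2) * v],
        [0.0, 0.0, 0.0],
    ])
    return np.eye(3) + Df * dt
```

A quick finite-difference check of `jacobian` against `phi` is a useful sanity test when transcribing the entries.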
To compute the Lyapunov exponent in x₀ we have to follow the trajectory starting in the point x₀. We can start with an arbitrary distance vector ε₀ between x₀ and a neighbouring point x₀ + ε₀ and observe the stretching of the interval [x₀, x₀ + ε₀]. After one iteration we get

    ε₁ = DΦ(x₀) ε₀.

The approximation with the Jacobian only holds if dt ≈ 0. Repeating the iterations we get

    εₙ = DΦ(x_{n−1}) ⋯ DΦ(x₀) ε₀.

The value ln(‖εₙ‖/‖ε₀‖) gives the stretching in n time steps. To get the average stretching in one time step we have to divide this value by n. So we get a sequence

    λ₁(n) = (1/n) ln(‖εₙ‖/‖ε₀‖).    (8)

The definition of λ₁ is then given by

    λ₁ = lim_{n→∞} (1/n) ln(‖εₙ‖/‖ε₀‖) = lim_{n→∞} λ₁(n).

We have to take two limits: first dt → 0 and then n → ∞.
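The iteration can be sketched as follows; this is a minimal version with the step map and its Jacobian passed in so that any dt can be tried, and with λ₁(n) per time step as in (8):

```python
import numpy as np

def lyapunov_sequence(x0, eps0, dt, n_steps, jacobian, phi):
    """Push a separation vector eps through the Jacobian along the trajectory
    and return the sequence lambda_1(n) = (1/n) ln(|eps_n| / |eps_0|)."""
    x, eps = np.asarray(x0, float), np.asarray(eps0, float)
    norm0 = np.linalg.norm(eps)
    lams = []
    for n in range(1, n_steps + 1):
        eps = jacobian(x, dt) @ eps   # stretch the separation vector
        x = phi(x, dt)                # advance the trajectory
        lams.append(np.log(np.linalg.norm(eps) / norm0) / n)
    return lams
```

Dividing additionally by dt converts the per-step exponent into a rate per unit time.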
To get a trajectory describing the whole attractor, we need an output file of DRIPS containing a lot of jumps. The time step between two output points cannot be so small that we can describe the behaviour for dt → 0 exactly.
We have two possibilities to compute the Lyapunov exponent :
1. We linearize the trajectory between two output points of DRIPS and get an error resulting from the relatively large time between these two points.
2. We linearize the trajectory only with a very small time step dt. But we
have no points on the trajectory between the output points of DRIPS to
compute the Jacobian. So we must take the same Jacobian on the whole
trajectory between two output points. Therefore we get here an error,
too.
Since the jump has nothing to do with the Jacobian of the regular motion, we cannot apply the linearization at a measurement point if it is directly followed by a jump. So we can follow a trajectory at most until the next jump. That is another source of error.
The Pascal programs LYAPUN_1 and LYAPUN_2 compute the Lyapunov exponent with the two respective methods mentioned above. With a short time series of only 1000 points and a time step of 0.1 we tested the influence of the parameters

accuracy : The program computes λ₁(n), n = 1, 2, ... and stops if |λ₁(n) − λ₁(n+1)| < accuracy, since the series λ₁(n) shall converge.

ddt : For the second method, the program splits every input time interval dt into smaller parts of length ddt.

to get the best parameters for the computations with larger files. The test results for the two programs are shown in Tables 9 and 10. The value for convergence steps gives on average how many steps were necessary to reach the given accuracy.
Program LYAPUN_1 shows no convergence. We can only get an accuracy of 10⁻². If we want to have a better accuracy, then we reach the next jump.
    accuracy | λ₁      | convergence steps
    ---------+---------+------------------
     10⁻³    | 0.86891 |          10.35964
     10⁻²    | 1.20908 |           7.66034

Table 9: Test results of program LYAPUN_1
    accuracy | ddt   | λ₁      | convergence steps
    ---------+-------+---------+------------------
     10⁻³    | 0.010 | 1.01691 |
     10⁻⁴    | 0.010 | 1.30177 |
     10⁻⁵    | 0.010 | 0.25933 |           5.64635
     10⁻⁶    | 0.010 | 0.03264 |
     10⁻⁵    | 0.001 | 0.21905 |
     10⁻⁶    | 0.001 | 0.23721 |

Table 10: Test results of program LYAPUN_2
This we can see from the number of convergence steps of more than 10. Since the time between two jumps is nearly 2 seconds, there are 20 points between two drops, so the average number of following points until the next jump is 10. If the number of convergence steps is nearly 10, then all the following points were used for the computations, but no convergence occurred before the next jump. So the program LYAPUN_1 cannot be applied to time series with time step 0.1.
The program LYAPUN_2 with time step 0.01 for computing the Jacobian converges up to an accuracy of 10⁻⁵. If the accuracy is 10⁻⁶, then the number of convergence steps is greater than 100, and for the same reason as above, now with a time step of 0.01, there is no convergence. But since the accuracy of the input data is 10⁻⁵, this accuracy is enough for the Lyapunov exponent. So we can say that this program can be applied to time series with time step 0.1 with an accuracy of 10⁻⁵.
If we take a smaller time step for the Jacobian, then the convergence speed increases, since the Jacobian is nearly the unit matrix for very small time intervals and the considered Jacobian remains constant during many steps. If the Jacobian changes, then the directions of the eigenvectors also change, and the differences between the computed eigenvectors and the real eigenvectors increase. The results are nearly the same for different time steps. We conclude that this program can be used for computing Lyapunov exponents.
We applied the program LYAPUN_2 to ten different time series with time step 0.1 and 10,000 points and got

    ddt = 0.01,  accuracy = 10⁻⁵  →  λ₁ = 0.265 ± 0.01
    ddt = 0.001, accuracy = 10⁻⁵  →  λ₁ = 0.223 ± 0.04

If ddt is smaller, the linearization is more reliable, but then the influence of the errors resulting from the assumption of a constant Jacobian increases. We take the average of both values to get an estimate for the greatest Lyapunov exponent:

    λ₁ ≈ 0.24.
It makes no sense to compute Al with the same program for a time series with
smaller time step, because then the series must be very long to describe the
whole attractor, and the computing time will consequently increase dramatically.
A possibility to improve the programs is to also consider the exact time point of a jump and apply the Jacobian of the jump at this point. Then the trajectory can be followed further, and the convergence would be better.
4.4 Calculation of λ₁ using the Wales method
By calculating the tent map series we saw that chaotic behaviour shows up as a loss of information due to the fact that small errors can blow up. The rate of this loss is a measure of the chaotic behaviour and is related to the Lyapunov exponents. In Wales[1991] a theoretical description is given based on this idea. This article uses the concept of the Kolmogorov entropy K, which is equivalent to the mean loss of information per unit time. The Kolmogorov entropy K is equal to the sum of the positive Lyapunov exponents, and in our dripping faucet model, where we have only one positive Lyapunov exponent, K is equal to λ₁.
The Wales method works as follows. First, the time series is divided in two parts. The first part is used as a database for a prediction method to be described hereafter. The values in the second part are used as starting points for predictions. Second, the correlation r between the predicted values and the real values is calculated as a function of the number of prediction steps. Third, K and thus λ₁ is given by half the initial slope of the plot of ln(1 − r) against the number of prediction steps in time. The initial slope is the slope of the graph at prediction step zero.
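Assuming the correlations r(1), r(2), ... have already been produced by the predictor, the third step reduces to a small slope estimate. A minimal sketch, with a straight-line fit over the first few steps standing in for the spline smoothing used later in the report:

```python
import numpy as np

def entropy_from_correlations(r, n_fit=5):
    """Wales' estimate: K (here equal to lambda_1) is half the initial slope
    of ln(1 - r) against the prediction step."""
    steps = np.arange(1, len(r) + 1)
    y = np.log(1.0 - np.asarray(r, float))
    slope = np.polyfit(steps[:n_fit], y[:n_fit], 1)[0]
    return slope / 2.0
```

The number of steps used for the fit (`n_fit`) is a free choice; only the behaviour near step zero matters.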
In more detail the prediction works as follows (see also Molenaar[1992] for more details): Consider a time series {xᵢ, i = 1, ..., M}. The procedure to get an estimate for x_{M+1} is based on finding local portions of the time series in the past which closely resemble the last part, and basing the predictions on what occurred immediately after these past events. It should be emphasized that no attempt is made to fit a function to the whole time series at once. The approach can quite simply be automated and may give striking results if the attractor dimension is not too high and enough data is available. The procedure can be summarized as follows:
• Apply reconstruction. The procedure dealt with in section 2 will yield the embedding dimension d_e. Meanwhile the scalar series xᵢ, i = 1, ..., M is transformed into a series yᵢ, i = 1, ..., M' of d_e-dimensional vectors with M' = M − (d_e − 1). The problem is now to estimate y_{M'+1}.

• Search neighbours of y_{M'} in the series yᵢ.

• Find the evolution of these neighbours after one iteration.

• Fit a (linear, quadratic, cubic, ...) mapping to the one-step-ahead evolutions of the neighbours.

• Apply this map to y_{M'}. This yields an estimate for y_{M'+1}, and the last element of y_{M'+1} is an estimate for x_{M+1}.

This algorithm is applied several times to obtain estimates for x_{M+1}, x_{M+2}, ....

4.4.1 Application to the tent map
In Wales[1991] the method described above is checked for the tent map (4.1). We repeated these calculations to verify the results using the chaotic predictor PREDCORS, an implementation of the chaotic prediction method described in Molenaar[1992].
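The steps above can be sketched in a few lines of numpy. This is not the PREDCORS source, just a hypothetical one-step predictor with a linear local map and a fixed number of neighbours:

```python
import numpy as np

def predict_next(x, d_e, n_neighbours=10):
    """Estimate x_{M+1}: embed, find neighbours of the last vector among the
    earlier vectors, fit an affine map to their one-step evolutions, apply it."""
    x = np.asarray(x, float)
    M = len(x) - (d_e - 1)
    Y = np.array([x[i:i + d_e] for i in range(M)])  # reconstruction vectors y_i
    last = Y[-1]
    # neighbours of the last vector (only vectors that have a successor)
    dists = np.linalg.norm(Y[:-1] - last, axis=1)
    idx = np.argsort(dists)[:n_neighbours]
    # affine fit from y_i to the next scalar x_{i + d_e}
    A = np.c_[Y[idx], np.ones(len(idx))]
    coef, *_ = np.linalg.lstsq(A, x[idx + d_e], rcond=None)
    return float(np.append(last, 1.0) @ coef)
```

Multi-step forecasts are obtained by appending each estimate to the series and calling the predictor again.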
Figure 11: Correlation vs. prediction step using position data
Theoretically the value of λ₁ is ln 2 = 0.69315.... In Wales[1991] the value λ₁ = 0.69310 was found for a series of 1000 points, of which the first 500 were used to fit the model. Figure 11 shows our results for the same parameters as used in Wales[1991]. We calculated the initial slope using a spline function (see also the next section). We obtained λ₁ = 0.70. This result is in good agreement with the real value.
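For reference, the tent map itself is one line; the slope-2 convention is assumed here, matching the quoted value ln 2:

```python
import math

def tent(x):
    """Tent map with slope 2; |f'(x)| = 2 everywhere, so lambda_1 = ln 2."""
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

def tent_series(x0, n):
    """Iterate the tent map n times starting from x0."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(tent(xs[-1]))
    return xs
```

Note that in floating point the slope-2 tent map collapses to exactly 0 after roughly 50 iterations (every float is a dyadic rational), so long test series need a small amount of noise or higher-precision arithmetic.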
4.4.2 Application to the dripping faucet
We have applied the method above to the position data produced by DRIPS, consisting of 14000 time steps. The length of the time steps was 0.1 seconds. The first 13000 were used to search for neighbouring points. We used a long correlation interval, i.e. 500. We have predicted 65 time steps of 0.1 seconds. This corresponds to about 3 drippings of the faucet. An important parameter in the prediction is the value of the embedding dimension. Theoretically, the value for the embedding dimension must be d_e ≥ 2d + 1 (Molenaar[1992], page 26), where d is the dimension of the chaotic attractor. For the dripping faucet d is probably slightly bigger than two. This means that an embedding dimension of 5 seems to be a reasonable choice. In Figure 13 we give the correlation results for this case.
Figure 12: Fitted spline using tent map data
From the logarithmic plot in Figure 14 we have to calculate the initial slope. However, the plot is contaminated by noise. It seems that the length of the correlation interval was not long enough, but because of memory problems with larger correlation intervals this is the best we can get. We have to smooth the graph to remove the effects of the noise. To do this we fitted a spline through the graph, as can be seen in Figure 14. Varying the stiffness of the spline, we find 0.09 ≤ λ₁ ≤ 0.33. The fitted spline shown in the figure corresponds to λ₁ = 0.16. We would like to stress here that using a spline is just one of the methods to remove noise. An estimate of the order of magnitude is the best result we can get.
Remark: The results above are obtained only for the position data of the dripping faucet. We have also made several calculations on the velocity and mass data. However, these results are worse compared to the results of the position data, in the sense that larger wiggles were present and that the correlation decreases faster. This can also be seen in the correlation plots for a series of 1000 points (see Figures 15-17). They denote the correlations as a function of the time differences between two points. From about the first 50 points in the correlation plots one can see that the position data is on average more correlated than the velocity or mass data. Consequently, predictions made by using position data will probably give better estimates.
Figure 13: Correlation vs. prediction step using position data
Theoretically, the velocity and mass are proper read-out functions of the model and should provide us with the same results.
Figure 14: Fitted spline using position data
5 Stochastic Prediction
In the previous section we used a chaotic prediction method. In this section we shall apply a stochastic prediction method. Two programs will be
used: Autobox, based on the Box-Jenkins model, and Linpred, based on an
autoregressive model. First we introduce the Box-Jenkins model, second we
introduce the Linpred predictor and finally we discuss the results obtained
with both models.
5.1 The Box-Jenkins method
The B(ox)-J(enkins) method (Box & Jenkins[1976]) is a time series modelling process which describes a variable as a function of its past values (the AutoRegressive part) and the past values of the noise (the Moving Average part). The purpose of the B-J method is to find the linear model (or filter) which best describes the underlying structure of the time series. For the reduced time series (the series from which the effects of the underlying structure are "subtracted") no structure must be present; in other words, the reduced time series has to be white noise. After finding the model it can be used to forecast
Figure 15: Position correlation plot
future values of the series.
The modeling procedure itself is a three stage iterative process:
• Identification: To make the time series stationary.
• Estimation and Diagnostic checks: To estimate the parameters in the
identified model.
• Forecasting: To use the model to generate forecasts for future values of
the time series.
Before discussing these points in more detail, it is necessary to introduce some terms. A B-J model, often referred to as an ARIMA-model, can be expressed in the following form:

    φ_p(B) w_t = θ₀ + θ_q(B) a_t    (9)

where

    w_t = (1 − B^c)^d (z_t − μ).    (10)

This form defines a so-called ARIMA(p,d,q)-model. The factors are summarized below:
Figure 16: Mass correlation plot
• z_t = the discrete time series.
• μ = the average value of z_t.
• φ_p = the autoregressive factor(s) of order p (AR(p)).
• a_t = the white noise at time step t.
• θ₀ = the deterministic trend (constant).
• θ_q = the moving average factor(s) of order q (MA(q)).
• w_t = the differenced time series of order d and degree c.
• B = the backshift operator, i.e. B z_t = z_{t−1}.
It is possible that the time series also includes some seasonal effects, for instance an annual cycle. In that case we can have more than one autoregressive, moving average and/or differencing factor. For instance, if we have monthly data and an annual effect, the model can be an ARIMA₁₂(p₁₂, d₁₂, q₁₂) × ARIMA(p, d, q)-model.
We will now further explain the various components of the above equation.
Figure 17: Velocity correlation plot
Differencing factor(s)
A model may have a number of differencing factors. Each differencing factor is a polynomial of the form (1 − B^c)^d. For a model with one or more differencing factors, the expected value of z_t (μ) can be taken to be equal to zero. This remark also applies to the deterministic trend parameter (θ₀).

φ_p(B) Autoregressive factor(s)
A model may have a number of autoregressive factors. Each autoregressive factor is a polynomial of the form:

    φ_p(B) = 1 − φ₁B − φ₂B² − ... − φ_p B^p    (11)

where φ₁, ..., φ_p are the (possibly vanishing) parameter values of the polynomial.

θ_q(B) Moving average factor(s)
A model may have a number of moving average factors. Each moving average factor is a polynomial of the form

    θ_q(B) = 1 − θ₁B − θ₂B² − ... − θ_q B^q    (12)

where θ₁, ..., θ_q are the (possibly vanishing) parameter values of the polynomial.
We now describe the three-stage modeling process in more detail.
5.1.1 Stage 1: Model Identification
In the identification phase the time series is examined in order to choose a tentative model form. There are several key statistical tools used during this phase. The most important of these are the autocorrelations and the partial autocorrelations of the time series.
The first step in identification is to make the time series stationary. In a stationary series, the mean and the variance are constant over time. Differencing makes the mean constant over time. Looking at the autocorrelations of the time series gives clues as to the appropriate level of differencing that is required. An autocorrelation function that starts out high (0.9 or above) and decays slowly indicates the need for differencing. The order of the differencing is determined by the number of time periods between the relatively high autocorrelations. For example, if the autocorrelation function is high at lags of 12 (for an annual effect in monthly data) and dies out very slowly, then this is an indication of the need for differencing of order 12 and degree greater than or equal to 1.
Making the variance constant in time can be achieved by a transformation of
the data. We will not consider this technique in detail. For the interested
reader we recommend the book of Box & Jenkins[1976].
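The autocorrelation check described above is easy to automate. The sketch below uses the sample autocorrelation and the 0.9 threshold mentioned in the text; the threshold is the report's rule of thumb, not a universal constant:

```python
import numpy as np

def acf(z, max_lag):
    """Sample autocorrelations r_1 .. r_{max_lag} of a series."""
    z = np.asarray(z, float) - np.mean(z)
    denom = float(z @ z)
    return [float(z[:-lag] @ z[lag:]) / denom for lag in range(1, max_lag + 1)]

def needs_differencing(z, threshold=0.9):
    """High, slowly decaying autocorrelations suggest differencing."""
    return acf(z, 1)[0] >= threshold
```

A trending series gives a first autocorrelation close to 1, while a rapidly alternating series gives a strongly negative one.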
The step succeeding the inducement of stationarity is the tentative identification of the autoregressive/moving average structure. The autocorrelations and the partial autocorrelations of the appropriately differenced and transformed series have patterns which are associated with a particular model form. The analyst can make a conjecture about the model form by examining the information in the (partial) autocorrelation functions. In general, decaying autocorrelations and a cut-off in the partial autocorrelations indicate an autoregressive structure, while decaying partial autocorrelations and a cut-off in the autocorrelations indicate a moving average structure.
5.1.2 Stage 2: Model Estimation and Diagnostic Checking
The second stage of the model building process is estimation of the coefficients in the tentatively identified model. This is done by a nonlinear least squares estimation procedure, which is fully described in Box & Jenkins[1976].
There are three basic diagnostic checks that must be performed on the estimated model. These tests are for necessity, invertibility and sufficiency. Each parameter included in the model should be statistically significant (necessary) and each factor must be invertible. In addition, the residuals from the estimated model should be white noise (model sufficiency).
The test for necessity is performed by examining the t-ratios for the individual parameter estimates. Parameters with nonsignificant estimates should be deleted from the model.
Invertibility is determined by calculating the roots of each factor in the model. All of the roots must lie outside the unit circle. If one of the factors is noninvertible, then the model must be adjusted. The appropriate adjustment is dictated by the type of the factor that is noninvertible. For example, a noninvertible autoregressive factor usually indicates under-differencing, while a noninvertible moving average factor may indicate over-differencing or the presence of a deterministic factor.
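The root check itself is a one-liner with numpy. The coefficient convention below follows equations (11) and (12), i.e. a factor 1 − c₁B − ... − c_pB^p; this is a sketch of the criterion, not Autobox's actual test:

```python
import numpy as np

def factor_invertible(coeffs):
    """Check a factor 1 - c_1 B - ... - c_p B^p: all roots of the
    polynomial (in B) must lie outside the unit circle."""
    poly = np.r_[-np.asarray(coeffs, float)[::-1], 1.0]  # highest power first
    roots = np.roots(poly)
    return bool(np.all(np.abs(roots) > 1.0))
```

For example, the factor 1 − 0.5B has its root at B = 2 and passes, while 1 − 2B has its root at B = 0.5 and fails.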
Model sufficiency is tested in the same way as model identification is performed. If there are patterns in the autocorrelations and partial autocorrelations of the residuals, then parameters have to be added to the model.
If all of the parameters are necessary, if each factor is invertible and if the model is sufficient, then the ARIMA model is adequate and it can be used for forecasting.
5.1.3 Stage 3: Model Forecasting
Model forecasting with the properly identified and estimated model is simply
an algebraic process of applying the model form to the actual time series data
and computing the forecast values from a given time origin. The confidence
intervals give a measure of the uncertainty in the point forecasts.
Remark: Autobox performs the above described steps automatically.
5.2 The Linpred predictor
This program can probably best be explained by comparing it with the Autobox program:

• Linpred calculates only AR-factors and difference factors, whereas Autobox also calculates MA-factors. MA-factors are not that important for forecasting, because they do not change the forecasts, but only influence the confidence interval of the forecasts.

• Linpred can only calculate one factor in the autoregressive model. This is not a very serious restriction, because seasonal effects can always be captured by choosing p in the AR(p)-model large enough.

• The size of the dataset is restricted to 300 in the Autobox program, whereas Linpred can handle much larger datasets (in our case 13000, with the points 13001 to 14000 used to calculate the forecast correlation).
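A stripped-down version of such an AR(p) fit takes a few lines with least squares. This is a sketch of the idea only, not the Linpred code (which also handles difference factors):

```python
import numpy as np

def fit_ar(z, p):
    """Least-squares fit of z_t = a_1 z_{t-1} + ... + a_p z_{t-p} + c."""
    z = np.asarray(z, float)
    cols = [z[p - k - 1:len(z) - k - 1] for k in range(p)]  # lag 1 .. lag p
    X = np.column_stack(cols + [np.ones(len(z) - p)])
    coef, *_ = np.linalg.lstsq(X, z[p:], rcond=None)
    return coef  # a_1, ..., a_p, c

def forecast(z, coef, steps):
    """Iterate the fitted model to produce multi-step forecasts."""
    hist = list(np.asarray(z, float))
    p = len(coef) - 1
    for _ in range(steps):
        hist.append(sum(coef[k] * hist[-1 - k] for k in range(p)) + coef[-1])
    return hist[len(z):]
```

Applied to a series that exactly satisfies an AR(2) recursion, the fit recovers the generating coefficients.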
5.3 Results of Autobox and Linpred
We applied Linpred to a dataset consisting of 14000 position points, with 0.1 seconds between two points, and we applied Autobox to subsets of this dataset. On average there is a drop every 21 points. However, the spread is quite large, i.e. from 16 to 27.
We start our discussion with the results of Autobox. The model which fulfilled the necessity requirements was an ARIMA(2,0,1)-model. No differencing was necessary. This was to be expected, because there is no trend in the data of the dripping faucet, i.e. the position is after each drip more or less the same.
Moreover, the variance is stationary. The fraction of the data that could be explained by the model was 99% (R² = 0.99). All the autoregressive factors were invertible. Only 7 autocorrelations and 10 partial autocorrelations of the residuals remained significant (out of 100 calculated autocorrelations). The residuals were normally distributed with an extremely small probability level. Furthermore, they were close to zero except for some outliers which had an average lag between them of 21. This equals the average time between two drops, and as a result it could be removed by extending the model with an ARIMA-component with lag 21. However, this turned out not to be significant. The explanation for this could be that the series was not long enough (only 300 points). Furthermore, the lag is on average 21, but ranges from 16 to 27. The influence of this phenomenon can also be seen in Figure 18, which represents an illustrative part (lag 241 to 300) of the plot of the fit (B) versus the actual values (A), where X denotes the case where the fit and the actual data point coincide.
It can be seen that the inaccuracies occur in the neighbourhood of the jump (the low values).
The problem discussed above has an important influence on the accuracy of the forecasts, as is shown in Figure 19.
The results get even worse after time step 320. Figure 20 also suggests a poor
Figure 18: Plot of the fit versus the actual values (lags 241 to 300)
    PERIOD | FORECAST VALUE | ACTUAL VALUE |       ERROR | % ERROR
    -------+----------------+--------------+-------------+--------
       301 |    -.12934E+01 |  -.13218E+01 | -.28447E-01 |    2.15
       302 |    -.12798E+01 |  -.13790E+01 | -.99198E-01 |    7.19
       303 |    -.12143E+01 |  -.14197E+01 | -.20546E+00 |   14.47
       304 |    -.11118E+01 |  -.14479E+01 | -.33608E+00 |   23.21
       305 |    -.98976E+00 |  -.14674E+01 | -.47763E+00 |   32.55
       306 |    -.86549E+00 |  -.14457E+01 | -.58019E+00 |   40.13
       307 |    -.75426E+00 |  -.13423E+01 | -.58807E+00 |   43.81
       308 |    -.66769E+00 |  -.11947E+01 | -.52700E+00 |   44.11
       309 |    -.61283E+00 |  -.10348E+01 | -.42198E+00 |   40.78
       310 |    -.59197E+00 |  -.88705E+00 | -.29508E+00 |   33.27
       311 |    -.60298E+00 |  -.76825E+00 | -.16527E+00 |   21.51
       312 |    -.64018E+00 |  -.68849E+00 | -.48310E-01 |    7.02
       313 |    -.69553E+00 |  -.65220E+00 |  .43327E-01 |    6.64
       314 |    -.75988E+00 |  -.65932E+00 |  .10056E+00 |   15.25
       315 |    -.82427E+00 |  -.70650E+00 |  .11777E+00 |   16.67
       316 |    -.88093E+00 |  -.78813E+00 |  .92802E-01 |   11.77
       317 |    -.92410E+00 |  -.89730E+00 |  .26803E-01 |    2.99
       318 |    -.95043E+00 |  -.10266E+01 | -.76124E-01 |    7.42
       319 |    -.95903E+00 |  -.11685E+01 | -.20944E+00 |   17.92
       320 |    -.95130E+00 |  -.13162E+01 | -.36486E+00 |   27.72

Figure 19: Forecasts made with Autobox
Figure 20: Forecasts made with Autobox
prediction capacity.
This poor prediction ability is probably due to the jump. This makes the MA-factor very important (the t-ratio is about 11, whereas to be included in the model it has to be at least 2). As discussed before, the MA-factor has no influence on the predicted value, but rather on the width of the confidence interval.
As can be expected, the conclusions drawn from the Linpred results do not differ very much from the conclusions drawn from the Autobox model. Figures 21 and 22 show the results of two models, an AR(2)-model and an AR(25)-model. The first is similar to the model found above, except for the MA-factor (which is not important for the predicted values). The second is the result of a closer examination of the AR(2)-plot; we can see that there seems to be a period of about 21 in the series. This was also noted in the discussion of the Autobox results. An AR(2) × AR₂₁(p) model seems to be useful. This can be partially simulated by an AR(25)-model. The models are equal if the 3rd through 20th and the 22nd through 25th coefficients in the AR(25)-model are zero. The resulting graph from the AR(25)-model is given in Figure 22. The period of 21 seems to be removed. Moreover, the accuracy of the prediction is improved, but still much worse than that achieved using the chaotic prediction method and given in Figure 13.
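One way to see how a seasonal product model sits inside a long AR model is to multiply the lag polynomials. In the sketch below (coefficient values invented for illustration) an AR(2) factor times a single lag-21 seasonal term occupies only lags 1, 2, 21, 22 and 23, so most coefficients of the covering long AR model are indeed zero:

```python
import numpy as np

ar2 = np.array([1.0, -0.6, -0.2])       # 1 - 0.6 B - 0.2 B^2 (illustrative)
seasonal = np.zeros(22)
seasonal[0], seasonal[21] = 1.0, -0.3   # 1 - 0.3 B^21 (illustrative)

# polymul wants the highest power first, so reverse, multiply, reverse back
product = np.polymul(ar2[::-1], seasonal[::-1])[::-1]
nonzero_lags = [lag for lag, c in enumerate(product) if lag > 0 and abs(c) > 0]
# nonzero_lags == [1, 2, 21, 22, 23]
```

The cross terms at lags 22 and 23 are the products of the AR(2) and seasonal coefficients.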
Figure 21: AR(2)-model
Figure 22: AR(25)-model

6 Towards chaotic behaviour of the dripping faucet
Until now in this project we have always set the flow velocity β of the faucet equal to 0.6. It is known that the system is chaotic for this value. For small flow velocities no chaos is present. Here, we want to find out how the system becomes chaotic as a function of the parameter β.
For a certain flow velocity we calculate values of the times between two drips T_n, n = 1, 2, ... using the program DRIPS. From the plot of T_n versus T_{n+1} we can see whether the system is chaotic or not for this flow velocity. If the plotted points are centered at n points, this means that the system has period n. Otherwise the system is probably chaotic.
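The period read off from such a plot can also be detected automatically. A minimal sketch (the tolerance and maximum period are arbitrary choices):

```python
import numpy as np

def estimate_period(T, tol=1e-3, max_period=6):
    """Period of the drip-interval sequence T_n: the smallest k with
    T_{n+k} = T_n for all n (within tol); 0 means no period found,
    which suggests chaotic behaviour."""
    T = np.asarray(T, float)
    for k in range(1, max_period + 1):
        if np.all(np.abs(T[k:] - T[:-k]) < tol):
            return k
    return 0
```

For a period-2 pattern such as 1.8, 2.2, 1.8, 2.2, ... this returns 2.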
For 25 values of β in the interval [0.5, 0.6] we followed this procedure. We calculated 2000 points with a time step of 0.2 and skipped the first 1000 points. This is to avoid a possible transient period. So, we obtained about 85 points of times between jumps. The results of the period for these values are given in Figure 23. For practical reasons chaotic behaviour is indicated by a "0", although it corresponds to a period ∞.
Figure 23: Period vs. beta
We see that there is no period doubling as in the logistic map. For some values it is difficult to say whether the period is equal to 3 or 4 (but that is not very important). It is remarkable that at the end (β between 0.576 and 0.6) the system is first chaotic, then periodic, and afterwards chaotic again. In the interval [0.588, 0.592] we made 10 runs to see what happens there. The result is shown in Figure 24. For β increasing from 0.590 the system suddenly changes from periodic to chaotic behaviour. The intervals can be made smaller and smaller, but we did not do this because it takes too much computing time. In this run we took 5000 points, of which we skipped the first 4000 because the transient can be very long. For this last run the calculation took about 50 minutes for each β value.
Figure 24: Period vs. beta
7 Conclusions and recommendations

In this report we described several methods to calculate the Lyapunov exponent of the model of the dripping faucet. The outcomes are presented in Table 11.
    method       | Lyapunov exponent | lower bound | upper bound
    -------------+-------------------+-------------+------------
    WOLF package |              0.23 |        0.20 |        0.25
    Direct       |              0.24 |        0.22 |        0.26
    Prediction   |              0.16 |        0.09 |        0.33

Table 11: Best values for the Lyapunov exponent
From this table we may conclude that the largest Lyapunov exponent of the model of the dripping faucet is about 0.24. Consequently the dripping faucet is a weakly chaotic system.
It is hard to compare the methods used with each other. Looking at the length of the interval of estimates, it seems that the direct method gives the best results, followed by the Wolf package.
The following remarks should be made about the several methods:
Wolf package
It seems that the results depend on the readout function used. Using the time series of the dripping intervals or the mass series does not give reliable results. The reason is probably the discontinuity of these series. The position and velocity series give better results. This does not match the theory, which says that every function of the state variables can be used as a readout function (Takens[1981]).
Another aspect found is that the embedding dimension used for reconstruction should be bigger than 3.
Direct method
Since the attractor is not two-dimensional, it is not useful to order the points in the plane of the jump. A two-dimensional set cannot be ordered like a line. So the results of the sorting only gave the hint to analyse the attractor more exactly, and have nothing to do with the largest Lyapunov exponent.
The method takes the behaviour of the system into account via its description by differential equations. So this must be the most exact method to determine the largest Lyapunov exponent, because it uses all information about the system. In practice it often happens that the exact description is not known and only a time series is given. Then the direct method is not applicable.
Prediction method
With the prediction method it is also found that the results depend on the readout function used.
The method of Wales produces estimates of the Lyapunov exponent which lie in the largest interval of the 3 methods. The reason could be that the time series were not long enough. This can be seen from the correlation plot, which still contains wiggles. Longer time series, however, introduce computability problems. Wiggles were not present in the correlation plot of the tent map. For this map the value found for the Lyapunov exponent coincides with the value in Wales[1991] and with the theoretical value.
We also applied a stochastic description. The underlying statistical model seems to be an ARIMA(2,0,1)-model (Autobox) or perhaps an ARIMA(2,0,1) × ARIMA₂₁(2,0,1)-model. In both cases the predictions are bad, probably due to the fact that the jumps in the data have a large influence on the predictions.
If the flow velocity is equal to 0.5 the system is periodic. For a flow velocity equal to 0.6 it is chaotic. In the interval in between, the system is sometimes periodic and sometimes chaotic. No period doubling has been observed, as is seen for the well-known logistic map.
In future research it would be interesting to look at the influence of the readout function. As we have seen, there are big differences between using the position series and, for example, the series of times between jumps.
Another recommendation is that the stochastic model could be extended to a model which can handle jumps (see Box & Jenkins[1976]).
References
[*] G.E.P. Box & G.M. Jenkins, Time series analysis: forecasting and control, Holden-Day, San Francisco, 1976.
[*] J. Molenaar, Chaos theory and time series, Report IWDE 92-05, T.U.E. Eindhoven, 1992.
[*] A. Noack, Der tropfende Wasserhahn - ein Beispiel für ein System mit chaotischem Verhalten, ISAM'92 Nonlinear Dynamics: Attractor Approximation and Global Behaviour, pp. 167-178, 1992.
[*] N.H. Packard, J.P. Crutchfield, J.D. Farmer, R.S. Shaw, Geometry from a time series, Phys. Rev. Lett., vol. 45, pp. 712-716, 1980.
[*] R.S. Shaw, The dripping faucet as a model chaotic system, Aerial Press, Santa Cruz, 1984.
[*] G. Sugihara & R.M. May, Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series, Nature, vol. 344, pp. 734-741, 1990.
[*] F. Takens, Detecting strange attractors in turbulence, in: Dynamical systems and turbulence, Warwick. Lecture Notes in Math 898 (Springer, Berlin, 1981), pp. 336-381.
[*] D.J. Wales, Calculating the rate of loss of information from chaotic time series by forecasting, Nature, vol. 350, pp. 485-488, 1991.
[*] A. Wolf et al., Determining Lyapunov exponents from a time series, Physica 16D, pp. 285-317, 1985.