The Detection of Cavitation in Hydraulic Machines by Use of Ultrasonic Signal Analysis
1. Introduction
Cavitation is generated when the static pressure in a fluid falls below the vapour pressure of the fluid at constant water temperature. Cavitation bubbles emerge around nuclei in the fluid. These nuclei locally weaken the cohesive forces in the fluid, which makes cavitation easier. The nuclei might be impurities, trapped gases or other tiny cavities. The necessary reduction of the static pressure is due to local pressure fluctuations or to an increase of the fluid velocity.
Cavitation bubbles that implode near the surfaces of the mechanical components of the machine generate a micro-jet. This leads to high pressure and velocity peaks, which have an abrasive and damaging effect on the components. Large cavitation bubbles or vapour regions do not have an abrasive effect; however, they disturb the flow field, can cause flow separation and therefore reduce the efficiency of the machine. Both effects are unwanted. In a hydraulic machine cavitation can occur at a variety of locations. Figure 1 lists different types of cavitation in a hydraulic machine. Avellan [1] describes the fact that dangerous and non-dangerous types of cavitation cannot be distinguished from operating point data alone, as shown in the hill chart of Figure 2. Classical monitoring methods based on the frequency content of pressure, noise and vibration signals are described in Escaler et al. [2].
Figure 1. Different types of cavitation
Figure 2. Hill chart for Francis turbine: regions of different types of cavitation
1: leading edge cavitation suction side, 2: leading edge cavitation pressure side
3: interblade cavitation, 4: ring swirl cavitation
Here a new approach is applied: the deterioration of the ultrasonic signals due to the various cavitation effects is exploited in a statistical way.
2. Measurements
Measurements were carried out on three objects by mounting the acoustic sensors in a clamp-on fashion on the outside of the fluid-carrying section of the installations (Müller [3], Gruber et al. [4]):
- sphere in a vertical pipe of perspex at the hydraulic laboratory at the HSLU (Figure 3)
- different profiles in the cavitation channel of the EPFL-LMH laboratory (Figure 4)
- Francis model test turbine at the test rig at EPFL-LMH (Figure 5)
Figures 3–5 show the three installations tested and Figures 6–8 the corresponding acoustic path locations.
Figure 3. Sphere
Figure 4. Profile in cavitation tunnel
Figure 5. Francis model test turbine
The operating points of the three objects were chosen such that the type of cavitation belonging to each operating point was either visible or known to the test engineers. The data could therefore later be used for training the classifiers.
(Figure: two panels of recorded signal amplitude [V] versus time [s], 0–200 s)
Figure 11. Samples of correlation functions: upper left undisturbed signal, other signals with different degrees of disturbance
$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N} e_i^2 = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - f(p_i)\right)^2$$
Figure 12. Definition of some parameters of the correlation function
Both are measures of the asymmetry of the correlation function. With either of them as the third input it was possible to distinguish all four states. Table 1 gives the details of the experiment, while Figure 13 shows the chosen network structure.
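As an illustration of the objective used for training, the following is a minimal sketch assuming a small feed-forward network on three signal parameters; the layer sizes, activation and data are illustrative assumptions and do not reproduce the structure of Figure 13.

```python
# Minimal sketch: forward pass of a small feed-forward net on three signal
# parameters and the MSE objective defined above (layer sizes are assumptions).
import numpy as np

def forward(P, W1, b1, W2, b2):
    """P: (N, 3) matrix of input parameters; returns network outputs f(p_i)."""
    h = np.tanh(P @ W1 + b1)        # hidden layer
    return h @ W2 + b2              # linear output layer

def mse(y, f):
    """MSE = (1/N) * sum_i (y_i - f(p_i))^2."""
    return np.mean((y - f) ** 2)

# Hypothetical example with random weights and synthetic data
rng = np.random.default_rng(1)
P = rng.standard_normal((10, 3))                      # 10 samples, 3 input parameters
y = rng.integers(0, 4, size=(10, 1)).astype(float)    # four cavitation states as targets
W1, b1 = rng.standard_normal((3, 5)), np.zeros(5)
W2, b2 = rng.standard_normal((5, 1)), np.zeros(1)
print(mse(y, forward(P, W1, b1, W2, b2)))
```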
5. Classification with decision trees (Breiman et al. [11], [12], Hand et al. [13])
A binary decision tree is a sequence of conditions factored into a tree-structured series of branches, as shown in Figure 14. Each node consists of a condition for one attribute or input parameter of the data set. The result of checking the condition is either yes or no, depending on whether the attribute is above or below a certain threshold. Each node therefore leads to two outgoing branches. To know the order in which the attributes must be chosen to split the data into two classes, a measure is needed that allows the effect of the attributes to be compared so that one can be chosen over the other. One of these measures is called impurity and can be defined as the amount of uncertainty present in the data. The input parameter that reduces the impurity, together with a threshold value for the condition in the node, must be found by optimization. Given the probability p of an input parameter data set belonging to one target value, and 1-p the probability of the same data set belonging to the other target value, a common impurity function used in applications is the Gini index criterion
$$J(p) = p\,(1-p)$$
The input parameter for which J is minimized is then chosen as the node parameter with a corresponding threshold. The optimization of the Gini index tries to split the input parameter data such that all the data of one input parameter for which the probability p is maximal is sent to one of the branches, while all the others are sent to the other branch. The minimization of the Gini index therefore leads to nodes that are as pure as possible.
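The following is a minimal sketch of this split search for a single attribute and two classes; it is not the authors' code, and the data and parameter names are hypothetical.

```python
# Minimal sketch: choosing a split threshold for one input parameter by
# minimising the size-weighted Gini impurity J(p) = p(1 - p) of the two branches.
import numpy as np

def gini(labels):
    """Gini impurity p(1 - p) of a set of binary (0/1) target labels."""
    if len(labels) == 0:
        return 0.0
    p = np.mean(labels)
    return p * (1.0 - p)

def best_threshold(x, y):
    """Scan candidate thresholds for attribute x and return the one that
    minimises the size-weighted Gini impurity of the two resulting branches."""
    best_t, best_j = None, np.inf
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        j = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if j < best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Hypothetical example: one signal parameter (e.g. coefficient of variation)
# and binary cavitation labels.
x = np.array([0.02, 0.03, 0.04, 0.20, 0.25, 0.30])
y = np.array([0, 0, 0, 1, 1, 1])
print(best_threshold(x, y))   # threshold separating the two classes, impurity 0.0
```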
With the decision tree training data set, the tree structure and the threshold of each branch are determined. Table 2 shows the validation matrix, for which a much larger data set, including the training data set, is classified by the decision tree shown in Figure 14.
Table 2. Validation matrix, target versus classification
The cavitation detection is correct except for one state off the diagonal, where interblade vortex cavitation is classified as a draft tube swirl. This leads to a success rate of 98%, compared to 100% with the neural network presented above for the same data set. However, compared to the neural network, the architecture of the decision tree is much simpler and more transparent. It can easily be implemented by simple rules.
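For orientation, the following is a minimal sketch of this training and validation step using scikit-learn's DecisionTreeClassifier and confusion_matrix rather than the authors' own implementation; the feature names and data are hypothetical.

```python
# Minimal sketch: train a Gini-based decision tree on signal parameters and
# compute a validation matrix analogous to Table 2 (synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)
# Hypothetical features: mean amplitude, coefficient of variation, correlation asymmetry
X_train = rng.standard_normal((120, 3))
y_train = rng.integers(0, 4, size=120)      # four cavitation states

tree = DecisionTreeClassifier(criterion="gini", max_depth=3)
tree.fit(X_train, y_train)

# Classify a validation set (here the training data is reused for brevity)
y_pred = tree.predict(X_train)
print(confusion_matrix(y_train, y_pred))    # rows: target state, columns: classified state
```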
For the feed-forward neural network:
- The number of weights and thresholds to be optimized quickly becomes large, with no clear connection to the underlying physics. This makes it hard to interpret the resulting classifier function.
- The high number of weights and thresholds leads to a nonlinear optimization problem of high dimensionality. The choice of initial conditions for the optimization is therefore a delicate issue, because the solution found depends on them.
For the decision tree:
- The decision tree found is understandable, easily readable and easily implementable. After each branch the tree creates a subspace, whereas a feed-forward neural net tries to classify all data in one space whose dimension equals the number of different parameters. A graphical interpretation is therefore easy.
- An input parameter can be used several times, which is not possible with a single feed-forward neural network.
- From a decision tree we learn directly which input parameter best splits one class from the other classes. The threshold for each parameter condition also has a physical meaning.
6.3 Conclusions
Both classifier methods are applicable and led to the required distinction of the different water states. For all the experiments at least one classifier could be found with both methods. The solutions, however, are not unique and are driven by the training data. The decision tree approach is easier to interpret and its acceptance is therefore higher. The method automatically provides the user with the most important input parameters. From this point of view, the decision tree method could also be used for the choice of input parameters for a neural network approach. A general automated search for the best inputs and structure of a neural net is a huge task and could not be carried out. The most important input parameters found have a physical meaning:
1. The attenuation observed in the mean of the signal amplitudes can be explained by the
concentration and size of particles or bubbles in the water.
2. The coefficient of variation of the signal amplitude, and therefore also the standard deviation, can be interpreted in terms of the concentration of cavitation or air bubbles. If there are only a small number of bubbles, the standard deviation is small; it increases with higher bubble concentration. If there are only a few bubbles, most of the transmitted ultrasound signals are unaffected, so for most of the recorded signals there is no difference to a signal sent through clear water; consequently, the few signals disturbed by bubbles have only a small impact on the standard deviation. Interestingly, air-filled bubbles influence the standard deviation much more than vapour-filled bubbles, which helps to distinguish air bubbles from cavitation bubbles in the water.
3. The physical impact on the cross-correlated signals is, on the other hand, not so clear, but with large measurement series and well-defined boundary conditions, relations between water states and signal interaction can be extracted experimentally. The important cross-correlation parameters describe the distortion of the shape of the signal (a sketch of the corresponding feature extraction follows this list).
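The following is a minimal sketch of how such amplitude statistics and a correlation-shape measure could be computed. It is not the authors' implementation; the exact parameter definitions of Figure 12 are not reproduced here, and the asymmetry measure below is an illustrative assumption.

```python
# Minimal sketch of the statistical features discussed above: mean amplitude,
# standard deviation, coefficient of variation, and a simple asymmetry measure
# of the cross-correlation with a clear-water reference signal.
import numpy as np

def amplitude_features(bursts):
    """bursts: 2-D array, one received ultrasonic burst per row."""
    amp = np.max(np.abs(bursts), axis=1)          # peak amplitude of each burst
    mean, std = amp.mean(), amp.std()
    return {"mean": mean, "std": std, "cv": std / mean}

def correlation_asymmetry(signal, reference):
    """Cross-correlate a received burst with a clear-water reference and compare
    the energy left and right of the correlation peak (shape distortion)."""
    c = np.correlate(signal, reference, mode="full")
    k = np.argmax(np.abs(c))
    left, right = np.sum(c[:k] ** 2), np.sum(c[k + 1:] ** 2)
    return (right - left) / (right + left)        # 0 for a symmetric correlation

# Hypothetical usage with synthetic data
rng = np.random.default_rng(0)
reference = np.sin(2 * np.pi * np.arange(200) / 20)
bursts = reference + 0.05 * rng.standard_normal((50, 200))
print(amplitude_features(bursts))
print(correlation_asymmetry(bursts[0], reference))
```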
In a next step, both classification methods will be applied to field experiments. An open issue is the question of how generically a classifier can be specified and how much of the classifier has to be trained on site. A special focus will also be given to the combination of the ultrasonic classifier with operating point information and with the analysis of vibration, hydrophone and noise signals. By merging the different sources of information with rule-based or decision tree methods, the chances of being able to separate dangerous from non-dangerous cavitation will increase.
References
[1] Avellan F 2004 Introduction to cavitation in hydraulic machinery (The 6th International
Conference on Hydraulic Machinery and Hydrodynamics Timisoara, Romania)
[2] Escaler X, Egusquiza E, Farhat M, Avellan F, Coussirat M 2006 Detection of cavitation in
hydraulic turbines (Mechanical Systems and Signal Processing 20) pp. 983-1007
[3] Müller C 2008 Untersuchung der Kavitation mit Ultraschall an zwei Prüfstrecken
(Bachelor Diplomarbeit HSLU Luzern)
[4] Gruber P, Roos D, Müller C, Staubli T 2011 Detection of damaging cavitation states by
means of ultrasonic signal parameter patterns (WIMRC 3rd International Cavitation
Forum, July 2011, Warwick, England)
[5] Hassoun M H 1995 Fundamentals of artificial neural networks (MIT Press)
[6] Press W, Teukolsky S A, Vetterling W T, Flannery B P 1986 Numerical Recipes
(Cambridge University Press)
[7] Etterlin M 2012 Klassifizierung von Wasserzuständen mithilfe von Ultraschallsignalen
und neuronalen Netzen (Industrieprojekt, HSLU Luzern)
[8] Lerch T 2013 Klassifizierung von Kavitationszuständen mithilfe von Ultraschallsignalen
(Industrieprojekt, HSLU Luzern)
[9] Gruber P, Odermatt P, Etterlin M, Lerch T, Farhat M 2013 Cavitation Detection via
Ultrasonic Signal Characteristics (IAHR, 5th International Workshop on Cavitation
and Dynamic Problems in Hydraulic Machinery, Lausanne, Switzerland)
[10] Gruber P, Odermatt P, Etterlin M, Lerch T 2013 The detection of cavitation in hydraulic
machines by use of ultrasonic signal analysis (CTI-Report)
[11] Breiman L, Friedman J H, Olshen R A, Stone C J 1984 Classification and regression
trees (Chapman and Hall)
[12] Breiman L 1996 Technical Note: Some Properties of Splitting Criteria (Machine
Learning 24) pp. 41-47
[13] Hand D, Mannila H, Smyth P 2001 Data Mining (MIT Press)
[14] Frei M 2013 Klassifizierung von Kavitationszuständen mithilfe von Ultraschallsignalen
und regelbasierten Methoden (Industrieprojekt, HSLU Luzern)