An Intelligent Approach To Evaluate Drilling Performance

ORIGINAL ARTICLE
DOI 10.1007/s00521-010-0457-6
Received: 21 April 2010 / Accepted: 2 October 2010 / Published online: 20 October 2010
© Springer-Verlag London Limited 2010
approach for the prediction is necessary, and artificial intelligence (AI) comes in handy for this purpose. The artificial neural network (ANN) model has been one of the attractive tools used in geo-engineering applications due to its high performance in the modeling of non-linear multivariate problems [27]. The ANN is a young branch of intelligence science and has developed rapidly since the 1980s. Nowadays, ANN is considered one of the intelligent tools for understanding complex problems. A neural network has the ability to learn from patterns it has encountered before. Once the network has been trained with a sufficient number of sample data sets, it can make predictions, on the basis of its previous learning, about the output for a new input data set of similar pattern [10]. Due to its multidisciplinary nature, ANN is becoming popular among researchers, planners, designers, etc. as an effective tool for the accomplishment of their work. ANN is therefore being used successfully in many industrial areas as well as in research. The ANN model is particularly suited to problems in which many complex parameters influence the process and results, where the process and results are not fully understood, and where historical or experimental data are available. The prediction of ROP is a problem of this type.

In the present investigation, different drilling and rock parameters have been used to predict the rate of penetration (ROP) using artificial neural network and multivariate regression analysis, and the predicted results are compared with actual field data. The basic idea is to find the scope and suitability of the ANN for prediction of ROP.

2 Factors influencing drilling performance

Drilling is an operation in which rock is fragmented under the influence of drilling forces like thrust and torque, while the broken chips are flushed out of the hole by circulating water. The drill performance depends upon the following:

1. the physico-mechanical properties of the rock,
2. the shape of the cutting tool,
3. the magnitude of the drilling forces acting at the bit–rock interface and
4. the flushing rate.

The relationship between the rate of penetration and various rock as well as machine parameters has been studied in detail by several researchers [7, 16, 17, 19, 20]. Studies of the effect of polymer mixed into the flushing water show a substantial increase in the performance of diamond drilling. However, the work in this regard has not considered all machine parameters and has been confined to laboratory tests only.

To study the influence of various parameters on the performance of diamond drilling, the work done by various researchers has been reviewed [7, 21, 22, 24]. The rate of penetration increases linearly with the increase in thrust on bit for each rotational speed. However, for each rpm there exists an optimum thrust on bit, beyond which there is no appreciable increase in the rate of penetration [3]. The magnitude of the torque developed at the bit–rock interface increases linearly with the increase in thrust on bit at each rotational speed.

3 The philosophy of artificial neural network

The artificial neural network (ANN) is a branch of 'artificial intelligence', which also includes case-based reasoning, expert systems and genetic algorithms. Classical statistics, fuzzy logic and chaos theory are also considered related fields. An ANN is an information-processing system simulating the structure and functions of the human brain. It is a highly interconnected structure that consists of many simple processing elements (called neurons) capable of performing massively parallel computation for data processing and knowledge representation. The neural network is first trained by processing a large number of input patterns and the corresponding outputs. After proper training, the neural network is able to recognize similarities and predict the output pattern when presented with a new input pattern. Neural networks can detect similarities in inputs even though a particular input may never have been encountered before. This property gives them excellent interpolation capabilities, especially when the input data are noisy (not exact). Neural networks may be used as an alternative to autocorrelation, multivariable regression, linear regression, trigonometric and other statistical analysis techniques.

A particular network can be defined using three fundamental components: transfer function, network architecture and learning law [11, 26]. These components have to be defined depending upon the problem to be solved.

3.1 Network training

A network first needs to be trained before it can interpret new information. A number of algorithms are available for training neural networks, but the back-propagation algorithm is the most versatile and robust technique, providing the most efficient learning procedure for multilayer neural networks. The fact that back-propagation algorithms are especially capable of solving predictive problems also makes them popular [15]. The feed-forward back-propagation neural network (BPNN) always consists of at least three layers: an input layer, a hidden layer and an output layer. Each layer consists of a number of elementary
processing units, called neurons, and each neuron is connected to the next layer through weights; i.e., neurons in the input layer send their output as input to the neurons in the hidden layer, and the connection between the hidden and output layers is similar. The number of hidden layers, and of neurons in the hidden layer, changes according to the problem to be solved. The number of input and output neurons is the same as the number of input and output variables.

To differentiate between the various processing units, values called biases are introduced into the transfer functions. Except for the input layer, all neurons in the back-propagation network are associated with a bias neuron and a transfer function. The bias is much like a weight, except that it has a constant input of 1, while the transfer function filters the summed signals received from this neuron. These transfer functions are designed to map the net output of a neuron or layer to its actual output. The application of these transfer functions depends on the purpose of the neural network. The output layer produces the computed output vectors corresponding to the solution [9].

During training of the network, data are processed through the input layer to the hidden layer, until they reach the output layer (forward pass). In this layer, the output is compared to the measured values (i.e., the "true" output). The difference, or error, between both is propagated back through the network (backward pass), updating the individual weights of the connections and the biases of the individual neurons. The input and output data are mostly represented as vectors called training pairs. This process is repeated for all the training pairs in the data set until the network error converges to a threshold defined by a corresponding function: usually the root mean squared error (RMS) or summed squared error (SSE).

In Fig. 1, the jth neuron in the hidden layer is connected to a number of inputs

x_i = (x_1, x_2, x_3, ..., x_n)   (1)

The net input values in the hidden layer will be as follows:

Net_j = Σ_{i=1}^{n} x_i w_ij + θ_j   (2)

where x_i = input units, w_ij = weight on the connection of the ith input and the jth neuron, θ_j = bias neuron (optional) and n = number of input units.

So, the net output from the hidden layer is calculated using a logarithmic sigmoid function

O_j = f(Net_j) = 1 / (1 + e^(−(Net_j + θ_j)))   (3)

The total input to the kth unit is as follows:

Net_k = Σ_{j=1}^{n} w_jk O_j + θ_k   (4)

where θ_k = bias neuron and w_jk = weight between the jth neuron and the kth output.

So, the total output from the kth unit will be

O_k = f(Net_k)   (5)

In the learning process, the network is presented with a pair of patterns: an input pattern and a corresponding output pattern. The network computes its own output pattern using its (mostly incorrect) weights and thresholds. Now, the actual output is compared with the desired output. Hence, the error at any output in layer k is

e_k = t_k − O_k   (6)

where t_k = desired output and O_k = actual output.
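To make the forward pass concrete, the following is a minimal NumPy sketch of Eqs. (1)-(6) for the 4-7-1 network adopted later in the paper. It is illustrative only: the weights, biases and input pattern are random or made-up stand-ins rather than trained values, and the bias is applied once, in the net input of Eq. (2) (Eq. (3) as printed repeats θ_j inside the exponent; the conventional form is used here).

import numpy as np

rng = np.random.default_rng(0)

def logsig(net):
    # Logarithmic sigmoid transfer function of Eqs. (3) and (5).
    return 1.0 / (1.0 + np.exp(-net))

def forward_pass(x, w_ij, theta_j, w_jk, theta_k):
    net_j = x @ w_ij + theta_j      # Eq. (2): net input to each hidden neuron j
    o_j = logsig(net_j)             # Eq. (3): hidden-layer output
    net_k = o_j @ w_jk + theta_k    # Eq. (4): net input to each output unit k
    return logsig(net_k)            # Eq. (5): network output

# 4 input neurons, 7 hidden neurons, 1 output neuron (the 4-7-1 architecture).
w_ij = rng.normal(size=(4, 7))
theta_j = rng.normal(size=7)
w_jk = rng.normal(size=(7, 1))
theta_k = rng.normal(size=1)

x = np.array([0.45, 0.41, 0.45, 0.39])   # one normalized input pattern (made up)
t_k = 0.33                               # desired output for this pattern (made up)
e_k = t_k - forward_pass(x, w_ij, theta_j, w_jk, theta_k)   # Eq. (6): output error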
Fig. 1 A feed-forward back-propagation neural network: the forward pass through the input, hidden and output layers produces the output O, which is compared with the target t through the error E = 0.5(t − O)², and the error is back-propagated
Fig. 2 ANN process flowchart [18]: normalization of the data and testing, with a check of whether the testing error has declined, ending in finish

5 Network architecture

Table 1 Input and output parameters with their range, mean and standard deviation

S. no.  Parameter                     Range          Mean      Standard deviation
1.      Thrust (N)                    325–820        546.914   151.858
2.      RPM                           285–1122.2     631.404   308.09
3.      Flushing media                0–30           13.605    10.092
4.      Compressive strength (MPa)    24.5–77.8      45.354    11.022
5.      ROP                           0.244–18.111   6.136     4.09
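Each parameter in Table 1 is scaled into the interval 0-1 before training using Eq. (10), given later in this section. A minimal sketch of that scaling, taking the minima and maxima from Table 1, follows; note that Eq. (10) as printed maps the maximum of a parameter to 0 and the minimum to 1, which still keeps all scaled values between 0 and 1.

# Ranges (min, max) taken from Table 1.
RANGES = {
    "thrust":               (325.0, 820.0),
    "rpm":                  (285.0, 1122.2),
    "flushing_media":       (0.0, 30.0),
    "compressive_strength": (24.5, 77.8),
    "rop":                  (0.244, 18.111),
}

def normalize(value, name):
    # Eq. (10): (max. value - unnormalized value) / (max. value - min. value)
    lo, hi = RANGES[name]
    return (hi - value) / (hi - lo)

print(normalize(546.914, "thrust"))   # the mean thrust of Table 1 scales to about 0.55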
The number of neurons in the hidden layer is, however, the most critical choice in the ANN structure. The heuristics proposed for this purpose are summarized in Table 4. As can be seen from Table 4, the number of neurons that may be used in the hidden layer varies between 2 and 12, depending on the heuristic proposed in the literature. ANN structures were trained using the numbers of hidden neurons defined above (see the sketch following Table 4). Considering the findings obtained from the different trials, an ANN structure consisting of one hidden layer with seven neurons (Fig. 3) was selected for the given problem. The data sets were normalized between zero and one considering the maximum values of the input parameters.

A feed-forward back-propagation neural network architecture (4-7-1) is adopted due to its appropriateness for the identification problem. Pattern matching is basically an input/output mapping problem: the closer the mapping, the better the performance of the network.

A three-layer feed-forward back-propagation neural network was developed to predict the ROP. The input layer has four input neurons and the output layer has one neuron, while the hidden layer comprises seven hidden neurons (Fig. 3). Training of the network was carried out using 472 cases, whereas testing of the network was performed using 146 different cases.

The number of training cycles is important to obtain proper generalization of the ANN structure. Theoretically, excessive training, also known as over-learning, can result in near-zero error in predicting the training data. However, this over-learning may result in a loss of the ability of the ANN to generalize from the test data, as shown in Fig. 4 [2].
Table 2 Sample data set used for the training of the ANN and MVRA models

S. no.  Thrust (N)  RPM  Flushing media  Compressive strength (MPa)  ROP

Table 3 Sample data set used for the testing of the ANN and MVRA models

S. no.  Thrust (N)  RPM  Flushing media  Compressive strength (MPa)  ROP

Fig. 3 Suggested architecture for the case study (inputs: thrust, RPM, flushing media, compressive strength; output: ROP)

Table 4 Heuristics for the number of neurons in the hidden layer

Heuristic        Calculated value  Source
≤ 2Ni + 1        9                 Hecht-Nielsen [5]
3Ni              12                Hush [6]
(Ni + N0)/2      3                 Ripley [23]
2Ni/3            3                 Wang [28]
√(Ni × N0)       2                 Masters [14]
2Ni              8                 Kanellopoulas and Wilkinson [8]

Ni number of input neurons, N0 number of output neurons

Fig. 4 Criteria for the termination of training and selection of optimum network architecture [2]
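The heuristic values tabulated above follow directly from Ni = 4 and N0 = 1. The short sketch below reproduces the second column of Table 4, assuming fractional results are rounded up (the paper does not state its rounding rule).

from math import ceil, sqrt

Ni, N0 = 4, 1   # input and output neurons in this study

heuristics = [
    ("<= 2*Ni + 1 (Hecht-Nielsen [5])",         2 * Ni + 1),
    ("3*Ni (Hush [6])",                         3 * Ni),
    ("(Ni + N0)/2 (Ripley [23])",               ceil((Ni + N0) / 2)),
    ("2*Ni/3 (Wang [28])",                      ceil(2 * Ni / 3)),
    ("sqrt(Ni*N0) (Masters [14])",              ceil(sqrt(Ni * N0))),
    ("2*Ni (Kanellopoulas and Wilkinson [8])",  2 * Ni),
]

for rule, n_hidden in heuristics:
    print(f"{rule}: {n_hidden} hidden neurons")
# Prints 9, 12, 3, 3, 2, 8 -- the values listed in Table 4.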
The point at which the error on the test data starts to increase, or the point closest to the training curve, is considered to represent the optimal number of cycles for the ANN architecture.

All the input and output parameters were normalized between 0 and 1. Equation (10) was used for the scaling of the input and output parameters:

Normalized value = (max. value − unnormalized value) / (max. value − min. value)   (10)

The architecture of the network is tabulated below:

1. Number of input neurons: 4
2. Number of output neurons: 1
3. Number of hidden layers: 1
4. Number of hidden neurons: 7
5. Number of training datasets: 472
6. Number of testing datasets: 146
7. Error goal: 0.0

The goal of regression analysis is to determine the values of the parameters of a function that cause the function to best fit a set of observed data, relating one or more independent or predictor variables to a dependent or criterion variable. In linear regression, the function is a linear (straight-line) equation. When there is more than one independent variable, multivariate regression analysis is used to obtain the best-fit equation. Multiple regression analysis solves the data sets by performing a least-squares fit: it constructs and solves the simultaneous equations by forming the regression matrix and solving for the coefficients using the backslash operator. The MVRA was carried out with the same data sets and the same input parameters as used for the ANN.

The equation for prediction of ROP by MVRA is

ROP = −8.1649 + 0.02 Thrust + 0.0045 RPM + 0.0617 Flushing media − 0.0093 Comp. St.
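As a sketch of the least-squares fit described above: NumPy's np.linalg.lstsq plays the role of the backslash operator, solving the over-determined system formed by the regression matrix. The data arrays are placeholders for the paper's 472 training cases, which are not reproduced here; on the actual data, the solve would return the coefficients of the equation above.

import numpy as np

def fit_mvra(thrust, rpm, flushing_media, comp_strength, rop):
    # Regression matrix with an intercept column of ones; lstsq performs
    # the least-squares solve (the "backslash" step).
    X = np.column_stack([np.ones_like(thrust), thrust, rpm,
                         flushing_media, comp_strength])
    coeffs, *_ = np.linalg.lstsq(X, rop, rcond=None)
    return coeffs   # [intercept, b_thrust, b_rpm, b_flushing, b_comp_st]

def predict_rop(coeffs, thrust, rpm, flushing_media, comp_strength):
    # ROP = b0 + b1*Thrust + b2*RPM + b3*Flushing media + b4*Comp. St.
    return (coeffs[0] + coeffs[1] * thrust + coeffs[2] * rpm
            + coeffs[3] * flushing_media + coeffs[4] * comp_strength)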
Fig. 5 Measured versus ANN-predicted ROP

Fig. 6 Measured versus MVRA-predicted ROP
Fig. 7 Comparison of measured and predicted ROP

Fig. 8 Comparison of measured and predicted ROP on a 1:1 slope line

Table 5 CoD and MAE of ROP by ANN and MVRA

Model  CoD    MAE
ANN    0.984  0.3254
MVRA   0.769  1.2993

9 Conclusions

Based on the study, it is established that the feed-forward back-propagation neural network approach seems to be the better option for close and appropriate prediction of the rate of penetration. The ANN results indicate very close agreement of the predicted ROP with the field data sets, whereas MVRA shows high error and was not able to predict the ROP up to the mark. By adopting the ANN technique, ROP can be predicted prior to drilling. The drilling system can be modified accordingly, so that drilling losses can be minimized and a higher utilization of energy can be achieved. Considering the complexity of the relationship between the inputs and outputs, the results obtained by ANN are
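For reference, the two figures of merit reported in Table 5 can be computed as follows. This sketch assumes that CoD denotes the usual coefficient of determination (R²) and MAE the mean absolute error; the paper does not spell out the formulas.

import numpy as np

def cod(measured, predicted):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mae(measured, predicted):
    # Mean absolute error of the predictions.
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    return float(np.mean(np.abs(measured - predicted)))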
References

1. Baheer I (2000) Selection of methodology for modeling hysteresis behavior of soils using neural networks. J Comput Aided Civil Infrastruct Eng 5(6):445–463
2. Basheer IA, Hajmeer M (2000) Artificial neural networks: fundamentals, computing, design, and application. J Microbiol Meth 43:3–31
3. Bhatnagar A, Khandelwal M, Rao KUM (2010) Performance enhancement by addition of non-ionic polymer in flushing media for diamond drilling in rock phosphate. Min Sci Technol 20(3):400–405
4. Chugh CP (1992) High technology in drilling and exploration. Oxford and IBH, India
5. Hecht-Nielsen R (1987) Kolmogorov's mapping neural network existence theorem. In: Proceedings of the first IEEE international conference on neural networks, San Diego, CA, USA, pp 11–14
6. Hush DR (1989) Classification with neural networks: a performance analysis. In: Proceedings of the IEEE international conference on systems engineering, Dayton, OH, USA, pp 277–280
7. John LP (1994) Influence of RPM and flushing media on the performance of diamond drilling. B. Tech. Thesis, Department of Mining Engineering, I.I.T. Kharagpur, India
8. Kanellopoulas I, Wilkinson GG (1997) Strategies and best practice for neural network image classification. Int J Remote Sens 18:711–725
9. Khandelwal M, Kumar DL, Mohan Y (2009) Application of soft computing to predict blast-induced ground vibration. Engineering with Computers (published online)
10. Khandelwal M, Singh TN (2009) Prediction of blast-induced ground vibrations using artificial neural network. Int J Rock Mech Min Sci 46:1214–1222
11. Kosko B (1994) Neural networks and fuzzy systems: a dynamical systems approach to machine intelligence. Prentice Hall, New Delhi
12. Lippmann RP (1987) An introduction to computing with neural nets. IEEE ASSP Mag 4:4–22
13. MacKay DJC (1992) Bayesian interpolation. Neural Comput 4:415–447
14. Masters T (1994) Practical neural network recipes in C++. Academic Press, Boston, MA
15. Maulenkamp F, Grima MA (1999) Application of neural networks for the prediction of the unconfined compressive strength (UCS) from Equotip hardness. Int J Rock Mech Min Sci 36:29–39
16. Miller D, Ball A (1990) Rock drilling with impregnated diamond micro bits: an experimental study. Int J Rock Mech Min Sci 27:363–371
17. Miller D, Ball A (1991) The wear of diamonds in impregnated diamond bit drilling. Wear 141:311–320
18. Monjezi M, Dehghani H (2008) Evaluation of effect of blasting pattern parameters on back break using neural networks. Int J Rock Mech Min Sci 45(8):1446–1453
19. Paone J, Bruce WE (1963) Drillability studies: diamond drilling. RI-USBM 6324, US Bureau of Mines
20. Paone J, Madson D (1966) Drillability studies: impregnated diamond bits. RI-USBM 6776, US Bureau of Mines
21. Rao KUM, Misra B (1994) Design of spoked wheel dynamometer for simultaneous monitoring of thrust and torque developed at bit–rock interface during drilling. Int J Surf Min Reclam Environ 8:146–147
22. Rao KUM (1993) Experimental and theoretical investigations of drilling of rocks by impregnated diamond core bits. Ph.D. Thesis, Department of Mining Engineering, I.I.T. Kharagpur
23. Ripley BD (1993) Statistical aspects of neural networks. In: Barndoff-Neilsen OE, Jensen JL, Kendall WS (eds) Networks and chaos: statistical and probabilistic aspects. Chapman & Hall, London, pp 40–123
24. Rowlands D (1975) Rock fracture by diamond drilling. Ph.D. Thesis, University of Melbourne, Australia
25. Rumelhart DE, Hinton GE, Williams RJ (1986) Learning internal representation by error propagation. In: Rumelhart DE, McClelland JL (eds) Parallel distributed processing, vol 1, pp 318–362
26. Simpson PK (1990) Artificial neural system: foundation, paradigm, application and implementations. Pergamon Press, New York
27. Sonmez H, Gokceoglu C, Nefeslioglu HA, Kayabasi A (2006) Estimation of rock modulus: for intact rocks with an artificial neural network and for rock masses with a new empirical equation. Int J Rock Mech Min Sci 43:224–235
28. Wang C (1994) A theory of generalization in learning machines with neural application. Ph.D. Thesis, The University of Pennsylvania, USA