Report Title
Contents

Abstract
1 Introduction
2 Artificial Neural Networks
Figure 1: A Biological Neuron
Figure 2: A Processing Element or Neuron
Figure 3: The Neural Network Architecture used in this Study
3 Methodology
3.1 The Neural Network Architecture
3.2 Training
3.3 Results
Table 1: Data Range for Input Parameters in this Study
Figure 5: Regression analysis of the neural network performance for Validation
Figure 7: Neural Network Performance in terms of Mean Squared Error
References
Abstract

The gas compressibility factor is an essential requirement for the determination of several natural gas properties: formation volume factor, density, compressibility, and viscosity all require accurate knowledge of it. The limited availability of experimental data makes it necessary to employ correlations to calculate the gas compressibility factor. This study evaluates the usefulness of Artificial Neural Networks (ANN) as an alternative to published correlations for predicting the gas compressibility factor. The ANN model correlates the gas compressibility factor as a function of reservoir temperature and dew point pressure. This method is of particular importance in generating gas compressibility data for natural gas reservoirs where no samples have been taken.

ANN was applied to 40 raw data sets in the ranges of 105-226 °F and 2445-4843 psia for temperature and pressure, respectively. The network was constructed using MATLAB. To develop the ANN model, the samples were divided into three groups: one set of 24 samples was used to train the network, one set of 8 samples was used for validation, and the remaining 8 samples were used as the test set. The performance analysis of the ANN showed that the mean squared error (MSE) was 4.0769E-6, while the R² value for the test data was 0.99607. The model was tested in the ranges of 107-117 °F and 2495-2725 psia for temperature and pressure, respectively. Compared with the experimental data, gas compressibility factors obtained from the Standing and Katz chart gave an MSE of 4.14E-05, while the ANN gave an MSE of 3.02243E-05. The results show that ANN is an effective and powerful tool for estimating the gas compressibility factor.
1 Introduction

Natural gas is a subcategory of petroleum that occurs naturally, and it is composed of complex mixtures of hydrocarbons and a minor amount of inorganic compounds. The physical properties of natural gases, and in particular their variations with pressure, temperature, and molecular weight, are of great importance in petroleum and gas engineering calculations (Heidaryan et al., 2010).

The natural gas compressibility factor is a requirement in most petroleum and natural gas engineering calculations. Some of these calculations are gas metering, gas compression, design of processing units, and design of pipelines and surface facilities. The compressibility factor of natural gases is also important in the calculation of gas flow rate through reservoir rock, material balance calculations, evaluation of gas reserves, and reservoir simulations (Azizi et al., 2010).

The ideal gas law describes the behavior of most gases at pressure and temperature conditions close to atmospheric. At moderate pressures and low temperatures, natural gas tends to compress more than the ideal gas law predicts. Thus, the gas compressibility factor is defined as the ratio of the volume actually occupied by a gas at a given pressure and temperature to the volume it would occupy if it behaved ideally. It is denoted by the symbol z and is expressed as:

z = Va / Vi    (1)

where Va is the actual volume of gas and Vi is the ideal volume of the gas, both at the same conditions of temperature and pressure.

The real gas equation is then written as:

pV = nzRT    (2)
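Equations (1) and (2) can be sketched numerically as follows; the function names and the sample values are illustrative assumptions, not taken from the study, and R is the universal gas constant in psia·ft³/(lb-mol·°R):

```python
# Minimal sketch of Eqs. (1) and (2); names and sample values are
# illustrative, not from the study.

R = 10.732  # universal gas constant, psia·ft3/(lb-mol·°R)

def z_from_volumes(v_actual, v_ideal):
    """Eq. (1): z = Va / Vi, both volumes at the same T and p."""
    return v_actual / v_ideal

def real_gas_volume(n, z, t_rankine, p_psia):
    """Eq. (2) rearranged for volume: V = n z R T / p."""
    return n * z * R * t_rankine / p_psia

# 1 lb-mol of gas with z = 0.85 at 200 °F (659.67 °R) and 3000 psia:
v = real_gas_volume(n=1.0, z=0.85, t_rankine=659.67, p_psia=3000.0)
v_ideal = real_gas_volume(n=1.0, z=1.0, t_rankine=659.67, p_psia=3000.0)
# the volume ratio recovers the z-factor we started from
assert abs(z_from_volumes(v, v_ideal) - 0.85) < 1e-9
```

This round trip simply confirms that Eq. (1) and Eq. (2) are consistent definitions of the same z.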
Figure 2: A Processing Element or Neuron

Signals (in mathematical form) enter a neuron through connections that are assigned adjustable values, known as weights. Each signal is scaled by its weight. The scaled signals are summed and a bias value may be added. The final stage is to present the resulting signal to a non-linear activation function which, in turn, produces the response supplied to the neighbouring neurons in the next layer.

For a typical multi-layer neural network, neurons are arranged into three layer types (Figure 3): a single input layer (the number of neurons is the number of input parameters), a single output layer (designed with one neuron for each output), and one or more hidden layers. It has been shown that a network with a single hidden layer can model any continuous function if an adequate number of weights is provided. Such models have to be calibrated, where the overall aim is to obtain optimal values of the weights and biases by minimizing the error between the predicted and desired output, by presenting the model with typical relationships for the problem under study.

Figure 3: The Neural Network Architecture used in this Study

As mentioned above, like regression models, ANNs require a set of experimental (numerical) data that characterizes the relationship between the inputs and outputs of the problem under investigation. However, one additional requirement for regression models is to define a functional form that, to a high degree, best represents the input/output relation (i.e. they are formula-driven models). ANNs, being data-driven, attempt to extract such relationships from the data provided, without a priori knowledge of the underlying functional form.

Unlike regression models, the available data for ANN development are generally divided into at least two subsets: training and validation. The former subset is used for model calibration, whereas the latter is used to check model performance. However, designing a neural network in this fashion may lead to a model with an over-fitting problem, so a modified approach, cross-validation, should be used, in which case the data are divided into three representative data sets: a training set to calibrate the model, a testing set to check model performance at different stages during calibration, and a validation set for the final assessment of the model. The calibration phase continues as long as both the training and testing set errors decrease. Calibration should terminate when the testing set error increases (see Figure 3). This approach results in a model that has captured the underlying relationship contained in the calibration data, rather than a model that has "memorized" the specific data points in the training set.

An ANN is completely defined when the following characteristics are specified: the architecture, i.e. the number of units, how they are connected, and how the information flows; the mapping, i.e. how to determine the outputs of the network as a function of its inputs; and the learning algorithm, which is closely related to the architecture and specifies how to adjust the weights of the network during the learning process.

3 Methodology

3.1 The Neural Network Architecture

The Feedforward Multilayer Perceptron ANN trained with the backpropagation algorithm has been used in this study. This architecture has the following characteristics:

1. The architecture is multilayer. One hidden layer has been added between the input and output layers. The addition of the hidden layer gives the network the capability of approximating any nonlinear function to arbitrary precision;
2. The information flows in a feedforward fashion, i.e. from the input to the output, going through the hidden layers and, typically, the units in the previous layer only;
3. The learning algorithm is supervised. The backpropagation algorithm is derived when the steepest-descent method is used to minimize the sum of the squares of the errors between the real output and the output predicted by the network.

The ANN Toolbox of MATLAB® R2009b was used to develop the model and run the simulations in this work.
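The per-neuron mapping described above (weighted sum, bias, non-linear activation), arranged into a single hidden layer with feedforward information flow, can be sketched as follows. All weights, biases, and inputs here are arbitrary placeholders for illustration; they are not the trained values or the ten-neuron layer used in this study:

```python
import math

def sigmoid(x):
    # non-linear activation applied to the biased weighted sum
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # each incoming signal is scaled by its weight; the scaled
    # signals are summed, a bias is added, then the activation fires
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def forward(inputs, hidden_layer, output_layer):
    # hidden_layer / output_layer: one (weights, bias) pair per
    # neuron; information flows strictly forward, layer by layer
    hidden = [neuron(inputs, w, b) for w, b in hidden_layer]
    return [neuron(hidden, w, b) for w, b in output_layer]

# two inputs (e.g. scaled temperature and pressure), two hidden
# neurons, one output -- placeholder weights only
hidden_layer = [([0.5, -0.3], 0.1), ([-0.2, 0.8], 0.0)]
output_layer = [([1.0, -1.0], 0.05)]
y = forward([0.4, 0.7], hidden_layer, output_layer)
```

Training (backpropagation) would then adjust these weights and biases to minimize the error between `y` and the target output.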
3.2 Training

There are generally four steps in the training process:

1. Assemble the training data
2. Create the network object
3. Train the network
4. Simulate the network response to new inputs

The data are oriented so that they are presented to the network in rows (Figure 3). The network was developed through the Graphical User Interface with the feed-forward backpropagation type.

Standard backpropagation is a gradient descent algorithm, as is the Widrow-Hoff learning rule, in which the network weights are moved along the negative of the gradient of the performance function. Input vectors and the corresponding target vectors are used to train a network until it can approximate a function that associates the input vectors with specific output vectors. Properly trained backpropagation networks tend to give reasonable answers when presented with inputs they have never seen. The training function employed was TRAINLM, which updates weight and bias values according to Levenberg-Marquardt optimization. LEARNGDM was used as the adaptation learning function, and for effective performance the number of neurons in the first (hidden) layer was ten, while the second layer had one (usually dictated by the number of outputs) (Howard and Mark, 2002).

3.3 Results

A set of 20% of the total samples was used for validation. The R-squared value and MSE were 0.99959 and 4.077E-06, respectively. The remaining 20% of the total samples was used as the test set. The outputs are given in Figures 6 and 7, which revealed an R-squared value of 0.99607 and an MSE of 4.780E-06.

Table 1: Data Range for Input Parameters in this Study

Data                      Input            Min.   Max.
For Training, Validation  Temp. (°F)       105    226
and Testing               Pressure (psia)  2445   4843
                          SG               0.59   0.8
For Simulation            Temp. (°F)       107    117
                          Pressure (psia)  2495   2695
                          SG               0.6    0.6
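The performance figures quoted above follow the standard definitions of MSE and R², and the study's 60/20/20 split of the 40 samples can be sketched as below. The function names and sample arrays are ours for illustration, not the study's data:

```python
def mse(y_true, y_pred):
    # mean squared error between measured and predicted values
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def split_60_20_20(samples):
    # the study's split: 60% training, 20% validation, 20% testing
    n = len(samples)
    a, b = int(0.6 * n), int(0.8 * n)
    return samples[:a], samples[a:b], samples[b:]

# 40 samples, as in this study, yields 24 / 8 / 8 subsets
train, val, test = split_60_20_20(list(range(40)))
```

In practice the samples would be shuffled before splitting so each subset spans the full data range in Table 1.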
4 Conclusions

The Artificial Neural Network developed in this study has been shown to be a powerful instrument for predicting the gas compressibility factor, especially when the data required by existing models are not available. It correlates the gas compressibility factor as a function of reservoir temperature and dew point pressure.

References

Azizi, N., Behbahani, R. and Isazadeh, M. A. (2010), "An efficient correlation for calculating compressibility factor of natural gases", Journal of Natural Gas Chemistry, 19 (2010) 642–645.

Bahadori, A., Mokhatab, S. and Towler, B. (2007), "Rapidly Estimating Natural Gas Compressibility Factor", Journal of Natural Gas Chemistry, 16 (2007) 349–353.

Heidaryan, E., Moghadasi, J. and Rahimi, M. (2010), "New correlations to predict natural gas viscosity and compressibility factor", Journal of Petroleum Science and Engineering.

Heydari, A., Shayesteh, S. and Kamalzadeh, L. (2007), "Prediction of Hydrate Formation Temperature for Natural Gas using Artificial Neural Network", Oil and Gas Business Journal.

Howard, D. and Mark, B. (2002), "Neural Network Toolbox: For Use with MATLAB", The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098.

Saputelli, L., Malki, H., Canelon, J. and Nikolaou, M. (2002), "A Critical Overview of Artificial Neural Network Applications in the Context of Continuous Oil Field Optimization", SPE 77703, 29 September – 2 October 2002.