SPE 38441 Reservoir Simulation: Past, Present, and Future
Introduction
Today, input from reservoir simulation is used in nearly all major reservoir
development decisions. This has come about in part through technology improvements
that make it easier to simulate reservoirs on one hand and possible to simulate them
more realistically on the other; however, although reservoir simulation has come a long way
from its beginnings in the 1950's, substantial further improvement is needed, and this is
stimulating continual change in how simulation is performed.
Given that this change is occurring, both developers and users of simulation
have an interest in understanding where it is leading. Obviously, developers of new
simulation capabilities need this understanding in order to keep their products relevant
and competitive. However, people who use simulation also need this
understanding; how else can they be confident that the organizations that provide
their simulators are keeping up with advancing technology and moving in the right
direction?
Computing. The earliest computers were little more than adding machines by
today's standards. Even the fastest computers available in the 1970's and early
1980's were slower and had less memory than today's PC's. Without substantial
progress in computing, progress in reservoir simulation would have been
meaningless.
Figure 1 gives the computing speed in millions of floating point operations per
second for the three fastest CPU's of their times: the Control Data 6600 in
1970, the Cray 1S in 1982, and a single processor on a Cray T94 in 1996.
The performance figures are taken from Dongarra's compilation of LINPACK
results and should be reasonably representative of reservoir simulation
computations.
Figure 1 shows single processor performance. Today,
high-performance computing is achieved by using multiple
processors in parallel. The resulting performance varies widely
depending on problem size and the number of processors used. Use of, for
example, 32 processors should lead to speedup factors of 15-25 in large
reservoir models. As a result, although Figure 1 shows the rate of performance
improvement to be slowing, if parallelization is factored in, it may actually have
accelerated somewhat. No attempt is made to incorporate parallel
performance into Figure 1 because of its wide variability.
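To put the quoted speedups in perspective, the sketch below uses Amdahl's law to show what degree of parallelism is consistent with those numbers. This is an illustrative assumption on my part; the text above reports only observed speedups of 15-25 on 32 processors, and all names in the snippet are made up.

```python
# Rough parallel-speedup arithmetic for the figures quoted above.
# Amdahl's law is assumed purely for illustration.

def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Ideal speedup when only `parallel_fraction` of the work parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_processors)

if __name__ == "__main__":
    for frac in (0.97, 0.98, 0.99):
        s = amdahl_speedup(frac, 32)
        print(f"parallel fraction {frac:.2f}: speedup {s:4.1f}, efficiency {s / 32:.2f}")
    # A roughly 97-99% parallel fraction reproduces the 15-25x range cited above.
```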
Consider Table 1, which compares the Cray 1S to today's Intel Pentium Pro.
The 1S was the second version of Cray's
first supercomputer. Not only was it the state of the art of its time, but it represented a
tremendous advance over its contemporaries. As such, it had the standard
supercomputer price, a little under $20 million. It comprised large, heavy pieces of equipment
whose installation required extensive building and electrical modifications. The Pro, on
the other hand, costs a few thousand dollars, can be purchased by mail order or from
any computer store, and can be plugged into a simple wall outlet. The 200 MHz Pro is
over twice as fast as the Cray (according to Dongarra's numbers) and is commonly
available with up to four times as much memory.
Model Size. Over time, model size has grown with computing speed.
Consider the maximum practical model size to be the largest (in terms of number of
gridblocks) that could be used in routine simulation work. In 1960, given the computers
available at the time, this maximum model size was probably about 200 gridblocks. By 1970,
it had grown to about 2000 gridblocks. In 1983, it was 33,000; Exxon's first application
of the MARS program was on a model of this size running on the Cray 1S. In 1997, it
is roughly 500,000 gridblocks for simulation on a single processor. Figure 2 plots these
values. This semi-log plot is nearly a straight line, indicating a fairly constant rate of
growth in model size. The growth in model size is roughly consistent with the growth in
computing speed shown in Figure 1.
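As a rough check on that trend, the short script below computes the average annual growth rate and doubling time implied by the four model sizes quoted above; the calculation itself is mine, not the paper's.

```python
# Back-of-the-envelope growth rate implied by the model sizes quoted above
# (200 blocks in 1960, 2,000 in 1970, 33,000 in 1983, 500,000 in 1997).
import math

sizes = {1960: 200, 1970: 2_000, 1983: 33_000, 1997: 500_000}

years = sorted(sizes)
total_growth = sizes[years[-1]] / sizes[years[0]]
span = years[-1] - years[0]
annual_rate = total_growth ** (1.0 / span) - 1.0
doubling_time = math.log(2.0) / math.log(1.0 + annual_rate)

print(f"average growth: {annual_rate:.1%} per year")
print(f"doubling time : {doubling_time:.1f} years")
# Roughly 24% per year, i.e. model size doubled about every 3.3 years.
```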
Technical Advances. The first "reservoir simulator" modeled single-phase flow in one
dimension. Today, the norm is three phase flow in three dimensions with many
gridblocks and complex fluid representation. Many advances in computational
methods made this transition possible. Table 2 lists some of these by decade. In
general, the advances in Table 2 are chosen because they are still in use today or they
paved the way for methods used today. Also, they are methods that are used in many
types of simulation, as opposed to techniques for modeling specific phenomena such
as relative permeability hysteresis or flow in fractured reservoirs. The high points in the
table are discussed below.
Reservoir simulation began in 1954 with the radial gas flow computations of
Aronofsky and Jenkins. The first work to receive notice outside the field of
reservoir engineering was Peaceman and Rachford's development of the alternating
direction implicit (ADI) procedure. More than 40 years later, ADI is still being used,
though seldom in reservoir simulation.
Today, it is hard to appreciate how little was known at the beginning of the 1960's. Even
concepts that today seem obvious, such as upstream weighting, were topics of
debate. Yet, by the end of the decade, the first true general-purpose simulators had
come into being.
One of the difficulties in the 1960's was solving the matrix equations. Today, a solver
must be fast and easy to use. Then, there were problems that could not be solved
at all. The first effective solver was SIP. Though it was sometimes troublesome to use, it
nearly always could be made to work. Another mathematical breakthrough of the time
was development of implicit-in-time methods, which made it practical to solve
high-flow-velocity problems such as well coning.
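To illustrate why implicit-in-time methods were such a breakthrough, here is a minimal, generic sketch of backward-Euler time stepping for a 1D single-phase pressure-diffusion problem. It is not taken from any simulator described in the paper, and the grid and fluid numbers are hypothetical; the point is only that the implicit scheme remains stable at a timestep well beyond the explicit stability limit.

```python
# Minimal illustration of implicit (backward-Euler) time stepping for 1D
# single-phase, slightly compressible flow (a linear pressure-diffusion equation).
import numpy as np

nx, dx, dt, eta = 50, 10.0, 5.0, 30.0     # hypothetical grid and diffusivity
lam = eta * dt / dx**2                    # explicit scheme is stable only for lam <= 0.5

p = np.full(nx, 3000.0)                   # initial reservoir pressure, psi
p[0] = 1000.0                             # fixed low pressure at a producing boundary

# Backward Euler: (I - lam*L) p^{n+1} = p^n, with L the 1D Laplacian stencil.
A = np.zeros((nx, nx))
for i in range(1, nx - 1):
    A[i, i - 1] = A[i, i + 1] = -lam
    A[i, i] = 1.0 + 2.0 * lam
A[0, 0] = A[-1, -1] = 1.0                 # Dirichlet boundary rows (pressures held fixed)

for _ in range(20):                       # stable even though lam = 1.5 > 0.5
    p = np.linalg.solve(A, p)

print(p[:5].round(1))
```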
The 1970's saw publication of Stone's three-phase relative permeability
models. These continue to be widely used. Another innovation that has stood
the test of time was the two-point upstream method. Despite widespread efforts since
then, only incremental improvements to it have been found. Also having tremendous
lasting impact was the development of solvers that used approximate factorizations
accelerated by orthogonalization and minimization. These made possible methods that
were largely parameter-free. Finally, Peaceman's well correction for determining
bottom-hole pressure from gridblock pressure and well rate is almost universally used
today.
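For reference, a minimal sketch of Peaceman's correction for a vertical well in a square, isotropic gridblock is given below. The function name, the choice of SI units, and the example numbers are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of Peaceman's well correction (vertical well, square block,
# isotropic permeability, SI units, no skin). The numbers are illustrative only.
import math

def peaceman_bhp(p_block, q, k, h, mu, dx, rw):
    """Bottom-hole pressure implied by a gridblock pressure and a well rate.
    p_block [Pa], q [m^3/s] (production positive), k [m^2], h [m],
    mu [Pa.s], dx [m] (square block edge), rw [m]."""
    ro = 0.2 * dx                                            # Peaceman equivalent radius
    wi = 2.0 * math.pi * k * h / (mu * math.log(ro / rw))    # well index
    return p_block - q / wi

# Example: 100 m block, 10 m thick, 100 mD, 1 cP oil, 0.1 m wellbore radius
pwf = peaceman_bhp(p_block=2.0e7, q=0.002, k=1.0e-13, h=10.0,
                   mu=1.0e-3, dx=100.0, rw=0.1)
print(f"bottom-hole pressure: {pwf / 1e6:.2f} MPa")
```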
In the early 1980's, a significant advance occurred with the development of nested
factorization. Both fast and very robust, nested factorization may be today's most widely
used matrix solver method. Another major step occurred in compositional simulation.
Although the first compositional simulators were developed in the late 1960's, their
formulations included an inherent inconsistency that hurt their performance. The
volume-balance and Young-Stephenson formulations solved this problem and,
at the same time, made it practical to write a simulator that can efficiently solve both
black-oil and compositional problems. Development of cornerpoint geometry made it
possible to use non-rectangular gridblocks, providing a capability that was useful
in a variety of applications.
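As a simplified illustration of the cornerpoint idea, the sketch below represents a cell by eight corner coordinates rather than by a single rectangular block. The data layout and names are hypothetical and do not reflect any particular simulator's grid format.

```python
# A schematic of cornerpoint-style cell geometry: each cell is described by its
# eight corner points, so tops and bases can dip and shift across faults.
from dataclasses import dataclass
from typing import Tuple

Corner = Tuple[float, float, float]   # (x, y, depth)

@dataclass
class CornerPointCell:
    # corners ordered: top NW, NE, SW, SE, then bottom NW, NE, SW, SE
    corners: Tuple[Corner, ...]

    def center_depth(self) -> float:
        return sum(c[2] for c in self.corners) / 8.0

# A cell whose top surface dips 2 m from west to east, as it might near a fault.
cell = CornerPointCell(corners=(
    (0.0, 0.0, 2000.0), (100.0, 0.0, 2002.0), (0.0, 100.0, 2000.0), (100.0, 100.0, 2002.0),
    (0.0, 0.0, 2010.0), (100.0, 0.0, 2012.0), (0.0, 100.0, 2010.0), (100.0, 100.0, 2012.0),
))
print(f"cell center depth: {cell.center_depth():.1f} m")
```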
In the late 1980's, efforts shifted to issues related to geologic modeling, geostatistics,
upscaling, flexible grids, and parallelization, and this emphasis has continued in the
1990's. The list of accomplishments for the 1990's is much shorter than the lists for the
preceding three decades. This is, of course, partly because the decade is not finished
yet, but it may also be partly because not enough time has elapsed to make lasting
contributions recognizable. Perhaps it also stems in part from the diversion of effort from
general computational work into the nuts and bolts of interactive software and
parallelization. Finally, it also may be, unfortunately, a result of the reductions in
research effort that began in the mid-1980's.
Simulator Capabilities. The preceding section discusses technical advances.
They would not be of interest had they not led to improvements in capabilities from
the user's standpoint. Table 3 lists, again by decade, the state of the art capabilities that
were available in simulators of the time. No attempt is made to cite the literature in this
discussion. These capabilities tended to become available in several simulators at about
the same time, and often there was no external publication describing them.
The computers of the 1950's permitted use of only the crudest models.
Three-dimensional simulation was out of the question, and only small
two-dimensional models were possible. Everything had to be kept simple, so
single-phase flow or incompressible two-phase flow and very simple geometry
were used.
The more powerful computers of the 1960's enabled more realistic description of the
reservoir and its contents. Three phase, black-oil fluid treatment became the
norm. It became possible to run small three-dimensional models. Multiple
wells were allowed for, they could be located where desired, and their rates could be
varied with time. Gridblock sizes could vary, and gridblocks could be "keyed out," or
eliminated from the system. By the end of the decade, implicit computational
methods were available, permitting practical well coning modeling.
The 1970's seems to have been the enhanced oil recovery (EOR) decade. The first
compositional simulators were developed. Computing limitations forced these
to use large gridblocks, so numerical dispersion was a problem. Also, there
were weaknesses in the initial formulations used. Nonetheless, they made it
possible to begin to model phenomena that had up to then been ignored. Because of
heavy interest in EOR, much effort went into modeling miscible and chemical
recovery. Finally, advances in implicit computational methods provided the
solution stability required to model thermal processes.
In the 1980's, it became no longer adequate for the user to tell the simulator where to
put the wells and how much to produce from them. In prediction runs, the simulator became
responsible for making such decisions on its own. This led to development of
complex well management software that, among other things, determined when to
work over existing wells or drill new ones, adjusted rates so as to adhere to
constraints imposed by separator capacities, maintained computed reservoir
pressures at desired values, and allocated produced gas to injection and gas lift. Other
developments led to approaches for modeling fractured reservoirs. The normal
restriction of topologically rectangular grid connectivity was lifted to allow taking
into account shifting of layers across faults. Finally, work began on
interactive data preparation and display and on graphical user interfaces in
general.
The dominant efforts of the 1990's have been on various ways to make
simulators easier to use. These have included continuation of the work on
graphical user interfaces, widespread attempts at data integration, and
development of automatic gridding packages. The use of numerical geologic
models, often depicting fine-scale property variation generated statistically, has
become widespread. There has been considerable work on methods for "upscaling"
reservoir properties from these models' cells to the much larger gridblocks that
simulators can use. Gridding flexibility has increased through use of local grid
refinement and more complex grid geometries. A current thrust is integration of
simulation with non-reservoir computational tools such as facilities models and
economics packages.
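As a minimal illustration of the upscaling problem mentioned above, the sketch below computes the classical arithmetic and harmonic averages of layered fine-scale permeabilities. The layer values are hypothetical, and real upscaling workflows are considerably more sophisticated (flow-based, often tensor-valued).

```python
# A minimal upscaling sketch: averaging fine-scale layer permeabilities into a
# single coarse-gridblock value, using the standard arithmetic/harmonic bounds.
import numpy as np

fine_k = np.array([500.0, 20.0, 150.0, 5.0, 300.0])   # hypothetical layer perms, mD
thickness = np.array([2.0, 1.0, 3.0, 0.5, 2.5])       # layer thicknesses, m

k_parallel = np.average(fine_k, weights=thickness)        # flow along the layers
k_series = thickness.sum() / np.sum(thickness / fine_k)   # flow across the layers

print(f"arithmetic (along-layer) average: {k_parallel:6.1f} mD")
print(f"harmonic (cross-layer) average:   {k_series:6.1f} mD")
```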
Evolution of Software and Support. As simulators became more powerful and
flexible, their user communities changed. As this happened, developers and
supporters had to change the way they did their jobs. As a result, software
development and support practices progressed through several stages. The
progression is still under way, with one stage left to be accomplished.
1. Developer use. Initially, a very small team of developers devised computational methods, implemented them in a simulator, and applied this simulator themselves. The first applications were intended to test the simulator and its concepts and the next ones to demonstrate its usefulness on real problems. After these tests, the developers made production runs intended to benefit the corporation. Portability of this simulator was not an issue, because there was only one computer for it to run on.
2. Team use. By the next stage, the team had grown and developed internal specialization. Within it, one group developed the simulator and another applied it. The simulator required customization for each new application. Despite the group's specialization, developers were still frequently involved in applications because of the simulator's need for continual support. The simulator frequently failed, and it was essentially undocumented. The simulator ran on a single computer, and portability was still not an issue.
3. Local use. In this stage, the simulator was used by people who were located near its developers but worked in other parts of the organization. The simulator still required customization for most new studies. Other support was required frequently but not continually. Failures still occurred, but less frequently than before. Documentation was adequate to permit use of the simulator with some assistance from developers. The simulator ran on several computers, but they were all of the same type.
4. Widespread use. In this stage, the simulator first began to receive use by people at remote locations. It seldom required customization, but it still needed occasional support. It rarely failed. Documentation was thorough, but training was required for effective use of the simulator. Most applications were performed by specialists in the use of the simulator. The simulator ran on a small variety of computers.
5. General use. By this stage, the simulator will have become widely used by people with varying expertise. It will rarely need customization, will require support only infrequently, and will seldom fail. Its documentation will be thorough and easily understood. Its user interfaces will be intuitive and standardized. Little training will be required to use the simulator. The user will need knowledge of reservoir engineering, but he will not need to be a simulation expert.
Each transition to a new stage changes what is required of the simulator and those
who support it. Each transition has been more difficult than the ones that preceded it. A
transition that was surprisingly difficult was from local use to widespread use. In the local
use stage, the simulator was being used frequently and for the most part was
functioning correctly. As a result, it seemed safe to send it to remote, perhaps overseas,
locations. Doing so led to many more problems than expected. The remote computer
differed slightly from the developer's, its operating system was at a different release
level, and it was configured differently. The remote users tried the simulator once or twice, and
it did not work. These users did not know the developers personally, and they had no really
convenient way to communicate with them. Typically the users abandoned attempts to
use the simulator, and the developer was slow to learn of this failure.
Interestingly, vendors were forced by their business to make the transition to
widespread use before petroleum companies' in-house developers had to. The
vendors have been dealing with the related problems since the early 1970's; in-house
developers began addressing them later, with varying degrees of success. This
transition is made difficult by the high standards in usability, functionality, and
robustness that must be met. Developing software and documentation of the quality
needed is very time-consuming and requires different sets of skills than those traditional
to reservoir simulator development.
Vendor History and Role. Researchers working at major oil company laboratories
developed the first reservoir simulators. It was not until the middle 1960's that vendors
started to appear. Following is a brief history of certain of these vendors. Inclusion of a
particular vendor in this discussion is not intended to imply endorsement, and exclusion is not
intended to imply criticism. The firms discussed are those with which the author is familiar;
for this reason North American-based firms are more likely to be included than those based
elsewhere.
The first to market a reservoir simulator was D. R. McCord and Associates in 1966. Shortly
thereafter, Core Laboratories also had a reservoir simulator that they used in
consulting work.
The year 1968 saw the founding of INTERCOMP and Scientific Software
Corporation, two companies that dominated the reservoir simulation market in the
1970's. Despite their market success, a number of new companies were formed in the
1970's and early 1980's. The first was INTERA, which was formed in 1973 by
merging an INTERCOMP spinoff with the environmental firm ERA. INTERA initially
focused on environmental work, but eventually got into reservoir simulation as
discussed below. In
1977, the Computer Modelling Group was formed in Calgary with support
from the province of Alberta. J. S. Nolen and Associates and Todd, Diettrich, and Chase
were formed in 1979. In 1981, a reservoir simulation business was formed at the
existing exploration-related firm Exploration Consultants Limited (ECL). Finally,
SimTech was founded in 1982 and Reservoir Simulation Research Corporation in 1984.
The founding of these firms was followed by a series of mergers and acquisitions.
These began in 1977 with the acquisition of INTERCOMP by Kaneb Services. In
1983, Kaneb Services sold INTERCOMP to Scientific Software Corporation, the two
firms merging to form Scientific Software-Intercomp, or SSI.
In the middle 1980's, J. S. Nolen and Associates was acquired by what came to
be Western Atlas. In 1996, Landmark Graphics acquired Western Atlas' reservoir
simulation business. Since then, Landmark was in turn acquired by and became a
division of Halliburton.
In the middle 1980's, INTERA acquired ECL's reservoir simulation business. A few years
later, INTERA split into two completely separate companies, one based in the United
States and the other in Canada. The reservoir simulation business went with the
Canadian INTERA. In 1995, Schlumberger's GeoQuest subsidiary acquired
INTERA's reservoir simulation business. Shortly thereafter, the United States-based
INTERA, which had become part of Duke Engineering and Services,
reentered the reservoir simulation business by acquiring SimTech.
Finally, in 1996, the Norwegian firm Smedvig acquired Reservoir Simulation
Research Corporation.
As of the second half of the 1990's, vendor simulators are very widely used. As
vendor products have improved, some petroleum companies have reduced or
dropped altogether their in-house efforts. Those companies found that vendors
could provide tools that were at least as good as those that they could develop
themselves, and that they could lower their development and support costs by using
the vendors' products. On the other hand, several large companies continue to find it in
their best interests to develop proprietary tools, perhaps incorporating certain vendor
components into their systems.
Gridding. The user defines major reservoir units to which the simulation grid must
conform. Within these units, the grid is generated automatically with some guiding
input from the user. The gridblocks are generally rectangular or nearly so. Local refinement
of the grid in key parts of the model is used. Special computations account for flow across
faults between gridblocks that are not topologically neighbors.
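One common way to handle such fault juxtapositions is to drive the flow calculation from an explicit connection list that may include non-neighbor pairs. The sketch below is a hypothetical, single-phase illustration of that idea, not a description of any specific simulator; all names and numbers are made up.

```python
# Sketch of accumulating flow terms over an explicit connection list, so that
# fault connections between non-neighboring gridblocks are handled the same way
# as ordinary neighbor connections.
import numpy as np

pressure = np.array([3000.0, 2950.0, 2980.0, 2900.0])   # psi, 4 gridblocks

# (cell_a, cell_b, transmissibility): the last entry joins blocks 0 and 3,
# which are not grid neighbors, e.g. layers juxtaposed across a fault.
connections = [(0, 1, 1.2), (1, 2, 0.9), (2, 3, 1.1), (0, 3, 0.4)]

flow = np.zeros(len(pressure))
for a, b, trans in connections:
    q = trans * (pressure[a] - pressure[b])   # single-phase, unit mobility
    flow[a] -= q
    flow[b] += q

print(flow)
```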
The Future
Before attempting to predict the future, it is good to be aware of the
business drivers, the most important of which are discussed below. These lead into
current technical objectives, also discussed below. These objectives then form the
basis of several predictions.
Technical Objectives and Efforts to Address Them. A good technical objective must
meet two criteria: it must address one or more of the business goals, and it must be
achievable with reasonable effort in the intended time frame. Substantial
progress on the following objectives should be possible within 10 years, and
achievement of them should be possible within 20 years. The following states each
objective, expands upon it very briefly, and describes current work addressing it.
Require the user to provide only data describing the physical system, its fluids,
and its history. Do not require him to specify computational data such as solver
selection, iteration parameters, and timestep controls. Create the simulation grid
for him with no intervention on his part.
Several organizations are working on gridding, and some of their work relates to
automating the gridding process. Current linear equation solver work relates primarily
to parallelization, but within this work there is an ongoing attempt to improve robustness
and reduce the effort required by the user.
Automate history matching. Determine the best possible fit to existing historical
data, consistent with the presumed depositional environment and other characteristics
of the geologic model.
Recent work has led to an economical way to compute derivatives for use in
gradient-based optimization methods. These will become more commonly applied, but
much more is needed for truly automatic history matching.
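To make the idea concrete, here is a toy sketch of gradient-based history matching with a single permeability multiplier and a stand-in forward model. It uses brute-force finite-difference sensitivities rather than the economical derivative computations referred to above, and every name and number in it is hypothetical.

```python
# A toy history-matching loop: adjust one permeability multiplier so that a
# stand-in "simulator" matches observed rates in a least-squares sense, using a
# Gauss-Newton update with finite-difference sensitivities.
import numpy as np

observed = np.array([950.0, 906.0, 860.0, 820.0])   # hypothetical well rates

def forward_model(k_mult: float) -> np.ndarray:
    """Stand-in simulator: decline rate depends on a permeability multiplier."""
    t = np.arange(1, 5)
    return 1000.0 * np.exp(-0.05 * t / k_mult)

k, eps = 0.5, 1e-6
for _ in range(10):
    r = forward_model(k) - observed                                     # residuals
    J = (forward_model(k + eps) - forward_model(k - eps)) / (2 * eps)   # sensitivities
    k -= float(J @ r) / float(J @ J)                                    # Gauss-Newton step

rms = np.sqrt(np.mean((forward_model(k) - observed) ** 2))
print(f"matched multiplier: {k:.2f}, rms misfit: {rms:.1f}")
```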
Minimize the time and effort required to access information required to
perform the study and to generate the results that are the study's objective.
Integrate data when doing so is practical and provide efficient import capabilities
when it is not. Likewise, integrate downstream computations when practical and
provide efficient export capabilities when not.
The Petrotechnical Open Software Corporation (POSC) and its members are
addressing the data integration issue. Recent acquisitions of reservoir
simulation vendors by larger service organizations are leading to integration with
computing tools both upstream and downstream of simulation.
Predictions. Following are predictions regarding the year 2007 state of the art of
reservoir simulation and related technologies.
1. The dominant high-end computing platform for simulation calculations will be a Unix server comprising multiple nodes, with each node having a small number of processors. Where high performance is not needed, the dominant platform will be a multiprocessor PC running NT.
2. The dominant pre- and post-processing and visualization platform will be the top-of-the-line version of whatever the PC has become.
3. Integration of reservoir simulators with the software and databases that provide their input data will be much better than it is today.
4. Reservoir simulation will be essentially automatic, given the geologic model, fluid data, and rock (i.e., relative permeability and capillary pressure) data.
5. The largest black-oil simulations will use at least 10 million gridblocks.
6. Most simulators will be based on unstructured grids.
7. Integrated reservoir-surface network calculations will be common.
8. Use of history matching tools will be widespread, but the history matching process will not be automatic.
The above predictions are made with some confidence. It seems reasonable to expect most of them to be generally correct, with perhaps one or two turning out wrong. Attempting to predict further into the future is more problematic. Nonetheless, it may be instructive to consider