
SPE-181056-MS

Enabling Real-Time Distributed Sensor Data for Broader Use by the Big Data
Infrastructures

Don Yang, Tommy Denney, Oladele Bello, Sony Lazarus, and Celestine Vettical, Baker Hughes Incorporated

Copyright 2016, Society of Petroleum Engineers

This paper was prepared for presentation at the SPE Intelligent Energy International Conference and Exhibition held in Aberdeen, United Kingdom, 6-8 September 2016.

This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents
of the paper have not been reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material does not necessarily reflect
any position of the Society of Petroleum Engineers, its officers, or members. Electronic reproduction, distribution, or storage of any part of this paper without the written
consent of the Society of Petroleum Engineers is prohibited. Permission to reproduce in print is restricted to an abstract of not more than 300 words; illustrations may
not be copied. The abstract must contain conspicuous acknowledgment of SPE copyright.

Abstract
Petroleum exploration and production processes typically generate enormous amounts of petro-technical
data using sub-surface and surface sensors. The acquisition, transfer, management, and interpretation of
these huge volumes of sensor data, as well as the decision making based on them, have led to the advent of the digital
oilfield phenomenon in the petroleum industry. To achieve improved efficiency, accuracy, and performance,
many E&P operators are aiming to apply fiber-optic distributed temperature sensing data management
technologies to add value. Currently, high-volume distributed sensing data transfer, storage, processing,
archiving, retrieval, and exchange systems in the petroleum industry still face major challenges: the high
cost of hardware and software; complicated implementation and deployment frameworks that are difficult
to sustain, scale, and upgrade; and limited compatibility between data provided by different vendors. An
efficient, online, real-time, elastically scalable system that enables fast retrieval from big data infrastructures
is therefore essential.
This paper describes a scalable, web-based enterprise fiber-optic infrastructure for data exchange,
management, and visualization. This platform applies multi-tier client-server architecture, scalable
distributed databases, PRODML (Production Markup Language), and web services technologies to provide
a reliable mechanism to bring fiber-optic data from the field site to the corporate network in real time and
enable users to visualize the data anywhere, at any time. Support for the PRODML industry standard makes
the platform vendor neutral and allows data to be exchanged between different systems and shared among users and
different applications. The distributed Cassandra database provides the scalability to handle the fiber-optic
big data in a high-performance, efficient way. Finally, the global inventory management system keeps
track of changes to the asset and the instrumentation configuration over the life of the distributed
sensor systems, and provides the ability to correlate the measurement data to the proper asset configuration. A
case study is presented that demonstrates successful field testing to verify the functionalities of the newly
developed system for high-data-volume distributed sensors. Specific attention is given to the many advantages
offered by this new framework over existing ones.

Introduction
Distributed sensing data from distributed acoustic sensors (DAS), distributed temperature sensors (DTS),
discrete distributed temperature sensors (DDTS) and discrete distributed strain sensors (DDSS) is used in a
wide range of engineering and scientific applications. These technologies use sensors that can collect data
that are spatially distributed over many thousands of individual measurement points. Petroleum engineers
use the data in artificial lift monitoring and optimization, smart well completions optimization, zonal flow
analysis and flow profiling, wellbore production and injection performance monitoring, wellbore structural
integrity monitoring (casing deformation), water breakthrough detection and conformance control, gas
breakthrough detection, formation damage detection, well stimulation optimization, petroleum reservoir
parameters estimation and updating, reservoir characterization, waterflood assessment and optimization,
volume placement design in fractures, sand production detection and pipeline monitoring and security
(Lumens, P.G.E. 2014; Moreno, J. A., 2014; Ramos, J. E., 2015; Reinsch, T., 2012; Wang and Bussear,
2011; Wang, Z., 2012). The application of distributed sensing data is not limited to oil and gas assets
(reservoirs, wells and facilities) performance monitoring and management. Distributed sensing data is
widely used in other disciplines including civil engineering structural health monitoring, nuclear plant
operations management, and risk, hazard, and environmental analysis. Demand for distributed data in these
types of applications has increased in the past few decades due to expansion of personal computers and
software that provided new means to transmit, store, archive, retrieve and exchange distributed data in
the petroleum industry. Despite these advancements, the accessibility and dissemination of distributed data
is a consistent challenge across exploration and production companies. While there have been numerous
moves toward standardized data formats within the petroleum industry community, the problem of large
datasets scattered across multiple geographic locations persists. Distributed data infrastructures provide a
basis for knowledge discovery, prognosis, evaluation, and application, and include the following elements:
the actual distributed data and information; metadata (data describing the distributed data); a distributed
data ingestion framework; a distributed data storage framework; a distributed data retrieval dashboard;
distributed data exchange and web services standards (WITSML, PRODML); distributed sensing data
models for the infrastructures; and distributed databases and software services.
An efficient, vendor-neutral, standardized distributed sensing data information communication
infrastructure utilizing the latest in data transfer standards, database software technology, data visualization,
retrieval, exchange, and analytics tools is therefore essential. This paper examines the current challenges
and opportunities of downhole distributed sensing data management technology deployments. It also
provides insight into the development and deployment of a new-generation information communication
infrastructure for simplifying the transmission, management, archiving, retrieval, visualization, and exchange
of distributed sensing data. This big data infrastructure includes technologies such as PRODML (Production
Markup Language), distributed Cassandra databases, web services for spatial data exchange, and distributed
sensing data aggregation and cleaning.
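To make the storage idea concrete, the sketch below shows one plausible way to model DTS traces for a wide-column store such as Cassandra: partition by well and day so that a day's traces live together, cluster by timestamp, and store each trace (temperature versus depth) as a pair of lists. The schema, keyspace, table, and column names are illustrative assumptions, not the schema used by the platform described in this paper; the in-memory dictionary stands in for a database session so the example is self-contained.

```python
from collections import defaultdict
from datetime import datetime, timezone

# Illustrative CQL schema (an assumption, not the paper's actual schema):
# the composite partition key (well_id, day) keeps one day of traces per
# well together; clustering by trace_time gives fast time-range scans.
DTS_TRACE_SCHEMA = """
CREATE TABLE IF NOT EXISTS dts.traces (
    well_id     text,
    day         date,
    trace_time  timestamp,
    depths_m    list<double>,
    temps_degc  list<double>,
    PRIMARY KEY ((well_id, day), trace_time)
);
"""

class InMemoryTraceStore:
    """Stand-in for a Cassandra session using the same partitioning idea."""
    def __init__(self):
        self._partitions = defaultdict(dict)  # (well, day) -> {time: trace}

    def insert(self, well_id, trace_time, depths_m, temps_degc):
        key = (well_id, trace_time.date())
        self._partitions[key][trace_time] = (depths_m, temps_degc)

    def traces_for_day(self, well_id, day):
        # One partition read; traces come back ordered by time.
        part = self._partitions.get((well_id, day), {})
        return [part[t] for t in sorted(part)]

store = InMemoryTraceStore()
t0 = datetime(2016, 9, 6, 12, 0, tzinfo=timezone.utc)
store.insert("WELL-A", t0, [0.0, 1.0, 2.0], [20.1, 21.4, 22.9])
print(len(store.traces_for_day("WELL-A", t0.date())))  # 1
```

Partitioning by (well, day) keeps any single partition bounded (one day of minute-rate traces) while still allowing a whole day's data to be fetched in one read.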

Next Generation Big Data Infrastructure for Downhole Distributed Sensing Systems: Challenges and Opportunities
In the era of big data, oil and gas asset managers are challenged by gigabytes of data that are generated
every day by millions of downhole distributed sensors, actuators, controllers and other devices used to
measure physical phenomena or capture events in intelligent field deployments. The explosive growth
in distributed data worldwide is fuelled by an increasing number of data sources, as data is produced by
distributed acoustic sensors (DAS), distributed temperature sensors (DTS), discrete distributed temperature
sensors (DDTS), discrete distributed strain sensors (DDSS), and other technologies. The transition
to the intelligent paradigm opens up new opportunities but also poses a number of challenges. The types
and nature of downhole distributed data are becoming increasingly varied. The downhole distributed data
are generated, stored and transferred across multiple nodes in a matter of seconds. The major challenges
of real-time distributed sensing monitoring technology (real-time data aggregation, transmission, automatic
data management, data visualization and data interpretation) can be broadly classified as follows:

• SCADA systems

◦ Not conducive to collect data across two domains (depth and time)

◦ Customers will often reduce the number of tags in order to "fit" SCADA, which reduces the
value of the DTS instrumentation
◦ Limited visualization across depth domain, lose the concept of the trace

◦ Difficult to correct "noisy" data

• Manual data collection

◦ Labor intensive, HSE risks

◦ Data exists on thumb drives, local machines

◦ Can be difficult to distribute (especially over email)

◦ Collection times can be arbitrary

• File folders

◦ Difficult to administer

◦ Troubles with correcting data and keeping "versions of the truth"

◦ Can get very large and unmanageable

◦ Differing file formats can make incorporation into visualization tools difficult

◦ How to handle META data?

• 3rd party visualization tools

◦ Often not comprehensive, may offer good visualization, but no concept of data management,
workflow (individual or between users), or META information
◦ What data standards do they support?

◦ Who administers and supports the data?

• Data type

◦ Data is a trace (like logs) not point values

▪ ~1 MB per trace per minute per well

◦ Data has meaning across two domains – depth and time

◦ Standard systems are not equipped to handle both:

▪ SCADA handles time well, but not depth

▪ File folders handle depth well, but not time

◦ Vendors offered different data formats (often proprietary)

• Meta data
◦ Configuration and condition of fiber needs to be known and managed

▪ Complex configurations becoming common

▪ Condition (optical losses), hardware (new models), software (new firmware)

▪ Faults (optical anomalies) that change over time, etc.

• Data transfer

◦ No reliable and secure mechanism to bring data over from the field site to the corporate network

◦ Data communication costs and custom update rates needed during well life cycle

• Data contextualization/interpretation

◦ For data to be interpreted, it must be trusted, and often corrected to a reference (how to keep track
of "corrected" traces?)
◦ Interpretation of data difficult without other data (well logs, 3D & 4D seismic data,
electromagnetic data, production data, well bore schematics, etc.)
◦ Integration with other wellbore equipment (ESPs, Intelligent Wells, etc.)

• Data consumption

◦ Joint integration of DTS data with other real-time data from a historian and/or well completion
data is cumbersome and time-consuming
◦ No technology solution for saving traces that have been modified by users

◦ No technology solution for consolidating the meta-data contained in the traces coming from the
field (i.e. well name) with the master data in the corporate databases
◦ Without data standards, independent integration points are built by various consuming
applications, making it costly to build and maintain connectors
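The trace sizes listed above imply substantial volumes. Taking the figure of roughly 1 MB per trace per minute per well, a quick back-of-the-envelope calculation shows why standard historians struggle (the 60-well field size is an assumed example, not a claim from the paper):

```python
# Back-of-the-envelope DTS data volume, using the ~1 MB/trace/minute/well
# figure quoted above. The 60-well field is an assumed example.
MB_PER_TRACE = 1.0
TRACES_PER_DAY = 24 * 60          # one trace per minute
WELLS = 60

mb_per_well_per_day = MB_PER_TRACE * TRACES_PER_DAY      # 1440 MB/well/day
gb_per_field_per_day = WELLS * mb_per_well_per_day / 1024
tb_per_field_per_year = gb_per_field_per_day * 365 / 1024

print(round(gb_per_field_per_day, 1))   # ~84.4 GB/day for the field
print(round(tb_per_field_per_year, 1))  # ~30.1 TB/year
```

At tens of terabytes per year for a modest field, per-tag point storage in a conventional SCADA historian is clearly impractical, which motivates the trace-oriented big data platform described below.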
Real-time distributed sensing data can be a highly valuable information source for automatic asset
performance operation monitoring and control. Fast, comprehensive, real-time acquisition and transmission
of distributed sensing data are the key to immediate operating decisions for asset optimization
and management. Figures 1 to 3 illustrate the relationship between measurement, information and decision
making. With downhole distributed real-time measurements, proper protection and control actions can
be taken to ensure the reliability, availability and maintainability of reservoir, well and facility systems
when an event occurs. To support such distributed data communication requirements, the system architecture
and technologies must be able to deliver dynamic (near) real-time operational information to those
who need it, when they need it. The great need to bring in data from disparate sources, which could
include real-time streamed distributed data from wellsites, rigs, or partner data sources in industry-standard
formats (such as WITSML, RESQML, PRODML, etc.), is pushing information technology to
play an increasingly important role. This need will become increasingly critical as the volume of
information holdings increases.

Figure 1—Workflow for closed-loop optimization and control for complex well completion system using distributed downhole measurements

Figure 2—Workflow for continuous advanced well model updating for improved performance monitoring and optimization using distributed downhole measurements

Figure 3—Workflow for real-time anomaly detection with streaming distributed sensing data using machine learning

Moreover, issues such as incompatible data formats and the lack of metadata standardization
complicate the exchange of data among different users. To resolve the large-scale distributed data problem,
one possibility is to provide software as a service (SaaS). The software-as-a-service concept includes specific
offerings such as real-time distributed data infrastructure as a service, distributed data management as a
service, distributed data interpretation as a service, and distributed data analytics as a service. The major
advantages of such SaaS-based solutions are: (i) no upfront investment in servers or software licensing
is required; (ii) the software is completely maintained and updated by the service provider; (iii) hosting
and maintenance costs are low, as a single version of the application is hosted and
maintained for thousands of users; (iv) the applications are accessible from various client devices through
either a thin-client interface such as a web browser or an application interface; (v) the solution enables
subject matter experts to focus on their core capabilities and meet the internal demand for the application;
(vi) the solution takes on average 45% to 50% of the time required to install traditional (licensed) on-premise
software; and (vii) it significantly reduces total cost of ownership (TCO) by providing a superior offering
for a relatively small but constant fraction of the investment, eliminating conventional on-premise costs such
as implementation, hardware, staffing, customization and/or integration, and post-software-procurement
costs.

Development and Implementation of a New Big Data Infrastructure for Real-time Distributed Sensing System (DTS – DAS – DDTS – DDSS)
In this section, we describe the development and implementation of the new big data infrastructure for
real-time distributed temperature sensing data. Depending on their location, accessibility and applications,
distributed sensing system (DTS) instrumentation can be classified into three categories:

• Instrumentation with direct network connectivity (‘Online’ DTS): The DTS hardware is connected
directly to the corporate network. Usually there is a network firewall between the device and the
corporate servers, and special security requirements may be needed for sending the data over the
firewall.
• Instrumentation with indirect network connectivity (‘Internet’ DTS): The DTS hardware
has network connectivity, but not directly with the corporate network. Such is the case with very
remote devices with a cell phone data plan. The device has a unique IP address that is visible on the
internet. Direct connectivity to the corporate network is very difficult, as enterprise-level security
policies may disallow connections initiated from the internet.
• Instrumentation with no connectivity (‘Drive-by’ DTS): The DTS hardware has no connectivity,
and all DTS data is stored locally (memory stick or portable hard drive) and later mailed to a
super-user in the corporate environment. Most fiber installations fall under this category.
Furthermore, this type of instrumentation is shared between multiple wells.
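One way to reflect this three-way classification in ingestion code is to route each source to an appropriate transport. The enum values and transport descriptions below are illustrative assumptions for the sketch, not part of the platform described in this paper:

```python
from enum import Enum

class Connectivity(Enum):
    ONLINE = "online"       # direct corporate-network connection
    INTERNET = "internet"   # internet-reachable (e.g. cell phone data plan)
    DRIVE_BY = "drive_by"   # no connectivity; media mailed to a super-user

def ingestion_route(conn: Connectivity) -> str:
    """Pick a transport for each connectivity class (illustrative only)."""
    if conn is Connectivity.ONLINE:
        return "poll device through the corporate firewall"
    if conn is Connectivity.INTERNET:
        # Device initiates the connection outward, since enterprise policy
        # typically disallows connections initiated from the internet.
        return "device pushes over VPN/HTTPS (no inbound connections)"
    return "bulk import from portable media by a super-user"

print(ingestion_route(Connectivity.DRIVE_BY))
```

Keeping the classification explicit like this makes it easy to apply per-class policies (security review, buffering, retry) as new field devices come online.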
Figure 4 shows the architecture of the new big data infrastructure for real-time distributed temperature
sensing data. The diagram shows how several servers disseminate the distributed sensing data and
multiple clients use the servers for different purposes. The system consists of five modules: (1) real-time
data importing/transmission, (2) data exchange and modeling, (3) automatic data modeling and database
management, (4) data visualization, and (5) data interpretation and analytics. The system uses an
open architecture that enables third-party connectivity.

Figure 4—System architecture and communication workflow

The basic requirements for this system are based on the assumption of online real-time devices that
usually produce enormous amounts of data, with a system response of 2 to 7 seconds. The data server (polling
engine) must support multiple communications drivers. In addition to polling data from field devices, the
system is able to update the polling engine subsystem and provide services to read buffered data logs in the
event of communication loss or planned maintenance outages between the data server and the distributed
data sensing devices. Other system requirements are as follows:

Data Archiving
Different ways must be offered to archive data:

• Automatic 1: When data reaches a certain age (configurable by the administrator), it may be
taken offline
• Automatic 2: When data reaches a certain age, most of it is archived; however, any representative
and/or averaged logs are left behind
• Exceptions: The archival process shall ignore any DTS measurement or interpretation logs that
carry a specific tag (configured by an administrator), so that they are never archived, due to their
business relevance
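A minimal sketch of the age-based archival rules above, assuming a hypothetical `Trace` record and a `never_archive` exemption tag (both names are illustrative, not from the paper):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Trace:
    well_id: str
    acquired: datetime
    tags: set = field(default_factory=set)

def should_archive(trace: Trace, max_age: timedelta,
                   now: datetime, exempt_tag: str = "never_archive") -> bool:
    """Archive traces older than max_age unless tagged as exempt."""
    if exempt_tag in trace.tags:
        return False                      # business-relevant: never archived
    return (now - trace.acquired) > max_age

now = datetime(2016, 9, 6, tzinfo=timezone.utc)
old = Trace("WELL-A", now - timedelta(days=400))
kept = Trace("WELL-A", now - timedelta(days=400), {"never_archive"})
print(should_archive(old, timedelta(days=365), now))   # True
print(should_archive(kept, timedelta(days=365), now))  # False
```

The "Automatic 2" variant would additionally carve out representative or averaged logs before moving a partition offline; that selection logic is omitted here.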

Interfaces
Application must support the PRODML data exchange standard

• When receiving measurements from instrumentation in the field

• When exposing information in the corporate environment to other systems and/or users

◦ Web-based user interface for supporting the functionality outlined above

◦ Programmatic API for supporting use cases from an automation standpoint

The application must also support import/export capabilities for other existing formats (WITSML, LAS,
CSV, Excel, among others), both from the programmatic API point of view and from the file handling point
of view.
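As an illustration of what consuming a PRODML-style DTS document can look like, the sketch below parses a simplified measurement log with Python's standard XML library. The element names and structure here are simplified assumptions for illustration; real PRODML DTS schemas are namespaced, richer, and version-specific.

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative PRODML-style DTS document (not the exact schema).
DOC = """<dtsMeasurement uid="trace-001">
  <wellName>WELL-A</wellName>
  <timestamp>2016-09-06T12:00:00Z</timestamp>
  <data>
    <point depth="100.0" temp="21.5"/>
    <point depth="101.0" temp="21.8"/>
  </data>
</dtsMeasurement>"""

root = ET.fromstring(DOC)
well = root.findtext("wellName")                 # metadata travels with trace
points = [(float(p.get("depth")), float(p.get("temp")))
          for p in root.iter("point")]           # the depth-domain trace
print(well, len(points))  # WELL-A 2
```

The key property PRODML provides, regardless of vendor, is that the trace values and their context (well, timestamp, units) travel in one self-describing document, which is what makes vendor-neutral exchange possible.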

Unit Handling
The PRODML standard specifies the unit of measure on those values that require it. The system must respect
and maintain those units of measure. However, it must be possible for users and other systems to specify
the unit set in which results should be returned.
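A minimal sketch of this requirement: store values in the unit PRODML declares, and convert only on retrieval to the caller's preferred unit set. The two-unit conversion table is an illustrative assumption; a real unit service would cover the full PRODML unit dictionary.

```python
# Store temperatures in the unit declared by PRODML; convert on retrieval.
# Only degC/degF are shown; a production unit service covers many more.
def convert_temp(value: float, from_unit: str, to_unit: str) -> float:
    if from_unit == to_unit:
        return value
    if (from_unit, to_unit) == ("degC", "degF"):
        return value * 9.0 / 5.0 + 32.0
    if (from_unit, to_unit) == ("degF", "degC"):
        return (value - 32.0) * 5.0 / 9.0
    raise ValueError(f"unsupported conversion {from_unit} -> {to_unit}")

stored = (100.0, "degC")                      # as declared by PRODML
print(convert_temp(*stored, to_unit="degF"))  # 212.0
```

Converting at the edge rather than at ingest preserves the original measurement exactly, so the same stored trace can serve users working in different unit sets.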

Time Zone Handling
The PRODML standard specifies the time zone in addition to the timestamp at which the measurement
took place. The system must take the time zone into account and ensure data is always returned in the right
format when requesting measurements from different optical paths that span multiple time zones.
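One common way to satisfy this requirement (a sketch of standard practice, not the platform's actual implementation) is to normalize every zone-aware timestamp to UTC on ingest and re-localize only for display, so that traces from optical paths in different time zones compare correctly:

```python
from datetime import datetime, timedelta, timezone

def to_utc(ts: datetime) -> datetime:
    """Normalize a zone-aware PRODML timestamp to UTC for storage."""
    if ts.tzinfo is None:
        raise ValueError("PRODML timestamps must carry a time zone")
    return ts.astimezone(timezone.utc)

# Two traces recorded at the same instant in different local zones
aberdeen = datetime(2016, 9, 6, 13, 0, tzinfo=timezone(timedelta(hours=1)))
houston = datetime(2016, 9, 6, 7, 0, tzinfo=timezone(timedelta(hours=-5)))
print(to_utc(aberdeen) == to_utc(houston))  # True: same UTC instant
```

With a single canonical timeline in storage, time-range queries across wells in different zones need no per-well special cases.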

Figure 5—Flowchart for DTS data transporting with various importing possibilities

Figure 6—Flowchart for DTS data importing via secure VPN tunnel

Key capabilities of the new platform include the following:

• Web-based solution, accessible through the Internet

• Standard platform

◦ Compatible with other applications and captures data from fiber optic units regardless of
manufacturer
• Grid enabled Production Data Hub (PRODML Server) ‘Big Data’ platform with elastic scalability

• Scalable architecture supports versatile deployment scenarios

◦ Global and In-country Data Center, or

◦ In-country Client IT Center

• Industry and IT standards compliance and certification

◦ PRODML, WITSML, OPC/OPC-UA, Modbus TCP/RTU

• Integrated data handling

◦ Consolidates management of DTS, P/T, well log, surface, and production data

◦ Provides comprehensive overview of well conditions

• Powerful visualization engine

◦ Device independent HTML5

◦ Intuitive, Interactive, and configurable user-interface

• Intuitive dashboards and KPIs

◦ Displays and identifies trends and system health

• Data download/upload utility

◦ Allows data housed in the AMBIT platform to be used in other applications

◦ Enables uploading from offline platforms and of corrected traces



• Secure Data Management

◦ End-to-end encryption

◦ HTTPS Transport Layer Security

◦ Integrated user-authentication and entitlements

• Metadata capture - Automatically stores metadata associated with the fiber optic installation


Special care was taken during design to ensure that the system safeguards all data. The system has passed
numerous security reviews including internal, independent 3rd party, and customer reviews. The source
code is regularly scanned for potential security vulnerabilities, with any conspicuous results immediately
being reviewed and resolved. All data from the system is encrypted during transport – both to and from
the field device as well as to the authenticated users. The integrated user-authentication and entitlements
system ensures only authorized users have access to view the data as well as guaranteeing only designated
users can send control commands. The platform treats all data as "Customer Owned". This means that
no data is deleted, modified, archived, or manipulated without written customer consent. No data is held
hostage; it is freely available to authorized users through the customizable web interface. The platform
processing engine, with powerful user definable curve math functions, performs online analysis for data
quality assurance, exception monitoring, complex event recognition and alarms management to provide
advanced production surveillance capabilities. Alarms and events are automatically logged within the
system and configurable notifications sent to designated role-based users to ensure immediate action to
reduce operating costs and minimize downtime.
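The curve-math and alarm capability described above might look like the following sketch: a user-defined function applied to each incoming trace, with an alarm raised when the result crosses a threshold. The `max_temp` function and the 150 °C threshold are illustrative assumptions, not values from the paper.

```python
# Sketch of user-definable curve math with threshold alarming.
def max_temp(trace):
    """User-defined curve function: hottest point along the fiber."""
    return max(t for _, t in trace)

def check_alarms(trace, curve_fn, threshold):
    """Evaluate one curve function against a trace and flag exceedance."""
    value = curve_fn(trace)
    return {"value": value, "alarm": value > threshold}

trace = [(100.0, 95.2), (101.0, 152.7), (102.0, 96.1)]  # (depth m, degC)
result = check_alarms(trace, max_temp, threshold=150.0)
print(result)  # {'value': 152.7, 'alarm': True}
```

In the platform, alarms like this would be logged and routed as role-based notifications; here only the evaluation step is shown.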

Table 1—System Components' Functions and Description

1. Interpretation Services — Flow estimation & allocation as a Service; dynamic reservoir characterization as a Service; well & reservoir performance monitoring as a Service; alarm management as a Service; simulation-optimization as a Service; reservoir management as a Service; integrity management as a Service.

2. Viewer — Visualization presents data and operations to users through graphical interface screens. Security limits certain aspects of visualization to certain users based on their role in the system. An HTML5- and SVG-based configurable visualization system developed as a next-generation human machine interface (nHMI). It comes with various types of visualization widgets and a display screen template design tool. New widgets can be developed, or 3rd party HTML5-based widgets can be integrated.

3. Production Surveillance & Performance Management — An elastically scalable, cloud-enabled application server with a next-generation user interface that can be accessed via most browsers from most devices. It enables hierarchical visualization of the production asset hierarchy using the product flow model (PFM) and shared asset model (SAM) of PRODML. Various data visualization grids and trends, Key Performance Indicator (KPI) dashboards, information feeds, reports, data and comment entry, and alarms can be configured at various asset hierarchy levels for DTS facilities.

4. RTO Gateway — Real-Time Operations (RTO) Gateway, a collection of engines and clients for real-time data services and caching, workflow, rules, notifications and role management. It has clients for PRODML and OPC-UA to configure, integrate and map data feeds from 3rd party data exchange systems that comply with industry standards.

5. RT Production Data Hub (PRODML server) — The Production Data Hub is a PRODML industry-standards-compliant data management and exchange server that allows enterprise-level data management with tag-level security authorizations, filtration and compression. The Production Data Hub is grid enabled, leverages a 'Big Data' platform, and has elastic scalability.

6. Data Connectors / Gateways — Facilitates communications between physical devices and the distributed sensing platform. Serves live data values to the system as an OPC DA server.

7. Application Interfaces — Application Interfaces integrate 3rd party systems with the distributed sensing platform. This integration can be via PRODML APIs or through Web Service APIs.

8. Data Models — The system provides a series of data models that represent physical devices and systems in a logical model. Types are implemented for each physical device in the system.

9. Historical Data — Data is stored in a Historian, which provides efficient storage and retrieval methods for large amounts of data. Historians store data in a time-series format, allowing for easy trending and data retrieval.

11. Analytics — A machine learning engine that interacts with data collected by the distributed sensing platform and provides results to users within the system.

Field Example
Field Challenge
An operator needed to deploy a distributed temperature sensing data infrastructure (data transport, storage
and exchange) that could continuously process multiple terabytes of information from a variety of sources.
A single distributed temperature sensor often produces billions of data points, and the operator's scientists
and engineers needed tools for transporting, storing, archiving and visualizing this data. There is also the
challenge of efficiently bringing the DTS data over to the corporate network from multiple field sites and
disparate sources, which could include real-time streamed data from wellsites or rigs, or from a partner
data source, in industry-standard formats such as PRODML. The operator also needed a system that can
handle historical data ingestion through data import, that is, the import of any other data in industry-standard
formats. The ingested data must be available in the deep storage system and across the distributed
cluster for further computation and analytics by the end users. The operator's system requirements
also include a platform-agnostic, web-based data retrieval dashboard user interface that provides end users
with the means to download specific or general data at their discretion. The data can be transformed and
downloaded in any format the end user needs, including the original datasets if necessary.
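A download utility of this kind reduces, at its simplest, to transforming a stored trace into the requested format. The sketch below renders one trace as CSV text; the column names are illustrative assumptions, not the platform's actual export layout.

```python
import csv
import io

def trace_to_csv(depths_m, temps_degc):
    """Render one DTS trace as CSV text for download (illustrative)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["depth_m", "temperature_degC"])
    writer.writerows(zip(depths_m, temps_degc))
    return buf.getvalue()

csv_text = trace_to_csv([100.0, 101.0], [21.5, 21.8])
print(csv_text.splitlines()[0])  # depth_m,temperature_degC
```

The same trace object could be serialized to PRODML, LAS, or Excel by swapping the renderer, which is what makes a "download in any format" dashboard feasible.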
Other field challenges faced by the operator can be classified as follows:
– DTS-related hardware management problems
– DTS data transporting, management, visualization and consumption problems
– DTS data security (end-to-end security throughout the data flow) and federated user management
(user authentication and entitlement across enterprises)
– Superior reliability and high availability of 99.99% network uptime

– 100% secure connections


– Complete instant scalability and service availability
– Flexible communication options

Field Deployment Results
– Reduced the DTS data support costs by over $500k annually for one operator
– Successfully transferred data from 3 operational units to the new platform (over 60 wells, ~2
terabytes of data (raw, META, and interpreted))
– Allowed the operator to decommission many servers/data centers
– Allowed for a standard data format (PRODML) across all DTS units regardless of manufacturers/age
– Transitioned to one standard support model and SLA integrated into customer internal support model
– Allowed for integration of 3rd party (competitive) visualization tool

Roadmap
– Improve visualization tools and contextual information
– Integrate other tools for data analysis, event detection and alarming
– Integrate physics- and data-driven modelling (e.g., multiphase flow measurement)
– Incorporate new distributed measurements: distributed strain sensing and distributed acoustic
sensing (Figure 7)

Figure 7—Joint data integration framework

Conclusions
As distributed sensing (DTS, DDTS, DDSS, DAS) data evolves, more and more real-time information
is needed to support advanced oil and gas asset performance monitoring and management services
and functions in order to improve hydrocarbon production and recovery performance. The new
challenges bring new requirements to the real-time applications of digital infrastructure for distributed
sensing systems: data aggregation, communication, exchange, management and visualization. The use
of distributed sensing systems can greatly improve the artificial lift monitoring and optimization,
smart well completions optimization, zonal flow analysis and flow profiling, wellbore production
and injection performance monitoring, wellbore structural integrity monitoring (casing deformation),
water breakthrough detection and conformance control, gas breakthrough detection, formation damage
detection, well stimulation optimization, petroleum reservoir parameters estimation and updating, reservoir
characterization, waterflood assessment and optimization, volume placement design in fractures, sand
production detection and pipeline monitoring and security. The increasing need to balance economic
prosperity with sustainable environment, health and safety performance together with recent advances in
digital oilfield technologies (industrial automation and instrumentation, wireless sensor/actuator networks)
provide a strong motivation for introducing big data infrastructure for distributed sensing data transmission,
processing (cleaning), storage, exchange and analysis. However, while the development and deployment
of big data technologies have significantly impacted the information technology industries, minimal
implementation of this new paradigm has been recorded for distributed sensing systems in the oil and gas
industry.
This paper has provided an overview of the digital technology that is driven by the oil and gas industry's
need to reduce costs and enhance performance in distributed sensing data processing and
analytics. Some of the major challenges and/or obstacles to overcome have been highlighted. In this
paper, we presented the development, implementation and field applications of a new big data infrastructure
architecture for a real-time distributed sensing system (DTS). The use of web services for interaction between
this system and client-side applications was discussed. It was shown that the web services provide an
efficient method for storage and retrieval of distributed sensing data from the database. In addition, web
services can be used to provide various types of distributed sensing data for different client applications.
Using web services for these applications ensures that data can be effectively and quickly accessed and
retrieved. The use of the new system also provides the following technical and economic values:

• Eliminates hardware and software, and reduces licenses

• Technology support (level 3 and 4) is provided by the service provider

• New releases deployed by the service provider (Agile)

• Web based and mobile friendly access

• Quick deployment and improved data quality

• Development driven by industry (market driven)

• Ability to dynamically scale

• Alarm notification function to provide the user with the latest condition information on the fully
instrumented asset being monitored
• Real-time robust data exchange standards and effective data management system operations

A case study is presented that demonstrates successful field testing to verify the functionalities of the
newly developed system for high-data-volume distributed sensors. Specific attention is given to the many
advantages offered by this new framework over existing ones. A field example with a major operator is
presented to illustrate the nature of the applications and the resulting opportunities. These initiatives have the
potential to provide a new generation of distributed sensing information management and communication
technology tools that can significantly increase profits and reduce costs, thereby strengthening the
economic performance and competitiveness of the petroleum industry.

References
1. Lumens, P.G.E. Fiber optic sensing for application in oil and gas wells. Ph.D. Dissertation, Technical University of Eindhoven, 2014.
2. Moreno, J. A. Implementation of the ensemble Kalman filter in the characterization of hydraulic fractures in shale gas reservoirs by integrating downhole temperature sensing technology. M.S. Thesis, Texas A&M University, 2014.
3. Ramos, J. E. Reservoir characterization and conformance control from downhole temperature measurements. M.S. Thesis, University of Stavanger, 2015.
4. Reinsch, T. Structural integrity monitoring in a hot geothermal well using fiber optic distributed temperature sensing. Ph.D. Dissertation, Clausthal University of Technology, 2012.
5. Wang, X. and Bussear, T. Real-time horizontal well monitoring using distributed temperature sensing technology. OTC-22293-MS, 2011.
6. Wang, Z. The uses of distributed temperature survey (DTS) data. Ph.D. Dissertation, Stanford University, 2012.
