Best Practices Guide
Oracle Database on Tegile IntelliFlash
Executive Summary
This document describes the process for installing an Oracle 12cR1 single-instance database on a Red Hat or OEL 6- or 7-compatible operating system using Tegile flash storage. For the purposes of this document, Oracle 12.1.0.2 and Oracle Linux 6.7 were used; however, Oracle 11gR2 and earlier versions of Linux have very similar, if not identical, setup methods. Any version-specific alterations in procedure are called out in the document. The physical characteristics of the test system include a 2-socket, 12-core (6 cores per socket) server with 48GB of memory, connected via 8Gb Fibre Channel to a Tegile T3700 all-flash array running firmware version 2.1.3.5 in an active/active controller configuration.
To take advantage of the extreme performance characteristics of Tegile flash storage, the Oracle Automatic Storage Management (ASM) volume manager is used to achieve raw-device performance (as opposed to using a file system).
Disclaimer
Note that this document describes the process for building a generic system and does not take into account individual customers' requirements for security, performance, resilience and other operational aspects that may be relevant. Customers with existing operational guidelines should give those guidelines higher priority; where any advice in this document conflicts with existing policies, those policies should be adhered to. Tegile does not accept liability for any issues experienced as a result of following this document.
This document details each step necessary to complete the installation process, along with examples and expected outputs. Experienced users may find this level of detail unnecessary, so a “Quick Start” section showing only the high-level steps is also included.
Quick Start
This section shows a high-level summary of the steps required to complete the installation:
1. Create LUNs from Tegile’s GUI (see section Storage Array Setup)
2. Install the oracle-rdbms-server-12cR1-preinstall package using yum (for 11gR2, use the
oracle-rdbms-server-11gR2-preinstall package)
3. Install and configure the device mapper multipathing software – note that there are
specific device details required when adding entries into the multipath.conf file for Tegile
arrays (see section Multipathing)
4. Add aliases in the multipath.conf file for each LUN presented from Tegile arrays (see
section Add Multipath Aliases)
5. Create UDEV rules to handle LUNs presented from Tegile arrays – note that again there
are specific configuration settings which must be set using these UDEV rules (see
section LUN permissions and UDEV rules)
6. Create the Oracle Grid Infrastructure (see section Oracle Grid Install)
7. Create a separate ASM disk group for redo logs, following the redo log guidelines (see section LUN block size for REDO Logs)
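As a sketch of steps 2 and 3 above (the package and command names are the standard RHEL 6/OEL 6 tooling; adjust the preinstall package name for your Oracle version):

```shell
# Step 2: Oracle pre-install package (sets kernel parameters,
# oracle user/groups and package dependencies)
yum install -y oracle-rdbms-server-12cR1-preinstall

# Step 3: device mapper multipathing
yum install -y device-mapper-multipath
mpathconf --enable --with_multipathd y   # writes /etc/multipath.conf, starts multipathd
```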
High Level Recommendations
Tegile makes the following recommendations for the use of Oracle software with Tegile arrays:
• Oracle Database and Grid Infrastructure (ASM) software of version 11g Release 2 or
later is recommended.
• Databases placed on Tegile all-flash arrays should have a database block size of 4K or
greater (e.g. the default value of 8K is acceptable).
• The design of Tegile arrays allows a single LUN to deliver the full performance capability of
each active controller. However, since this full performance capability is so high, many operating
systems exhibit bottlenecks at the OS queue level if a single LUN is used. For this reason, Tegile
recommends using multiple LUNs in groups of eight per array (four per active controller when
active/active is configured) for each data storage point (e.g. ASM diskgroup or filesystem).
• If multiple arrays are used, the above recommendation should be adapted to allow a
minimum of 8 LUNs per diskgroup spread over all arrays. For example, a +DATA
diskgroup spread over four arrays would have a minimum of 2 LUNs per array (1 LUN
per controller), making 8 LUNs in total.
• For locations containing files which are infrequently accessed (e.g. database parameter
files, +GRID diskgroups etc.), it is recommended to place them on a mirrored disk group for
redundancy.
• Unless there are specific use cases driving a smaller block size on the Tegile LUNs,
“Database – 8K Block Size” should be used to ensure maximum performance from the
array as well as maximized compression results when compression is enabled. This
recommendation should be used for Bare-Metal Environments (i.e. Linux OS running
without a Hypervisor).
Storage Array Setup
Tegile is pioneering a new generation of affordable, feature-rich storage arrays that are
dramatically faster and deliver more effective capacity than traditional arrays. The Tegile all-flash
array utilizes an active-active controller architecture to provide an Oracle environment with
the highest level of array performance while maintaining a fully redundant, highly available
system. The following array setup takes this architecture into account when creating LUNs to
be presented to ASM for the highest-performing design.
In the 2.1.3.5 version of the Tegile T3700 array GUI, there are five IP addresses assigned for
managing the array: two IPMI addresses (one for each controller), two management
addresses for managing the controllers individually, and one HA address for managing the
entire array. In an active-active configuration, the HA address can be used to provision
storage to both controllers. If the array were configured active/passive, each controller
would need to be managed via its individual MGMT address.
Pools
Clicking the Data menu item at the top of the GUI shows the pool-a and pool-b pools. A pool
can be understood as the storage associated with each controller. By selecting a pool, the
storage available for that particular controller can be provisioned in terms of projects,
LUNs, and file systems.
Projects
Projects are an elegant way to group like LUNs under a common set of base characteristics.
By placing a LUN or group of LUNs into a project, activities such as snapshot scheduling and
clone creation can be managed from a single place for the entire group of LUNs. Furthermore, a
default set of parameters such as networking settings, block sizes and compression algorithms
can be defined so that LUNs created under the project inherit those settings.
For Oracle databases, the following best practices should be followed for project creation:
1) Provide a project name and select Generic as the Purpose. (Future versions of the
GUI will have these Oracle best-practice settings incorporated into a template.) Select a
networking protocol.
2) Based on your specific requirement for the LUNs to be created, complete the FC Target
Group information accordingly.
4) The next screen provides options for data Deduplication and Compression. By
default, Oracle databases are not good candidates for data deduplication, as each
database block is unique due to its header and DB storage metadata. Compression, however,
is a very valid selection with negligible performance impact. For absolute top
performance with adequate levels of compression, lz4 should remain selected
as the compression algorithm.
LUNs
As noted in the High Level Recommendations section earlier in this document, a total of 9 LUNs
will be created in this best-practice exercise: 1 for the grid infrastructure files (ASM) and 8 for
the +DATA diskgroup. If an FRA (Fast Recovery Area) were also configured, this number would
increase to 17, adding 8 LUNs for the +FRA diskgroup. Adopt a meaningful LUN naming
methodology to easily identify devices on the Oracle host. The LUN naming methodology
demonstrated here is in the format Poolletter_usage_blocksize_LUNsize, e.g. a_grid01_8k_5GB.
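To illustrate, the eight +DATA LUN names under this convention could be generated as follows (a sketch; the pool letters, usage string and sizes are assumptions matching the examples shown here):

```shell
# Print the eight +DATA LUN names: four per pool (a and b),
# 8K block size, 125GB each
for pool in a b; do
  for n in 1 2 3 4; do
    printf '%s_data%02d_8k_125GB\n' "$pool" "$n"
  done
done
```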
1) Create the single small LUN for the Oracle Grid Infrastructure files on one of the
controllers. This example shows this occurring on the “a” controller (pool) in the
orcl-micro1 Project.
2) Create the remainder of the database LUNs following a similar naming convention for the
+DATA and +FRA (if necessary) disk groups. The final configuration will appear as
below.
Pool-a
Pg. 9
Best Practices Guide
Oracle Database on Tegile IntelliFlash
Pool-b
LUN block size for REDO Logs
Redo logs are transactional journals: each transaction is recorded in the redo logs, which are
flushed to disk at intervals determined by multiple factors beyond the scope of this document.
It is recommended to create a separate ASM disk group with redundancy and assign multiple LUNs
to it (up to 8). When creating LUNs for redo logs, use a larger LUN block size (between 64K and
128K), disable deduplication on these LUNs, and set “LOGBIAS=Latency”.
To determine the ideal LUN block size for redo logs, an AWR report snapshot can show the
highest block-size count for your database and the redo wastage. If you determine from your AWR
analysis that the LUN block size for your redo logs is not ideal and there is too much redo wastage,
you can create new LUNs with a different block size, add them to a new ASM diskgroup, create new
redo logs on the new disk group, and drop the old redo logs.
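The swap described above might be sketched as follows. This is a hypothetical sketch: the +REDO diskgroup name, group numbers and sizes are assumptions, and each old group must be INACTIVE before it can be dropped.

```shell
# Sketch only - diskgroup name, group numbers and sizes are assumptions
sqlplus / as sysdba <<'EOF'
-- create new redo groups on the new diskgroup
ALTER DATABASE ADD LOGFILE GROUP 4 ('+REDO') SIZE 1G;
ALTER DATABASE ADD LOGFILE GROUP 5 ('+REDO') SIZE 1G;
ALTER DATABASE ADD LOGFILE GROUP 6 ('+REDO') SIZE 1G;
-- cycle the log and checkpoint until the old groups are INACTIVE
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
-- then drop the old groups
ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE DROP LOGFILE GROUP 3;
EOF
```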
Linux OS Setup
Follow the Oracle installation guide when setting up the Oracle server environment. For the
Oracle Enterprise Linux and RHEL operating systems, the prerequisite script for the
appropriate Oracle version should be executed.
Linux Multipathing
Multipathing software provides resilience and performance benefits when multiple
paths exist between storage devices and servers. In the case of Fibre Channel storage solutions
there will usually be multiple paths through the Fibre Channel network over which LUNs can be
presented from storage. The multipathing software is used to detect which duplicate paths
correspond to each underlying physical device so that they can then be combined into a single
virtual device. The primary benefit of this virtual device is that any underlying path failure can be
tolerated provided there is at least one remaining path available. The multipathing software is
able to detect failed paths and re-issue any failed I/O requests on a remaining active path in a
manner that’s transparent to the caller. This transparency is essential for Oracle software such as
ASM and the database because they are unaware of its existence and have no built-in
functionality to perform the same task.
An additional benefit of multipathing software is lower latency, gained by spreading I/O
requests over the underlying paths. This is of particular importance when using
high-performance storage such as Tegile flash arrays.
Each LUN presented from Tegile has a unique identifier (WWID). These identifiers are used to
create the user-friendly aliases in the multipath configuration file, so a list of the existing
LUNs is needed; the command multipath -ll will show all existing devices known to the
multipathing software:
multipaths {
    # example of setting user-defined names for multipath devices
    multipath {
        wwid 3600144f0d16d89000000563d35810008
        alias a_data01_8k_125GB
    }
    multipath {
        wwid 3600144f0d16d89000000563d35960009
        alias a_data02_8k_125GB
    }
}
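The WWID strings needed for these aliases can be pulled out of the multipath -ll output; for example (the device line piped in below is a sample matching the listing above):

```shell
# Extract 33-character WWIDs (leading "3" plus 32 hex digits);
# on a live system pipe the real command instead:
#   multipath -ll | grep -oE '3[0-9a-f]{32}'
echo 'a_data01_8k_125GB (3600144f0d16d89000000563d35810008) dm-2 TEGILE,ZEBI-FC' \
  | grep -oE '3[0-9a-f]{32}'
```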
NOTE: The above listing is for Tegile arrays running 2.x code. If the Tegile array
is running 3.x or newer code, the only difference is that the product entry in the devices
section must be changed from product "ZEBI-FC" to product "INTELLIFLASH".
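For reference, the devices section that the note refers to looks like the following (values taken from the full configuration in the Appendix; use the product string appropriate to your code version):

```
devices {
    device {
        vendor               "TEGILE"
        product              "ZEBI-FC"        # "INTELLIFLASH" on 3.x or newer code
        hardware_handler     "1 alua"
        path_selector        "round-robin 0"
        path_grouping_policy "group_by_prio"
        no_path_retry        10
        dev_loss_tmo         50
        path_checker         tur
        prio                 alua
        failback             30
        rr_min_io            128
    }
}
```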
The final step in the process is to flush the device mapper and instruct multipath to pick up the
new user-defined configuration:
[root ~]# multipath -F
[root ~]# multipath -v2
I/O Scheduler
The I/O scheduler determines the way in which block I/O operations are submitted to storage.
There are a number of different I/O schedulers available in the Linux kernel by default, but a
common theme in their behavior is the aim of reducing the impact of hard drive “seek time”. Most
work by assigning I/O operations to queues and then reordering them to reduce the amount of
time that disk heads spend moving between locations. On most enterprise Linux kernels the cfq
scheduler is enabled by default. Flash memory has no issues with seek times and exhibits latencies
that are frequently less than a millisecond, so there is minimal gain from using this scheduler.
Tests have consistently shown a significant increase in performance when switching to the simpler
noop scheduler.
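For a quick, non-persistent test, the scheduler can be switched at runtime by writing to sysfs (a sketch; the device name sdb is an assumption, and the UDEV rules described next are what make the change permanent):

```shell
# Write a scheduler name into a sysfs scheduler file
set_scheduler() {
  echo "$1" > "$2"   # $1 = scheduler name, $2 = scheduler file path
}

# On a live system (requires root):
#   set_scheduler noop /sys/block/sdb/queue/scheduler
#   cat /sys/block/sdb/queue/scheduler   # active scheduler shown in [brackets]
```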
In order to set all Tegile devices to use these values, a new UDEV rule must be created. UDEV is
the Linux device manager which dynamically creates and maintains the device files found in the
/dev directory. UDEV uses a number of rules files located in the /etc/udev/rules.d directory, so to
make this change a new file should be created. The name of the file – and its contents – will be
dependent on the version of Linux in use.
This file will contain the following UDEV rules (take care not to introduce any additional carriage
returns – this syntax is very sensitive):
*****************************
* Code levels 2.x & 3.x *
*****************************
### /etc/udev/rules.d/50-tegile.rules #######
### This example is for 2.x FC. For 2.x iSCSI, replace with SYSFS{model}=="ZEBI-ISCSI"
### For 3.x FC and iSCSI, replace with SYSFS{model}=="INTELLIFLASH*"
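As an illustrative sketch of the rule bodies themselves (the match keys, alias pattern and the oracle:dba ownership shown here are assumptions, modeled on the scheduler and ownership requirements described in this section; adjust the model string per the comments above):

```
# Match Tegile SCSI disks and select the noop scheduler (2.x FC shown)
ACTION=="add|change", KERNEL=="sd*", SYSFS{vendor}=="TEGILE", \
    SYSFS{model}=="ZEBI-FC", ATTR{queue/scheduler}="noop"
# Give the aliased multipath devices to the Oracle software owner
ACTION=="add|change", KERNEL=="dm-*", ENV{DM_NAME}=="?_*_8k_*", \
    OWNER:="oracle", GROUP:="dba", MODE:="0660"
```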
Finally, the UDEV subsystem must be told to reread and apply the new rules:
[root ~]# udevadm control --reload-rules
[root ~]# udevadm trigger
Check that the new rules have taken effect by confirming that the owner of the Tegile
/dev/dm-* devices has changed to oracle:dba:
[root ~]# ls -l /dev/dm* (partial listing)
Oracle Grid Install
The procedure for installing Oracle Automatic Storage Management and creating diskgroups
follows the standard process described in the Oracle documentation.
Oracle DB Install
To achieve optimum performance, there are two elements to consider when configuring the
Oracle database to run on NAND flash storage:
• Database block size (set by parameter db_block_size): Allowable values for this
parameter in Oracle are 2K, 4K, 8K (the default), 16K and 32K. In order to ensure optimal
performance, values of 8K or greater should always be used with Tegile arrays.
• Online redo log block size: by default, this is 512 bytes. Note that this value is what Oracle
discovers when querying the geometry of the LUN, not the LUN block size itself. LUN block size
for redo logs was discussed in a previous section.
Database Creation
There are no special procedures required during the creation of databases on Tegile arrays.
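For reference, the corresponding initialization parameter at its recommended (default) value:

```
# init.ora / spfile fragment
db_block_size=8192    # 8K, the default; values below 8K should be avoided per the
                      # recommendation above
```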
Oracle on VMware
There are situations where the customer wants to use Raw Device Mappings (RDM) rather than
VMDKs; the advantages and disadvantages of each are listed below.
VMware Disk Type: Raw Device Mapping (RDM)
  Advantages: Legacy, easy P->V migration; array snapshots can be used; hypervisor
  completely bypassed
  Disadvantages: VM using RDM cannot be live-migrated; storage cannot be migrated using
  Storage VMotion; cannot use SIOC (Storage I/O Control)

VMware Disk Type: VMFS datastore (VMDK/vDisk)
  Advantages: Array snapshots can be used; hypervisor latency is minimal with proper
  tuning; VMotion, Storage VMotion and SIOC can be used; vSphere Replication using the
  Tegile SRA
  Disadvantages: No known disadvantages
Configure the VM to present true UUIDs for LUNs as seen by Linux. This is only required if RDM
LUNs are exposed to a virtual machine running Linux and the Oracle RDBMS.
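A common way to do this is the disk.EnableUUID flag in the VM's configuration (a sketch; set it with the VM powered off, or via Edit Settings > VM Options > Advanced Configuration Parameters):

```
# .vmx configuration parameter - lets the Linux guest see the
# serial number/UUID of the mapped LUNs
disk.EnableUUID = "TRUE"
```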
Hypervisor tuning
When using VMDKs for ASM, ensure that the hypervisor tunables for FC and iSCSI are set for
optimal performance.
It is highly advisable to use the Tegile vSphere Plugin to set these tunables.
The table below lists the commands that can be used in lieu of the vSphere Plugin.
These commands vary slightly depending on the vSphere release; the syntax provided is for
vSphere 5.5.
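As an illustrative example only (the device ID is a placeholder and the exact option names should be confirmed against the esxcli reference for your release), a per-device outstanding-I/O limit can be raised like this on vSphere 5.5:

```shell
# Raise the number of outstanding I/Os ESXi will schedule for one LUN
esxcli storage core device set \
    --device naa.600144f0xxxxxxxxxxxxxxxxxxxxxxxx \
    --sched-num-req-outstanding 32
# Verify the setting
esxcli storage core device list \
    --device naa.600144f0xxxxxxxxxxxxxxxxxxxxxxxx
```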
Linux Guest Configuration
Guest Operating System Disk Timeout for RDM and VMware Virtual Disks
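One common way to raise the guest disk timeout is a udev rule such as the sketch below (the rule body and the 180-second value are assumptions following VMware's usual Linux guest guidance; VMware Tools typically installs an equivalent rule for virtual disks):

```
# /etc/udev/rules.d/99-vmware-scsi-timeout.rules (sketch)
ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{vendor}=="VMware ", \
    ATTRS{model}=="Virtual disk", \
    RUN+="/bin/sh -c 'echo 180 >/sys$DEVPATH/device/timeout'"
```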
(Table: recommended settings for vSphere/ESXi server boot LUNs: LZ4 compression ON, block sizes through 32K)
The below figure shows a typical Project Schema and how snapshots and clones work
A space-optimized snapshot can be triggered from the project properties in the GUI or via a REST
API call to the array. If quiesce is turned on, the snapshot will be synchronously crash-consistent
across all LUNs.
Provide a clone name and click “inherit settings”. This will make the clone LUNs available to the
same ESXi server.
The clone LUNs can be brought into VM3 as a test-dev environment. These clones are
space-optimized, and multiple such test-dev copies can be created. This can also be automated
using the REST API.
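A hypothetical sketch of such an automation call (the URL path, payload fields and credentials below are assumptions, not the documented IntelliFlash REST API; consult the array's REST API guide for the real resource names and authentication scheme):

```shell
# Trigger a quiesced, space-optimized project snapshot over REST
# (the endpoint path is a placeholder)
curl -sk -u admin:PASSWORD \
     -H 'Content-Type: application/json' \
     -d '{"project": "orcl-micro1", "quiesce": true}' \
     'https://<array-ha-address>/api/...'
```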
Appendix
The complete /etc/multipath.conf device and alias sections used in this exercise (3.x code shown):
devices {
    device {
        vendor "TEGILE"
        product "INTELLIFLASH"
        hardware_handler "1 alua"
        path_selector "round-robin 0"
        path_grouping_policy "group_by_prio"
        no_path_retry 10
        dev_loss_tmo 50
        path_checker tur
        prio alua
        failback 30
        rr_min_io 128
    }
}
multipaths {
    multipath {
        wwid 3600144f0d16d89000000563d35ad000a
        alias b_data01_8k_125GB
    }
    multipath {
        wwid 3600144f0d16d89000000563d35810008
        alias b_data02_8k_125GB
    }
}