Oracle 11g RAC Implementation Guide
Configuration Details

Configuration Summary       Oracle Real Application Clusters (RAC) on Red Hat Enterprise Linux 5 Update 2 using ASM
Server Platform             HP ProLiant BL680c G5
Storage Model               HP StorageWorks MSA2012fc Storage Array
Oracle Software             Oracle Database 11g Release 1 (11.1.0.6) for Linux x86
Linux Distribution          Red Hat Enterprise Linux 5 Update 2 x86
Linux Distribution Details / OS Kernel   RHEL 5 Update 2 x86, kernel-2.6.18
Additional Packages Needed From Distribution:
binutils-2.17.50.0.6-5.el5
compat-libstdc++-33-3.2.3-61
elfutils-libelf-devel-0.125-3.el5
gdb-6.5-25.el5
glibc-2.5-18
glibc-common-2.5-18
glibc-devel-2.5-18
libXp-1.0.0-8.1.el5
libXtst-1.0.1-3.1
libaio-0.3.106-3.2
libaio-devel-0.3.106-3.2
libstdc++-4.1.2-14.el5
libstdc++-devel-4.1.2-14.el5
make-3.81-1.1
sysstat-7.0.0-3.el5
unixODBC-2.2.11-7.1
unixODBC-devel-2.2.11-7.1
util-linux-2.13-0.44.el5
xorg-x11-deprecated-libs-6.8.2-1.EL.33.0.1
Filesystem                  ASM
Details                     Automatic Storage Management Library Driver (ASMLib) is used for the datafiles; the OCR and voting disk are placed directly on block devices. This is the configuration for datafiles on ASM storage.
Hardware

At the hardware level, each node in a RAC cluster shares three things:
1. Access to shared disk storage
2. Connection to a private network
3. Access to a public network

Shared Disk Storage
Oracle RAC relies on a shared-disk architecture. The database files, online redo logs, and control files for the database must be accessible to every node in the cluster. The shared disks also store the Oracle Cluster Registry (OCR) and voting disk (discussed later). Shared storage can be configured in a variety of ways, including direct-attached disks (typically SCSI over copper or fiber), a Storage Area Network (SAN), or Network Attached Storage (NAS).

Private Network
Each cluster node is connected to all other nodes via a private high-speed network, also known as the cluster interconnect or high-speed interconnect (HSI). Oracle Cache Fusion allows data cached in one Oracle instance to be accessed by any other instance by transferring it across the private network. It also preserves data integrity and cache coherency by transmitting locking and other synchronization information between cluster nodes.

Public Network
To maintain high availability, each cluster node is assigned a virtual IP address (VIP). In the event of node failure, the failed node's VIP can be reassigned to a surviving node, allowing applications to continue accessing the database through the same IP address.
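To make the three networks concrete, the sketch below shows what /etc/hosts might look like for the two-node cluster used in this guide. The rac1/rac2 host names follow this document; the IP addresses and domain are placeholders and must be replaced with values from your own environment.

##### /etc/hosts (illustrative addresses only) #####
# Public network
192.168.1.101   rac1.example.com      rac1
192.168.1.102   rac2.example.com      rac2
# Virtual IPs (on the public subnet, unused until CRS brings them up)
192.168.1.111   rac1-vip.example.com  rac1-vip
192.168.1.112   rac2-vip.example.com  rac2-vip
# Private interconnect
10.0.0.1        rac1-priv
10.0.0.2        rac2-priv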
/etc/multipath.conf
/etc/modprobe.conf
# vi /etc/sysctl.conf
kernel.shmall = 3279547
kernel.shmmax = 4185235456
kernel.shmmni = 4096
kernel.sem = 250 32000 100 142
fs.file-max = 327679
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 1048536
net.core.wmem_max = 1048536
save and exit

# sysctl -p

Setting Shell Limits for the oracle User:
Oracle recommends setting limits on the number of processes and the number of open files each Linux account may use. Make these changes as root:

# vi /etc/security/limits.conf
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft nofile 131072
oracle hard nofile 131072
oracle hard core unlimited
oracle hard memlock 50000000
oracle soft memlock 50000000
save and exit

# vi /etc/pam.d/login
session required /lib/security/pam_limits.so
session required pam_limits.so
save and exit

Configure the Hangcheck Timer:
All RHEL releases:

# vi /etc/rc.d/rc.local
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
save and exit
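After the settings above are in place, a quick check helps confirm they took effect. The commands below are a sketch using standard Linux tools; run them after a fresh root login (and after rc.local has run, for the hangcheck timer).

# sysctl -n kernel.shmmax
# sysctl -n net.core.rmem_default
# su - oracle -c 'ulimit -n -u'      ##### open files / max user processes for oracle #####
# lsmod | grep hangcheck             ##### confirm the hangcheck-timer module is loaded #####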
On node rac1:
# scsi_id -g -s /block/sda
3600a0b80001327510000009b4362163e
On node rac2:
# hp_rescan -a                  ##### rescan for and add new LUNs #####
# scsi_id -g -s /block/sda
3600a0b80001327510000009b4362163e
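If there are many LUNs, the WWIDs can be collected in one pass. The loop below is only a sketch; it reuses the same RHEL 5 scsi_id syntax shown above and simply iterates over every sd* block device.

# for dev in /sys/block/sd*; do printf "%s: " "${dev##*/}"; scsi_id -g -s /block/"${dev##*/}"; done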
Alternatively, the WWIDs can be read from the multipath device listing shown below:
create: 3600a0b80001327d80000006d43621677 [size=12 GB][features="0"][hwhandler="0"]
 \_ round-robin 0
  \_ 2:0:0:0 sdb 8:16
  \_ 3:0:0:0 sdf 8:80
create: 3600a0b80001327510000009a436215ec [size=12 GB][features="0"][hwhandler="0"]
 \_ round-robin 0
  \_ 2:0:0:1 sdc 8:32
  \_ 3:0:0:1 sdg 8:96
create: 3600a0b80001327d800000070436216b3 [size=12 GB][features="0"][hwhandler="0"]
 \_ round-robin 0
  \_ 2:0:0:2 sdd 8:48
  \_ 3:0:0:2 sdh 8:112
create: 3600a0b80001327510000009b4362163e [size=12 GB][features="0"][hwhandler="0"]
 \_ round-robin 0
  \_ 2:0:0:3 sde 8:64
  \_ 3:0:0:3 sdi 8:128
After obtaining the WWIDs, bind each WWID to an alias in /etc/multipath.conf.

On node rac1:
# vi /etc/multipath.conf
multipaths {
    multipath {
        wwid 3600a00b80001327510000009b4362163e   ##### copy the scsi_id output here #####
        alias asm1
    }
    multipath {
        wwid 3600a00b80001327510000009b4362153e
        alias asm2
    }
    multipath {
        wwid 3600a00b80001327510000009b4362133e
        alias asm3
    }
    multipath {
        wwid 3600a00b80001327510000009b4362143e
        alias ocr
    }
    multipath {
        wwid 3600a00b80001327510000009b4362163e
        alias ocrmirror
    }
    multipath {
        wwid 3600a00b80001327510000009b4362163e
        alias voting
    }
    multipath {
        wwid 3600a00b80001327510000009b4362163e
        alias votingmirror
    }
}
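After saving /etc/multipath.conf, the multipath maps need to be rebuilt so the aliases appear under /dev/mapper. The commands below are a sketch for RHEL 5; the exact reload mechanism may differ on other releases.

# service multipathd restart
# multipath -v2
# multipath -ll                 ##### verify the aliases and their paths #####
# ls /dev/mapper/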
Run "kpartx -a" after FDISK is completed to add all partition mappings on the newly-created multipath device # kpartx -a /dev/mapper/asm1 # kpartx -a /dev/mapper/asm2 # kpartx -a /dev/mapper/asm3 # kpartx -l /dev/mapper/asm1 # kpartx -l /dev/mapper/asm2 # kpartx -l /dev/mapper/asm3 # ls /dev/mapper/ # /etc/rc.local #####Run this command for ownership and permission##### Installation of ASM on both nodes ASMLib 2.0 is delivered as a set of three Linux packages: oracleasmlib-2.0 - the ASM libraries oracleasm-support-2.0 - utilities needed to administer ASMLib oracleasm - a kernel module for the ASM library First, determine which kernel you are using by logging in as root and running the following command: # uname rm Download the kernel verion related oracleasm from the link http://www.oracle.com/technology/tech/linux/asmlib/index.html # rpm -ivh oracleasm-2.6.18-53.1.14.el5-2.0.4-1.el5 oracleasm-support-2.0.4-1.el5 oracleasmlib-2.0.31.el5.x86_64.rpm Configuring ASMLib on both nodes # /etc/init.d/oracleasm configure Default user to own the driver interface []: oracle Default group to own the driver interface []: oinstall Start Oracle ASM library driver on boot (y/n) [n]: y Fix permissions of Oracle ASM disks on boot (y/n) [y]: y Writing Oracle ASM library driver configuration: Creating /dev/oracleasm mount point: Loading module "oracleasm": Mounting ASMlib driver filesystem: Scanning system for ASM disks: Tip: Enter the DISK_NAME in UPPERCASE letters. # /etc/init.d/oracleasm createdisk VOL1 Marking disk "/dev/mapper/asm1p1" as an # /etc/init.d/oracleasm createdisk VOL2 Marking disk "/dev/mapper/asm2p1" as an # /etc/init.d/oracleasm createdisk VOL3 Marking disk "/dev/mapper/asm3p1" as an # /etc/init.d/oracleasm listdisks VOL1 VOL2 VOL3 On node rac2: Run the following command as root to scan for configured ASMLib disks: # /etc/init.d/oracleasm scandisks
[ [ [ [ [
OK OK OK OK OK
] ] ] ] ]
Note: Mark disks for use by ASMLib by running the following command as root only on Node rac1 /dev/mapper/asm1p1 ASM disk: /dev/mapper/asm2p1 ASM disk: /dev/mapper/asm3p1 ASM disk:
[ [ [
OK OK OK
] ] ]
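As an optional check (a sketch; the expected output mirrors what was created on rac1), confirm on node rac2 that the scanned disks are visible:

# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3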
Name           Type         R/RA  F/FT  Target  State   Host
-------------------------------------------------------------
ora.rac1.gsd   application  0/5   0/0   ONLINE  ONLINE  rac1
ora.rac1.ons   application  0/3   0/0   ONLINE  ONLINE  rac1
ora.rac1.vip   application  0/0   0/0   ONLINE  ONLINE  rac1
ora.rac2.gsd   application  0/5   0/0   ONLINE  ONLINE  rac2
ora.rac2.ons   application  0/3   0/0   ONLINE  ONLINE  rac2
ora.rac2.vip   application  0/0   0/0   ONLINE  ONLINE  rac2
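A listing like the one above can be reproduced at any time. The commands below are a sketch and assume the clusterware bin directory is in the PATH.

$ crs_stat -t -v                ##### verbose status of all CRS-managed resources #####
$ crsctl check crs              ##### overall health of the CRS stack #####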
$ exit

Oracle Database Software Installation
Copy and unzip the Oracle 11g database software to /stage1.

On node rac1:
# cd /stage1
# cp -r /media/linux.x64_11gR1_database.zip /stage1
# unzip linux.x64_11gR1_database.zip
# xhost +
# su oracle
$ cd /home/oracle
$ vi asm.env
export ORACLE_BASE=/node1
export ORACLE_HOME=/node1/asm
save and exit
$ . asm.env
$ cd /stage1/database
$ ./runInstaller
Choose /node1 as the Oracle Base, and leave the Name and Path under Software Location set to /node1/asm.
The installer now verifies your environment. In this configuration there is probably not enough swap space, but this has not caused any problems, so you can safely mark that check as user-verified. You can also ignore the notice about the kernel rmem_default parameter (it is OK as well).
Select "Install database software only", since the database will be created later with DBCA.
The installation process now runs. Because no storage optimizations have been applied yet, it can take up to an hour depending on your hardware.
After the installation you will be prompted to run the post-installation scripts as root on both nodes.
Click Exit.

DBCA: Creation of Database and ASM Instance

Creating an ASM Instance and a Disk Group with DBCA
To create an ASM instance and a disk group with DBCA, perform the following steps:

# xhost +
# su oracle
$ . db.env
$ cd /node1/asm/product/11.1.0/db_1/bin/
$ ./dbca

DBCA starts its GUI interface.
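The db.env file sourced above is not shown in this guide. A minimal sketch consistent with the paths used in this section might look like the following; the variable values are assumptions and should match your actual installation.

$ cat /home/oracle/db.env
export ORACLE_BASE=/node1
export ORACLE_HOME=/node1/asm/product/11.1.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH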
Click "Finish" to exit out from dbca. Verify that LISTENER and ASM instances are up and running and are properly registered with CRS. CRS STACK STATUS AFTER THE INSTALLATION AND CONFIGURATION OF ASM
Choose Custom Database for greater flexibility in the database creation process.
Name your database. For the purposes of this guide, call it erac.world. The SID prefix is then automatically set to erac, and the individual instance SIDs will be erac1 and erac2 (one per node, rac1 and rac2).
For testing purposes you can use one simple password for all of the administrative Oracle accounts.
You will use a non-shared PFILE for every ASM instance. You could instead use a shared SPFILE for centralized ASM configuration, but that raises a problem: on which storage should this critical file be kept?
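For reference, a minimal per-instance ASM PFILE might contain parameters along the following lines. This is only an illustration: DBCA generates the real file, and the disk group name DATA is an assumption.

##### e.g. $ORACLE_HOME/dbs/init+ASM1.ora on node rac1 #####
instance_type=asm
cluster_database=true
asm_diskstring='ORCL:*'
asm_diskgroups='DATA'        ##### assumed disk group name #####
instance_number=1            ##### 2 on node rac2 #####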
In the Create Disk Group window, click Change Disk Discovery Path; a new window pops up.
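With disks labeled through ASMLib, as configured earlier, the discovery string below is the usual choice; treat it as an assumption and adjust it if your disks are exposed differently (for example as raw block devices).

##### disk discovery path for ASMLib-labeled disks #####
ORCL:*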
On the Initialization Parameters screen you can configure how much memory (in total) the instance will use. This can be altered later by changing the memory_target parameter. Customize the other database parameters to meet your needs.
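As a hedged illustration of adjusting this later from SQL*Plus: the 2G value below is only an example, and because the change is written to the SPFILE it takes effect after the instances are restarted.

$ sqlplus / as sysdba
SQL> ALTER SYSTEM SET memory_target = 2G SCOPE = SPFILE SID = '*';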
There are many new security options in Oracle Database 11g (which are on by default). Accept them all.
On the Database Storage screen you can tune parameters related to redo logs, control files, and so on. The defaults are appropriate for initial testing.
Select only Create Database; if you like, you can also generate database scripts to speed up creating a fresh database after wiping an old one (e.g. for new experiments).
You will be presented with a summary showing which options are going to be installed. The database creation process can take a while, depending on the options selected and the storage used. Finally, you will see a short summary of the created database.
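As a final sanity check (a sketch; the database name erac and the node names follow the choices made above), query the status of the new database and the node applications with srvctl:

$ srvctl status database -d erac
$ srvctl status nodeapps -n rac1
$ srvctl status nodeapps -n rac2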