Solaris Cluster - UnixArena
This is the first article about Sun Cluster, aka Oracle Solaris Cluster, on UnixArena. This Oracle Solaris Cluster series will cover the build of a two-node cluster running Solaris Cluster 4.1 on Solaris 11. The intention of building the cluster is to configure a highly available local zone, aka a failover local zone. We did the same setup on Veritas Cluster quite a long time back. In this article we will see the installation of the cluster software.
1. Download Oracle Solaris Cluster from the Oracle website. You may need an Oracle support login to download it. The file name will be "osc-4_1-ga-repo-full.iso".
3. Create a device file for the ISO image to mount on both the nodes.
4. Create the mount point and mount it on both the cluster nodes.
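The commands for these two steps did not survive the page capture; a minimal sketch, assuming the ISO was copied to /var/tmp and /cdrom is used as the mount point:
root@UnixArena:~# lofiadm -a /var/tmp/osc-4_1-ga-repo-full.iso
/dev/lofi/1
root@UnixArena:~# mkdir -p /cdrom
root@UnixArena:~# mount -F hsfs -o ro /dev/lofi/1 /cdrom
Repeat the same on the second node.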
6. Set the correct repo path and rebuild the repository index.
7. List the available publishers. You can see the newly configured cluster repo here.
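A typical sequence for steps 6 and 7 (the publisher name ha-cluster and the repo path /cdrom/repo follow the standard OSC 4.1 repository layout; if you copied the repo to writable storage, rebuild its index first with "pkgrepo rebuild -s <path>"):
root@UnixArena:~# pkg set-publisher -g file:///cdrom/repo ha-cluster
root@UnixArena:~# pkg publisher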
8. Install the Oracle Solaris Cluster packages using the pkg command on both the nodes.
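The install command itself is missing from the capture; with the ha-cluster publisher configured, the full group package would be pulled in along these lines (package name per the OSC 4.1 repository):
root@UnixArena:~# pkg install ha-cluster-full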
Packages to install: 68
Create boot environment: No
Create backup boot environment: Yes
Services to change: 7
PHASE ITEMS
Installing new actions 9506/9506
Updating package state database Done
Updating image state Done
Creating fast lookup database Done
Reading search index Done
Building new search index 952/952
root@UnixArena:~#
PHASE ITEMS
Installing new actions 223/223
10. Add the below path to root's profile to access the cluster commands.
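A one-line sketch, assuming root uses the default profile at /root/.profile:
root@UnixArena:~# echo 'export PATH=$PATH:/usr/cluster/bin' >> /root/.profile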
We have successfully installed Oracle Solaris Cluster 4.1 on both cluster nodes. The installation is as simple as it is for Veritas Cluster. Very soon we will see the configuration part.
When your system is required to handle a huge amount of traffic, you may need to combine two or more physical interfaces into a logical interface. There are two methods: 1. Link Aggregation 2. IPMP. IPMP works in two modes to detect failures. If the system is configured with a default router, then it checks the connectivity with it. Otherwise, it works in multicast mode by checking the connectivity with a nearby node.
Here we will see how to configure probe-based IPMP. The IPMP group can be configured with active-active interfaces or active-standby interfaces. IPMP provides network availability during interface failures. As you know, Solaris 11 has changed a lot from Solaris 10, and configuring IPMP is no exception. In Solaris 11, you need to use the "ipadm" command to configure IPMP. We will also see how to enable transitive probing for Oracle Solaris Cluster at the end of the article.
1. Login to the Solaris 11 host and make sure two or more physical interfaces are available.
2. Here we are going to use net0 and net1 to configure the IPMP group. So make sure no IP exists on net0 & net1. If one exists, remove it using "ipadm delete-addr net0/XXXX".
root@Unixarena1:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
lo0 loopback ok -- --
lo0/v4 static ok -- 127.0.0.1/8
lo0/v6 static ok -- ::1/128
net0 ip ok -- --
net0/pubip static ok -- 192.168.2.32/24
net1 ip down -- --
NAME CLASS/TYPE STATE UNDER ADDR
lo0 loopback ok -- --
lo0/v4 static ok -- 127.0.0.1/8
lo0/v6 static ok -- ::1/128
net0 ip down -- --
net1 ip down -- --
4. Now we are good to start configuring the IPMP. Let me create a new IPMP group.
6. Associate the network interfaces with the IPMP group which we created in step 4. The IP address is taken from the /etc/hosts entry for UnixArena.
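The ipadm commands for steps 4 and 6 were lost in the capture; a minimal sketch, assuming the group name ipmp0 and the data address 192.168.2.32/24 shown in the earlier output:
root@UnixArena:~# ipadm create-ipmp ipmp0
root@UnixArena:~# ipadm add-ipmp -i net0 -i net1 ipmp0
root@UnixArena:~# ipadm create-addr -T static -a 192.168.2.32/24 ipmp0/pubip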
7. For an Active-Active IPMP configuration, no further changes are needed. The default configuration works in the Active-Active model. If the standby property is set to off on the interfaces, it is the active-active model. You can verify the settings using the below commands.
8. For an active-standby setup, you need to modify the interface property. Here my standby interface is net1.
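A sketch of the verification and the standby change (property names per ipadm):
root@UnixArena:~# ipadm show-ifprop -p standby net1
root@UnixArena:~# ipadm set-ifprop -p standby=on -m ip net1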
9. It's time to test the IPMP setup. Disable one interface and check whether the standby interface takes over the load of the primary. Before proceeding to disable, make sure both interfaces are active at the IPMP level.
root@UnixArena:~# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net1 no ipmp0 is----- up disabled ok
net0 yes ipmp0 --mbM-- up disabled ok
root@UnixArena:~#
As per the above output, net0 is the active interface and net1 is the inactive standby (flagged "is"). Neither interface is in a failed state. So it's good to begin the test.
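One way to run the test is to disable the active interface temporarily and watch the group (ipadm disable-if only supports temporary changes, hence -t):
root@UnixArena:~# ipadm disable-if -t net0
root@UnixArena:~# ipmpstat -i
root@UnixArena:~# ipadm enable-if -t net0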
If you are going to use Oracle Solaris Cluster, then you need to enable transitive probing. Here we will see how to enable it.
1. Check the existing transitive-probing value. By default it will be false.
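The transitive-probing switch lives in the IPMP service; checking and enabling it looks like this:
root@UnixArena:~# svccfg -s svc:/network/ipmp listprop config/transitive-probing
root@UnixArena:~# svccfg -s svc:/network/ipmp setprop config/transitive-probing=true
root@UnixArena:~# svcadm refresh svc:/network/ipmp:default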
root@UnixArena:~# ipmpstat -p
TIME INTERFACE PROBE NETRTT RTT RTTAVG TARGET
0.94s net1 t4 0.58ms 0.59ms 0.55ms
2.35s net1 t5 0.51ms 0.52ms 0.55ms
3.76s net1 t6 0.47ms 0.47ms 0.54ms
4.72s net1 t7 0.53ms 0.53ms 0.54ms
6.69s net1 t8 0.50ms 0.50ms 0.53ms
^C
root@UnixArena:~#
As per the above output, you can see that probe-based failure detection is enabled and working.
Thank you for visiting UnixArena. Please leave a comment if you have any doubt on this.
The post Solaris 11- How to Configure IPMP – Probe-Based ? appeared first on UnixArena.
Here we will see how to install Oracle Solaris Cluster 3.3u2 on a Solaris 10 x86 system. I hadn't shown much interest in Solaris Cluster, but I keep getting many requests to write about it. So I took some time, and now I am back with a Solaris Cluster series of articles which will help you to set up the cluster from scratch. Our goal will be making high availability zones using the cluster service. Here we will just see the installation part.
UASOL1:#uname -a
SunOS UASOL1 5.10 Generic_147148-26 i86pc i386 i86pc
UASOL1:#cat /etc/release
Oracle Solaris 10 1/13 s10x_u11wos_24a X86
Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
Assembled 17 January 2013
UASOL1:#
1. Download the cluster package from Oracle and copy it to the Solaris nodes.
UASOL1:#cd Solaris_x86
UASOL1:#ls -lrt
total 27
-rw-r--r-- 1 root root 89 Feb 28 2013 release_info
-rwxr-xr-x 1 root root 10641 Feb 28 2013 installer
drwxr-xr-x 8 root root 8 Feb 28 2013 Product
UASOL1:#./installer -nodisplay
Welcome to Oracle(R) Solaris Cluster; serious software made simple...
Before you begin, refer to the Release Notes and Installation Guide for the
products that you are installing. This documentation is available at http:
//www.oracle.com/technetwork/indexes/documentation/index.html.
You can install any or all of the Services provided by Oracle Solaris
Cluster.
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
4. Just hit enter to continue. Here you will get the component selection.
Installation Type
-----------------
Do you want to install the full set of Oracle Solaris Cluster Products and
Services? (Yes/No) [Yes] {"<" goes back, "!" exits} No
Based on product dependencies for your selections, the installer will install:
** * Java DB Client
** * Java DB Server
*[X] 6. Oracle Solaris Cluster HA/Scalable for Java System Web Server
*[X] 7. Oracle Solaris Cluster HA for Instant Messaging
*[X] 8. Oracle Solaris Cluster HA for Java System Calendar Server
*[X] 9. Oracle Solaris Cluster HA for Apache Tomcat
*[X] 10. Oracle Solaris Cluster HA for DHCP
*[X] 11. Oracle Solaris Cluster HA for DNS
*[X] 12. Oracle Solaris Cluster HA for MySQL
*[X] 13. Oracle Solaris Cluster HA for Sun N1 Service Provisioning System
*[X] 14. Oracle Solaris Cluster HA for NFS
*[X] 15. Oracle Solaris Cluster HA for Oracle
*[X] 16. Oracle Solaris Cluster HA for Samba
*[X] 17. Oracle Solaris Cluster HA for Sun N1 Grid Engine
*[X] 18. Oracle Solaris Cluster HA for Solaris Containers
*[X] 19. Oracle Solaris Cluster Support for Oracle RAC
*[X] 20. Oracle Solaris Cluster HA for Apache
*[X] 21. Oracle Solaris Cluster HA for SAP liveCache
*[X] 22. Oracle Solaris Cluster HA for WebSphere Message Broker
*[X] 23. Oracle Solaris Cluster HA for WebSphere MQ
*[X] 24. Oracle Solaris Cluster HA for SAPDB
*[X] 25. Oracle Solaris Cluster HA for SAP Web Application Server
*[X] 26. Oracle Solaris Cluster HA for SAP
*[X] 27. Oracle Solaris Cluster HA for Kerberos
*[X] 28. Oracle Solaris Cluster HA for BEA WebLogic Server
*[X] 29. Oracle Solaris Cluster HA for PostgreSQL
*[X] 30. Oracle Solaris Cluster HA for Oracle 9iAS
*[X] 31. Oracle Solaris Cluster HA for Sybase ASE
*[X] 32. Oracle Solaris Cluster HA for Informix
*[X] 33. Oracle Solaris Cluster HA for TimesTen
*[X] 34. Oracle Solaris Cluster HA for Oracle External Proxy
*[X] 35. Oracle Solaris Cluster HA for Oracle Web Tier Agent
*[X] 36. Oracle Solaris Cluster HA for SAP NetWeaver
Install multilingual package(s) for all selected components [Yes] {"<" goes
back, "!" exits}: No
1. Yes
2. No
9. Do not configure the cluster now. Just proceed with the Solaris Cluster installation.
What would you like to do [1] {"<" goes back, "!" exits}? 1
10. Once the installation is completed, you can see the Solaris Cluster install summary.
Installation Complete
Software installation has completed successfully. You can view the installation
summary and log by using the choices below. Summary and log files are available
in /var/sadm/install/logs/.
Oracle Solaris Cluster Agents 3.3u2 : Installed, Configure After Install
Configuration Data
The configuration log is saved in: /var/sadm/install/logs/JavaES_Install_log.1904654032
Enter 1 to view installation summary and Enter 2 to view installation logs
[1] {"!" exits} !
In order to notify you of potential updates, we need to confirm an internet connection. Do you want to proceed [Y/N] : N
UASOL1:#
The post How to install Solaris Cluster 3.3 on Solaris 10 ? appeared first on UnixArena.
Once you have installed Solaris Cluster on the Solaris 10 nodes, you can start configuring the cluster according to the requirement. If you are planning a two-node cluster, then you need two Solaris 10 hosts with 3 NIC cards and shared storage. You have to provide two dedicated NICs for the cluster heartbeat. You also need to set up password-less root SSH authentication between the two Solaris nodes to configure the cluster. Here we will see how we can configure a two-node Solaris cluster.
Solaris 10 Hosts :
UASOL1 – 192.168.2.90
UASOL2 – 192.168.2.91
Update both the nodes /etc/hosts file to resolve the host name. On UASOL1,
UASOL1:#cat /etc/hosts
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
192.168.2.90 UASOL1 loghost
192.168.2.91 UASOL2
On UASOL2,
UASOL2:#cat /etc/hosts
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
192.168.2.90 UASOL1
192.168.2.91 UASOL2 loghost
1. Login to one of the Solaris 10 node where you need to configure Solaris cluster.
2. Navigate to the /usr/cluster/bin directory and execute scinstall. Select 1 to create a new cluster.
Option: 1
Option: 1
4. We have already set up password-less SSH authentication for root between the two nodes. So we can continue.
You must use the Oracle Solaris Cluster installation media to install
the Oracle Solaris Cluster framework software on each machine in the
new cluster before you select this option.
This tool supports two modes of operation, Typical mode and Custom
mode. For most clusters, you can use Typical mode. However, you might
need to select the Custom mode option if not all of the Typical mode
defaults can be applied to your cluster.
For more information about the differences between Typical and Custom
modes, select the Help option from the menu.
1) Typical
2) Custom
?) Help
q) Return to the Main Menu
Option [1]: 2
Each cluster has a name assigned to it. The name can be made up of any
characters other than whitespace. Each cluster name should be unique
within the namespace of your enterprise.
7. Enter the hostnames of the Solaris 10 nodes which are going to participate in this cluster.
List the names of the other nodes planned for the initial cluster
configuration. List one node name per line. When finished, type
Control-D:
Once the first node establishes itself as a single node cluster, other
nodes attempting to add themselves to the cluster configuration must
be found on the list of nodes you just provided. You can modify this
list by using claccess(1CL) or other tools once the cluster has been
established.
9. We have two dedicated physical NICs on both the Solaris nodes.
Should this cluster use at least two private networks (yes/no) [yes]?
Transport adapters are the adapters that attach to the private cluster
interconnect.
1) e1000g1
2) e1000g2
3) Other
Option: 1
1) e1000g1
2) e1000g2
3) Other
Option: 2
13. Let the cluster choose the network and subnet for the Solaris Cluster transport.
Most of the time, leave fencing turned on. However, turn off fencing
when at least one of the following conditions is true: 1) Your shared
storage devices, such as Serial Advanced Technology Attachment (SATA)
disks, do not support SCSI; 2) You want to allow systems outside your
cluster to access storage devices attached to your cluster; 3) Oracle
Corporation has not qualified the SCSI persistent group reservation
(PGR) support for your shared storage devices.
If you choose to turn off global fencing now, after your cluster
starts you can still use the cluster(1CL) command to turn on global
fencing.
You can use the "clsetup" command to change the value of the
resource_security property after the cluster is running.
You have chosen to turn on the global fencing. If your shared storage
devices do not support SCSI, such as Serial Advanced Technology
Attachment (SATA) disks, or if your shared disks do not support
SCSI-2, you must disable this feature.
Do you want to disable automatic quorum device selection (yes/no) [no]? yes
16. Oracle Solaris Cluster 3.3u2 automatically creates a global filesystem on both the systems.
Each node in the cluster must have a local file system mounted on /global/.devices/node@<nodeID> before it can successfully participate as a cluster member. Since the "nodeID" is not assigned until scinstall is run, scinstall will set this up for you.
Alternatively, you can use a loopback file (lofi), with a new file system, and mount it on /global/.devices/node@<nodeID>.
If the lofi method is used, scinstall creates a new 100 MB file system
from a lofi device by using the file /.globaldevices. The lofi method
is typically preferred, since it does not require the allocation of a
dedicated disk slice.
17. Proceed with cluster creation. Do not interrupt cluster creation due to cluster check errors.
18. Once the cluster configuration is completed, scinstall reboots the other nodes and then reboots itself.
Cluster Creation
Rebooting ...
19. Once the nodes are rebooted, you can see that both the nodes have booted in cluster mode; check the status using the below command.
UASOL1:#clnode status
UASOL1:#
21. You can also see that Solaris Cluster has plumbed the new IPs on both hosts.
UASOL1:#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
inet 192.168.2.90 netmask ffffff00 broadcast 192.168.2.255
groupname sc_ipmp0
ether 0:c:29:4f:bc:b8
e1000g1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
inet 172.16.0.66 netmask ffffffc0 broadcast 172.16.0.127
ether 0:c:29:4f:bc:c2
e1000g2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
inet 172.16.0.130 netmask ffffffc0 broadcast 172.16.0.191
ether 0:c:29:4f:bc:cc
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 172.16.2.2 netmask ffffff00 broadcast 172.16.2.255
ether 0:0:0:0:0:2
UASOL1:#
22. As of now, we haven't configured the quorum devices, but you can see the voting status using the below command.
UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
1 1 1
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
--------- ------- -------- ------
UASOL2 1 1 Online
UASOL1 0 0 Online
UASOL1:#
We have successfully configured a two-node Oracle Solaris cluster on Solaris 10 update 11 x86 systems.
What’s Next ?
The post How to configure Solaris two node cluster on Solaris 10 ? appeared first on UnixArena.
Once you have configured the Solaris cluster, you have to add a quorum device for the additional voting process. Without quorum devices, the cluster will be in installed mode. You can verify the status using the "cluster show -t global | grep installmode" command. Each node in a configured cluster has one (1) quorum vote, and the cluster requires a minimum of two votes to run. If any one node goes down, the cluster won't get 2 votes and it will panic the second node as well, to avoid data corruption on the shared storage. To avoid this situation, we can make one small SAN disk a quorum device which can provide one vote. That way, if one node fails, the system can still get two votes at all times on a two-node cluster.
Once you have configured the two-node Solaris cluster, you can start configuring the quorum device.
UASOL1:#clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
UASOL2 Online
UASOL1 Online
UASOL1:#
UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
1 1 1
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
--------- ------- -------- ------
UASOL2 1 1 Online
UASOL1 0 0 Online
UASOL1:#
4. Make sure a small-sized LUN is assigned to both the cluster nodes from the SAN.
UASOL1:#echo |format
Searching for disks...done
UASOL1:#format c1t1d0
selecting c1t1d0: quorum
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> fdisk
The default partition for the disk is:
Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> volname quorum
format> quit
UASOL1:#
UASOL2:#echo |format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 VMware,-VMware Virtual -1.0 cyl 1824 alt 2 hd 255 sec 63
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0 VMware,-VMware Virtual -1.0 cyl 508 alt 2 hd 64 sec 32 quorum
/pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number): Specify disk (enter its number):
UASOL2:#
UASOL2:#cldev populate
Configuring DID devices
did instance 4 created.
did subpath UASOL2:/dev/rdsk/c1t1d0 created for instance 4.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
UASOL2:#
UASOL1:#cldev populate
Configuring DID devices
did instance 4 created.
did subpath UASOL1:/dev/rdsk/c1t1d0 created for instance 4.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
UASOL1:#
UASOL1:#cldevice list -v
DID Device Full Device Path
---------- ----------------
d1 UASOL2:/dev/rdsk/c1t0d0
d1 UASOL1:/dev/rdsk/c1t0d0
d4 UASOL2:/dev/rdsk/c1t1d0
d4 UASOL1:/dev/rdsk/c1t1d0
UASOL1:#cldev show d4
=== DID Device Instances ===
DID Device Name: /dev/did/rdsk/d4
Full Device Path: UASOL1:/dev/rdsk/c1t1d0
Full Device Path: UASOL2:/dev/rdsk/c1t1d0
Replication: none
default_fencing: global
UASOL1:#
UASOL1:#clquorum add d4
UASOL1:#
UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
2 3 3
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
We have successfully configured the quorum on the two-node Solaris Cluster 3.3u2.
Just reboot any one of the nodes and you can watch the voting status.
UASOL2:#reboot
updating /platform/i86pc/boot_archive
Connection to UASOL2 closed by remote host.
Connection to UASOL2 closed.
UASOL1:#
UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
2 2 3
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
--------- ------- -------- ------
UASOL2 0 1 Offline
UASOL1 1 1 Online
--- Quorum Votes by Device (current status) ---
Device Name Present Possible Status
----------- ------- -------- ------
d4 1 1 Online
UASOL1:#
We can see that UASOL1 was not panicked by the cluster. So the quorum device worked well.
If you don't have real SAN storage for a shared LUN, you can use Openfiler.
What's Next? We will configure a resource group for the failover local zone and perform the test.
The post How to configure Quorum devices on Solaris cluster ? appeared first on UnixArena.
This article will help you to create a resource group on Solaris Cluster and add a couple of resources to it. A resource group is similar to a service group in Veritas Cluster, which bundles the resources into one logical unit. Once you have configured the two-node Solaris cluster and added the quorum devices, you can create a resource group. Once we create the resource group, we will add a zpool storage resource and perform the failover test.
1. Login to one of the cluster nodes as root and check the cluster node status.
UASOL1:#clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
UASOL2 Online
UASOL1 Online
UASOL1:#
2. Check the heartbeat link status of the Solaris cluster.
UASOL1:#clinterconnect status
=== Cluster Transport Paths ===
Endpoint1 Endpoint2 Status
--------- --------- ------
UASOL2:e1000g2 UASOL1:e1000g2 Path online
UASOL2:e1000g1 UASOL1:e1000g1 Path online
UASOL1:#
UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
4. In the above command output, everything seems to be fine. So let me create a resource group.
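The creation command itself is missing from the capture; it would be along these lines (group name UA-HA-ZRG per the later status output):
UASOL1:#clrg create -n UASOL1,UASOL2 UA-HA-ZRG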
UASOL1:#clrg status
1. Check the cluster device instances. Here d5 and d6 are from the SAN storage; d5 is already used for the quorum setup.
UASOL1:#cldevice list -v
DID Device Full Device Path
---------- ----------------
d1 UASOL2:/dev/rdsk/c1t0d0
d1 UASOL1:/dev/rdsk/c1t0d0
d2 UASOL1:/dev/rdsk/c1t2d0
d3 UASOL2:/dev/rdsk/c1t2d0
d4 UASOL2:/dev/rdsk/c1t1d0
d4 UASOL1:/dev/rdsk/c1t1d0
d5 UASOL2:/dev/rdsk/c2t16d0
d5 UASOL1:/dev/rdsk/c2t14d0
d6 UASOL2:/dev/rdsk/c2t15d0
d6 UASOL1:/dev/rdsk/c2t13d0
UASOL1:#
UASOL1:#cldevice status
=== Cluster DID Devices ===
Device Instance Node Status
--------------- ---- ------
/dev/did/rdsk/d1 UASOL1 Ok
UASOL2 Ok
/dev/did/rdsk/d2 UASOL1 Ok
/dev/did/rdsk/d3 UASOL2 Ok
/dev/did/rdsk/d4 UASOL1 Ok
UASOL2 Ok
/dev/did/rdsk/d5 UASOL1 Ok
UASOL2 Ok
/dev/did/rdsk/d6 UASOL1 Ok
UASOL2 Ok
UASOL1:#
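The zpool creation step did not survive the capture; a sketch, assuming the 3GB shared LUN d6 (seen as c2t13d0 on UASOL1) is used for the pool UAZPOOL shown later:
UASOL1:#zpool create UAZPOOL c2t13d0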
4. Create the new cluster resource for the zpool which we created in the previous step.
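A sketch of the resource creation, using the SUNW.HAStoragePlus resource type and the resource name CLUAZPOOL per the status output below:
UASOL1:#clresourcetype register SUNW.HAStoragePlus
UASOL1:#clresource create -g UA-HA-ZRG -t SUNW.HAStoragePlus -p Zpools=UAZPOOL CLUAZPOOL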
UASOL1:#clresource status
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
CLUAZPOOL UASOL2 Offline Offline
UASOL1 Offline Offline
UASOL1:#
6. Bring the resource group online and check the resource status.
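Likely along these lines:
UASOL1:#clrg online -eM UA-HA-ZRG
UASOL1:#clresource status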
UASOL1:#zpool list
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
UAZPOOL 3.05G 132K 3.05G 0% ONLINE /
rpool 13.9G 9.32G 4.56G 67% ONLINE -
UASOL1:#
8. To test the resource group, switch the resource group to the other node.
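The switch command would be:
UASOL1:#clrg switch -n UASOL2 UA-HA-ZRG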
9. Now you can see that the cluster zpool has moved to the UASOL2 node.
UASOL1:#zpool list
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
rpool 13.9G 9.32G 4.56G 67% ONLINE -
UASOL1:#clrg status
=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
UA-HA-ZRG UASOL2 No Online
UASOL1 No Offline
UASOL1:#ssh UASOL2 zpool list
Password:
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
UAZPOOL 3.05G 132K 3.05G 0% ONLINE /
rpool 13.9G 9.15G 4.73G 65% ONLINE -
UASOL1:#
So automatic failover should work for the resource group which we have just created. In the next article, we will see how to add the local zone to the cluster.
The post How to create Resource Group on Solaris cluster ? appeared first on UnixArena.
In this article, we will see how we can add a local zone as a resource in Solaris Cluster to make the zone highly available. In the past, we have seen a similar setup in Veritas Cluster. By configuring the zone as a resource, if any one node fails, the zone will automatically fly to the other node with minimal downtime (Flying zones on Solaris). Once you have configured the below things, we can proceed with bringing the local zone under Solaris Cluster.
Unlike Veritas Cluster, the local zone IP will be managed from the global zone as a cluster resource. So let me create an IP resource before proceeding with the local zone creation.
1. Login to the Solaris cluster nodes and add the local zone IP & hostname information in the /etc/hosts file.
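The resource creation commands for the next steps are missing from the capture; a sketch, assuming the existing UA-HA-ZRG group is reused and using the resource name CLUAHAZ1 seen in the status output below:
UASOL1:#clreslogicalhostname create -g UA-HA-ZRG -h UAHAZ1 CLUAHAZ1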
UASOL1:#clresource status
UASOL1:#
UASOL1:#ping UAHAZ1
UAHAZ1 is alive
UASOL1:#
5. You can see that the local zone IP has been plumbed by Solaris Cluster.
UASOL2:#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
inet 192.168.2.91 netmask ffffff00 broadcast 192.168.2.255
groupname sc_ipmp0
ether 0:c:29:e:f8:ce
e1000g0:1: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
inet 192.168.2.94 netmask ffffff00 broadcast 192.168.2.255
e1000g1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
inet 172.16.0.65 netmask ffffffc0 broadcast 172.16.0.127
ether 0:c:29:e:f8:d8
e1000g2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
inet 172.16.0.129 netmask ffffffc0 broadcast 172.16.0.191
ether 0:c:29:e:f8:e2
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 172.16.2.1 netmask ffffff00 broadcast 172.16.2.255
ether 0:0:0:0:0:1
UASOL2:#
UASOL1:#clresource status
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
CLUAHAZ1 UASOL2 Offline Offline - LogicalHostname offline.
UASOL1 Online Online - LogicalHostname online.
We have successfully created the logicalhostname cluster resource and tested it on both the nodes.
7. Create a local zone on any one of the cluster nodes and copy the /etc/zones/global & /etc/zones/zonename.xml files to the other node to make the zone configuration available on both the cluster nodes. Create the local zone without adding the network part (Ex: add net).
UASOL1:#zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- UAHAZ1 configured /UAZPOOL/UAHAZ1 native shared
UASOL1:#
You can refer to this article for creating the local zone, but do not configure the network.
8. Halt the local zone on UASOL1 and failover the resource group to UASOL2 to test the zone on it.
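A sketch of the test sequence; since only the zone configuration was copied to UASOL2, the zone may need a force attach there before the first boot:
UASOL1:#zoneadm -z UAHAZ1 halt
UASOL1:#clrg switch -n UASOL2 UA-HA-ZRG
UASOL2:#zoneadm -z UAHAZ1 attach -F
UASOL2:#zoneadm -z UAHAZ1 boot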
10. Login to the local zone and perform a health check. If everything seems fine, then just halt the local zone.
UASOL2:#zlogin UAHAZ1
[Connected to zone 'UAHAZ1' pts/4]
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2# uptime
12:37am up 1 user, load average: 0.50, 0.13, 0.07
bash-3.2# exit
# ^D
[Connection to zone 'UAHAZ1' pts/4 closed]
UASOL2:#zoneadm -z UAHAZ1 halt
UASOL2:#
Click Page 2 to see how to create the resource for the local zone and add it to the resource group.
The post How to configure High Availability zone on Solaris cluster ? appeared first on UnixArena.
It is very difficult to get the opportunity to work on a cluster environment in big companies due to security concerns. Even if you get the opportunity, you can't play much with it, since most cluster environments are critical to the client. To learn any operating system cluster, you have to build it on your own and configure the resource groups and resources yourself. You will be lucky if your organization provides a LAB environment with the necessary hardware for this kind of setup. Due to hardware cost, many companies do not provide such a LAB setup. So how do you become a master of clustering? Is it possible to set up a cluster environment on a single desktop/laptop? Yes. Using VMware Workstation, you can set up a cluster. In the past, we have seen this for Veritas Cluster. Here we will see how to set up a two-node Solaris cluster on Solaris 10 using VMware Workstation.
Desktop/Laptop Configuration:
1. On your desktop, install the VMware Workstation software and create two virtual machines with the below-mentioned configuration.
I have allocated 4.3GB to each VM, but assigning 1 GB is enough for each virtual machine. Your virtual machines must have a minimum of three network adapters: one NIC for public and two NICs for heartbeat.
4. Enable Windows sharing on both the virtual machines for copying the Solaris cluster software from your laptop to the virtual machines, and copy Solaris Cluster 3.3u2 to /var/tmp on both the nodes. Otherwise, use WinSCP to copy it.
8. To proceed further with the Solaris cluster, you require shared storage. So create a new virtual machine and install Openfiler on it.
9. Provision two LUNs to the iSCSI target on the Openfiler web interface (a 512MB LUN for quorum and a 3GB LUN for the shared zpool).
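On the Solaris side, the LUNs can then be discovered with the native iSCSI initiator (the Openfiler address 192.168.2.50 is a placeholder):
UASOL1:#iscsiadm add discovery-address 192.168.2.50:3260
UASOL1:#iscsiadm modify discovery --sendtargets enable
UASOL1:#devfsadm -i iscsi
UASOL1:#echo | format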
12. Create the Solaris cluster resource group and add the ZFS storage pool as a resource.
13. Finally, create the local zone and add it to the Solaris cluster as a failover (high availability) local zone.
By performing the above steps, you can definitely set up a two-node Solaris cluster on a desktop/laptop using VMware Workstation.
Good Luck.
The post How to setup oracle Solaris cluster on VMware workstation ? appeared first on UnixArena.
This article explains the zone cluster concept. A zone cluster is created on Oracle Solaris hosts using Sun Cluster, aka Oracle Solaris Cluster. In most deployments, we might have seen failover zones (HA zones) using Sun Cluster or Veritas Cluster (VCS) on Solaris. Comparatively, zone clusters are less common in the industry, but some organizations use them very effectively. You must establish the traditional cluster between the physical nodes in order to configure a zone cluster. Since cluster applications always run in a zone, the cluster node is always a zone.
The typical 4-node Sun cluster looks like below (prior to configuring the zone cluster).
4 Node Cluster
The above diagram shows that two zone clusters have been configured on the global cluster.
Here you can see that both the test and development systems are in different zone clusters but in the same global cluster.
In this cluster model, all three tiers are in the same global cluster but in different zone clusters.
Cost containment
Administrative workload reduction
Good to know:
Distribution of nodes: You can't host multiple zones which are part of the same cluster on the same host. Zones must be distributed across the physical nodes.
Node creation: You must create at least one zone cluster node at the time that you create the zone cluster. The name of the zone-cluster node must be unique within the
zone cluster. The infrastructure automatically creates an underlying non-global zone on each host that supports the zone cluster. Each non-global zone is given the same
zone name, which is derived from, and identical to, the name that you assign to the zone cluster when you create the cluster. For example, if you create a zone cluster that
is named “uainfrazone”, the corresponding non-global zone name on each host that supports the zone cluster is also “uainfrazone”.
Cluster name: Each zone-cluster name must be unique throughout the cluster of machines that host the global cluster. The zone-cluster name cannot also be used by a
non-global zone elsewhere in the cluster of machines, nor can the zone-cluster name be the same as that of a global-cluster node. You cannot use “all” or “global” as a
zone-cluster name, because these are reserved names.
Public-network IP addresses: You can optionally assign a specific public-network IP address to each zone-cluster node.
Private hostnames: During creation of the zone cluster, a private hostname is automatically created for each node of the zone cluster, in the same way that hostnames
are created in global clusters.
IP type: A zone cluster is created with the shared IP type. The exclusive IP type is not supported for zone clusters.
Hope this article is informative. In the next article, we will see how to configure a zone cluster on an existing two-node Sun cluster (global cluster).
The post Sun Cluster – Zone Cluster on Oracle Solaris – Overview appeared first on UnixArena.
This article will walk you through the zone cluster deployment on Oracle Solaris. The zone cluster consists of a set of zones, where each zone represents a virtual node.
Each zone of a zone cluster is configured on a separate machine. As such, the upper bound on the number of virtual nodes in a zone cluster is limited to the number of
machines in the global cluster. The zone cluster design introduces a new brand of zone, called the cluster brand. The cluster brand is based on the original native brand
type, and adds enhancements for clustering. The BrandZ framework provides numerous hooks where other software can take action appropriate for the brand type of zone.
For example, there is a hook for software to be called during the zone boot, and zone clusters take advantage of this hook to inform the cluster software about the boot of
the virtual node. Because zone clusters use the BrandZ framework, at a minimum Oracle Solaris 10 5/08 is required.
The system maintains membership information for zone clusters. Each machine hosts a component, called the Zone Cluster Membership Monitor (ZCMM), that monitors
the status of all cluster brand zones on that machine. The ZCMM knows which zones belong to which zone clusters. Zone clusters are considerably simpler than global
clusters. For example, there are no quorum devices in a zone cluster, as a quorum device is not needed.
clzonecluster is a utility to create, modify, delete and manage zone clusters in a Sun Cluster environment.
Note:
Sun Cluster is the product; the zone cluster is one of the cluster types within Sun Cluster.
Environment:
Operating System : Oracle Solaris 10 u9
Cluster : Sun Cluster 3.3 (aka Oracle Solaris cluster 3.3)
Prerequisites :
Two Oracle Solaris 10 u9 nodes or above
Sun Cluster 3.3 package
Install Oracle Solaris cluster 3.3 (Aka Sun Cluster) on Solaris 10 nodes.
Configure two node sun cluster 3.3 on Solaris 10
UASOL2:#clnode status
=== Cluster Nodes ===
3. You must keep the zone path ready for the local zone installation on both the cluster nodes. The zone path must be identical on both the nodes. On node UASOL1,
On Node UASOL2,
Note:
• By default, sparse root zones are created. To create whole root zones, add the -b option to the create command.
• Specifying an IP address and NIC for each zone cluster node is optional.
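The configure session itself did not survive the capture; a minimal sketch of creating the zone cluster with its first node, using the values from the listing further below (the zonepath /zones/uainfrazone is an assumption; the sysid scope with the root password, locale, etc. can be added the same way with "add sysid"):
UASOL1:#clzonecluster configure uainfrazone
clzc:uainfrazone> create
clzc:uainfrazone> set zonepath=/zones/uainfrazone
clzc:uainfrazone> add node
clzc:uainfrazone:node> set physical-host=UASOL1
clzc:uainfrazone:node> set hostname=uainfrazone1
clzc:uainfrazone:node> add net
clzc:uainfrazone:node:net> set address=192.168.2.101
clzc:uainfrazone:node:net> set physical=e1000g0
clzc:uainfrazone:node:net> end
clzc:uainfrazone:node> end
clzc:uainfrazone> verify
clzc:uainfrazone> commit
clzc:uainfrazone> exit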
Here, we have just configured one zone on UASOL1. Clustering makes sense when you configure two or more nodes. So let me create one more zone on UASOL2 in the same zone cluster; a sketch follows.
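A sketch of adding the UASOL2 node to the same zone cluster (values per the configuration listing below):
UASOL1:#clzonecluster configure uainfrazone
clzc:uainfrazone> add node
clzc:uainfrazone:node> set physical-host=UASOL2
clzc:uainfrazone:node> set hostname=uainfrazone2
clzc:uainfrazone:node> add net
clzc:uainfrazone:node:net> set address=192.168.2.103
clzc:uainfrazone:node:net> set physical=e1000g0
clzc:uainfrazone:node:net> end
clzc:uainfrazone:node> end
clzc:uainfrazone> commit
clzc:uainfrazone> exit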
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
sysid:
root_password: H/80/NT4F2H7g
name_service: NONE
nfs4_domain: dynamic
security_policy: NONE
system_locale: C
terminal: xterm
timezone: Asia/Calcutta
node:
physical-host: UASOL1
hostname: uainfrazone1
net:
address: 192.168.2.101
physical: e1000g0
defrouter not specified
node:
physical-host: UASOL2
hostname: uainfrazone2
net:
address: 192.168.2.103
physical: e1000g0
defrouter not specified
clzc:uainfrazone> exit
6. Check the zone cluster status. At this stage, the zones are in the configured state.
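The status command, lost in the capture, is simply:
UASOL1:#clzonecluster status uainfrazone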
UASOL2:#
Here you can see that uainfrazone is created and installed. You should be able to see the same on UASOL1 as well.
Note: There is no difference whether you run a command from UASOL1 or UASOL2, since both are in the cluster.
8. Bring up the zones using clzonecluster. (You should not use the zoneadm command to boot the zones.)
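A sketch (the install precedes the boot; both operate cluster-wide from any one node):
UASOL1:#clzonecluster install uainfrazone
UASOL1:#clzonecluster boot uainfrazone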
On UASOL2,
10. The zones will reboot automatically for sysconfig. You can see this when you access the zone's console.
UASOL1:#zlogin -C uainfrazone
[Connected to zone 'uainfrazone' console]
Creating new rsa public/private host key pair
Creating new dsa public/private host key pair
Configuring network interface addresses: clprivnet0.
UASOL2:#clzonecluster status
=== Zone Clusters ===
We have successfully configured the two-node zone cluster. What's next? You should login to one of the zones and configure the resource group and resources. Just login to any one of the local zones and check the cluster status.
UASOL2:#zlogin uainfrazone
[Connected to zone 'uainfrazone' pts/2]
Last login: Mon Apr 11 01:58:20 on pts/2
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2# export PATH=/usr/cluster/bin:$PATH
bash-3.2# clnode status
=== Cluster Nodes ===
Similar to this, you can create any number of zone clusters under the global cluster. These zone clusters use the hosts' private network and other required resources. In the next article, we will see how to configure the resource group in the zone cluster.
The post Sun Cluster – How to Configure Zone Cluster on Solaris ? appeared first on UnixArena.
This article will walk you through how to configure a resource group in a zone cluster. Unlike the traditional cluster, the resource group and cluster resources should be created inside the non-global zone. The required physical or logical resources need to be pinned from the global zone using the "clzonecluster" or "clzc" command. In this article, we will configure an HA filesystem and an IP resource on one of the zone clusters which we created earlier. In addition, you can also configure a DB or application resource for HA.
Global Cluster:
UASOL2:#clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
UASOL2 Online
UASOL1 Online
Zone Cluster :
Login to one of the zones and check the cluster status. (Extend the command search path to "/usr/cluster/bin".)
UASOL2:#zlogin uainfrazone
[Connected to zone 'uainfrazone' pts/3]
Last login: Mon Apr 11 02:00:17 on pts/2
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2# export PATH=/usr/cluster/bin:$PATH
bash-3.2# clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
uainfrazone1 Online
uainfrazone2 Online
bash-3.2#
Make sure that both the hostnames are updated in each node's "/etc/inet/hosts" file.
3. Login to one of the global cluster nodes and add the IP details to the zone cluster. (The IP which needs to be highly available.)
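A sketch, assuming the HA IP is the 192.168.2.102 address seen later in the article:
UASOL1:#clzc configure uainfrazone
clzc:uainfrazone> add net
clzc:uainfrazone:net> set address=192.168.2.102
clzc:uainfrazone:net> end
clzc:uainfrazone> commit
clzc:uainfrazone> exit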
4. Create the ZFS pool on the shared SAN LUN so that the zpool can be exported and imported across the cluster nodes. Just manually export the zpool on UASOL2 and try to import it on UASOL1.
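A sketch, with c2t17d0 as a placeholder for the shared LUN:
UASOL2:#zpool create oradbp1 c2t17d0
UASOL2:#zpool export oradbp1
UASOL1:#zpool import oradbp1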
5. On one of the global cluster nodes, invoke "clzc" to add the zpool.
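The dataset is pinned to the zone cluster like this (the zpool should be exported from the global zone first):
UASOL1:#clzc configure uainfrazone
clzc:uainfrazone> add dataset
clzc:uainfrazone:dataset> set name=oradbp1
clzc:uainfrazone:dataset> end
clzc:uainfrazone> commit
clzc:uainfrazone> exit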
We have successfully added the IP address and dataset to the zone cluster configuration. At this point, you can use these resources inside the zone cluster to configure the cluster resources.
2. On one of the zone cluster nodes, create the cluster resource group with the name "oradb-rg".
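The command, lost in the capture, would be along these lines:
bash-3.2# clrg create -n uainfrazone1,uainfrazone2 oradb-rg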
If you want to create the resource group for the "uainfrazone" zone cluster from the global zone, you can use the following command (with -Z "zone-cluster name").
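A sketch:
UASOL1:#clrg create -Z uainfrazone oradb-rg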
4. Create the ZFS resource for the zpool oradbp1 (which we created and assigned to this zone cluster in the first section of this document).
You must register the ZFS (HAStoragePlus) resource type prior to adding the resource to the cluster.
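A sketch of the registration and resource creation; the resource names oradb-hasp-rs and oradb-ip-rs, and the logical hostname oradb-lh, are hypothetical placeholders. The group is then brought online:
bash-3.2# clresourcetype register SUNW.HAStoragePlus
bash-3.2# clresource create -g oradb-rg -t SUNW.HAStoragePlus -p Zpools=oradbp1 oradb-hasp-rs
bash-3.2# clreslogicalhostname create -g oradb-rg -h oradb-lh oradb-ip-rs
bash-3.2# clrg online -eM oradb-rg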
bash-3.2#
bash-3.2# uname -a
SunOS uainfrazone2 5.10 Generic_147148-26 i86pc i386 i86pc
bash-3.2#
You can see that the ZFS dataset "oradbp1" and the IP "192.168.2.102" are up on uainfrazone1.
7. Switch the resource group to uainfrazone2 and check the resource status.
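Likely along these lines:
bash-3.2# clrg switch -n uainfrazone2 oradb-rg
bash-3.2# clresource status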
Verify the result at the OS level. Login to uainfrazone2 and check the following to confirm the switchover.
bash-3.2# ifconfig -a
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone uainfrazone
inet 127.0.0.1 netmask ff000000
e1000g0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
inet 192.168.2.91 netmask ffffff00 broadcast 192.168.2.255
groupname sc_ipmp0
e1000g0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone uainfrazone
inet 192.168.2.103 netmask ffffff00 broadcast 192.168.2.255
e1000g0:2: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
zone uainfrazone
inet 192.168.2.102 netmask ffffff00 broadcast 192.168.2.255
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 172.16.2.1 netmask ffffff00 broadcast 172.16.2.255
clprivnet0:3: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
zone uainfrazone
inet 172.16.3.65 netmask ffffffc0 broadcast 172.16.3.127
bash-3.2# df -h /oradbp1/
Filesystem size used avail capacity Mounted on
oradbp1 2.9G 31K 2.9G 1% /oradbp1
bash-3.2# zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradbp1 86.5K 2.91G 31K /oradbp1
bash-3.2#
We have successfully configured the resource group and made ZFS and an IP highly available (HA) on Oracle Solaris zones via the zone cluster concept. Hope this article is informative. In the next article, we will see how to add/remove/delete nodes from the zone cluster.
The post Sun Cluster – Configuring Resource Group in Zone Cluster appeared first on UnixArena.
This article will talk about managing the zone cluster on Oracle Solaris. The clzonecluster command supports all zone cluster administrative activity, from creation through modification and control to final destruction. The clzonecluster command supports a single point of administration, which means that the command can be executed from any node and operates across the entire cluster. The clzonecluster command builds upon the Oracle Solaris zonecfg and zoneadm commands and adds support for cluster features. We will see how to add/remove cluster nodes, check the resource status, and list the resources from the global zone.
Each zone cluster has its own notion of membership. The system maintains membership information for zone clusters. Each machine hosts a component, called the Zone Cluster Membership Monitor (ZCMM), that monitors the status of all cluster brand zones on that machine. The ZCMM knows which zones belong to which zone clusters. Naturally, a zone of a zone cluster can only become operational after the global zone on the hosting machine becomes operational. A zone of a zone cluster will not boot when the global zone is not booted in cluster mode. A zone of a zone cluster can be configured to automatically boot after the machine boots, or the administrator can manually control when the zone boots. A zone of a zone cluster can fail, or an administrator can manually halt or reboot a zone. All of these events result in the zone cluster automatically updating its membership.
UASOL1:#clzc status -v
=== Zone Clusters ===
UASOL1:#
To check all the zone clusters' resource group status from the global zone, use the command below.
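Likely along these lines (-Z all covers every zone cluster):
UASOL1:#clrg status -Z all
UASOL1:#clresource status -Z all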
UASOL1:#
UASOL1:#
3. Would you like to reboot the zone cluster? Use the following command.
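For example:
UASOL1:#clzonecluster reboot uainfrazone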
uainfrazone UASOL1 uainfrazone1 Online Running
UASOL2 uainfrazone2 Online Running
UASOL1:#
2. Here the zone cluster is already operational and running. In order to add additional nodes to this cluster, we need to add the zone configuration to the zone cluster. (clzc & clzonecluster are identical commands; you can use either of them.) A sketch follows.
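A sketch, using the oraweb zone cluster and the node names from the status output below (per the earlier note, specifying an IP and NIC for the node is optional and is omitted here):
UASOL1:#clzc configure oraweb
clzc:oraweb> add node
clzc:oraweb:node> set physical-host=UASOL2
clzc:oraweb:node> set hostname=oraweb2
clzc:oraweb:node> end
clzc:oraweb> commit
clzc:oraweb> exit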
UASOL1:#
UASOL1:#
UASOL1:#
The zone status might show as "offline"; it will become online once the sys-config is done (via an automatic reboot).
UASOL1:#
Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
oraweb UASOL2 oraweb2 Online Running
UASOL1:#
UASOL1:#clzonecluster --help
Usage: clzonecluster <subcommand> [<options>] [+ | <zonecluster> ...]
 clzonecluster [<subcommand>] -? | --help
 clzonecluster -V | --version
SUBCOMMANDS:
UASOL1:#
The post Managing Zone Cluster – Oracle Solaris appeared first on UnixArena.
Once you have configured the Solaris cluster , you have to add the quorum device for additional voting process. Without Quorum devices, cluster will be in installed mode.
You can verify the status using “cluster show -t global | grep installmode” command.Each node in a configured cluster has one ( 1 ) quorum vote and cluster require
minimum two vote to run the cluster.If any one node goes down, cluster won’t get 2 votes and it will panic second node also to avoid the data corruption on shared storage.
To avoid this situation, we can make one small SAN disk as quorum device which can provide one vote.So that, if one node fails, system can still get two votes all the time
on two node cluster.
Once you have configured the two node Solaris cluster, you can start configure the quorum device.
UASOL1:#clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
UASOL2 Online
UASOL1 Online
UASOL1:#
UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
1 1 1
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
--------- ------- -------- ------
UASOL2 1 1 Online
UASOL1 0 0 Online
UASOL1:#
4.Make sure you have small size LUN is assigned to both the cluster node from SAN.
UASOL1:#echo |format
Searching for disks...done
UASOL1:#format c1t1d0
selecting c1t1d0: quorum
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
https://unixarena68.rssing.com/chan-59694592/all_p1.html 41/87
02/01/2024 09:02 Solaris Cluster – UnixArena
volname - set 8-character volume name
! - execute , then return
quit
format> fdisk
The default partition for the disk is:
Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> volname quorum
format> quit
UASOL1:#
UASOL2:#echo |format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 VMware,-VMware Virtual -1.0 cyl 1824 alt 2 hd 255 sec 63
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0 VMware,-VMware Virtual -1.0 cyl 508 alt 2 hd 64 sec 32 quorum
/pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number): Specify disk (enter its number):
UASOL2:#
UASOL2:#cldev populate
Configuring DID devices
did instance 4 created.
did subpath UASOL2:/dev/rdsk/c1t1d0 created for instance 4.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
UASOL2:#
UASOL1:#cldev populate
Configuring DID devices
did instance 4 created.
did subpath UASOL1:/dev/rdsk/c1t1d0 created for instance 4.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
UASOL1:#
UASOL1:#cldevice list -v
DID Device Full Device Path
---------- ----------------
d1 UASOL2:/dev/rdsk/c1t0d0
d1 UASOL1:/dev/rdsk/c1t0d0
d4 UASOL2:/dev/rdsk/c1t1d0
d4 UASOL1:/dev/rdsk/c1t1d0
UASOL1:#cldev show d4
=== DID Device Instances ===
DID Device Name: /dev/did/rdsk/d4
Full Device Path: UASOL1:/dev/rdsk/c1t1d0
Full Device Path: UASOL2:/dev/rdsk/c1t1d0
Replication: none
default_fencing: global
UASOL1:#
UASOL1:#clquorum add d4
UASOL1:#
UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
2 3 3
https://unixarena68.rssing.com/chan-59694592/all_p1.html 42/87
02/01/2024 09:02 Solaris Cluster – UnixArena
We have successfully configured the quorum on two node Solaris cluster 3.3 u2.
Just reboot any one of the node and you can see the voting status .
UASOL2:#reboot
updating /platform/i86pc/boot_archive
Connection to UASOL2 closed by remote host.
Connection to UASOL2 closed.
UASOL1:#
UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
2 2 3
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
--------- ------- -------- ------
UASOL2 0 1 Offline
UASOL1 1 1 Online
--- Quorum Votes by Device (current status) ---
Device Name Present Possible Status
----------- ------- -------- ------
d4 1 1 Online
UASOL1:#
We can see that UASOL1 is not panic by cluster. So quorum device worked well.
If you don’t have real SAN storage for shared LUN, you can use openfiler.
What’s Next ? We will configure resource group for failover local zone and perform the test.
The post How to configure Quorum devices on Solaris cluster ? appeared first on UnixArena.
This article will help you to create a resource group on Solaris cluster and adding couple of resource to it. Resource group is similar to service group in veritas cluster
which bundles the resources in one logical unit. Once you have configured the Solaris two node cluster and added the quorum devices, you can create a resource group.
Once we create the resource group ,we will add zpool storage resource and will perform the failover test.
1. Login to one of the cluster node as root and check the cluster node status.
UASOL1:#clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
UASOL2 Online
https://unixarena68.rssing.com/chan-59694592/all_p1.html 43/87
02/01/2024 09:02 Solaris Cluster – UnixArena
UASOL1 Online
UASOL1:#
UASOL1:#clinterconnect status
=== Cluster Transport Paths ===
Endpoint1 Endpoint2 Status
--------- --------- ------
UASOL2:e1000g2 UASOL1:e1000g2 Path online
UASOL2:e1000g1 UASOL1:e1000g1 Path online
UASOL1:#
UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
4.In the above command output, everything seems to be fine. So let me create a resource group.
UASOL1:#clrg status
1.Check the cluster device instances. Here d5 d6 are from SAN storage. d5 is already used for quorum setup.
UASOL1:#cldevice list -v
DID Device Full Device Path
---------- ----------------
d1 UASOL2:/dev/rdsk/c1t0d0
d1 UASOL1:/dev/rdsk/c1t0d0
d2 UASOL1:/dev/rdsk/c1t2d0
d3 UASOL2:/dev/rdsk/c1t2d0
d4 UASOL2:/dev/rdsk/c1t1d0
d4 UASOL1:/dev/rdsk/c1t1d0
d5 UASOL2:/dev/rdsk/c2t16d0
d5 UASOL1:/dev/rdsk/c2t14d0
d6 UASOL2:/dev/rdsk/c2t15d0
d6 UASOL1:/dev/rdsk/c2t13d0
UASOL1:#
UASOL1:#cldevice status
=== Cluster DID Devices ===
Device Instance Node Status
--------------- ---- ------
https://unixarena68.rssing.com/chan-59694592/all_p1.html 44/87
02/01/2024 09:02 Solaris Cluster – UnixArena
/dev/did/rdsk/d1 UASOL1 Ok
UASOL2 Ok
/dev/did/rdsk/d2 UASOL1 Ok
/dev/did/rdsk/d3 UASOL2 Ok
/dev/did/rdsk/d4 UASOL1 Ok
UASOL2 Ok
/dev/did/rdsk/d5 UASOL1 Ok
UASOL2 Ok
/dev/did/rdsk/d6 UASOL1 Ok
UASOL2 Ok
UASOL1:#
4.Create the new cluster resource for zpool which we have created on previous step.
UASOL1:#clresource status
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
CLUAZPOOL UASOL2 Offline Offline
UASOL1 Offline Offline
UASOL1:#
6.Bring the resource group online and check the resource status.
https://unixarena68.rssing.com/chan-59694592/all_p1.html 45/87
02/01/2024 09:02 Solaris Cluster – UnixArena
UASOL1:#zpool list
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
UAZPOOL 3.05G 132K 3.05G 0% ONLINE /
rpool 13.9G 9.32G 4.56G 67% ONLINE -
UASOL1:#
8.To test the resource group, Switch the resource group to other node.
9.Now you can see that cluster zpool has been moved to UASOL2 node.
UASOL1:#zpool list
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
rpool 13.9G 9.32G 4.56G 67% ONLINE -
UASOL1:#clrg status
=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
UA-HA-ZRG UASOL2 No Online
UASOL1 No Offline
UASOL1:#ssh UASOL2 zpool list
Password:
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
UAZPOOL 3.05G 132K 3.05G 0% ONLINE /
rpool 13.9G 9.15G 4.73G 65% ONLINE -
UASOL1:#
So automatic failover should work for resource group which we have just created. In the next article,we will see that how add the localzone to the cluster.
The post How to create Resource Group on Solaris cluster ? appeared first on UnixArena.
In this article, we will see how to add a local zone as a resource in Solaris Cluster to make the zone highly available. In the past we have seen a similar setup in Veritas Cluster. With the zone configured as a resource, if one node fails, the zone automatically flies to the other node with minimal downtime (flying zones on Solaris). Once you have configured the items below, we can proceed with bringing the local zone under Solaris Cluster.
Unlike Veritas Cluster, the local zone IP will be managed from the global zone as a cluster resource. So let me create an IP resource before proceeding with the local zone creation.
1. Login to Solaris cluster nodes and add the local zone IP & Host name information in /etc/hosts file.
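The creation command was lost in the capture; a minimal sketch, assuming the resource name CLUAHAZ1 and the hostname UAHAZ1 that appear in the status output below:
UASOL1:#clreslogicalhostname create -g UA-HA-ZRG -h UAHAZ1 CLUAHAZ1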
UASOL1:#clresource status
UASOL1:#
UASOL1:#ping UAHAZ1
UAHAZ1 is alive
UASOL1:#
5. You can see that the local zone IP has been plumbed by Solaris Cluster.
UASOL2:#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
inet 192.168.2.91 netmask ffffff00 broadcast 192.168.2.255
groupname sc_ipmp0
ether 0:c:29:e:f8:ce
e1000g0:1: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
inet 192.168.2.94 netmask ffffff00 broadcast 192.168.2.255
e1000g1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
inet 172.16.0.65 netmask ffffffc0 broadcast 172.16.0.127
ether 0:c:29:e:f8:d8
e1000g2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
inet 172.16.0.129 netmask ffffffc0 broadcast 172.16.0.191
ether 0:c:29:e:f8:e2
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 172.16.2.1 netmask ffffff00 broadcast 172.16.2.255
ether 0:0:0:0:0:1
UASOL2:#
UASOL1:#clresource status
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
CLUAHAZ1 UASOL2 Offline Offline - LogicalHostname offline.
UASOL1 Online Online - LogicalHostname online.
We have successfully created the logical hostname cluster resource and tested it on both the nodes.
7. Create a local zone on any one of the cluster nodes and copy the /etc/zones/index entry and the /etc/zones/<zonename>.xml file to the other node to make the zone configuration available on both the cluster nodes. Create the local zone without adding a network part (e.g. "add net"); a sketch follows.
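A minimal zone configuration sketch, assuming the zone path /UAZPOOL/UAHAZ1 shown in the listing below:
UASOL1:#zonecfg -z UAHAZ1
zonecfg:UAHAZ1> create
zonecfg:UAHAZ1> set zonepath=/UAZPOOL/UAHAZ1
zonecfg:UAHAZ1> set autoboot=false
zonecfg:UAHAZ1> commit
zonecfg:UAHAZ1> exit
UASOL1:#zoneadm -z UAHAZ1 install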
- UAHAZ1 installed /UAZPOOL/UAHAZ1 native shared
UASOL1:#ssh UASOL2 zoneadm list -cv
Password:
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- UAHAZ1 configured /UAZPOOL/UAHAZ1 native shared
UASOL1:#
You can refer to this article for creating the local zone, but do not configure the network.
8. Halt the local zone on UASOL1 and fail over the resource group to UASOL2 to test the zone on it.
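A sketch of this test:
UASOL1:#zoneadm -z UAHAZ1 halt
UASOL1:#clrg switch -n UASOL2 UA-HA-ZRG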
10. Login to local zone and perform the health check .If everything seems to be fine , then just halt the localzone.
UASOL2:#zlogin UAHAZ1
[Connected to zone 'UAHAZ1' pts/4]
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2# uptime
12:37am up 1 user, load average: 0.50, 0.13, 0.07
bash-3.2# exit
# ^D
[Connection to zone 'UAHAZ1' pts/4 closed]
UASOL2:#zoneadm -z UAHAZ1 halt
UASOL2:#
Click Page 2 to see how to create the resource for the local zone and add it to the resource group.
The post How to configure High Availability zone on Solaris cluster ? appeared first on UnixArena.
Getting the opportunity to work on a cluster environment is difficult in big companies due to security restrictions. Even if you get the opportunity, you can't play much with it, since most cluster environments are critical to the client. To learn any operating system cluster, you have to build it on your own and configure the resource groups and resources yourself. You will be lucky if your organization provides a LAB environment with the necessary hardware for this kind of setup. Due to hardware cost, many companies do not provide such a LAB setup. So how do you master clustering? Is it possible to set up a cluster environment on a single desktop or laptop? Yes. Using VMware Workstation, you can set up a cluster. In the past we have seen this for Veritas Cluster. Here we will see how to set up a two-node Solaris cluster on Solaris 10 using VMware Workstation.
Desktop/Laptop Configuration:
1. In your desktop, install VMware workstation software and create two virtual machines with below mentioned configuration.
I have allocated 4.3 GB to each VM, but 1 GB is enough for each virtual machine. Your virtual machines must have a minimum of three network adapters: one NIC for public and two NICs for heartbeat.
4. Enable the Windows share on both the virtual machines for copying the Solaris Cluster software from your laptop to the virtual machines. Copy Solaris Cluster 3.3u2 to /var/tmp on both the nodes. Otherwise, use WinSCP to copy it.
8. To proceed further with the Solaris cluster, you require shared storage. So create a new virtual machine and install Openfiler on it.
9. Provision two LUNs to the iSCSI target on the Openfiler web interface (a 512 MB LUN for quorum and a 3 GB LUN for the shared zpool).
12.Create the Solaris cluster resource group and add the ZFS storage pools as resource.
13.Finally create the local zone and add it in to Solaris cluster for failover local zone or high availability local zone using Solaris cluster.
By performing the above steps, you can definitely set up a two-node Solaris cluster on a desktop/laptop using VMware Workstation.
Good Luck.
The post How to setup oracle Solaris cluster on VMware workstation ? appeared first on UnixArena.
This article explains the zone cluster. A zone cluster is created on Oracle Solaris hosts using Sun Cluster aka Oracle Solaris Cluster. In most of the deployments, we might have seen failover zones (HA zones) using Sun Cluster or Veritas Cluster (VCS) on Solaris. Comparatively, zone clusters are rare in the industry, but some organizations use them very effectively. You must establish the traditional cluster between the physical nodes in order to configure a zone cluster. Since cluster applications always run in a zone, the cluster node is always a zone.
The typical 4-node Sun cluster looks like below (prior to configuring the zone cluster).
https://unixarena68.rssing.com/chan-59694592/all_p1.html 49/87
02/01/2024 09:02 Solaris Cluster – UnixArena
[Diagram: 4-Node Cluster]
The above diagram shows that two zone clusters have been configured on the global cluster. Here you can see that both the test and development systems are in different zone clusters but in the same global cluster. In this cluster model, all three tiers are in the same global cluster but in different zone clusters.
The main benefits of zone clusters:
Cost containment
Administrative workload reduction
Good to know:
Distribution of nodes: You can't host multiple zones that are part of the same zone cluster on the same physical host. Zones must be distributed across the physical nodes.
Node creation: You must create at least one zone cluster node at the time that you create the zone cluster. The name of the zone-cluster node must be unique within the
zone cluster. The infrastructure automatically creates an underlying non-global zone on each host that supports the zone cluster. Each non-global zone is given the same
zone name, which is derived from, and identical to, the name that you assign to the zone cluster when you create the cluster. For example, if you create a zone cluster that
is named “uainfrazone”, the corresponding non-global zone name on each host that supports the zone cluster is also “uainfrazone”.
Cluster name: Each zone-cluster name must be unique throughout the cluster of machines that host the global cluster. The zone-cluster name cannot also be used by a
non-global zone elsewhere in the cluster of machines, nor can the zone-cluster name be the same as that of a global-cluster node. You cannot use “all” or “global” as a
zone-cluster name, because these are reserved names.
Public-network IP addresses: You can optionally assign a specific public-network IP address to each zone-cluster node.
Private hostnames: During creation of the zone cluster, a private hostname is automatically created for each node of the zone cluster, in the same way that hostnames
are created in global clusters.
IP type: A zone cluster is created with the shared IP type. The exclusive IP type is not supported for zone clusters.
Hope this article is informative to you. In the next article, we will see how to configure the zone cluster on an existing two-node Sun cluster (global cluster).
The post Sun Cluster – Zone Cluster on Oracle Solaris – Overview appeared first on UnixArena.
This article will walk you through the zone cluster deployment on oracle Solaris. The zone cluster consists of a set of zones, where each zone represents a virtual node.
Each zone of a zone cluster is configured on a separate machine. As such, the upper bound on the number of virtual nodes in a zone cluster is limited to the number of
machines in the global cluster. The zone cluster design introduces a new brand of zone, called the cluster brand. The cluster brand is based on the original native brand
type, and adds enhancements for clustering. The BrandZ framework provides numerous hooks where other software can take action appropriate for the brand type of zone.
For example, there is a hook for software to be called during the zone boot, and zone clusters take advantage of this hook to inform the cluster software about the boot of
the virtual node. Because zone clusters use the BrandZ framework, at a minimum Oracle Solaris 10 5/08 is required.
The system maintains membership information for zone clusters. Each machine hosts a component, called the Zone Cluster Membership Monitor (ZCMM), that monitors
the status of all cluster brand zones on that machine. The ZCMM knows which zones belong to which zone clusters. Zone clusters are considerably simpler than global
clusters. For example, there are no quorum devices in a zone cluster, as a quorum device is not needed.
clzonecluster is a utility to create, modify, delete and manage zone clusters in a Sun Cluster environment.
Note:
Sun Cluster is the product; the zone cluster is one of the cluster types within Sun Cluster.
Environment:
Operating System : Oracle Solaris 10 u9
Cluster : Sun Cluster 3.3 (aka Oracle Solaris cluster 3.3)
Prerequisites :
Two Oracle Solaris 10 u9 nodes or above
Sun Cluster 3.3 package
Install Oracle Solaris cluster 3.3 (Aka Sun Cluster) on Solaris 10 nodes.
Configure two node sun cluster 3.3 on Solaris 10
UASOL2:#clnode status
=== Cluster Nodes ===
UASOL2 Online
UASOL1 Online
UASOL2:#
3. You must keep the zone path ready for local zone installation on both the cluster nodes. The zone path must be identical on both nodes, so run the same preparation on node UASOL1 and on node UASOL2, as sketched below.
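A sketch, assuming the zone path /export/zones/uainfrazone that appears in the zone listing later:
UASOL1:#mkdir -p /export/zones/uainfrazone
UASOL1:#chmod 700 /export/zones/uainfrazone
UASOL2:#mkdir -p /export/zones/uainfrazone
UASOL2:#chmod 700 /export/zones/uainfrazone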
Note:
• By default, sparse root zones are created. To create whole root zones, add the -b option to the create command.
• Specifying an IP address and NIC for each zone cluster node is optional.
Here, we have just configured one zone on UASOL1. Clustering makes sense when you configure two or more nodes, so let me create one more zone on the UASOL2 node in the same zone cluster, as sketched below.
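A sketch of adding the second node with clzc, matching the configuration dump below:
UASOL1:#clzc configure uainfrazone
clzc:uainfrazone> add node
clzc:uainfrazone:node> set physical-host=UASOL2
clzc:uainfrazone:node> set hostname=uainfrazone2
clzc:uainfrazone:node> add net
clzc:uainfrazone:node:net> set address=192.168.2.103
clzc:uainfrazone:node:net> set physical=e1000g0
clzc:uainfrazone:node:net> end
clzc:uainfrazone:node> end
clzc:uainfrazone> verify
clzc:uainfrazone> commit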
scheduling-class:
ip-type: shared
enable_priv_net: true
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
sysid:
root_password: H/80/NT4F2H7g
name_service: NONE
nfs4_domain: dynamic
security_policy: NONE
system_locale: C
terminal: xterm
timezone: Asia/Calcutta
node:
physical-host: UASOL1
hostname: uainfrazone1
net:
address: 192.168.2.101
physical: e1000g0
defrouter not specified
node:
physical-host: UASOL2
hostname: uainfrazone2
net:
address: 192.168.2.103
physical: e1000g0
defrouter not specified
clzc:uainfrazone> exit
6. Check the zone cluster status. At this stage, the zones are in the configured state.
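A sketch of the status check:
UASOL2:#clzonecluster status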
UASOL2:#
UASOL2:#zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- uainfrazone installed /export/zones/uainfrazone cluster shared
UASOL2:#
Here you can see that uainfrazone is created and installed. You should be able to see the same on UASOL1 as well.
Note: There is no difference whether you run a command from UASOL1 or UASOL2, since both are in the cluster.
8. Bring up the zones using clzonecluster. (You should not use the zoneadm command to boot the zones.)
In UASOL2,
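A sketch; if the zones were still only configured, "clzonecluster install uainfrazone" would be run first:
UASOL2:#clzonecluster boot uainfrazone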
10. The zones will reboot automatically for sysconfig. You can see this when you access the zone's console.
UASOL1:#zlogin -C uainfrazone
[Connected to zone 'uainfrazone' console]
Creating new rsa public/private host key pair
Creating new dsa public/private host key pair
Configuring network interface addresses: clprivnet0.
UASOL2:#clzonecluster status
=== Zone Clusters ===
We have successfully configured the two-node zone cluster. What's next? You should log in to one of the zones and configure the resource group and resources. Just log in to any one of the local zones and check the cluster status.
UASOL2:#zlogin uainfrazone
[Connected to zone 'uainfrazone' pts/2]
Last login: Mon Apr 11 01:58:20 on pts/2
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2# export PATH=/usr/cluster/bin:$PATH
bash-3.2# clnode status
=== Cluster Nodes ===
Similar to this, you can create any number of zone clusters under the global cluster. These zone clusters use the host's private network and other required resources. In the next article, we will see how to configure the resource group on the local zone.
The post Sun Cluster – How to Configure Zone Cluster on Solaris ? appeared first on UnixArena.
This article will walk you through how to configure a resource group in a zone cluster. Unlike the traditional cluster, the resource group and cluster resources should be created inside the non-global zone. The required physical or logical resources need to be delegated from the global zone using the "clzonecluster" or "clzc" command. In this article, we will configure an HA filesystem and an IP resource on the zone cluster which we created earlier. In addition, you can also configure DB or application resources for HA.
Global Cluster:
UASOL2:#clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
UASOL2 Online
UASOL1 Online
Zone Cluster :
Login to one of the zones and check the cluster status (extend the command search path with "/usr/cluster/bin").
UASOL2:#zlogin uainfrazone
[Connected to zone 'uainfrazone' pts/3]
Last login: Mon Apr 11 02:00:17 on pts/2
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2# export PATH=/usr/cluster/bin:$PATH
bash-3.2# clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
uainfrazone1 Online
uainfrazone2 Online
bash-3.2#
Make sure that both the host names are updated in each node's "/etc/inet/hosts" file.
3. Login to one of the global zones (global cluster) and add the IP detail to the zone cluster (the IP which needs to be highly available).
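A sketch, assuming the IP 192.168.2.102 that shows up later in this article:
UASOL1:#clzc configure uainfrazone
clzc:uainfrazone> add net
clzc:uainfrazone:net> set address=192.168.2.102
clzc:uainfrazone:net> end
clzc:uainfrazone> commit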
4. Create the ZFS pool on a shared SAN LUN, so that the zpool can be exported and imported on the other cluster nodes. Just manually export the zpool on UASOL2 and try to import it on UASOL1.
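A sketch; the pool name oradbp1 appears later in the article, and the shared LUN device name (here c2t15d0) is an assumption based on the earlier device listing:
UASOL2:#zpool create oradbp1 c2t15d0
UASOL2:#zpool export oradbp1
UASOL1:#zpool import oradbp1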
5. On one of the global cluster nodes, invoke "clzc" to add the zpool.
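A sketch of delegating the pool to the zone cluster as a dataset:
UASOL1:#clzc configure uainfrazone
clzc:uainfrazone> add dataset
clzc:uainfrazone:dataset> set name=oradbp1
clzc:uainfrazone:dataset> end
clzc:uainfrazone> commit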
clzc:uainfrazone> exit
UASOL1:#
We have successfully added the IP address and dataset to the zone cluster configuration. At this point, you can use these resources inside the zone cluster to configure the cluster resources.
2. On one of the zone cluster nodes, create the cluster resource group with the name "oradb-rg".
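A sketch, run inside the zone:
bash-3.2# clrg create oradb-rg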
bash-3.2#
If you want to create the resource group for the "uainfrazone" zone cluster from the global zone, you can use the following command (with -Z and the zone-cluster name).
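A sketch from the global zone:
UASOL1:#clrg create -Z uainfrazone oradb-rg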
4. Create the ZFS resource for the zpool oradbp1 (which we created and assigned to this zone cluster in the first section of the document).
You must register the ZFS resource type prior to adding the resource to the cluster.
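A sketch, inside the zone; the resource name oradbp1-rs is a hypothetical choice:
bash-3.2# clresourcetype register SUNW.HAStoragePlus
bash-3.2# clresource create -g oradb-rg -t SUNW.HAStoragePlus -p Zpools=oradbp1 oradbp1-rs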
bash-3.2#
bash-3.2# uname -a
SunOS uainfrazone2 5.10 Generic_147148-26 i86pc i386 i86pc
bash-3.2#
You can see that the ZFS dataset "oradbp1" and the IP "192.168.2.102" are up on uainfrazone1.
7. Switch the resource group to uainfrazone2 and check the resource status.
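A sketch of the switch:
bash-3.2# clrg switch -n uainfrazone2 oradb-rg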
bash-3.2#
bash-3.2#
Verify the result at the OS level. Login to uainfrazone2 and check the following to confirm the switchover.
bash-3.2# ifconfig -a
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone uainfrazone
inet 127.0.0.1 netmask ff000000
e1000g0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
inet 192.168.2.91 netmask ffffff00 broadcast 192.168.2.255
groupname sc_ipmp0
e1000g0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone uainfrazone
inet 192.168.2.103 netmask ffffff00 broadcast 192.168.2.255
e1000g0:2: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
zone uainfrazone
inet 192.168.2.102 netmask ffffff00 broadcast 192.168.2.255
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 172.16.2.1 netmask ffffff00 broadcast 172.16.2.255
clprivnet0:3: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
zone uainfrazone
inet 172.16.3.65 netmask ffffffc0 broadcast 172.16.3.127
bash-3.2# df -h /oradbp1/
Filesystem size used avail capacity Mounted on
oradbp1 2.9G 31K 2.9G 1% /oradbp1
bash-3.2# zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradbp1 86.5K 2.91G 31K /oradbp1
bash-3.2#
We have successfully configured the resource group and made ZFS and IP highly available (HA) on Oracle Solaris zones via the zone cluster concept. Hope this article is informative to you. In the next article, we will see how to add/remove/delete nodes from the zone cluster.
The post Sun Cluster – Configuring Resource Group in Zone Cluster appeared first on UnixArena.
This article will talk about managing the Zone Cluster on oracle Solaris. The clzonecluster command supports all zone cluster administrative activity, from creation through
modification and control to final destruction. The clzonecluster command supports single point of administration, which means that the command can be executed from any
node and operates across the entire cluster. The clzonecluster command builds upon the Oracle Solaris zonecfg and zoneadm commands and adds support for cluster
features. We will see how to add and remove cluster nodes, check the resource status, and list the resources from the global zone.
Each zone cluster has its own notion of membership. The system maintains membership information for zone clusters. Each machine hosts a component, called the Zone
Cluster Membership Monitor (ZCMM), that monitors the status of all cluster brand zones on that machine. The ZCMM knows which zones belong to which zone
clusters. Naturally, a zone of a zone cluster can only become operational after the global zone on the hosting machine becomes operational. A zone of a zone cluster will
not boot when the global zone is not booted in cluster mode. A zone of a zone cluster can be configured to automatically boot after the machine boots, or the administrator
can manually control when the zone boots. A zone of a zone cluster can fail or an administrator can manually halt or reboot a zone. All of these events result in the zone
cluster automatically updating its membership.
UASOL1:#clzc status -v
=== Zone Clusters ===
UASOL1:#
To check all the zone clusters' resource group status from the global zone:
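A sketch; -Z all covers every zone cluster:
UASOL1:#clrg status -Z all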
UASOL1:#
3. Would you like to reboot the zone cluster ? Use the following command.
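A sketch:
UASOL1:#clzonecluster reboot uainfrazone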
2. Here the zone cluster is already operational and running. In order to add additional nodes to this cluster, we need to add the zone configuration to the zone cluster. (clzc and clzonecluster are identical commands; you can use either of them.)
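A sketch, with hypothetical placeholders for the new node:
UASOL1:#clzc configure uainfrazone
clzc:uainfrazone> add node
clzc:uainfrazone:node> set physical-host=<new-physical-host>
clzc:uainfrazone:node> set hostname=<new-zone-hostname>
clzc:uainfrazone:node> end
clzc:uainfrazone> commit
UASOL1:#clzonecluster install -n <new-physical-host> uainfrazone
UASOL1:#clzonecluster boot -n <new-physical-host> uainfrazone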
The zone status might show as “offline” and it will become online once the sys-config is done (via automatic reboot).
4. Remove the zone configuration from cluster.
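A sketch; halt the zone on that node first, then remove it from the configuration:
UASOL1:#clzonecluster halt -n <physical-host> uainfrazone
UASOL1:#clzc configure uainfrazone
clzc:uainfrazone> remove node physical-host=<physical-host>
clzc:uainfrazone> commit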
UASOL1:#clzonecluster --help
Usage:  clzonecluster <subcommand> [<options>] [+ | <zoneclustername> ...]
        clzonecluster [<subcommand>] -? | --help
        clzonecluster -V | --version
SUBCOMMANDS:
UASOL1:#
The post Managing Zone Cluster – Oracle Solaris appeared first on UnixArena.