Solaris Cluster - UnixArena

This document summarizes how to install Oracle Solaris Cluster 4.1 on two Solaris 11 nodes. It outlines the 10 steps to download and mount the installation ISO, set up the package publisher, install the cluster and quorum server packages, and configure necessary environment variables. It also previews configuring IPMP probe-based monitoring between the cluster nodes.


02/01/2024 09:02 Solaris Cluster – UnixArena

 



How to Install Oracle Solaris Cluster on Solaris 11?


November 17, 2013, 9:40 am


This is the first article about Sun Cluster, aka Oracle Solaris Cluster, on UnixArena. This series will cover the
build of a two-node cluster running Solaris Cluster 4.1 on Solaris 11. The goal of building the cluster is to configure a highly
available local zone, aka a failover local zone. We did a similar setup on Veritas Cluster quite some time back. In this article we will
see the installation of the cluster software.

Cluster Nodes: UnixArena, UnixArena1
Cluster Software: Oracle Solaris Cluster 4.1
Operating System: Solaris 11
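
Both nodes should resolve each other by name before the cluster software is configured. A minimal /etc/hosts fragment for this pair might look like the following — the addresses shown are the ones that appear in the ipadm output later in this series, so treat them as examples and adjust for your own network:

```
# /etc/hosts additions on both UnixArena and UnixArena1 (example addresses)
192.168.2.31   UnixArena
192.168.2.32   UnixArena1
```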

1. Download Oracle Solaris Cluster from the Oracle website. You may need an Oracle support login to download it. The file name will be “osc-4_1-ga-repo-full.iso”.

2. Copy the file to both cluster nodes using WinSCP.

3. Create a loopback device for the ISO image on both nodes.

root@UnixArena:~# lofiadm -a /osc-4_1-ga-repo-full.iso


/dev/lofi/1

4. Create the mount point and mount the image on both cluster nodes.

root@UnixArena:~# mkdir -p /testrepo/hasol1


root@UnixArena:~# mount -F hsfs /dev/lofi/1 /testrepo/hasol1
root@UnixArena:~#

5. Set the package publisher on both cluster nodes.

root@UnixArena:~# pkg set-publisher -g file:///testrepo/hasol1 ha-cluster


pkg set-publisher: The origin URIs for 'ha-cluster' do not appear to point to a valid pkg repository.
Please verify the repository's location and the client's network configuration.
Additional details:

Unable to contact valid package repository


Encountered the following error(s):
Transport errors encountered when trying to contact repository.
Reported the following errors:
file protocol error: code: 22 reason: The path '/testrepo/hasol1' does not contain a valid package repository.
Repository URL: 'file:///testrepo/hasol1'.
root@UnixArena:~#

Oops… we are not pointing at the correct repository path.

https://unixarena68.rssing.com/chan-59694592/all_p1.html 1/87

6. Set the correct repository path and rebuild the search index using the pkg command.

root@UnixArena:~# pkg set-publisher -g file:///testrepo/hasol1/repo ha-cluster


root@UnixArena:~# pkg rebuild-index
PHASE ITEMS
Building new search index 884/884
root@UnixArena:~#

7. List the available publishers. You can see the newly configured cluster repository here.

root@UnixArena:~# pkg publisher


PUBLISHER TYPE STATUS P LOCATION
solaris origin online F file:///testrepo/
ha-cluster origin online F file:///testrepo/hasol1/repo/
root@UnixArena:~#

8. Install Oracle Solaris Cluster using the pkg command on both nodes.

root@UnixArena:~# pkg install ha-cluster-full


Creating Plan (Solver setup): |

Creating Plan (Evaluating mediators): |

Packages to install: 68
Create boot environment: No
Create backup boot environment: Yes
Services to change: 7

DOWNLOAD PKGS FILES XFER (MB) SPEED


...r/data-service/oracle-database 8/68 203/7059 1.0/52.4 cache

DOWNLOAD PKGS FILES XFER (MB) SPEED


ha-cluster/system/cfgchk 48/68 1198/7059 10.8/52.4 cache

DOWNLOAD PKGS FILES XFER (MB) SPEED


ha-cluster/system/core 49/68 1803/7059 23.7/52.4 cache

9. Here is the final result of the installation.

root@UnixArena:~# pkg install ha-cluster-full


Packages to install: 68
Create boot environment: No
Create backup boot environment: Yes
Services to change: 7

DOWNLOAD PKGS FILES XFER (MB) SPEED


Completed 68/68 7059/7059 52.4/52.4 0B/s

PHASE ITEMS
Installing new actions 9506/9506
Updating package state database Done
Updating image state Done
Creating fast lookup database Done
Reading search index Done
Building new search index 952/952
root@UnixArena:~#

10. Install the quorum server package on both nodes.

root@UnixArena:/# pkg install ha-cluster-quorum-server-full


Packages to install: 6
Create boot environment: No
Create backup boot environment: No
Services to change: 3

DOWNLOAD PKGS FILES XFER (MB) SPEED


Completed 6/6 69/69 0.1/0.1 0B/s

PHASE ITEMS
Installing new actions 223/223


Updating package state database Done


Updating image state Done
Creating fast lookup database Done
Reading search index Done
Updating search index 6/6
root@UnixArena:/#

11. Add the path below to root’s profile to access the cluster commands.

root@UnixArena:~# export PATH=$PATH:/usr/cluster/bin
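
The export above lasts only for the current session. To make it persistent, you can append it to root's login profile — a minimal sketch, assuming root's shell reads $HOME/.profile at login (adjust the file name if your root shell uses a different startup file):

```shell
# Persist the cluster PATH entry in root's profile.
# $HOME/.profile is an assumption; adjust if root's shell reads another file.
PROFILE="${PROFILE:-$HOME/.profile}"
LINE='export PATH=$PATH:/usr/cluster/bin'

# Append only if the entry is not already present, so repeated runs
# do not duplicate the line.
if ! grep -qF '/usr/cluster/bin' "$PROFILE" 2>/dev/null; then
    printf '%s\n' "$LINE" >> "$PROFILE"
fi
```

After this, any new login shell for root picks up /usr/cluster/bin automatically, and running the snippet again leaves the profile unchanged.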

We have successfully installed Oracle Solaris Cluster 4.1 on both cluster nodes. The installation is as simple as Veritas Cluster. Very soon we will see the configuration part.

Thank you for visiting UnixArena.


The post How to Install Oracle Solaris Cluster on Solaris 11 ? appeared first on UnixArena.


Solaris 11 – How to Configure IPMP – Probe-Based?


November 17, 2013, 3:34 pm


When your system needs to handle a large amount of traffic, you may need to combine two or more physical interfaces into a logical
interface. There are two common methods: 1. Link aggregation 2. IPMP. IPMP detects failures in one of two ways: if the system
is configured with a default router, it checks connectivity to that router; otherwise it works in multicast mode, checking connectivity
with nearby nodes.
Here we will see how to configure probe-based IPMP. An IPMP group can be configured with active-active or active-standby
interfaces, and it provides network availability during interface failures. As you know, Solaris 11 has changed a lot from Solaris 10,
and configuring IPMP is no exception: in Solaris 11, you need to use the “ipadm” command to configure IPMP.

At the end of the article, we will also see how to enable transitive probing for Oracle Solaris Cluster.

1. Log in to the Solaris 11 host and make sure two or more physical interfaces are available.

root@UnixArena:~# dladm show-phys


LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet up 1000 full e1000g0
net1 Ethernet up 1000 full e1000g1
net2 Ethernet unknown 0 unknown e1000g2
net3 Ethernet unknown 0 unknown e1000g3
root@UnixArena:~#

2. Here we are going to use net0 and net1 to configure the IPMP group, so make sure no IP address exists on net0 and net1. If one exists, remove it using “ipadm delete-addr
net0/XXXX”.

root@Unixarena1:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
lo0 loopback ok -- --
lo0/v4 static ok -- 127.0.0.1/8
lo0/v6 static ok -- ::1/128
net0 ip ok -- --
net0/pubip static ok -- 192.168.2.32/24
net1 ip down -- --

3. An IP address exists on net0 here. Let me remove it from the console.

root@Unixarena1:~# ipadm delete-addr net0/pubip


root@Unixarena1:~# ipadm

NAME CLASS/TYPE STATE UNDER ADDR
lo0 loopback ok -- --
lo0/v4 static ok -- 127.0.0.1/8
lo0/v6 static ok -- ::1/128
net0 ip down -- --
net1 ip down -- --

4. Now we are good to start configuring IPMP. Let me create a new IPMP group.

root@UnixArena:~# ipadm create-ipmp ipmp0


root@UnixArena:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
ipmp0 ipmp down -- --
lo0 loopback ok -- --
lo0/v4 static ok -- 127.0.0.1/8
lo0/v6 static ok -- ::1/128
net0 ip ok -- --
net1 ip ok -- --
root@UnixArena:~#

5. Plumb the interfaces if they do not already exist in the ipadm list.

root@UnixArena:~# ipadm create-ip net0


ipadm: cannot create interface net0: Interface already exists
root@UnixArena:~# ipadm create-ip net1
ipadm: cannot create interface net1: Interface already exists
root@UnixArena:~#

6. Associate the network interfaces with the IPMP group we created in step 4. The IP address is taken from the /etc/hosts entry for UnixArena.

root@UnixArena:~# ipadm add-ipmp -i net0 -i net1 ipmp0


root@UnixArena:~# ipadm create-addr -T static -a UnixArena/24 ipmp0/pubip
root@UnixArena:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
ipmp0 ipmp ok -- --
ipmp0/pubip static ok -- 192.168.2.31/24
lo0 loopback ok -- --
lo0/v4 static ok -- 127.0.0.1/8
lo0/v6 static ok -- ::1/128
net0 ip ok ipmp0 --
net1 ip ok ipmp0 --
root@UnixArena:~#
root@UnixArena:~# cat /etc/hosts |grep UnixArena
192.168.2.31 UnixArena
root@UnixArena:~#

7. For an active-active IPMP configuration, no further changes are needed; the default configuration works in active-active mode. If the standby property is set to off, it is an
active-active model. You can verify the settings using the commands below.

root@UnixArena:~# ipadm show-ifprop -p standby


IFNAME PROPERTY PROTO PERM CURRENT PERSISTENT DEFAULT POSSIBLE
lo0 standby ip rw off -- off on,off
ipmp0 standby ip rw off -- off on,off
net0 standby ip rw off -- off on,off
net1 standby ip rw off off off on,off
root@UnixArena:~#
root@UnixArena:~# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net0 yes ipmp0 --mbM-- up ok unknown
net1 yes ipmp0 ------- up ok ok
root@UnixArena:~#

8. For an active-standby setup, you need to modify the interface property. Here my standby interface is net1.

root@UnixArena:~# ipadm set-ifprop -p standby=on -m ip net1


root@UnixArena:~# ipadm show-ifprop -p standby net1
IFNAME PROPERTY PROTO PERM CURRENT PERSISTENT DEFAULT POSSIBLE
net1 standby ip rw on on off on,off
root@UnixArena:~#
root@UnixArena:~# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE


net0 yes ipmp0 --mbM-- up ok unknown


net1 no ipmp0 is----- up ok ok
root@UnixArena:~#

9. It’s time to test the IPMP setup. Disable one interface and check whether the standby interface takes over the load of the primary. Before disabling anything, make
sure both interfaces are active at the IPMP level.

root@UnixArena:~# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net1 no ipmp0 is----- up disabled ok
net0 yes ipmp0 --mbM-- up disabled ok
root@UnixArena:~#

As per the above output, one interface is in multicast mode and the other in transitive mode. Neither interface is disabled, so it’s good to begin the test.

10. Disable net0 and check the status.

root@UnixArena:~# ipadm disable-if -t net0


root@UnixArena:~# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net1 yes ipmp0 -smbM-- up ok unknown
root@UnixArena:~# ipmpstat -a
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
UnixArena up ipmp0 net1 net1
root@UnixArena:~#

Awesome… it’s working fine.

11. Enable net0 again.

root@UnixArena:~# ipadm enable-if -t net0


root@UnixArena:~# ipmpstat -a
ADDRESS STATE GROUP INBOUND OUTBOUND
:: down ipmp0 -- --
UnixArena up ipmp0 net0 net0
root@UnixArena:~# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net0 yes ipmp0 --mbM-- up ok unknown
net1 no ipmp0 is----- up ok ok

Super stuff… the system is back to normal with two interfaces in the IPMP group.


That’s it. We have successfully configured and tested probe-based IPMP.

If you are going to use Oracle Solaris Cluster, you need to enable transitive probing. Here we will see how to enable it.
1. Check the existing transitive-probing value. By default it is false.

root@UnixArena:~# svccfg -s svc:/network/ipmp listprop config/transitive-probing


config/transitive-probing boolean false
root@UnixArena:~# ipmpstat -p
ipmpstat: probe-based failure detection is disabled
root@UnixArena:~#

2. Set transitive probing to true.

root@UnixArena:~# svccfg -s svc:/network/ipmp setprop config/transitive-probing=true


root@UnixArena:~# svccfg -s svc:/network/ipmp listprop config/transitive-probing
config/transitive-probing boolean true

3. Reload the IPMP configuration by refreshing the SMF service.

root@UnixArena:~# svcadm refresh svc:/network/ipmp:default


root@UnixArena:~# svcs svc:/network/ipmp:default
STATE STIME FMRI
online 23:08:28 svc:/network/ipmp:default
root@UnixArena:~#

root@UnixArena:~# ipmpstat -p
TIME INTERFACE PROBE NETRTT RTT RTTAVG TARGET
0.94s net1 t4 0.58ms 0.59ms 0.55ms
2.35s net1 t5 0.51ms 0.52ms 0.55ms
3.76s net1 t6 0.47ms 0.47ms 0.54ms
4.72s net1 t7 0.53ms 0.53ms 0.54ms
6.69s net1 t8 0.50ms 0.50ms 0.53ms
^C
root@UnixArena:~#

As per the above output, you can see that probe-based failure detection is enabled.

Thank you for visiting UnixArena. Please leave a comment if you have any doubts about this.
The post Solaris 11- How to Configure IPMP – Probe-Based ? appeared first on UnixArena.

How to install Solaris Cluster 3.3 on Solaris 10?


June 24, 2014, 6:15 pm


Here we will see how to install Oracle Solaris Cluster 3.3u2 on a Solaris 10 x86 system. I hadn’t shown much interest in Solaris Cluster, but I have been getting many requests to write
about it, so I took some time and am now back with a series of Solaris Cluster articles that will help you set up the cluster from scratch. Our goal will be making highly
available zones using the cluster service. Here we will just see the installation part.

I have used Solaris 10 update 11 for this setup.

UASOL1:#uname -a
SunOS UASOL1 5.10 Generic_147148-26 i86pc i386 i86pc
UASOL1:#cat /etc/release
Oracle Solaris 10 1/13 s10x_u11wos_24a X86
Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
Assembled 17 January 2013
UASOL1:#

1. Download the cluster package from Oracle and copy it to the Solaris nodes.

2. Unzip the cluster package.

UASOL1:#unzip solaris-cluster-3_3u2-ga-x86.zip > /dev/null


UASOL1:#ls -lrt
drwxr-xr-x 2 root root 4 Jan 12 2013 License
drwxr-xr-x 3 root root 5 Feb 28 2013 Solaris_x86
drwxr-xr-x 2 root root 3 Feb 28 2013 README
-r--r--r-- 1 root root 3322 Feb 28 2013 Copyright
-rwx------ 1 root root 79871062 Jun 24 10:52 solaris-cluster-3_3u2-ga-x86.zip
UASOL1:#

3. Navigate to the “Solaris_x86” directory and execute the installer script.

UASOL1:#cd Solaris_x86
UASOL1:#ls -lrt
total 27
-rw-r--r-- 1 root root 89 Feb 28 2013 release_info
-rwxr-xr-x 1 root root 10641 Feb 28 2013 installer
drwxr-xr-x 8 root root 8 Feb 28 2013 Product
UASOL1:#./installer -nodisplay
Welcome to Oracle(R) Solaris Cluster; serious software made simple...

Before you begin, refer to the Release Notes and Installation Guide for the
products that you are installing. This documentation is available at http:
//www.oracle.com/technetwork/indexes/documentation/index.html.

You can install any or all of the Services provided by Oracle Solaris
Cluster.

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

4. Just hit Enter to continue. Here you will get the component selection.

Installation Type
-----------------


Do you want to install the full set of Oracle Solaris Cluster Products and
Services? (Yes/No) [Yes] {"<" goes back, "!" exits} No

Choose Software Components - Main Menu


-------------------------------
Note: "* *" indicates that the selection is disabled

[ ] 1. Oracle Solaris Cluster Geographic Edition 3.3u2


[ ] 2. Quorum Server
[ ] 3. High Availability Session Store 4.4.3
[ ] 4. Oracle Solaris Cluster 3.3u2
[ ] 5. Java DB 10.2.2.1
[ ] 6. Oracle Solaris Cluster Agents 3.3u2

Enter a comma separated list of products to install, or press R to refresh


the list [] {"<" goes back, "!" exits}: 4,5,6

5. Just press Enter to continue with the current component selection.

Choose Software Components - Confirm Choices


--------------------------------------------

Based on product dependencies for your selections, the installer will install:

[X] 4. Oracle Solaris Cluster 3.3u2


* * Java DB 10.2.2.1
[X] 6. Oracle Solaris Cluster Agents 3.3u2

Press "Enter" to Continue or Enter a comma separated list of products to


deselect. Enter "-" with product number to deselect a product (for eg. -5
will deselect product number 5). To return to the component selection list,
press "r". [1] {"<" goes back, "!" exits}

Component Selection - Selected Product "Oracle Solaris Cluster 3.3u2"


---------------------------------------------------------------------

** * Oracle Solaris Cluster Core


*[X] 2. Oracle Solaris Cluster Manager

Enter a comma separated list of components to install (or A to install all )


[A] {"<" goes back, "!" exits}

** * Oracle Solaris Cluster Core


*[X] 2. Oracle Solaris Cluster Manager

Press "Enter" to Continue or Enter a comma separated list of products to


deselect. Enter "-" with product number to deselect a product (for eg. -5
will deselect product number 5). To return to the component selection list,
press "r". [1] {"<" goes back, "!" exits}

Component Selection - Selected Product "Java DB 10.2.2.1"


---------------------------------------------------------

** * Java DB Client
** * Java DB Server

Enter a comma separated list of components to install (or A to install all )


[A] {"<" goes back, "!" exits}

** * Java DB Client
** * Java DB Server

Press "Enter" to Continue or Enter a comma separated list of products to


deselect. Enter "-" with product number to deselect a product (for eg. -5
will deselect product number 5). To return to the component selection list,
press "r". [1] {"<" goes back, "!" exits}

6. Here is the list of agents that will be installed.

*[X] 1. Oracle Solaris Cluster HA for Java System Application Server


*[X] 2. Oracle Solaris Cluster HA for Java System Message Queue
*[X] 3. Oracle Solaris Cluster HA for Java System Directory Server
*[X] 4. Oracle Solaris Cluster HA for Java System Messaging Server
*[X] 5. Oracle Solaris Cluster HA for Application Server EE (HADB)

*[X] 6. Oracle Solaris Cluster HA/Scalable for Java System Web Server
*[X] 7. Oracle Solaris Cluster HA for Instant Messaging
*[X] 8. Oracle Solaris Cluster HA for Java System Calendar Server
*[X] 9. Oracle Solaris Cluster HA for Apache Tomcat
*[X] 10. Oracle Solaris Cluster HA for DHCP
*[X] 11. Oracle Solaris Cluster HA for DNS
*[X] 12. Oracle Solaris Cluster HA for MySQL
*[X] 13. Oracle Solaris Cluster HA for Sun N1 Service Provisioning System
*[X] 14. Oracle Solaris Cluster HA for NFS
*[X] 15. Oracle Solaris Cluster HA for Oracle
*[X] 16. Oracle Solaris Cluster HA for Samba
*[X] 17. Oracle Solaris Cluster HA for Sun N1 Grid Engine
*[X] 18. Oracle Solaris Cluster HA for Solaris Containers
*[X] 19. Oracle Solaris Cluster Support for Oracle RAC
*[X] 20. Oracle Solaris Cluster HA for Apache
*[X] 21. Oracle Solaris Cluster HA for SAP liveCache
*[X] 22. Oracle Solaris Cluster HA for WebSphere Message Broker
*[X] 23. Oracle Solaris Cluster HA for WebSphere MQ
*[X] 24. Oracle Solaris Cluster HA for SAPDB
*[X] 25. Oracle Solaris Cluster HA for SAP Web Application Server
*[X] 26. Oracle Solaris Cluster HA for SAP
*[X] 27. Oracle Solaris Cluster HA for Kerberos
*[X] 28. Oracle Solaris Cluster HA for BEA WebLogic Server
*[X] 29. Oracle Solaris Cluster HA for PostgreSQL
*[X] 30. Oracle Solaris Cluster HA for Oracle 9iAS
*[X] 31. Oracle Solaris Cluster HA for Sybase ASE
*[X] 32. Oracle Solaris Cluster HA for Informix
*[X] 33. Oracle Solaris Cluster HA for TimesTen
*[X] 34. Oracle Solaris Cluster HA for Oracle External Proxy
*[X] 35. Oracle Solaris Cluster HA for Oracle Web Tier Agent
*[X] 36. Oracle Solaris Cluster HA for SAP NetWeaver

Press "Enter" to Continue or Enter a comma separated list of products to


deselect. Enter "-" with product number to deselect a product (for eg. -5
will deselect product number 5). To return to the component selection list,
press "r". [1] {"<" goes back, "!" exits}

7. I did not select the multilingual packages and support.

Install multilingual package(s) for all selected components [Yes] {"<" goes
back, "!" exits}: No

You have chosen not to install multilanguage support.


If you do not add this support now,
you will not be able to install it later.

Do you want to add multilanguage support now?

1. Yes
2. No

Enter your choice [1] {"<" goes back, "!" exits} 2

8. The installer will check the system now.

Checking System Status


Available disk space... : Checking .... OK
Memory installed... : Checking .... OK
Swap space installed... : Checking .... OK
Operating system patches... : Checking .... OK
Operating system resources... : Checking .... OK
System ready for installation
Enter 1 to continue [1] {"<" goes back, "!" exits} 1

9. Do not configure the cluster now. Just proceed with the Solaris Cluster installation.

Screen for selecting Type of Configuration

1. Configure Now - Selectively override defaults or express through

2. Configure Later - Manually configure following installation

Select Type of Configuration [1] {"<" goes back, "!" exits} 2


Ready to Install
----------------
The following components will be installed.


Product: Oracle Solaris Cluster


Uninstall Location: /var/sadm/prod/SUNWentsyssc33u2
Space Required: 140.48 MB
---------------------------------------------------
Java DB
Java DB Server
Java DB Client
Oracle Solaris Cluster 3.3u2
Oracle Solaris Cluster Core
Oracle Solaris Cluster Manager
Oracle Solaris Cluster Agents 3.3u2
Oracle Solaris Cluster HA for Java(TM) System Application Server
Oracle Solaris Cluster HA for Java(TM) System Message Queue
Oracle Solaris Cluster HA for Sybase ASE
Oracle Solaris Cluster HA for Java(TM) System Messaging Server
Oracle Solaris Cluster HA for Java(TM) System Calendar Server
Oracle Solaris Cluster HA for Java(TM) System Directory Server
Oracle Solaris Cluster HA for Java(TM) System Application Server EE (HADB)
Oracle Solaris Cluster HA for Instant Messaging
Oracle Solaris Cluster HA/Scalable for Java(TM) System Web Server
Oracle Solaris Cluster HA for Apache Tomcat
Oracle Solaris Cluster HA for DHCP
Oracle Solaris Cluster HA for DNS
Oracle Solaris Cluster HA for MySQL
Oracle Solaris Cluster HA for Sun N1 Service Provisioning System
Oracle Solaris Cluster HA for NFS
Oracle Solaris Cluster HA for Oracle
Oracle Solaris Cluster HA for Samba
Oracle Solaris Cluster HA for Sun N1 Grid Engine
Oracle Solaris Cluster HA for Solaris Containers
Oracle Solaris Cluster Support for Oracle RAC
Oracle Solaris Cluster HA for Apache
Oracle Solaris Cluster HA for SAP liveCache
Oracle Solaris Cluster HA for WebSphere Message Broker
Oracle Solaris Cluster HA for WebSphere MQ
Oracle Solaris Cluster HA for Oracle 9iAS
Oracle Solaris Cluster HA for SAPDB
Oracle Solaris Cluster HA for SAP Web Application Server
Oracle Solaris Cluster HA for SAP
Oracle Solaris Cluster HA for Kerberos
Oracle Solaris Cluster HA for BEA WebLogic Server
Oracle Solaris Cluster HA for PostgreSQL
Oracle Solaris Cluster HA for Informix
Oracle Solaris Cluster HA for TimesTen
Oracle Solaris Cluster HA for Oracle External Proxy
Oracle Solaris Cluster HA for Oracle Web Tier Agent
Oracle Solaris Cluster HA for SAP NetWeaver
1. Install
2. Start Over
3. Exit Installation

What would you like to do [1] {"<" goes back, "!" exits}? 1

10. Once the installation is complete, you can see the Solaris Cluster install summary.

Oracle Solaris Cluster


|-1%--------------25%-----------------50%-----------------75%--------------100%|

Installation Complete

Software installation has completed successfully. You can view the installation
summary and log by using the choices below. Summary and log files are available
in /var/sadm/install/logs/.

Your next step is to perform the postinstallation configuration and


verification tasks documented in the Postinstallation Configuration and Startup
Chapter of the Java(TM) Enterprise System Installation Guide. See: http:
//download.oracle.com/docs/cd/E19528-01/820-2827.

Enter 1 to view installation summary and Enter 2 to view installation logs


[1] {"!" exits} 1
Installation Summary Report
Install Summary
Oracle Solaris Cluster : Installed
Java DB : Installed, Configure After Install
Oracle Solaris Cluster 3.3u2 : Installed, Configure After Install

Oracle Solaris Cluster Agents 3.3u2 : Installed, Configure After Install
Configuration Data
The configuration log is saved in : /var/sadm/install/logs/JavaES_Install_log.
1904654032
Enter 1 to view installation summary and Enter 2 to view installation logs
[1] {"!" exits} !
In order to notify you of potential updates, we need to confirm an internet connection. Do you want to proceed [Y/N] : N
UASOL1:#

We have successfully installed Solaris Cluster 3.3u2 on Solaris 10 x86 systems.

Our next steps will be:

Configuring the two-node Solaris cluster
Making a high-availability zone using the cluster service
Testing the cluster

Hope this article is informative to you.

Share it ! Comment it !! Be Sociable !!!

The post How to install Solaris Cluster 3.3 on Solaris 10 ? appeared first on UnixArena.

How to configure a Solaris two-node cluster on Solaris 10?


June 24, 2014, 11:40 pm


Once you have installed Solaris Cluster on the Solaris 10 nodes, you can start configuring the cluster according to your requirements. If you are planning a two-node
cluster, you need two Solaris 10 hosts with three NICs and shared storage: two dedicated NICs per node for the cluster heartbeat. You also need to set up
passwordless root SSH authentication between the two Solaris nodes to configure the cluster. Here we will see how to configure a two-node Solaris cluster.
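
The passwordless root authentication mentioned above can be sketched as follows. Run it on UASOL1 and repeat from UASOL2 in the other direction. The key path is the standard OpenSSH default, and the commented-out copy step is illustrative only, since it needs the live peer node:

```shell
# Generate an RSA key pair for root (no passphrase) if none exists yet.
KEYFILE="${KEYFILE:-$HOME/.ssh/id_rsa}"
if [ ! -f "$KEYFILE" ]; then
    mkdir -p "$(dirname "$KEYFILE")"
    chmod 700 "$(dirname "$KEYFILE")"
    ssh-keygen -t rsa -N "" -f "$KEYFILE"
fi

# Push the public key to the peer node so "ssh UASOL2" stops prompting.
# Run this part manually on your nodes; it needs the peer reachable:
# cat "$KEYFILE.pub" | ssh root@UASOL2 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
```

Also make sure root logins are permitted in the peer's sshd configuration; otherwise key-based root access is still refused.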

Solaris 10 Hosts:

UASOL1 – 192.168.2.90
UASOL2 – 192.168.2.91

Update the /etc/hosts file on both nodes to resolve the host names. On UASOL1,

UASOL1:#cat /etc/hosts
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
192.168.2.90 UASOL1 loghost
192.168.2.91 UASOL2

On UASOL2,

UASOL2:#cat /etc/hosts
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
192.168.2.90 UASOL1
192.168.2.91 UASOL2 loghost

1. Log in to one of the Solaris 10 nodes where you need to configure Solaris Cluster.

2. Navigate to the /usr/cluster/bin directory and execute scinstall. Select 1 to create a new cluster.

login as: root


Using keyboard-interactive authentication.
Password:
Last login: Tue Jun 24 10:51:29 2014 from 192.168.2.3
Oracle Corporation SunOS 5.10 Generic Patch January 2005
UASOL1:#cd /usr/cluster/bin/
UASOL1:#./scinstall

*** Main Menu ***


Please select from one of the following (*) options:

* 1) Create a new cluster or add a cluster node


2) Configure a cluster to be JumpStarted from this install server
3) Manage a dual-partition upgrade
4) Upgrade this cluster node
* 5) Print release information for this cluster node

* ?) Help with menu options


* q) Quit

Option: 1

3. Again select option 1 to create a new cluster.

*** New Cluster and Cluster Node Menu ***

Please select from any one of the following options:

1) Create a new cluster


2) Create just the first node of a new cluster on this machine
3) Add this machine as a node in an existing cluster

?) Help with menu options


q) Return to the Main Menu

Option: 1

4. We have already set up passwordless SSH authentication for root between the two nodes, so we can continue.

*** Create a New Cluster ***

This option creates and configures a new cluster.

You must use the Oracle Solaris Cluster installation media to install
the Oracle Solaris Cluster framework software on each machine in the
new cluster before you select this option.

If the "remote configuration" option is unselected from the Oracle


Solaris Cluster installer when you install the Oracle Solaris Cluster
framework on any of the new nodes, then you must configure either the
remote shell (see rsh(1)) or the secure shell (see ssh(1)) before you
select this option. If rsh or ssh is used, you must enable root access
to all of the new member nodes from this node.

Press Control-D at any time to return to the Main Menu.

Do you want to continue (yes/no) [yes]?

5. It’s better to go with the Custom mode of cluster configuration.

>>> Typical or Custom Mode <<<

This tool supports two modes of operation, Typical mode and Custom
mode. For most clusters, you can use Typical mode. However, you might
need to select the Custom mode option if not all of the Typical mode
defaults can be applied to your cluster.

For more information about the differences between Typical and Custom
modes, select the Help option from the menu.

Please select from one of the following options:

1) Typical
2) Custom

?) Help
q) Return to the Main Menu

Option [1]: 2

6. Enter the cluster name.


>>> Cluster Name <<<

Each cluster has a name assigned to it. The name can be made up of any
characters other than whitespace. Each cluster name should be unique
within the namespace of your enterprise.

What is the name of the cluster you want to establish? UACLS1

7. Enter the hostnames of the Solaris 10 nodes that will participate in this cluster.

>>> Cluster Nodes <<<

This Oracle Solaris Cluster release supports a total of up to 16


nodes.

List the names of the other nodes planned for the initial cluster
configuration. List one node name per line. When finished, type
Control-D:

Node name (Control-D to finish): UASOL1


Node name (Control-D to finish): UASOL2
Node name (Control-D to finish): ^D

This is the complete list of nodes:


UASOL1
UASOL2

Is it correct (yes/no) [yes]?

Attempting to contact "UASOL2" ... done

Searching for a remote configuration method ... done

The Oracle Solaris Cluster framework is able to complete the


configuration process without remote shell access.

8. I haven’t used DES authentication.

>>> Authenticating Requests to Add Nodes <<<

Once the first node establishes itself as a single node cluster, other
nodes attempting to add themselves to the cluster configuration must
be found on the list of nodes you just provided. You can modify this
list by using claccess(1CL) or other tools once the cluster has been
established.

By default, nodes are not securely authenticated as they attempt to add themselves to the cluster configuration. This is generally
considered adequate, since nodes which are not physically connected to
the private cluster interconnect will never be able to actually join
the cluster. However, DES authentication is available. If DES
authentication is selected, you must configure all necessary
encryption keys before any node will be allowed to join the cluster
(see keyserv(1M), publickey(4)).

Do you need to use DES authentication (yes/no) [no]?

9. We have two dedicated physical NICs on both Solaris nodes for the private interconnect.

>>> Minimum Number of Private Networks <<<

Each cluster is typically configured with at least two private networks. Configuring a cluster with just one private interconnect
provides less availability and will require the cluster to spend more
time in automatic recovery if that private interconnect fails.

Should this cluster use at least two private networks (yes/no) [yes]?

10. In my setup, there is no switch in place to provide the system interconnect.

>>> Point-to-Point Cables <<<

The two nodes of a two-node cluster may use a directly-connected interconnect. That is, no cluster switches are configured. However, when there are greater than two nodes, this interactive form of scinstall assumes that there will be exactly one switch for each private network.

Does this two-node cluster use switches (yes/no) [yes]? no

11.Select the first network adapter for cluster heartbeat.

>>> Cluster Transport Adapters and Cables <<<

Transport adapters are the adapters that attach to the private cluster
interconnect.

Select the first cluster transport adapter:

1) e1000g1
2) e1000g2
3) Other

Option: 1

Adapter "e1000g1" is an Ethernet adapter.

Searching for any unexpected network traffic on "e1000g1" ... done


Unexpected network traffic was seen on "e1000g1".
"e1000g1" may be cabled to a public network.

Do you want to use "e1000g1" anyway (yes/no) [no]? yes

The "dlpi" transport type will be set for this cluster.

Name of adapter (physical or virtual) on "UASOL2" to which "e1000g1" is connected? e1000g1

12.Select the second cluster heartbeat network adapter name.

Select the second cluster transport adapter:

1) e1000g1
2) e1000g2
3) Other

Option: 2

Adapter "e1000g2" is an Ethernet adapter.

Searching for any unexpected network traffic on "e1000g2" ... done


Unexpected network traffic was seen on "e1000g2".
"e1000g2" may be cabled to a public network.

Do you want to use "e1000g2" anyway (yes/no) [no]? yes

The "dlpi" transport type will be set for this cluster.

Name of adapter (physical or virtual) on "UASOL2" to which "e1000g2" is connected? e1000g2

13. Let the cluster choose the network and subnet for the Solaris cluster transport.

>>> Network Address for the Cluster Transport <<<


The cluster transport uses a default network address of 172.16.0.0. If this IP address is already in use elsewhere within your enterprise, specify another address.

Is it okay to accept the default network address (yes/no) [yes]?
Is it okay to accept the default netmask (yes/no) [yes]?
Plumbing network address 172.16.0.0 on adapter e1000g1 >> NOT DUPLICATE ... done
Plumbing network address 172.16.0.0 on adapter e1000g2 >> NOT DUPLICATE ... done

14.Leave Fencing turned on.

>>> Set Global Fencing <<<

Fencing is a mechanism that a cluster uses to protect data integrity when the cluster interconnect between nodes is lost. By default,
fencing is turned on for global fencing, and each disk uses the global
fencing setting. This screen allows you to turn off the global
fencing.

Most of the time, leave fencing turned on. However, turn off fencing when at least one of the following conditions is true: 1) Your shared storage devices, such as Serial Advanced Technology Attachment (SATA) disks, do not support SCSI; 2) You want to allow systems outside your cluster to access storage devices attached to your cluster; 3) Oracle Corporation has not qualified the SCSI persistent group reservation (PGR) support for your shared storage devices.

If you choose to turn off global fencing now, after your cluster
starts you can still use the cluster(1CL) command to turn on global
fencing.

Do you want to turn off global fencing (yes/no) [no]?

15. Resource security configuration can be tuned later using the clsetup command.

>>> Resource Security Configuration <<<

The execution of a cluster resource is controlled by the setting of a global cluster property called resource_security. When the cluster is
booted, this property is set to SECURE.

Resource methods such as Start and Validate always run as root. If resource_security is set to SECURE and the resource method executable
file has non-root ownership or group or world write permissions,
execution of the resource method fails at run time and an error is
returned.

Resource types that declare the Application_user resource property perform additional checks on the executable file ownership and
permissions of application programs. If the resource_security property
is set to SECURE and the application program executable is not owned
by root or by the configured Application_user of that resource, or the
executable has group or world write permissions, execution of the
application program fails at run time and an error is returned.

Resource types that declare the Application_user property execute application programs according to the setting of the resource_security
cluster property. If resource_security is set to SECURE, the
application user will be the value of the Application_user resource
property; however, if there is no Application_user property, or it is
unset or empty, the application user will be the owner of the
application program executable file. The resource will attempt to
execute the application program as the application user; however a
non-root process cannot execute as root (regardless of property
settings and file ownership) and will execute programs as the
effective non-root user ID.

You can use the "clsetup" command to change the value of the
resource_security property after the cluster is running.

Press Enter to continue:

16. Disable automatic quorum device selection.

>>> Quorum Configuration <<<

Every two-node cluster requires at least one quorum device. By default, scinstall selects and configures a shared disk quorum device
for you.

This screen allows you to disable the automatic selection and configuration of a quorum device.

You have chosen to turn on the global fencing. If your shared storage
devices do not support SCSI, such as Serial Advanced Technology
Attachment (SATA) disks, or if your shared disks do not support
SCSI-2, you must disable this feature.

If you disable automatic quorum device selection now, or if you intend to use a quorum device that is not a shared disk, you must instead use
clsetup(1M) to manually configure quorum once both nodes have joined
the cluster for the first time.

Do you want to disable automatic quorum device selection (yes/no) [no]? yes

17. Oracle Solaris Cluster 3.3 u2 automatically creates a global-devices filesystem on both systems.

>>> Global Devices File System <<<

Each node in the cluster must have a local file system mounted on
/global/.devices/node@<nodeID> before it can successfully participate
as a cluster member. Since the "nodeID" is not assigned until
scinstall is run, scinstall will set this up for you.

You must supply the name of either an already-mounted file system or a raw disk partition which scinstall can use to create the global
devices file system. This file system or partition should be at least
512 MB in size.

Alternatively, you can use a loopback file (lofi), with a new file
system, and mount it on /global/.devices/node@<nodeID>.

If an already-mounted file system is used, the file system must be empty. If a raw disk partition is used, a new file system will be
created for you.

If the lofi method is used, scinstall creates a new 100 MB file system
from a lofi device by using the file /.globaldevices. The lofi method
is typically preferred, since it does not require the allocation of a
dedicated disk slice.

The default is to use lofi.

For node "UASOL1",
Is it okay to use this default (yes/no) [yes]?

For node "UASOL2",
Is it okay to use this default (yes/no) [yes]?

18. Proceed with the cluster creation. Do not interrupt it because of cluster check errors.

Is it okay to create the new cluster (yes/no) [yes]?

During the cluster creation process, cluster check is run on each of the new cluster nodes. If cluster check detects problems, you can
either interrupt the process or check the log files after the cluster
has been established.

Interrupt cluster creation for cluster check errors (yes/no) [no]?

19. Once the cluster configuration is completed, scinstall reboots the other node and then reboots itself.

Cluster Creation

Log file - /var/cluster/logs/install/scinstall.log.1215

Started cluster check on "UASOL1".
Started cluster check on "UASOL2".

cluster check failed for "UASOL1".
cluster check failed for "UASOL2".

The cluster check command failed on both of the nodes.
Refer to the log file for details.
The name of the log file is /var/cluster/logs/install/scinstall.log.1215.

Configuring "UASOL2" ... done
Rebooting "UASOL2" ... done

Configuring "UASOL1" ... done
Rebooting "UASOL1" ...

Log file - /var/cluster/logs/install/scinstall.log.1215

Rebooting ...

20. Once the nodes are rebooted, you can see that both nodes have booted in cluster mode; check the status using the command below.

UASOL1:#clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name Status
--------- ------
UASOL2 Online
UASOL1 Online

UASOL1:#

21. You can see the loopback-backed global-devices filesystems on both systems.

UASOL1:#df -h |grep -i node


/dev/lofi/127 781M 5.4M 729M 1% /global/.devices/node@1
/dev/lofi/126 781M 5.4M 729M 1% /global/.devices/node@2
UASOL1:#lofiadm
Block Device File
/dev/lofi/126 /.globaldevices
UASOL1:#

22. You can also see that Solaris Cluster has plumbed new private-network IPs on both hosts.

UASOL1:#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
inet 192.168.2.90 netmask ffffff00 broadcast 192.168.2.255
groupname sc_ipmp0
ether 0:c:29:4f:bc:b8
e1000g1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
inet 172.16.0.66 netmask ffffffc0 broadcast 172.16.0.127
ether 0:c:29:4f:bc:c2
e1000g2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
inet 172.16.0.130 netmask ffffffc0 broadcast 172.16.0.191
ether 0:c:29:4f:bc:cc
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 172.16.2.2 netmask ffffff00 broadcast 172.16.2.255
ether 0:0:0:0:0:2
UASOL1:#

23. As of now, we haven't configured the quorum devices, but you can see the voting status using the command below.

UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
1 1 1
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
--------- ------- -------- ------
UASOL2 1 1 Online
UASOL1 0 0 Online
UASOL1:#

We have successfully configured a two-node Oracle Solaris cluster on Solaris 10 update 11 x86 systems.

What’s Next ?

Configure the quorum devices to avoid split brain
Configure the resource group
Configure the HA zone and test the failover

If you want to configure Solaris cluster on VMware workstation, refer to this article.

Share it ! Comment it !! Be Sociable !!!

The post How to configure Solaris two node cluster on Solaris 10 ? appeared first on UnixArena.

How to configure Quorum devices on Solaris cluster ?


June 30, 2014, 12:51 am

Once you have configured the Solaris cluster, you have to add a quorum device to provide an additional vote. Without a quorum device, the cluster stays in installed mode; you can verify the status using the "cluster show -t global | grep installmode" command. Each node in a configured cluster has one (1) quorum vote, and a two-node cluster requires a minimum of two votes to stay up. If one node goes down, the cluster cannot get two votes, and it will panic the surviving node as well to avoid data corruption on the shared storage. To avoid this situation, we can configure a small SAN disk as a quorum device, which provides one additional vote. That way, if one node fails, the surviving node can still get two votes in a two-node cluster.
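The vote arithmetic behind this is a simple majority: the cluster needs more than half of all possible votes to stay up. A quick sketch in plain shell arithmetic (not a cluster command; `needed_votes` is a made-up helper for illustration):

```shell
#!/bin/sh
# Simple-majority rule: needed votes = (total possible votes / 2) + 1,
# using integer division.
needed_votes() {
    total=$1
    echo $(( total / 2 + 1 ))
}

needed_votes 2   # two nodes, no quorum device -> 2 (losing either node loses quorum)
needed_votes 3   # two nodes + one quorum disk -> 2 (surviving node + disk stay up)
```

With only two node votes, losing a node drops the cluster below the two votes it needs; adding the quorum disk's third vote is what lets the survivor keep quorum.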

Once you have configured the two-node Solaris cluster, you can start configuring the quorum device.

1.Check the cluster node status.

UASOL1:#clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
UASOL2 Online
UASOL1 Online
UASOL1:#

2. You can see that currently the cluster is in install mode.

# cluster show -t global | grep installmode
installmode: enabled

3. Check the current cluster quorum status.

UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
1 1 1
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
--------- ------- -------- ------
UASOL2 1 1 Online
UASOL1 0 0 Online
UASOL1:#

4. Make sure a small LUN is assigned to both cluster nodes from the SAN.

UASOL1:#echo |format
Searching for disks...done

AVAILABLE DISK SELECTIONS:


0. c1t0d0 VMware,-VMware Virtual -1.0 cyl 1824 alt 2 hd 255 sec 63
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0 VMware,-VMware Virtual -1.0 cyl 508 alt 2 hd 64 sec 32
/pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number): Specify disk (enter its number):
UASOL1:#

5. Label the disk and set a volume name.

UASOL1:#format c1t1d0
selecting c1t1d0: quorum
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return

quit
format> fdisk
The default partition for the disk is:

a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> volname quorum
format> quit
UASOL1:#

6.You can see the same LUN on UASOL2 node as well.

UASOL2:#echo |format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 VMware,-VMware Virtual -1.0 cyl 1824 alt 2 hd 255 sec 63
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0 VMware,-VMware Virtual -1.0 cyl 508 alt 2 hd 64 sec 32 quorum
/pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number): Specify disk (enter its number):
UASOL2:#

7. Populate the disks in Solaris cluster.

UASOL2:#cldev populate
Configuring DID devices
did instance 4 created.
did subpath UASOL2:/dev/rdsk/c1t1d0 created for instance 4.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
UASOL2:#

UASOL1:#cldev populate
Configuring DID devices
did instance 4 created.
did subpath UASOL1:/dev/rdsk/c1t1d0 created for instance 4.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
UASOL1:#

8.Check the devices status.

UASOL1:#cldevice list -v
DID Device Full Device Path
---------- ----------------
d1 UASOL2:/dev/rdsk/c1t0d0
d1 UASOL1:/dev/rdsk/c1t0d0
d4 UASOL2:/dev/rdsk/c1t1d0
d4 UASOL1:/dev/rdsk/c1t1d0
UASOL1:#cldev show d4
=== DID Device Instances ===
DID Device Name: /dev/did/rdsk/d4
Full Device Path: UASOL1:/dev/rdsk/c1t1d0
Full Device Path: UASOL2:/dev/rdsk/c1t1d0
Replication: none
default_fencing: global
UASOL1:#

9. Add d4 as a quorum device in the cluster.

UASOL1:#clquorum add d4
UASOL1:#

10. Check the quorum status.

UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
2 3 3
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
--------- ------- -------- ------
UASOL2 1 1 Online
UASOL1 1 1 Online
--- Quorum Votes by Device (current status) ---
Device Name Present Possible Status
----------- ------- -------- ------
d4 1 1 Online
UASOL1:#

We have successfully configured the quorum device on the two-node Solaris Cluster 3.3 u2 setup.

How can we test whether the quorum device is working?

Just reboot any one of the nodes and watch the voting status.

UASOL2:#reboot
updating /platform/i86pc/boot_archive
Connection to UASOL2 closed by remote host.
Connection to UASOL2 closed.
UASOL1:#
UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
2 2 3
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
--------- ------- -------- ------
UASOL2 0 1 Offline
UASOL1 1 1 Online
--- Quorum Votes by Device (current status) ---
Device Name Present Possible Status
----------- ------- -------- ------
d4 1 1 Online
UASOL1:#

We can see that UASOL1 was not panicked by the cluster, so the quorum device worked well.

If you don't have real SAN storage for the shared LUN, you can use Openfiler.

What's next? We will configure a resource group for the failover local zone and perform the tests.

Share it ! Comment it !! Be Sociable !!

The post How to configure Quorum devices on Solaris cluster ? appeared first on UnixArena.

How to create Resource Group on Solaris cluster ?


June 30, 2014, 1:50 pm

This article will help you create a resource group on Solaris cluster and add a couple of resources to it. A resource group is similar to a service group in Veritas cluster: it bundles the resources into one logical unit. Once you have configured the two-node Solaris cluster and added the quorum device, you can create a resource group. After creating the resource group, we will add a zpool storage resource and perform a failover test.
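The whole procedure below condenses into a handful of commands (the group, pool, and node names are the ones used in this article):

```sh
# Create an empty failover resource group
clrg create UA-HA-ZRG

# Register the storage resource type and add a ZFS pool resource to the group
clresourcetype register SUNW.HAStoragePlus
clresource create -g UA-HA-ZRG -t SUNW.HAStoragePlus -p Zpools=UAZPOOL CLUAZPOOL

# Bring the group online on one node, then switch it to the other to test
clrg online -M -n UASOL1 UA-HA-ZRG
clrg switch -n UASOL2 UA-HA-ZRG
```

The step-by-step walk-through with the expected output follows.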

1. Log in to one of the cluster nodes as root and check the cluster node status.

UASOL1:#clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
UASOL2 Online
UASOL1 Online
UASOL1:#

2.Check the heartbeat link status of Solaris cluster.

UASOL1:#clinterconnect status
=== Cluster Transport Paths ===
Endpoint1 Endpoint2 Status
--------- --------- ------
UASOL2:e1000g2 UASOL1:e1000g2 Path online
UASOL2:e1000g1 UASOL1:e1000g1 Path online

UASOL1:#

3.Check the quorum status.

UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
2 3 3

--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
--------- ------- -------- ------
UASOL2 1 1 Online
UASOL1 1 1 Online

--- Quorum Votes by Device (current status) ---
Device Name Present Possible Status
----------- ------- -------- ------
d5 1 1 Online
UASOL1:#

4. In the above command output, everything seems fine, so let's create a resource group.

UASOL1:#clrg create UA-HA-ZRG


UASOL1:#

5.Check the resource group status.

UASOL1:#clrg status

=== Cluster Resource Groups ===

Group Name Node Name Suspended Status
---------- --------- --------- ------
UA-HA-ZRG UASOL2 No Unmanaged
UASOL1 No Unmanaged
UASOL1:#

We have successfully created the resource group on Solaris cluster.

Let's create a ZFS storage pool and add it to the Solaris cluster.

1. Check the cluster device instances. Here d5 and d6 are from SAN storage; d5 is already used for the quorum setup.

UASOL1:#cldevice list -v
DID Device Full Device Path
---------- ----------------
d1 UASOL2:/dev/rdsk/c1t0d0
d1 UASOL1:/dev/rdsk/c1t0d0
d2 UASOL1:/dev/rdsk/c1t2d0
d3 UASOL2:/dev/rdsk/c1t2d0
d4 UASOL2:/dev/rdsk/c1t1d0
d4 UASOL1:/dev/rdsk/c1t1d0
d5 UASOL2:/dev/rdsk/c2t16d0
d5 UASOL1:/dev/rdsk/c2t14d0
d6 UASOL2:/dev/rdsk/c2t15d0
d6 UASOL1:/dev/rdsk/c2t13d0
UASOL1:#
UASOL1:#cldevice status
=== Cluster DID Devices ===
Device Instance Node Status
--------------- ---- ------
/dev/did/rdsk/d1 UASOL1 Ok
UASOL2 Ok


/dev/did/rdsk/d2 UASOL1 Ok

/dev/did/rdsk/d3 UASOL2 Ok

/dev/did/rdsk/d4 UASOL1 Ok
UASOL2 Ok

/dev/did/rdsk/d5 UASOL1 Ok
UASOL2 Ok

/dev/did/rdsk/d6 UASOL1 Ok
UASOL2 Ok
UASOL1:#

2.Create a new ZFS storage pool using d6.

UASOL1:#zpool create -f UAZPOOL /dev/did/dsk/d6s2


UASOL1:#zpool status UAZPOOL
pool: UAZPOOL
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
UAZPOOL ONLINE 0 0 0
/dev/did/dsk/d6s2 ONLINE 0 0 0

errors: No known data errors


UASOL1:#df -h /UAZPOOL
Filesystem size used avail capacity Mounted on
UAZPOOL 3.0G 31K 3.0G 1% /UAZPOOL
UASOL1:#

3. Register the HAStoragePlus resource type in Solaris cluster.

UASOL1:#clresourcetype register SUNW.HAStoragePlus


UASOL1:#

4. Create a new cluster resource for the zpool which we created in the previous step.

UASOL1:#clresource create -g UA-HA-ZRG -t SUNW.HAStoragePlus -p Zpools=UAZPOOL CLUAZPOOL


UASOL1:#

-g : Resource group – UA-HA-ZRG
-t : Resource type – SUNW.HAStoragePlus
-p Zpools : zpool name – UAZPOOL
CLUAZPOOL – Cluster resource name.
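To double-check the new resource, it can be listed and inspected with the standard commands (output omitted here; it will vary with your setup):

```sh
# List the resources that belong to the group, then dump the full
# property set of the zpool resource we just created
clresource list -g UA-HA-ZRG
clresource show -v CLUAZPOOL
```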

5.Check the resource status.

UASOL1:#clresource status
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
CLUAZPOOL UASOL2 Offline Offline
UASOL1 Offline Offline
UASOL1:#

6.Bring the resource group online and check the resource status.

UASOL1:#clrg online -M -n UASOL1 UA-HA-ZRG


UASOL1:#clresource status
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
CLUAZPOOL UASOL2 Offline Offline
UASOL1 Online Online
UASOL1:#

7.List the zpool where the resource group is online.

UASOL1:#zpool list
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
UAZPOOL 3.05G 132K 3.05G 0% ONLINE /

rpool 13.9G 9.32G 4.56G 67% ONLINE -
UASOL1:#

8. To test the resource group, switch it to the other node.

UASOL1:#clrg switch -n UASOL2 +


UASOL1:#

9. Now you can see that the cluster zpool has moved to the UASOL2 node.

UASOL1:#zpool list
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
rpool 13.9G 9.32G 4.56G 67% ONLINE -
UASOL1:#clrg status
=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
UA-HA-ZRG UASOL2 No Online
UASOL1 No Offline
UASOL1:#ssh UASOL2 zpool list
Password:
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
UAZPOOL 3.05G 132K 3.05G 0% ONLINE /
rpool 13.9G 9.15G 4.73G 65% ONLINE -
UASOL1:#

So automatic failover should work for the resource group which we have just created. In the next article, we will see how to add the local zone to the cluster.

Share it ! Comment it !! Be Sociable !!!

The post How to create Resource Group on Solaris cluster ? appeared first on UnixArena.

How to configure High Availability zone on Solaris cluster ?


July 1, 2014, 8:57 am

In this article, we will see how to add a local zone as a resource in Solaris cluster to make the zone highly available. In the past we have seen a similar setup in Veritas cluster. By configuring the zone as a resource, if one node fails, the zone automatically flies to the other node with minimal downtime (a "flying zone" on Solaris). Once you have configured the items below, we can proceed to bring the local zone under Solaris cluster:

Two-node Solaris cluster
Quorum devices
Resource group configuration

Unlike Veritas cluster, the local zone IP will be managed from the global zone as a cluster resource. So let's create an IP resource before proceeding with the local zone creation.

1. Log in to the Solaris cluster nodes and add the local zone IP and hostname to the /etc/hosts file.

UASOL1:#cat /etc/hosts |grep UAHAZ1


192.168.2.94 UAHAZ1
UASOL1:#ssh UASOl2 grep UAHAZ1 /etc/hosts
Password:
192.168.2.94 UAHAZ1
UASOL1:#

Here my local zone IP is 192.168.2.94 and the hostname is UAHAZ1.

2. Add the logical hostname as a resource in Solaris cluster.

UASOL1:#clreslogicalhostname create -g UA-HA-ZRG -h UAHAZ1 CLUAHAZ1


UASOL1:#

-g : Resource group name – UA-HA-ZRG
-h : Logical hostname – UAHAZ1
CLUAHAZ1 : Local zone IP resource name

3. Check the Solaris cluster resource status.

UASOL1:#clresource status


=== Cluster Resources ===

Resource Name Node Name State Status Message
------------- --------- ----- --------------
CLUAHAZ1 UASOL2 Online Online - LogicalHostname online.
UASOL1 Offline Offline

CLUAZPOOL UASOL2 Online Online
UASOL1 Offline Offline

UASOL1:#

4. Test the resource by pinging the local zone IP.

UASOL1:#ping UAHAZ1
UAHAZ1 is alive
UASOL1:#

5. You can see that the local zone IP has been plumbed by Solaris cluster.

UASOL2:#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
inet 192.168.2.91 netmask ffffff00 broadcast 192.168.2.255
groupname sc_ipmp0
ether 0:c:29:e:f8:ce
e1000g0:1: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
inet 192.168.2.94 netmask ffffff00 broadcast 192.168.2.255
e1000g1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
inet 172.16.0.65 netmask ffffffc0 broadcast 172.16.0.127
ether 0:c:29:e:f8:d8
e1000g2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
inet 172.16.0.129 netmask ffffffc0 broadcast 172.16.0.191
ether 0:c:29:e:f8:e2
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 172.16.2.1 netmask ffffff00 broadcast 172.16.2.255
ether 0:0:0:0:0:1
UASOL2:#

6. Fail over the resource group to UASOL1 and check the status.

UASOL2:#clrg switch -n UASOL1 +


UASOL2:#logout
Connection to UASOL2 closed.
UASOL1:#
UASOL1:#clrg status
=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
UA-HA-ZRG UASOL2 No Offline
UASOL1 No Online

UASOL1:#clresource status
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
CLUAHAZ1 UASOL2 Offline Offline - LogicalHostname offline.
UASOL1 Online Online - LogicalHostname online.

CLUAZPOOL UASOL2 Offline Offline


UASOL1 Online Online
UASOL1:#

We have successfully created the logical hostname cluster resource and tested it on both nodes.

7. Create a local zone on any one of the cluster nodes and copy the zone's entry in /etc/zones/index and its /etc/zones/<zonename>.xml file to the other node to make the zone configuration available on both cluster nodes. Create the local zone without adding the network part (i.e., no "add net").

UASOL1:#zoneadm list -cv


ID NAME STATUS PATH BRAND IP
0 global running / native shared
- UAHAZ1 installed /UAZPOOL/UAHAZ1 native shared
UASOL1:#ssh UASOL2 zoneadm list -cv
Password:

ID NAME STATUS PATH BRAND IP
0 global running / native shared
- UAHAZ1 configured /UAZPOOL/UAHAZ1 native shared
UASOL1:#

You can refer to this article for creating the local zone, but do not configure the network.
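The copy step from point 7 can be sketched as follows (a hedged sketch; it assumes the zone is named UAHAZ1 and that the zone's line from /etc/zones/index on UASOL1 is appended to the same file on UASOL2):

```sh
# On UASOL1: copy the zone's XML configuration to the second node
scp /etc/zones/UAHAZ1.xml UASOL2:/etc/zones/

# On UASOL2: after appending the UAHAZ1 line to /etc/zones/index,
# the zone should show up in the "configured" state
grep '^UAHAZ1:' /etc/zones/index
zoneadm list -cv
```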

8. Halt the local zone on UASOL1 and fail over the resource group to UASOL2 to test the zone there.

UASOL1:#zoneadm list -cv


ID NAME STATUS PATH BRAND IP
0 global running / native shared
- UAHAZ1 running /UAZPOOL/UAHAZ1 native shared
UASOL1:#zoneadm -z UAHAZ1 halt
UASOL1:#
UASOL1:#clrg switch -n UASOL2 +
UASOL1:#ssh UASOL2
Password:
Last login: Tue Jul 1 00:27:14 2014 from uasol1
Oracle Corporation SunOS 5.10 Generic Patch January 2005
UASOL2:#clrg status
=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
UA-HA-ZRG UASOL2 No Online
UASOL1 No Offline
UASOL2:#

9. Attach the local zone and boot it.

UASOL2:#zoneadm list -cv


ID NAME STATUS PATH BRAND IP
0 global running / native shared
- UAHAZ1 configured /UAZPOOL/UAHAZ1 native shared
UASOL2:#zoneadm -z UAHAZ1 attach -F
UASOL2:#zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- UAHAZ1 installed /UAZPOOL/UAHAZ1 native shared
UASOL2:#zoneadm -z UAHAZ1 boot
UASOL2:#

10. Log in to the local zone and perform a health check. If everything seems fine, then halt the local zone.

UASOL2:#zlogin UAHAZ1
[Connected to zone 'UAHAZ1' pts/4]
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2# uptime
12:37am up 1 user, load average: 0.50, 0.13, 0.07
bash-3.2# exit
# ^D
[Connection to zone 'UAHAZ1' pts/4 closed]
UASOL2:#zoneadm -z UAHAZ1 halt
UASOL2:#

Click Page 2 to see how to create the resource for the local zone and add it to the resource group.

The post How to configure High Availability zone on Solaris cluster ? appeared first on UnixArena.

How to setup oracle Solaris cluster on VMware workstation ?


July 2, 2014, 10:30 am

Getting the opportunity to work on a cluster environment is difficult in big companies due to security constraints, and even if you get the opportunity, you can't experiment much, since most cluster environments are critical to the client. To learn any operating system cluster, you have to build it on your own and configure the resource groups and resources yourself. You are lucky if your organization provides a LAB environment with the necessary hardware for this kind of setup; due to hardware cost, many companies do not. So how do you become a master of clustering? Is it possible to set up a cluster environment on a single desktop or laptop? Yes. Using VMware workstation, you can set up a cluster. In the past we have seen this for Veritas cluster. Here we will see how to set up a two-node Solaris cluster on Solaris 10 using VMware workstation.


Desktop/Laptop Configuration:

Operating System: Windows 7 64-bit or Linux 64-bit

Software: VMware Workstation 8 or a higher version
Physical Memory: 4 GB (minimum) or 8 GB (recommended)
Processor: Any Intel processor with VT technology enabled.

1. On your desktop, install the VMware Workstation software and create two virtual machines with the configuration below.

Virtual machine configuration

I have allocated 4.3 GB to each VM, but 1 GB per virtual machine is enough. Each virtual machine must have at least three network adapters: one NIC for the public network and two NICs for heartbeat.

2. Install Solaris 10 Update 11 on both virtual machines.

3. Install VMware Tools on both virtual machines.

4. Enable a Windows share on both virtual machines to copy the Solaris cluster software from your laptop, and copy Solaris Cluster 3.3u2 to /var/tmp on both nodes. Alternatively, use WinSCP to copy it.

5. Configure passwordless authentication between the two Solaris virtual machines.

6. Install the Solaris cluster software on both virtual machines.

7. Configure the Solaris cluster between these two virtual machines.

8. To proceed further with Solaris cluster, you require shared storage, so create a new virtual machine and install Openfiler on it.

9. Provision two LUNs to the iSCSI target in the Openfiler web interface (a 512 MB LUN for quorum and a 3 GB LUN for the shared zpool).

10. Add the Openfiler iSCSI targets on both Solaris nodes.

11. Add the quorum device to the cluster.

12. Create the Solaris cluster resource group and add the ZFS storage pool as a resource.

13. Finally, create the local zone and add it to the Solaris cluster as a failover (high-availability) local zone.
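Step 11's quorum device is what lets a two-node cluster survive a tie: in Sun Cluster, each node carries one vote, a quorum device carries N-1 votes (N being the number of nodes it connects to), and the cluster stays up only while a majority of the total configured votes is present. The arithmetic for this two-node setup can be sketched as:

```shell
# Quorum vote math for a 2-node cluster with one quorum device.
nodes=2
qd_votes=$((nodes - 1))          # a quorum device carries N-1 votes
total=$((nodes + qd_votes))      # 2 node votes + 1 device vote = 3
needed=$((total / 2 + 1))        # majority needed to keep the cluster up
echo "total=$total needed=$needed"    # → total=3 needed=2
```

Without the quorum device, a single node holds only 1 of 2 votes and cannot form a majority alone, which is why the 512 MB quorum LUN matters for this setup.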

Solaris Cluster in a BOX

By performing the above steps, you can set up a two-node Solaris cluster on a desktop or laptop using VMware Workstation.

Good Luck.

Share it ! Comment it !! Be Sociable !!!

The post How to setup oracle Solaris cluster on VMware workstation ? appeared first on UnixArena.


Sun Cluster – Zone Cluster on Oracle Solaris – Overview


April 10, 2016, 12:20 pm

This article explains zone clusters. A zone cluster is created on Oracle Solaris hosts using Sun Cluster aka Oracle Solaris Cluster. In most deployments we see failover zones (HA zones) using Sun Cluster or Veritas Cluster (VCS) on Solaris. Zone clusters are comparatively rare in the industry, but some organizations use them very effectively. You must establish a traditional cluster between the physical nodes in order to configure a zone cluster. Since cluster applications always run in a zone, the cluster node is always a zone.

A typical 4-node Sun cluster looks like the diagram below (prior to configuring a zone cluster).

4 Node Cluster


After configuring zone clusters on the global cluster:

zone cluster on global cluster

The above diagram shows that two zone clusters have been configured on the global cluster.

Global Cluster – 4 Node Cluster (Node 1, Node 2 , Node 3, Node 4 )


Zone Cluster A – 4 Node Cluster (Zone A1 , A2 , A3 , A4)
Zone Cluster B – 2 Node Cluster (Zone B1 , B2)

Zone Cluster Use Cases:


This section demonstrates the utility of zone clusters by examining a variety of use cases, including the following:

Multiple organization consolidation

Functional consolidation (See the below example)

Here you can see that the test and development systems are in different zone clusters but in the same global cluster.

Functional Consolidation – Sun Cluster

Multiple-tier consolidation. (See the below example)

In this cluster model, all three tiers are in the same global cluster but in different zone clusters.

Multiple-tier consolidation – Sun cluster

Cost containment
Administrative workload reduction

Good to know:
Distribution of nodes: You cannot host multiple zones that are part of the same zone cluster on the same host. Zones must be distributed across the physical nodes.

Node creation: You must create at least one zone cluster node at the time that you create the zone cluster. The name of the zone-cluster node must be unique within the
zone cluster. The infrastructure automatically creates an underlying non-global zone on each host that supports the zone cluster. Each non-global zone is given the same
zone name, which is derived from, and identical to, the name that you assign to the zone cluster when you create the cluster. For example, if you create a zone cluster that
is named “uainfrazone”, the corresponding non-global zone name on each host that supports the zone cluster is also “uainfrazone”.

Cluster name: Each zone-cluster name must be unique throughout the cluster of machines that host the global cluster. The zone-cluster name cannot also be used by a
non-global zone elsewhere in the cluster of machines, nor can the zone-cluster name be the same as that of a global-cluster node. You cannot use “all” or “global” as a
zone-cluster name, because these are reserved names.
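To illustrate the naming rule, a tiny pre-flight check (a hypothetical helper, not part of Sun Cluster) can reject the reserved names before clzonecluster is ever invoked:

```shell
# Hypothetical pre-check: "all" and "global" are reserved zone-cluster names.
valid_zc_name() {
  case "$1" in
    all|global) return 1 ;;   # reserved, per the naming rules above
    *)          return 0 ;;
  esac
}
valid_zc_name uainfrazone && echo "uainfrazone: ok"
valid_zc_name global      || echo "global: reserved"
```

Uniqueness across the cluster (no clash with other non-global zones or global-cluster node names) still has to be checked against the live configuration.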

Public-network IP addresses: You can optionally assign a specific public-network IP address to each zone-cluster node.

Private hostnames: During creation of the zone cluster, a private hostname is automatically created for each node of the zone cluster, in the same way that hostnames
are created in global clusters.

IP type: A zone cluster is created with the shared IP type. The exclusive IP type is not supported for zone clusters.

Hope this article is informative to you. In the next article, we will see how to configure a zone cluster on an existing two-node Sun cluster (global cluster).

The post Sun Cluster – Zone Cluster on Oracle Solaris – Overview appeared first on UnixArena.


Sun Cluster – How to Configure Zone Cluster on Solaris ?


April 10, 2016, 1:42 pm

This article will walk you through zone cluster deployment on Oracle Solaris. The zone cluster consists of a set of zones, where each zone represents a virtual node.
Each zone of a zone cluster is configured on a separate machine. As such, the upper bound on the number of virtual nodes in a zone cluster is limited to the number of
machines in the global cluster. The zone cluster design introduces a new brand of zone, called the cluster brand. The cluster brand is based on the original native brand
type, and adds enhancements for clustering. The BrandZ framework provides numerous hooks where other software can take action appropriate for the brand type of zone.
For example, there is a hook for software to be called during the zone boot, and zone clusters take advantage of this hook to inform the cluster software about the boot of
the virtual node. Because zone clusters use the BrandZ framework, at a minimum Oracle Solaris 10 5/08 is required.

The system maintains membership information for zone clusters. Each machine hosts a component, called the Zone Cluster Membership Monitor (ZCMM), that monitors
the status of all cluster brand zones on that machine. The ZCMM knows which zones belong to which zone clusters. Zone clusters are considerably simpler than global
clusters. For example, there are no quorum devices in a zone cluster, as a quorum device is not needed.

clzonecluster is a utility to create, modify, delete, and manage zone clusters in a Sun Cluster environment.

Zone uses the global zone's physical resources

Note:
Sun Cluster is the product; a zone cluster is one of the cluster types it provides.

Environment:
Operating System : Oracle Solaris 10 u9
Cluster : Sun Cluster 3.3 (aka Oracle Solaris cluster 3.3)

Prerequisites :
Two Oracle Solaris 10 u9 nodes or above
Sun Cluster 3.3 package

Step 1: Create a global cluster:


The following articles will help you install and configure a two-node Sun cluster on Oracle Solaris 10.

Install Oracle Solaris cluster 3.3 (Aka Sun Cluster) on Solaris 10 nodes.
Configure two node sun cluster 3.3 on Solaris 10

Step 2: Create a zone cluster inside the global cluster:


1. Log in to one of the cluster nodes (global zone).

2. Ensure that the global cluster nodes are in cluster mode.

UASOL2:#clnode status
=== Cluster Nodes ===

--- Node Status ---


Node Name Status
--------- ------
UASOL2 Online
UASOL1 Online
UASOL2:#

3. Keep the zone path ready for local zone installation on both cluster nodes. The zone path must be identical on both nodes. On node UASOL1:

UASOL1:#zfs list |grep /export/zones/uainfrazone


rpool/export/zones/uainfrazone 149M 4.54G 149M /export/zones/uainfrazone
UASOL1:#

On Node UASOL2,

UASOL2:#zfs list |grep /export/zones/uainfrazone


rpool/export/zones/uainfrazone 149M 4.24G 149M /export/zones/uainfrazone
UASOL2:#
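The zone-path datasets above can be prepared with a couple of commands on each node. The sketch below assumes the rpool layout shown; the zfs commands are commented out because they need root on a real Solaris node, and the executable part only demonstrates the mode-700 permission that zoneadm expects on a zone path:

```shell
# On each cluster node (commented; requires root and ZFS):
#   zfs create -p rpool/export/zones/uainfrazone
#   chmod 700 /export/zones/uainfrazone
# zoneadm refuses zone paths that are not mode 700; shown on a scratch dir:
zp=/tmp/uainfrazone.demo          # stand-in for /export/zones/uainfrazone
mkdir -p "$zp" && chmod 700 "$zp"
ls -ld "$zp" | cut -c1-10         # → drwx------
```

Using a dedicated dataset per zone keeps the zone path identical on both nodes, which the zone cluster requires.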

4. Create a new zone cluster.

Note:
• By default, sparse root zones are created. To create whole root zones, add the -b option to the create command.
• Specifying an IP address and NIC for each zone cluster node is optional.

UASOL1:#clzonecluster configure uainfrazone


uainfrazone: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:uainfrazone> create
clzc:uainfrazone> set zonepath=/export/zones/uainfrazone
clzc:uainfrazone> add node
clzc:uainfrazone:node> set physical-host=UASOL1
clzc:uainfrazone:node> set hostname=uainfrazone1
clzc:uainfrazone:node> add net
clzc:uainfrazone:node:net> set address=192.168.2.101
clzc:uainfrazone:node:net> set physical=e1000g0
clzc:uainfrazone:node:net> end
clzc:uainfrazone:node> end
clzc:uainfrazone> add sysid
clzc:uainfrazone:sysid> set root_password="H/80/NT4F2H7g"
clzc:uainfrazone:sysid> end
clzc:uainfrazone> verify
clzc:uainfrazone> commit
clzc:uainfrazone> exit
UASOL1:#

Cluster Name = uainfrazone

Zone Path = /export/zones/uainfrazone
physical-host = UASOL1 (where zone-cluster node uainfrazone1 is configured)
hostname = uainfrazone1 (zone-cluster node name)
Zone IP Address (optional) = 192.168.2.101

Here we have configured just one zone, on UASOL1. Clustering makes sense only with two or more nodes, so let's add one more zone, on node UASOL2, to the same zone cluster.

UASOL1:#clzonecluster configure uainfrazone


clzc:uainfrazone> add node
clzc:uainfrazone:node> set physical-host=UASOL2
clzc:uainfrazone:node> set hostname=uainfrazone2
clzc:uainfrazone:node> add net
clzc:uainfrazone:node:net> set address=192.168.2.103
clzc:uainfrazone:node:net> set physical=e1000g0
clzc:uainfrazone:node:net> end
clzc:uainfrazone:node> end
clzc:uainfrazone> commit
clzc:uainfrazone> info
zonename: uainfrazone
zonepath: /export/zones/uainfrazone
autoboot: true
hostid:
brand: cluster
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
enable_priv_net: true
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:


dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
sysid:
root_password: H/80/NT4F2H7g
name_service: NONE
nfs4_domain: dynamic
security_policy: NONE
system_locale: C
terminal: xterm
timezone: Asia/Calcutta
node:
physical-host: UASOL1
hostname: uainfrazone1
net:
address: 192.168.2.101
physical: e1000g0
defrouter not specified
node:
physical-host: UASOL2
hostname: uainfrazone2
net:
address: 192.168.2.103
physical: e1000g0
defrouter not specified
clzc:uainfrazone> exit

Cluster Name = uainfrazone

Zone Path = /export/zones/uainfrazone
physical-host = UASOL2 (where zone-cluster node uainfrazone2 is configured)
hostname = uainfrazone2 (zone-cluster node name)
Zone IP Address (optional) = 192.168.2.103

The encrypted string above corresponds to the root password "root123" (the zone's root password).
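The interactive sessions above can also be scripted. Like zonecfg, clzonecluster configure accepts a command file via -f (an assumption worth verifying on your cluster version). The sketch below only builds such a file (the sysid block is omitted for brevity); the actual clzonecluster call is left as a comment since it works only on a global-cluster node:

```shell
# Build a clzonecluster command file equivalent to the first interactive session.
cat > /tmp/uainfrazone.cmd <<'EOF'
create
set zonepath=/export/zones/uainfrazone
add node
set physical-host=UASOL1
set hostname=uainfrazone1
add net
set address=192.168.2.101
set physical=e1000g0
end
end
commit
EOF
# On a global-cluster node, you would then run:
#   clzonecluster configure -f /tmp/uainfrazone.cmd uainfrazone
grep -c '^set ' /tmp/uainfrazone.cmd    # → 5
```

Command files make it easy to keep the zone-cluster definition under version control and replay it consistently.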

5. Verify the zone cluster.

UASOL2:#clzonecluster verify uainfrazone


Waiting for zone verify commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL2:#

6. Check the zone cluster status. At this stage, the zones are in the configured state.

UASOL2:#clzonecluster status uainfrazone

=== Zone Clusters ===

--- Zone Cluster Status ---

Name Node Name Zone Host Name Status Zone Status


---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Offline Configured
UASOL2 uainfrazone2 Offline Configured

UASOL2:#

7. Install the zones using the following command.

UASOL2:#clzonecluster install uainfrazone


Waiting for zone install commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL2:#
UASOL2:#zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- uainfrazone installed /export/zones/uainfrazone cluster shared
UASOL2:#


Here you can see that uainfrazone is created and installed. You should be able to see the same on UASOL1 as well.

UASOL1:#zoneadm list -cv


ID NAME STATUS PATH BRAND IP
0 global running / native shared
- uainfrazone installed /export/zones/uainfrazone cluster shared
UASOL1:#

Note: It makes no difference whether you run the command from UASOL1 or UASOL2, since both nodes are in the cluster.

8. Boot the zones using clzonecluster. (Do not use the zoneadm command to boot them.)

UASOL1:#clzonecluster boot uainfrazone


Waiting for zone boot commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL1:#
UASOL1:#zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
1 uainfrazone running /export/zones/uainfrazone cluster shared
UASOL1:#

In UASOL2,

UASOL2:#zoneadm list -cv


ID NAME STATUS PATH BRAND IP
0 global running / native shared
3 uainfrazone running /export/zones/uainfrazone cluster shared
UASOL2:#

9. Check the zone cluster status.

UASOL1:#clzonecluster status uainfrazone


=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Offline Running
UASOL2 uainfrazone2 Offline Running
UASOL1:#

10. The zones reboot automatically as part of system identification (sysid) configuration. You can observe this by accessing the zone's console.

UASOL1:#zlogin -C uainfrazone
[Connected to zone 'uainfrazone' console]
Creating new rsa public/private host key pair
Creating new dsa public/private host key pair
Configuring network interface addresses: clprivnet0.

rebooting system due to change(s) in /etc/default/init

Apr 10 13:21:47 Cluster.Framework: cl_execd: Going down on signal 15.


Apr 10 13:21:47 Cluster.Framework: cl_execd: Going down on signal 15.

[NOTICE: Zone rebooting]

SunOS Release 5.10 Version Generic_147148-26 64-bit


Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
Hostname: uainfrazone1

uainfrazone1 console login:

11. Check the zone cluster status.


UASOL2:#clzonecluster status
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Online Running
UASOL2 uainfrazone2 Online Running
UASOL2:#

We have successfully configured the two-node zone cluster. What's next? Log in to one of the zones and configure the resource group and resources. First, log in to any one of the local zones and check the cluster status.

UASOL2:#zlogin uainfrazone
[Connected to zone 'uainfrazone' pts/2]
Last login: Mon Apr 11 01:58:20 on pts/2
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2# export PATH=/usr/cluster/bin:$PATH
bash-3.2# clnode status
=== Cluster Nodes ===

--- Node Status ---


Node Name Status
--------- ------
uainfrazone1 Online
uainfrazone2 Online
bash-3.2#

In a similar way, you can create any number of zone clusters under the global cluster. These zone clusters use the host's private network and other required resources. In the next article, we will see how to configure a resource group in the zone cluster.

Hope this article is informative to you.

The post Sun Cluster – How to Configure Zone Cluster on Solaris ? appeared first on UnixArena.

Sun Cluster – Configuring Resource Group in Zone Cluster


April 11, 2016, 10:27 am

This article will walk you through configuring a resource group in a zone cluster. Unlike a traditional cluster, the resource group and cluster resources should be created inside the non-global zones. The required physical or logical resources must first be assigned from the global zone using the "clzonecluster" (or "clzc") command. In this article, we will configure an HA filesystem and an IP resource on the zone cluster we created earlier. In addition, you can configure database or application resources for HA.

Global Cluster Nodes – UASOL1 & UASOL2


zone Cluster Nodes – uainfrazone1 & uainfrazone2

1.Login to one of the global cluster node.

2.Check the cluster status.

Global Cluster:
UASOL2:#clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
UASOL2 Online
UASOL1 Online

Zone Cluster :
Log in to one of the zones and check the cluster status (add "/usr/cluster/bin" to the command search path).

UASOL2:#zlogin uainfrazone
[Connected to zone 'uainfrazone' pts/3]
Last login: Mon Apr 11 02:00:17 on pts/2
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2# export PATH=/usr/cluster/bin:$PATH
bash-3.2# clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
uainfrazone1 Online
uainfrazone2 Online
bash-3.2#

Make sure both hostnames are present in each node's "/etc/inet/hosts" file.

3. Log in to one of the global zones (global cluster) and add the IP details to the zone cluster (the IP that needs to be highly available).

UASOL2:#clzc configure uainfrazone


clzc:uainfrazone> add net
clzc:uainfrazone:net> set address=192.168.2.102
clzc:uainfrazone:net> info
net:
address: 192.168.2.102
physical: auto
defrouter not specified
clzc:uainfrazone:net> end
clzc:uainfrazone> commit
clzc:uainfrazone> exit

4. Create the ZFS pool on a shared SAN LUN so that the zpool can be exported and imported on the other cluster node.

UASOL2:#zpool create oradbp1 c2t15d0


UASOL2:#zpool list oradbp1
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
oradbp1 2.95G 78.5K 2.95G 0% ONLINE -
UASOL2:#

Just manually export the zpool on UASOL2 & try to import it on UASOL1.

UASOL2:#zpool export oradbp1


UASOL2:#logout
Connection to UASOL2 closed.
UASOL1:#zpool import oradbp1
UASOL1:#zpool list oradbp1
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
oradbp1 2.95G 133K 2.95G 0% ONLINE -
UASOL1:#

It works. Let’s map this zpool to the zone cluster – uainfrazone.

5. On one of the global cluster nodes, invoke "clzc" to add the zpool.

UASOL1:#clzc configure uainfrazone


clzc:uainfrazone> add dataset
clzc:uainfrazone:dataset> set name=oradbp1
clzc:uainfrazone:dataset> info
dataset:
name: oradbp1
clzc:uainfrazone:dataset> end
clzc:uainfrazone> commit
clzc:uainfrazone> exit
UASOL1:#

We have successfully added the IP address and the dataset to the zone cluster configuration. At this point, these resources can be used inside the zone cluster to configure cluster resources.


Configure Resource group and cluster Resources on Zone Cluster:


1. Add the IP to /etc/hosts on the zone cluster nodes (uainfrazone1 & uainfrazone2). We will make this IP highly available through the cluster.

bash-3.2# grep ora /etc/hosts


192.168.2.102 oralsn-ip
bash-3.2#
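The logical-host entry is an ordinary hosts-file line. A minimal sketch of appending it (written to a scratch file here; on the real zone-cluster nodes you would append to /etc/inet/hosts on both uainfrazone1 and uainfrazone2):

```shell
# Append the logical-host entry (scratch copy; use /etc/inet/hosts on the nodes).
hosts=/tmp/hosts.demo
printf '192.168.2.102\toralsn-ip\n' >> "$hosts"
grep oralsn-ip "$hosts"
```

The clrslh command in the next steps resolves "oralsn-ip" through this file, so the entry must exist on every node that can host the resource group.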

2. On one of the zone cluster nodes, create the cluster resource group with the name "oradb-rg".

bash-3.2# clrg create -n uainfrazone1,uainfrazone2 oradb-rg


bash-3.2# clrg status

=== Cluster Resource Groups ===


Group Name Node Name Suspended Status
---------- --------- --------- ------
oradb-rg uainfrazone1 No Unmanaged
uainfrazone2 No Unmanaged

bash-3.2#

If you want to create the resource group for the "uainfrazone" zone cluster from the global zone, use the following command (with -Z and the zone-cluster name).

UASOL2:# clrg create -Z uainfrazone -n uainfrazone1,uainfrazone2 oradb-rg


UASOL2:#clrg status -Z uainfrazone

=== Cluster Resource Groups ===

Group Name Node Name Suspended Status


---------- --------- --------- ------
uainfrazone:oradb-rg uainfrazone1 No Unmanaged
uainfrazone2 No Unmanaged
UASOL2:#

3. Create the cluster IP resource for oralsn-ip (refer to step 1).

bash-3.2# clrslh create -g oradb-rg -h oralsn-ip oralsn-ip-rs


bash-3.2# clrs status

=== Cluster Resources ===

Resource Name Node Name State Status Message


------------- --------- ----- --------------
oralsn-ip-rs uainfrazone1 Offline Offline
uainfrazone2 Offline Offline

bash-3.2#

4. Create the ZFS resource for the zpool oradbp1 (which we created and assigned to this zone cluster in the first section of this article).

You must register the HAStoragePlus resource type before adding the resource to the cluster.

bash-3.2# clresourcetype register SUNW.HAStoragePlus


bash-3.2# clrt list
SUNW.LogicalHostname:4
SUNW.SharedAddress:2
SUNW.HAStoragePlus:10
bash-3.2#

Add the dataset resource in the zone cluster to make it highly available.

bash-3.2# clrs create -g oradb-rg -t SUNW.HAStoragePlus -p zpools=oradbp1 oradbp1-rs


bash-3.2# clrs status


=== Cluster Resources ===


Resource Name Node Name State Status Message
------------- --------- ----- --------------
oradbp1-rs uainfrazone1 Offline Offline
uainfrazone2 Offline Offline

oralsn-ip-rs uainfrazone1 Offline Offline


uainfrazone2 Offline Offline

bash-3.2#

5. Bring the resource group online. (The -e option enables the resources, and -M brings the unmanaged group into the managed state.)

bash-3.2# clrg online -eM oradb-rg


bash-3.2# clrs status

=== Cluster Resources ===

Resource Name Node Name State Status Message


------------- --------- ----- --------------
oradbp1-rs uainfrazone1 Online Online
uainfrazone2 Offline Offline

oralsn-ip-rs uainfrazone1 Online Online - LogicalHostname online.


uainfrazone2 Offline Offline

bash-3.2# uname -a
SunOS uainfrazone2 5.10 Generic_147148-26 i86pc i386 i86pc
bash-3.2#

6. Verify the resource status in uainfrazone1.

bash-3.2# clrg status


=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
oradb-rg uainfrazone1 No Online
uainfrazone2 No Offline

bash-3.2# clrs status


=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
oradbp1-rs uainfrazone1 Online Online
uainfrazone2 Offline Offline

oralsn-ip-rs uainfrazone1 Online Online - LogicalHostname online.


uainfrazone2 Offline Offline
bash-3.2#
bash-3.2# ifconfig -a
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone uainfrazone
inet 127.0.0.1 netmask ff000000
e1000g0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
inet 192.168.2.90 netmask ffffff00 broadcast 192.168.2.255
groupname sc_ipmp0
e1000g0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone uainfrazone
inet 192.168.2.101 netmask ffffff00 broadcast 192.168.2.255
e1000g0:2: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
zone uainfrazone
inet 192.168.2.102 netmask ffffff00 broadcast 192.168.2.255
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 172.16.2.2 netmask ffffff00 broadcast 172.16.2.255
clprivnet0:3: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
zone uainfrazone
inet 172.16.3.66 netmask ffffffc0 broadcast 172.16.3.127
bash-3.2# zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradbp1 86.5K 2.91G 31K /oradbp1
bash-3.2#

You can see that the ZFS dataset "oradbp1" and the IP "192.168.2.102" are up on uainfrazone1.


7. Switch the resource group to uainfrazone2 and check the resource status.

bash-3.2# clrg switch -n uainfrazone2 oradb-rg


bash-3.2# clrg status
=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
oradb-rg uainfrazone1 No Offline
uainfrazone2 No Online

bash-3.2# clrs status


=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
oradbp1-rs uainfrazone1 Offline Offline
uainfrazone2 Online Online

oralsn-ip-rs uainfrazone1 Offline Offline - LogicalHostname offline.


uainfrazone2 Online Online - LogicalHostname online.

bash-3.2#
bash-3.2#

Verify the result at the OS level. Log in to uainfrazone2 and check the following to confirm the switchover.

bash-3.2# ifconfig -a
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone uainfrazone
inet 127.0.0.1 netmask ff000000
e1000g0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
inet 192.168.2.91 netmask ffffff00 broadcast 192.168.2.255
groupname sc_ipmp0
e1000g0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone uainfrazone
inet 192.168.2.103 netmask ffffff00 broadcast 192.168.2.255
e1000g0:2: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
zone uainfrazone
inet 192.168.2.102 netmask ffffff00 broadcast 192.168.2.255
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 172.16.2.1 netmask ffffff00 broadcast 172.16.2.255
clprivnet0:3: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
zone uainfrazone
inet 172.16.3.65 netmask ffffffc0 broadcast 172.16.3.127
bash-3.2# df -h /oradbp1/
Filesystem size used avail capacity Mounted on
oradbp1 2.9G 31K 2.9G 1% /oradbp1
bash-3.2# zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradbp1 86.5K 2.91G 31K /oradbp1
bash-3.2#

We have successfully configured the resource group and made the ZFS pool and IP highly available (HA) on Oracle Solaris zones using the zone cluster concept. Hope this article is informative to you. In the next article, we will see how to add and remove nodes from the zone cluster.

The post Sun Cluster – Configuring Resource Group in Zone Cluster appeared first on UnixArena.

Managing Zone Cluster – Oracle Solaris


April 13, 2016, 9:09 am

This article talks about managing a zone cluster on Oracle Solaris. The clzonecluster command supports all zone cluster administrative activity, from creation through modification and control to final destruction. It supports single-point administration, which means the command can be executed from any node and operates across the entire cluster. The clzonecluster command builds upon the Oracle Solaris zonecfg and zoneadm commands and adds support for cluster features. We will see how to add and remove cluster nodes, check resource status, and list resources from the global zone.

Each zone cluster has its own notion of membership, and the system maintains membership information for zone clusters. Each machine hosts a component called the Zone Cluster Membership Monitor (ZCMM) that monitors the status of all cluster brand zones on that machine; the ZCMM knows which zones belong to which zone clusters. Naturally, a zone of a zone cluster can become operational only after the global zone on the hosting machine becomes operational, and such a zone will not boot when the global zone is not booted in cluster mode. A zone of a zone cluster can be configured to boot automatically after the machine boots, or the administrator can control manually when the zone boots. A zone can fail, or an administrator can halt or reboot it manually. All of these events cause the zone cluster to update its membership automatically.

Viewing the cluster status:


1. Check the zone cluster status from the global zone.

To check a specific zone cluster's status:

UASOL1:#clzc status -v uainfrazone


=== Zone Clusters ===
--- Zone Cluster Status ---
Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Online Running
UASOL2 uainfrazone2 Online Running
UASOL1:#

To check the status of all zone clusters:

UASOL1:#clzc status -v
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Online Running
UASOL2 uainfrazone2 Online Running

2. Check the resource group status of the zone cluster.

UASOL1:#clrg status -Z uainfrazone

=== Cluster Resource Groups ===


Group Name Node Name Suspended Status
---------- --------- --------- ------
uainfrazone:oradb-rg uainfrazone1 No Online
uainfrazone2 No Offline

UASOL1:#

To check the resource group status of all zone clusters from the global zone:

UASOL1:#clrg status -Z all

=== Cluster Resource Groups ===

Group Name Node Name Suspended Status


---------- --------- --------- ------
uainfrazone:oradb-rg uainfrazone1 No Online
uainfrazone2 No Offline

UASOL1:#

3. Let's check the zone cluster resources from the global zone.

For a specific zone cluster:

UASOL1:#clrs status -Z uainfrazone

=== Cluster Resources ===


Resource Name Node Name State Status Message


------------- --------- ----- --------------
oradbp1-rs uainfrazone1 Online Online
uainfrazone2 Offline Offline

oralsn-ip-rs uainfrazone1 Online Online - LogicalHostname online.


uainfrazone2 Offline Offline

For all zone clusters:

UASOL1:#clrs status -Z all


=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
oradbp1-rs uainfrazone1 Online Online
uainfrazone2 Offline Offline

oralsn-ip-rs uainfrazone1 Online Online - LogicalHostname online.


uainfrazone2 Offline Offline
UASOL1:#

Stop & Start the zone cluster:


1. Log in to the global zone and stop the zone cluster "uainfrazone".

UASOL1:#clzc halt uainfrazone


Waiting for zone halt commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL1:#clzc status -v
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Offline Installed
UASOL2 uainfrazone2 Offline Installed

UASOL1:#

2. Start the zone cluster “uainfrazone”.

UASOL1:#clzc boot uainfrazone


Waiting for zone boot commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL1:#clzc status -v
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Online Running
UASOL2 uainfrazone2 Online Running
UASOL1:#zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
3 uainfrazone running /export/zones/uainfrazone cluster shared
UASOL1:#

3. To reboot the zone cluster, use the following command.

UASOL1:#clzc reboot uainfrazone


Waiting for zone reboot commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL1:#clzc status -v
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------

uainfrazone UASOL1 uainfrazone1 Online Running
UASOL2 uainfrazone2 Online Running
UASOL1:#

How to add a new node to the zone cluster ?


1. We assume that only one zone cluster node is running and plan to add one more node to the zone cluster.

UASOL1:#clzonecluster status oraweb


=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
oraweb UASOL1 oraweb1 Online Running
UASOL1:#

2. Here, the zone cluster is already operational and running. In order to add an additional node to this cluster, we need to add the node definition to the zone cluster configuration. (clzc and clzonecluster are identical commands; you can use either one.)

UASOL1:#clzonecluster configure oraweb


clzc:oraweb> add node
clzc:oraweb:node> set physical-host=UASOL2
clzc:oraweb:node> set hostname=oraweb2
clzc:oraweb:node> add net
clzc:oraweb:node:net> set physical=e1000g0
clzc:oraweb:node:net> set address=192.168.2.132
clzc:oraweb:node:net> end
clzc:oraweb:node> end
clzc:oraweb> exit
UASOL1:#clzonecluster status oraweb
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
oraweb UASOL1 oraweb1 Online Running
UASOL2 oraweb2 Offline Configured

UASOL1:#

3. Install the zone cluster node on UASOL2. (-n Physical-Hostname)

UASOL1:#clzonecluster install -n UASOL2 oraweb


Waiting for zone install commands to complete on all the nodes of the zone cluster "oraweb"...
UASOL1:#clzonecluster status oraweb

=== Zone Clusters ===

--- Zone Cluster Status ---

Name Node Name Zone Host Name Status Zone Status


---- --------- -------------- ------ -----------
oraweb UASOL1 oraweb1 Online Running
UASOL2 oraweb2 Offline Installed

UASOL1:#

4. Boot the zone cluster node “oraweb2” .

UASOL1:#clzonecluster boot -n UASOL2 oraweb


Waiting for zone boot commands to complete on all the nodes of the zone cluster "oraweb"...
UASOL1:#clzonecluster status oraweb

=== Zone Clusters ===

--- Zone Cluster Status ---

Name Node Name Zone Host Name Status Zone Status


---- --------- -------------- ------ -----------


oraweb UASOL1 oraweb1 Online Running
UASOL2 oraweb2 Offline Running

UASOL1:#

The zone status might show as “Offline” initially; it becomes Online once the system configuration (sys-config) completes via an automatic reboot.

5. Check the zone status after few minutes.

UASOL1:#clzonecluster status oraweb


=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
oraweb UASOL1 oraweb1 Online Running
UASOL2 oraweb2 Online Running
UASOL1:#

How to remove the zone cluster node ?


1. Check the zone cluster status .

UASOL1:#clzonecluster status oraweb


=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
oraweb UASOL1 oraweb1 Online Running
UASOL2 oraweb2 Online Running
UASOL1:#

2. Stop the zone cluster node which needs to be decommissioned.

UASOL1:#clzonecluster halt -n UASOL1 oraweb


Waiting for zone halt commands to complete on all the nodes of the zone cluster "oraweb"...
UASOL1:#

3. Uninstall the zone.

UASOL1:#clzonecluster uninstall -n UASOL1 oraweb


Are you sure you want to uninstall zone cluster oraweb (y/[n])?y
Waiting for zone uninstall commands to complete on all the nodes of the zone cluster "oraweb"...
UASOL1:#clzonecluster status oraweb
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
oraweb UASOL1 oraweb1 Offline Configured
UASOL2 oraweb2 Online Running

UASOL1:#

4. Remove the zone configuration from the cluster.

UASOL1:#clzonecluster configure oraweb


clzc:oraweb> remove node physical-host=UASOL1
clzc:oraweb> exit
UASOL1:#clzonecluster status oraweb
=== Zone Clusters ===

--- Zone Cluster Status ---

Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
oraweb UASOL2 oraweb2 Online Running
UASOL1:#

clzc or clzonecluster Man help:


UASOL1:#clzc --help
Usage:  clzc <subcommand> [<options>] [+ | <zonecluster> ...]
        clzc [<options>] -? | --help
        clzc -V | --version

Manage zone clusters for Oracle Solaris Cluster

SUBCOMMANDS:

boot Boot zone clusters


clone Clone a zone cluster
configure Configure a zone cluster
delete Delete a zone cluster
export Export a zone cluster configuration
halt Halt zone clusters
install Install a zone cluster
list List zone clusters
move Move a zone cluster
ready Ready zone clusters
reboot Reboot zone clusters
set Set zone cluster properties
show Show zone clusters
show-rev Show release version on zone cluster nodes
status Status of zone clusters
uninstall Uninstall a zone cluster
verify Verify zone clusters

UASOL1:#clzonecluster --help
Usage:  clzonecluster <subcommand> [<options>] [+ | <zonecluster> ...]
        clzonecluster [<options>] -? | --help
        clzonecluster -V | --version

Manage zone clusters for Oracle Solaris Cluster

SUBCOMMANDS:

boot Boot zone clusters


clone Clone a zone cluster
configure Configure a zone cluster
delete Delete a zone cluster
export Export a zone cluster configuration
halt Halt zone clusters
install Install a zone cluster
list List zone clusters
move Move a zone cluster
ready Ready zone clusters
reboot Reboot zone clusters
set Set zone cluster properties
show Show zone clusters
show-rev Show release version on zone cluster nodes
status Status of zone clusters
uninstall Uninstall a zone cluster
verify Verify zone clusters

UASOL1:#

Hope this article is informative to you . Share it ! Comment it !! Be Sociable !!!.

The post Managing Zone Cluster – Oracle Solaris appeared first on UnixArena.

How to configure Quorum devices on Solaris cluster ?


June 30, 2014, 12:51 am

 Next  How to create Resource Group on Solaris cluster ?


 Previous  Managing Zone Cluster – Oracle Solaris

https://unixarena68.rssing.com/chan-59694592/all_p1.html 40/87
02/01/2024 09:02 Solaris Cluster – UnixArena

    

Once you have configured the Solaris cluster, you have to add a quorum device for the additional voting process. Without a quorum device, the cluster remains in installed mode; you can verify the status using the “cluster show -t global | grep installmode” command. Each node in a configured cluster has one (1) quorum vote, and a two-node cluster requires a minimum of two votes to stay up. If one node goes down, the cluster cannot gather two votes, and it will panic the second node as well to avoid data corruption on the shared storage. To avoid this situation, we can configure a small SAN disk as a quorum device, which contributes one more vote. That way, if one node fails, the surviving node can still gather two votes in a two-node cluster.
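The voting arithmetic above follows the usual majority rule. As a rough illustration (this sketches the general rule, not Oracle's implementation):

```python
def votes_needed(possible_votes):
    """Majority rule: a cluster needs more than half of the possible votes."""
    return possible_votes // 2 + 1

# Two nodes without a quorum device: 2 possible votes, 2 needed,
# so losing either node leaves only 1 vote and the cluster cannot survive.
print(votes_needed(2))  # 2

# Two nodes plus one quorum-device vote: 3 possible votes, 2 needed,
# so the surviving node plus the quorum device can keep the cluster up.
print(votes_needed(3))  # 2
```

This matches the “Needed 2, Possible 3” figures shown by clq status later in this article.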

Once you have configured the two-node Solaris cluster, you can start configuring the quorum device.

1.Check the cluster node status.

UASOL1:#clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
UASOL2 Online
UASOL1 Online
UASOL1:#

2. You can see that the cluster is currently in install mode.

# cluster show -t global | grep installmode


installmode: enabled

3.Current cluster quorum status

UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
1 1 1
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
--------- ------- -------- ------
UASOL2 1 1 Online
UASOL1 0 0 Online
UASOL1:#

4. Make sure a small LUN from the SAN is assigned to both cluster nodes.

UASOL1:#echo |format
Searching for disks...done

AVAILABLE DISK SELECTIONS:


0. c1t0d0 <VMware,-VMware Virtual -1.0 cyl 1824 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0 <VMware,-VMware Virtual -1.0 cyl 508 alt 2 hd 64 sec 32>
/pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number): Specify disk (enter its number):
UASOL1:#

5. Let me label the disk and set a volume name on it.

UASOL1:#format c1t1d0
selecting c1t1d0: quorum
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision

volname - set 8-character volume name
! - execute , then return
quit
format> fdisk
The default partition for the disk is:

a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> volname quorum
format> quit
UASOL1:#

6.You can see the same LUN on UASOL2 node as well.

UASOL2:#echo |format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <VMware,-VMware Virtual -1.0 cyl 1824 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0 <VMware,-VMware Virtual -1.0 cyl 508 alt 2 hd 64 sec 32> quorum
/pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number): Specify disk (enter its number):
UASOL2:#

7. Populate the disks in Solaris Cluster.

UASOL2:#cldev populate
Configuring DID devices
did instance 4 created.
did subpath UASOL2:/dev/rdsk/c1t1d0 created for instance 4.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
UASOL2:#

UASOL1:#cldev populate
Configuring DID devices
did instance 4 created.
did subpath UASOL1:/dev/rdsk/c1t1d0 created for instance 4.
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
UASOL1:#

8.Check the devices status.

UASOL1:#cldevice list -v
DID Device Full Device Path
---------- ----------------
d1 UASOL2:/dev/rdsk/c1t0d0
d1 UASOL1:/dev/rdsk/c1t0d0
d4 UASOL2:/dev/rdsk/c1t1d0
d4 UASOL1:/dev/rdsk/c1t1d0
UASOL1:#cldev show d4
=== DID Device Instances ===
DID Device Name: /dev/did/rdsk/d4
Full Device Path: UASOL1:/dev/rdsk/c1t1d0
Full Device Path: UASOL2:/dev/rdsk/c1t1d0
Replication: none
default_fencing: global
UASOL1:#

9. Add d4 as a quorum device in the cluster.

UASOL1:#clquorum add d4
UASOL1:#

10.Check the Quorum status

UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
2 3 3


--- Quorum Votes by Node (current status) ---


Node Name Present Possible Status
--------- ------- -------- ------
UASOL2 1 1 Online
UASOL1 1 1 Online
--- Quorum Votes by Device (current status) ---
Device Name Present Possible Status
----------- ------- -------- ------
d4 1 1 Online
UASOL1:#

We have successfully configured the quorum device on the two-node Solaris Cluster 3.3u2.

How can we test whether the quorum device is working ?

Just reboot any one of the nodes and watch the voting status.

UASOL2:#reboot
updating /platform/i86pc/boot_archive
Connection to UASOL2 closed by remote host.
Connection to UASOL2 closed.
UASOL1:#
UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
2 2 3
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
--------- ------- -------- ------
UASOL2 0 1 Offline
UASOL1 1 1 Online
--- Quorum Votes by Device (current status) ---
Device Name Present Possible Status
----------- ------- -------- ------
d4 1 1 Online
UASOL1:#

We can see that UASOL1 was not panicked by the cluster, so the quorum device worked well.

If you don’t have real SAN storage for the shared LUNs, you can use openfiler.

What’s next? We will configure a resource group for the failover local zone and perform the test.

Share it ! Comment it !! Be Sociable !!

The post How to configure Quorum devices on Solaris cluster ? appeared first on UnixArena.

How to create Resource Group on Solaris cluster ?


June 30, 2014, 1:50 pm

 Next  How to configure High Availability zone on Solaris cluster ?


 Previous  How to configure Quorum devices on Solaris cluster ?

    

This article will help you to create a resource group on Solaris Cluster and add a couple of resources to it. A resource group is similar to a service group in Veritas Cluster: it bundles resources into one logical unit. Once you have configured the two-node Solaris cluster and added the quorum device, you can create a resource group. After creating the resource group, we will add a zpool storage resource and perform a failover test.

1. Log in to one of the cluster nodes as root and check the cluster node status.

UASOL1:#clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
UASOL2 Online

UASOL1 Online
UASOL1:#

2.Check the heartbeat link status of Solaris cluster.

UASOL1:#clinterconnect status
=== Cluster Transport Paths ===
Endpoint1 Endpoint2 Status
--------- --------- ------
UASOL2:e1000g2 UASOL1:e1000g2 Path online
UASOL2:e1000g1 UASOL1:e1000g1 Path online

UASOL1:#

3.Check the quorum status.

UASOL1:#clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---

Needed Present Possible


------ ------- --------
2 3 3

--- Quorum Votes by Node (current status) ---


Node Name Present Possible Status
--------- ------- -------- ------
UASOL2 1 1 Online
UASOL1 1 1 Online

--- Quorum Votes by Device (current status) ---


Device Name Present Possible Status
----------- ------- -------- ------
d5 1 1 Online
UASOL1:#

4.In the above command output, everything seems to be fine. So let me create a resource group.

UASOL1:#clrg create UA-HA-ZRG


UASOL1:#

5.Check the resource group status.

UASOL1:#clrg status

=== Cluster Resource Groups ===

Group Name Node Name Suspended Status


---------- --------- --------- ------
UA-HA-ZRG UASOL2 No Unmanaged
UASOL1 No Unmanaged
UASOL1:#

We have successfully created the resource group on Solaris cluster.

Let me create a ZFS storage pool and add it to Solaris Cluster.

1. Check the cluster device instances. Here, d5 and d6 are from SAN storage; d5 is already used for the quorum setup.

UASOL1:#cldevice list -v
DID Device Full Device Path
---------- ----------------
d1 UASOL2:/dev/rdsk/c1t0d0
d1 UASOL1:/dev/rdsk/c1t0d0
d2 UASOL1:/dev/rdsk/c1t2d0
d3 UASOL2:/dev/rdsk/c1t2d0
d4 UASOL2:/dev/rdsk/c1t1d0
d4 UASOL1:/dev/rdsk/c1t1d0
d5 UASOL2:/dev/rdsk/c2t16d0
d5 UASOL1:/dev/rdsk/c2t14d0
d6 UASOL2:/dev/rdsk/c2t15d0
d6 UASOL1:/dev/rdsk/c2t13d0
UASOL1:#
UASOL1:#cldevice status
=== Cluster DID Devices ===
Device Instance Node Status
--------------- ---- ------


/dev/did/rdsk/d1 UASOL1 Ok
UASOL2 Ok

/dev/did/rdsk/d2 UASOL1 Ok

/dev/did/rdsk/d3 UASOL2 Ok

/dev/did/rdsk/d4 UASOL1 Ok
UASOL2 Ok

/dev/did/rdsk/d5 UASOL1 Ok
UASOL2 Ok

/dev/did/rdsk/d6 UASOL1 Ok
UASOL2 Ok
UASOL1:#

2.Create a new ZFS storage pool using d6.

UASOL1:#zpool create -f UAZPOOL /dev/did/dsk/d6s2


UASOL1:#zpool status UAZPOOL
pool: UAZPOOL
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM


UAZPOOL ONLINE 0 0 0
/dev/did/dsk/d6s2 ONLINE 0 0 0

errors: No known data errors


UASOL1:#df -h /UAZPOOL
Filesystem size used avail capacity Mounted on
UAZPOOL 3.0G 31K 3.0G 1% /UAZPOOL
UASOL1:#

3.Register the ZFS resource type in Solaris cluster.

UASOL1:#clresourcetype register SUNW.HAStoragePlus


UASOL1:#

4. Create a new cluster resource for the zpool which we created in the previous step.

UASOL1:#clresource create -g UA-HA-ZRG -t SUNW.HAStoragePlus -p Zpools=UAZPOOL CLUAZPOOL


UASOL1:#

-g Resource group – UA-HA-ZRG
-t Resource type – SUNW.HAStoragePlus
-p Zpools – UAZPOOL (zpool name)
CLUAZPOOL – Cluster resource name

5.Check the resource status.

UASOL1:#clresource status
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
CLUAZPOOL UASOL2 Offline Offline
UASOL1 Offline Offline
UASOL1:#

6.Bring the resource group online and check the resource status.

UASOL1:#clrg online -M -n UASOL1 UA-HA-ZRG


UASOL1:#clresource status
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
CLUAZPOOL UASOL2 Offline Offline
UASOL1 Online Online
UASOL1:#

7.List the zpool where the resource group is online.


UASOL1:#zpool list
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
UAZPOOL 3.05G 132K 3.05G 0% ONLINE /
rpool 13.9G 9.32G 4.56G 67% ONLINE -
UASOL1:#

8. To test the resource group, switch it to the other node.

UASOL1:#clrg switch -n UASOL2 +


UASOL1:#

9. Now you can see that the cluster zpool has moved to the UASOL2 node.

UASOL1:#zpool list
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
rpool 13.9G 9.32G 4.56G 67% ONLINE -
UASOL1:#clrg status
=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
UA-HA-ZRG UASOL2 No Online
UASOL1 No Offline
UASOL1:#ssh UASOL2 zpool list
Password:
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
UAZPOOL 3.05G 132K 3.05G 0% ONLINE /
rpool 13.9G 9.15G 4.73G 65% ONLINE -
UASOL1:#

So automatic failover should work for the resource group which we have just created. In the next article, we will see how to add a local zone to the cluster.

Share it ! Comment it !! Be Sociable !!!

The post How to create Resource Group on Solaris cluster ? appeared first on UnixArena.

How to configure High Availability zone on Solaris cluster ?


July 1, 2014, 8:57 am

 Next  How to setup oracle Solaris cluster on VMware workstation ?


 Previous  How to create Resource Group on Solaris cluster ?

    

In this article, we will see how to add a local zone as a resource in Solaris Cluster to make the zone highly available. In the past, we have seen a similar setup in Veritas Cluster. By configuring the zone as a resource, if one node fails, the zone automatically flies over to the other node with minimal downtime (flying zones on Solaris). Once you have configured the items below, we can proceed with bringing the local zone under Solaris Cluster control.

Two-node Solaris cluster
Quorum devices
Resource group configuration

Unlike Veritas Cluster, the local zone IP will be managed from the global zone as a cluster resource. So let me create an IP resource before proceeding with the local zone creation.

1. Login to Solaris cluster nodes and add the local zone IP & Host name information in /etc/hosts file.

UASOL1:#cat /etc/hosts |grep UAHAZ1


192.168.2.94 UAHAZ1
UASOL1:#ssh UASOl2 grep UAHAZ1 /etc/hosts
Password:
192.168.2.94 UAHAZ1
UASOL1:#

Here, my local zone IP is 192.168.2.94 and the host name is UAHAZ1.

2.Add the logical host name as resource in Solaris cluster.

UASOL1:#clreslogicalhostname create -g UA-HA-ZRG -h UAHAZ1 CLUAHAZ1


UASOL1:#

Resource group name: -g UA-HA-ZRG
Logical hostname: -h UAHAZ1
Logical hostname (IP) resource name: CLUAHAZ1

3. Check the Solaris Cluster resource status.


UASOL1:#clresource status

=== Cluster Resources ===

Resource Name Node Name State Status Message


------------- --------- ----- --------------
CLUAHAZ1 UASOL2 Online Online - LogicalHostname online.
UASOL1 Offline Offline

CLUAZPOOL UASOL2 Online Online


UASOL1 Offline Offline

UASOL1:#

4. You can test the resource by pinging the local zone IP.

UASOL1:#ping UAHAZ1
UAHAZ1 is alive
UASOL1:#

5. You can see that the local zone IP has been plumbed by Solaris Cluster on UASOL2.

UASOL2:#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
inet 192.168.2.91 netmask ffffff00 broadcast 192.168.2.255
groupname sc_ipmp0
ether 0:c:29:e:f8:ce
e1000g0:1: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
inet 192.168.2.94 netmask ffffff00 broadcast 192.168.2.255
e1000g1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
inet 172.16.0.65 netmask ffffffc0 broadcast 172.16.0.127
ether 0:c:29:e:f8:d8
e1000g2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
inet 172.16.0.129 netmask ffffffc0 broadcast 172.16.0.191
ether 0:c:29:e:f8:e2
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 172.16.2.1 netmask ffffff00 broadcast 172.16.2.255
ether 0:0:0:0:0:1
UASOL2:#

6. Fail over the resource group to UASOL1 and check the status.

UASOL2:#clrg switch -n UASOL1 +


UASOL2:#logout
Connection to UASOL2 closed.
UASOL1:#
UASOL1:#clrg status
=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
UA-HA-ZRG UASOL2 No Offline
UASOL1 No Online

UASOL1:#clresource status
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
CLUAHAZ1 UASOL2 Offline Offline - LogicalHostname offline.
UASOL1 Online Online - LogicalHostname online.

CLUAZPOOL UASOL2 Offline Offline


UASOL1 Online Online
UASOL1:#

We have successfully created logicalhostname cluster resource and tested on both the nodes.

7. Create a local zone on any one of the cluster nodes and copy the zone configuration (the zone’s entry in /etc/zones/index and the /etc/zones/<zonename>.xml file) to the other node, so that the zone configuration is available on both cluster nodes. Create the local zone without adding the network part (i.e., skip “add net”).
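As an alternative to copying the files under /etc/zones by hand, the zone configuration can be replicated with zonecfg export/import. A sketch, reusing this article's zone and host names (the /tmp path is an arbitrary choice):

```shell
# On UASOL1: dump the zone configuration as a zonecfg command file
zonecfg -z UAHAZ1 export -f /tmp/UAHAZ1.cfg

# Copy the file to the second node
scp /tmp/UAHAZ1.cfg UASOL2:/tmp/UAHAZ1.cfg

# On UASOL2: recreate the zone in the "configured" state from the file
zonecfg -z UAHAZ1 -f /tmp/UAHAZ1.cfg
zoneadm list -cv    # UAHAZ1 should now appear as "configured"
```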

UASOL1:#zoneadm list -cv


ID NAME STATUS PATH BRAND IP
0 global running / native shared

- UAHAZ1 installed /UAZPOOL/UAHAZ1 native shared
UASOL1:#ssh UASOL2 zoneadm list -cv
Password:
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- UAHAZ1 configured /UAZPOOL/UAHAZ1 native shared
UASOL1:#

You can refer to this article for creating the local zone, but do not configure the network.

8. Halt the local zone on UASOL1 and fail over the resource group to UASOL2 to test the zone there.

UASOL1:#zoneadm list -cv


ID NAME STATUS PATH BRAND IP
0 global running / native shared
- UAHAZ1 running /UAZPOOL/UAHAZ1 native shared
UASOL1:#zoneadm -z UAHAZ1 halt
UASOL1:#
UASOL1:#clrg switch -n UASOL2 +
UASOL1:#ssh UASOL2
Password:
Last login: Tue Jul 1 00:27:14 2014 from uasol1
Oracle Corporation SunOS 5.10 Generic Patch January 2005
UASOL2:#clrg status
=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
UA-HA-ZRG UASOL2 No Online
UASOL1 No Offline
UASOL2:#

9. Attach the local zone and boot it .

UASOL2:#zoneadm list -cv


ID NAME STATUS PATH BRAND IP
0 global running / native shared
- UAHAZ1 configured /UAZPOOL/UAHAZ1 native shared
UASOL2:#zoneadm -z UAHAZ1 attach -F
UASOL2:#zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- UAHAZ1 installed /UAZPOOL/UAHAZ1 native shared
UASOL2:#zoneadm -z UAHAZ1 boot
UASOL2:#

10. Log in to the local zone and perform a health check. If everything looks fine, just halt the local zone.

UASOL2:#zlogin UAHAZ1
[Connected to zone 'UAHAZ1' pts/4]
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2# uptime
12:37am up 1 user, load average: 0.50, 0.13, 0.07
bash-3.2# exit
# ^D
[Connection to zone 'UAHAZ1' pts/4 closed]
UASOL2:#zoneadm -z UAHAZ1 halt
UASOL2:#

Click Page 2 to see how to create the resource for the local zone and add it to the resource group.

The post How to configure High Availability zone on Solaris cluster ? appeared first on UnixArena.


How to setup oracle Solaris cluster on VMware workstation ?


July 2, 2014, 10:30 am

 Next  Sun Cluster – Zone Cluster on Oracle Solaris – Overview


 Previous  How to configure High Availability zone on Solaris cluster ?

    


Getting the opportunity to work on a cluster environment is very difficult in big companies due to security restrictions. Even if you get the opportunity, you can’t experiment much, since most cluster environments are critical to the client. To learn any operating system cluster, you have to build it on your own and configure the resource groups and resources yourself. You are lucky if your organization provides a LAB environment with the necessary hardware for this kind of setup. Due to hardware costs, many companies do not provide such a LAB setup. So how do you become a master of clustering? Is it possible to set up a cluster environment on a single desktop or laptop? Yes. Using VMware Workstation, you can set up a cluster. In the past, we have done this for Veritas Cluster. Here we will see how to set up a two-node Solaris cluster on Solaris 10 using VMware Workstation.

Desktop/Laptop Configuration:

Operating system: Windows 7 64-bit or Linux 64-bit
Software: VMware Workstation 8 or a higher version
Physical memory: 4 GB (minimum) or 8 GB (recommended)
Processor: Any Intel processor with VT technology enabled

1. In your desktop, install VMware workstation software and create two virtual machines with below mentioned configuration.

Virtual machine configuration

I have allocated 4.3 GB to each VM, but 1 GB is enough for each virtual machine. Your virtual machines must have a minimum of three network adapters: one NIC for the public network and two NICs for heartbeats.
public and two NICs for heartbeat.

2.Install Solaris 10 update 11 on both the virtual machines.

3.Install VMware tools on both the virtual machines.

4. Enable a Windows share on both virtual machines for copying the Solaris Cluster software from your laptop to the virtual machines. Copy the Solaris Cluster 3.3u2 software to /var/tmp on both nodes, or use WinSCP to copy it.

5.Configure password less authentication between two Solaris virtual machines.

6.Install the Solaris cluster on both virtual machine .

7.Configure the Solaris cluster between these two virtual machines.

8. To proceed further with Solaris Cluster, you require shared storage. So create a new virtual machine and install openfiler on it.

9. Provision two LUNs to the iSCSI target in the openfiler web interface (a 512 MB LUN for the quorum device and a 3 GB LUN for the shared zpool).

10. Add the openfiler iSCSI targets on both Solaris nodes.

11.Add the Quorum device to the cluster.

12.Create the Solaris cluster resource group and add the ZFS storage pools as resource.

13. Finally, create the local zone and add it to Solaris Cluster as a failover (high availability) local zone.
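For step 10, the openfiler LUNs are discovered with the Solaris iSCSI initiator. A hedged sketch, assuming the openfiler VM answers on 192.168.2.200 (a made-up lab address; substitute your own):

```shell
# Run on each Solaris node. Enable SendTargets discovery...
iscsiadm modify discovery --sendtargets enable

# ...and point the initiator at the openfiler target portal
iscsiadm add discovery-address 192.168.2.200:3260

# Create device nodes for the discovered LUNs
devfsadm -i iscsi

# The LUNs should now be visible to format and, after "cldev populate",
# to cldevice as DID devices
echo | format
```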

Solaris Cluster in a BOX

By performing the above steps, you can definitely set up a two-node Solaris cluster on a desktop or laptop using VMware Workstation.

Good Luck.

Share it ! Comment it !! Be Sociable !!!

The post How to setup oracle Solaris cluster on VMware workstation ? appeared first on UnixArena.

Sun Cluster – Zone Cluster on Oracle Solaris – Overview


April 10, 2016, 12:20 pm

 Next  Sun Cluster – How to Configure Zone Cluster on Solaris ?


 Previous  How to setup oracle Solaris cluster on VMware workstation ?

    

This article explains zone clusters. A zone cluster is created on Oracle Solaris hosts using Sun Cluster, aka Oracle Solaris Cluster. In most deployments, we have seen failover zones (HA zones) using Sun Cluster or Veritas Cluster (VCS) on Solaris. Comparatively, zone clusters are less common in the industry, but some organizations use them very effectively. You must establish a traditional cluster between the physical nodes in order to configure a zone cluster. Since the cluster applications always run in a zone, the cluster node is always a zone.

A typical 4-node Sun cluster looks like the diagram below (prior to configuring a zone cluster).

4 Node Cluster

After configuring zone clusters on the global cluster:

zone cluster on global cluster

The above diagram shows that two zone clusters have been configured on global cluster.

Global Cluster – 4 Node Cluster (Node 1, Node 2 , Node 3, Node 4 )


Zone Cluster A – 4 Node Cluster (Zone A1 , A2 , A3 , A4)
Zone Cluster B – 2 Node Cluster (Zone B1 , B2)

Zone Cluster Use Cases:


This section demonstrates the utility of zone clusters by examining a variety of use cases, including the following:

Multiple organization consolidation

Functional consolidation (See the below example)

Here you can see that the test and development systems are in different zone clusters but in the same global cluster.

Functional Consolidation – Sun Cluster

Multiple-tier consolidation. (See the below example)

In this cluster model, all three tiers are in the same global cluster but in different zone clusters.

Multiple-tier consolidation – Sun Cluster

Cost containment
Administrative workload reduction

Good to know:
Distribution of nodes: You can’t host multiple zones that belong to the same zone cluster on the same host. Zones must be distributed across the physical nodes.

Node creation: You must create at least one zone cluster node at the time that you create the zone cluster. The name of the zone-cluster node must be unique within the
zone cluster. The infrastructure automatically creates an underlying non-global zone on each host that supports the zone cluster. Each non-global zone is given the same
zone name, which is derived from, and identical to, the name that you assign to the zone cluster when you create the cluster. For example, if you create a zone cluster that
is named “uainfrazone”, the corresponding non-global zone name on each host that supports the zone cluster is also “uainfrazone”.

Cluster name: Each zone-cluster name must be unique throughout the cluster of machines that host the global cluster. The zone-cluster name cannot also be used by a
non-global zone elsewhere in the cluster of machines, nor can the zone-cluster name be the same as that of a global-cluster node. You cannot use “all” or “global” as a
zone-cluster name, because these are reserved names.

Public-network IP addresses: You can optionally assign a specific public-network IP address to each zone-cluster node.

Private hostnames: During creation of the zone cluster, a private hostname is automatically created for each node of the zone cluster, in the same way that hostnames
are created in global clusters.

IP type: A zone cluster is created with the shared IP type. The exclusive IP type is not supported for zone clusters.


Hope this article is informative to you. In the next article, we will see how to configure a zone cluster on an existing two-node Sun Cluster (global cluster).

The post Sun Cluster – Zone Cluster on Oracle Solaris – Overview appeared first on UnixArena.

Sun Cluster – How to Configure Zone Cluster on Solaris ?


April 10, 2016, 1:42 pm


This article will walk you through the zone cluster deployment on oracle Solaris. The zone cluster consists of a set of zones, where each zone represents a virtual node.
Each zone of a zone cluster is configured on a separate machine. As such, the upper bound on the number of virtual nodes in a zone cluster is limited to the number of
machines in the global cluster. The zone cluster design introduces a new brand of zone, called the cluster brand. The cluster brand is based on the original native brand
type, and adds enhancements for clustering. The BrandZ framework provides numerous hooks where other software can take action appropriate for the brand type of zone.
For example, there is a hook for software to be called during the zone boot, and zone clusters take advantage of this hook to inform the cluster software about the boot of
the virtual node. Because zone clusters use the BrandZ framework, at a minimum Oracle Solaris 10 5/08 is required.

The system maintains membership information for zone clusters. Each machine hosts a component, called the Zone Cluster Membership Monitor (ZCMM), that monitors
the status of all cluster brand zones on that machine. The ZCMM knows which zones belong to which zone clusters. Zone clusters are considerably simpler than global
clusters. For example, there are no quorum devices in a zone cluster, as a quorum device is not needed.

clzonecluster is a utility to create, modify, delete and manage zone clusters in a Sun Cluster environment.

Zones use the global zone’s physical resources

Note:
Zone cluster is one of the cluster types available in the Sun Cluster product.

Environment:
Operating System : Oracle Solaris 10 u9
Cluster : Sun Cluster 3.3 (aka Oracle Solaris cluster 3.3)

Prerequisites :
Two Oracle Solaris 10 u9 nodes or above
Sun Cluster 3.3 package

Step : 1 Create a global cluster:


The following listed articles will help you to install and configure two node sun cluster on oracle Solaris 10.

Install Oracle Solaris cluster 3.3 (Aka Sun Cluster) on Solaris 10 nodes.
Configure two node sun cluster 3.3 on Solaris 10

Step: 2 Create a zone cluster inside the global cluster:


1. Login to one of the cluster node (Global zone).

2. Ensure that node of the global cluster is in cluster mode.

UASOL2:#clnode status
=== Cluster Nodes ===

--- Node Status ---


Node Name Status
--------- ------

UASOL2 Online
UASOL1 Online
UASOL2:#

3. You must keep the zone path ready for the local zone installation on both cluster nodes. The zone path must be identical on both nodes. On node UASOL1,

UASOL1:#zfs list |grep /export/zones/uainfrazone


rpool/export/zones/uainfrazone 149M 4.54G 149M /export/zones/uainfrazone
UASOL1:#

On Node UASOL2,

UASOL2:#zfs list |grep /export/zones/uainfrazone


rpool/export/zones/uainfrazone 149M 4.24G 149M /export/zones/uainfrazone
UASOL2:#

4. Create a new zone cluster.

Note:
• By default, sparse root zones are created. To create whole root zones, add the -b option to the create command.
• Specifying an IP address and NIC for each zone cluster node is optional.

UASOL1:#clzonecluster configure uainfrazone


uainfrazone: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:uainfrazone> create
clzc:uainfrazone> set zonepath=/export/zones/uainfrazone
clzc:uainfrazone> add node
clzc:uainfrazone:node> set physical-host=UASOL1
clzc:uainfrazone:node> set hostname=uainfrazone1
clzc:uainfrazone:node> add net
clzc:uainfrazone:node:net> set address=192.168.2.101
clzc:uainfrazone:node:net> set physical=e1000g0
clzc:uainfrazone:node:net> end
clzc:uainfrazone:node> end
clzc:uainfrazone> add sysid
clzc:uainfrazone:sysid> set root_password="H/80/NT4F2H7g"
clzc:uainfrazone:sysid> end
clzc:uainfrazone> verify
clzc:uainfrazone> commit
clzc:uainfrazone> exit
UASOL1:#

Cluster Name = uainfrazone


Zone Path = /export/zones/uainfrazone
physical-host = UASOL1 (Where the uainfrazone1 should be configured)
set hostname = uainfrazone1 (zone cluster node name)
Zone IP Address (Optional)=192.168.2.101

Here, we have just configured one zone on UASOL1. Clustering makes sense only with two or more nodes, so let’s create one more zone on UASOL2 in the same zone cluster.

UASOL1:#clzonecluster configure uainfrazone


clzc:uainfrazone> add node
clzc:uainfrazone:node> set physical-host=UASOL2
clzc:uainfrazone:node> set hostname=uainfrazone2
clzc:uainfrazone:node> add net
clzc:uainfrazone:node:net> set address=192.168.2.103
clzc:uainfrazone:node:net> set physical=e1000g0
clzc:uainfrazone:node:net> end
clzc:uainfrazone:node> end
clzc:uainfrazone> commit
clzc:uainfrazone> info
zonename: uainfrazone
zonepath: /export/zones/uainfrazone
autoboot: true
hostid:
brand: cluster
bootargs:
pool:
limitpriv:


scheduling-class:
ip-type: shared
enable_priv_net: true
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
sysid:
root_password: H/80/NT4F2H7g
name_service: NONE
nfs4_domain: dynamic
security_policy: NONE
system_locale: C
terminal: xterm
timezone: Asia/Calcutta
node:
physical-host: UASOL1
hostname: uainfrazone1
net:
address: 192.168.2.101
physical: e1000g0
defrouter not specified
node:
physical-host: UASOL2
hostname: uainfrazone2
net:
address: 192.168.2.103
physical: e1000g0
defrouter not specified
clzc:uainfrazone> exit

Cluster Name = uainfrazone


Zone Path = /export/zones/uainfrazone
physical-host = UASOL2 (Where the uainfrazone2 should be configured)
set hostname = uainfrazone2 (zone cluster node name)
Zone IP Address (Optional)=192.168.2.103

The encrypted string above corresponds to the zone’s root password, “root123”.

5. Verify the zone cluster.

UASOL2:#clzonecluster verify uainfrazone


Waiting for zone verify commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL2:#

6. Check the zone cluster status. At this stage, the zones are in the configured state.

UASOL2:#clzonecluster status uainfrazone

=== Zone Clusters ===

--- Zone Cluster Status ---

Name Node Name Zone Host Name Status Zone Status


---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Offline Configured
UASOL2 uainfrazone2 Offline Configured

UASOL2:#

7. Install the zones using the following command.

UASOL2:#clzonecluster install uainfrazone


Waiting for zone install commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL2:#

UASOL2:#zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- uainfrazone installed /export/zones/uainfrazone cluster shared
UASOL2:#

Here you can see that uainfrazone is created and installed. You should be able to see the same on UASOL1 as well.

UASOL1:#zoneadm list -cv


ID NAME STATUS PATH BRAND IP
0 global running / native shared
- uainfrazone installed /export/zones/uainfrazone cluster shared
UASOL1:#

Note: There is no difference whether you run a command from UASOL1 or UASOL2, since both are in the cluster.

8. Bring up the zones using clzonecluster. (Do not use the zoneadm command to boot the zones.)

UASOL1:#clzonecluster boot uainfrazone


Waiting for zone boot commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL1:#
UASOL1:#zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
1 uainfrazone running /export/zones/uainfrazone cluster shared
UASOL1:#

In UASOL2,

UASOL2:#zoneadm list -cv


ID NAME STATUS PATH BRAND IP
0 global running / native shared
3 uainfrazone running /export/zones/uainfrazone cluster shared
UASOL2:#

9. Check the zone cluster status.

UASOL1:#clzonecluster status uainfrazone


=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Offline Running
UASOL2 uainfrazone2 Offline Running
UASOL1:#

10. The zones will reboot automatically for sysconfig; you can observe this on the zone’s console.

UASOL1:#zlogin -C uainfrazone
[Connected to zone 'uainfrazone' console]
Creating new rsa public/private host key pair
Creating new dsa public/private host key pair
Configuring network interface addresses: clprivnet0.

rebooting system due to change(s) in /etc/default/init

Apr 10 13:21:47 Cluster.Framework: cl_execd: Going down on signal 15.


Apr 10 13:21:47 Cluster.Framework: cl_execd: Going down on signal 15.

[NOTICE: Zone rebooting]

SunOS Release 5.10 Version Generic_147148-26 64-bit


Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
Hostname: uainfrazone1

uainfrazone1 console login:


11. Check the zone cluster status.

UASOL2:#clzonecluster status
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Online Running
UASOL2 uainfrazone2 Online Running
UASOL2:#

We have successfully configured the two-node zone cluster. What’s next? You should log in to one of the zones and configure the resource group and resources. Just log in to any one of the local zones and check the cluster status.

UASOL2:#zlogin uainfrazone
[Connected to zone 'uainfrazone' pts/2]
Last login: Mon Apr 11 01:58:20 on pts/2
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2# export PATH=/usr/cluster/bin:$PATH
bash-3.2# clnode status
=== Cluster Nodes ===

--- Node Status ---


Node Name Status
--------- ------
uainfrazone1 Online
uainfrazone2 Online
bash-3.2#

Similarly, you can create any number of zone clusters under the global cluster. These zone clusters use the host’s private network and other required resources. In the next article, we will see how to configure a resource group in the local zone.

Hope this article is informative to you.

The post Sun Cluster – How to Configure Zone Cluster on Solaris ? appeared first on UnixArena.

Sun Cluster – Configuring Resource Group in Zone Cluster


April 11, 2016, 10:27 am


This article will walk you through how to configure a resource group in a zone cluster. Unlike a traditional cluster, the resource group and cluster resources should be created inside the non-global zones. The required physical or logical resources need to be granted from the global zone using the “clzonecluster” (clzc) command. In this article, we will configure an HA filesystem and IP resource on the zone cluster which we created earlier. In addition, you can configure DB or application resources for HA.

Global Cluster Nodes – UASOL1 & UASOL2


zone Cluster Nodes – uainfrazone1 & uainfrazone2

1.Login to one of the global cluster node.

2.Check the cluster status.

Global Cluster:
UASOL2:#clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------

UASOL2 Online
UASOL1 Online

Zone Cluster :
Log in to one of the zones and check the cluster status (extend the command search path with “/usr/cluster/bin”).

UASOL2:#zlogin uainfrazone
[Connected to zone 'uainfrazone' pts/3]
Last login: Mon Apr 11 02:00:17 on pts/2
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2# export PATH=/usr/cluster/bin:$PATH
bash-3.2# clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
uainfrazone1 Online
uainfrazone2 Online
bash-3.2#

Make sure that both hostnames are present in each node’s “/etc/inet/hosts” file.
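A small pre-flight check along these lines can catch a missing entry before any resources are created (the hostnames are the ones from this example; the loop itself is just a hypothetical helper):

```shell
# Report any cluster hostname that does not resolve on this node.
for h in uainfrazone1 uainfrazone2; do
    getent hosts "$h" >/dev/null 2>&1 || echo "missing hosts entry: $h"
done
```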

3. Log in to one of the global zones (global cluster) and add the IP details to the zone cluster (the IP which needs to be highly available).

UASOL2:#clzc configure uainfrazone


clzc:uainfrazone> add net
clzc:uainfrazone:net> set address=192.168.2.102
clzc:uainfrazone:net> info
net:
address: 192.168.2.102
physical: auto
defrouter not specified
clzc:uainfrazone:net> end
clzc:uainfrazone> commit
clzc:uainfrazone> exit

4. Create the ZFS pool on a shared SAN LUN so that the zpool can be exported and imported on other cluster nodes.

UASOL2:#zpool create oradbp1 c2t15d0


UASOL2:#zpool list oradbp1
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
oradbp1 2.95G 78.5K 2.95G 0% ONLINE -
UASOL2:#

Just manually export the zpool on UASOL2 & try to import it on UASOL1.

UASOL2:#zpool export oradbp1


UASOL2:#logout
Connection to UASOL2 closed.
UASOL1:#zpool import oradbp1
UASOL1:#zpool list oradbp1
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
oradbp1 2.95G 133K 2.95G 0% ONLINE -
UASOL1:#

It works. Let’s map this zpool to the zone cluster – uainfrazone.

5. On one of the global cluster nodes, invoke “clzc” to add the zpool.

UASOL1:#clzc configure uainfrazone


clzc:uainfrazone> add dataset
clzc:uainfrazone:dataset> set name=oradbp1
clzc:uainfrazone:dataset> info
dataset:
name: oradbp1
clzc:uainfrazone:dataset> end
clzc:uainfrazone> commit


clzc:uainfrazone> exit
UASOL1:#

We have successfully added the IP address and dataset to the zone cluster configuration. At this point, these resources can be used within the zone cluster to configure cluster resources.

Configure Resource group and cluster Resources on Zone Cluster:


1. Add the IP to /etc/hosts on the zone cluster nodes (uainfrazone1 & uainfrazone2). We will make this IP highly available through the cluster.

bash-3.2# grep ora /etc/hosts


192.168.2.102 oralsn-ip
bash-3.2#

2. On one of the zone cluster nodes, create the cluster resource group with the name “oradb-rg”.

bash-3.2# clrg create -n uainfrazone1,uainfrazone2 oradb-rg


bash-3.2# clrg status

=== Cluster Resource Groups ===


Group Name Node Name Suspended Status
---------- --------- --------- ------
oradb-rg uainfrazone1 No Unmanaged
uainfrazone2 No Unmanaged

bash-3.2#

If you want to create the resource group for the “uainfrazone” zone cluster from the global zone, you can use the following command (with -Z and the zone-cluster name).

UASOL2:# clrg create -Z uainfrazone -n uainfrazone1,uainfrazone2 oradb-rg


UASOL2:#clrg status -Z uainfrazone

=== Cluster Resource Groups ===

Group Name Node Name Suspended Status


---------- --------- --------- ------
uainfrazone:oradb-rg uainfrazone1 No Unmanaged
uainfrazone2 No Unmanaged
UASOL2:#

3. Create the cluster IP resource for oralsn-ip (see step 1).

bash-3.2# clrslh create -g oradb-rg -h oralsn-ip oralsn-ip-rs


bash-3.2# clrs status

=== Cluster Resources ===

Resource Name Node Name State Status Message


------------- --------- ----- --------------
oralsn-ip-rs uainfrazone1 Offline Offline
uainfrazone2 Offline Offline

bash-3.2#

4. Create the ZFS resource for the zpool oradbp1 (which we created and assigned to this zone cluster in the first section of this article).

You must register the ZFS resource type before adding the resource to the cluster.

bash-3.2# clresourcetype register SUNW.HAStoragePlus


bash-3.2# clrt list
SUNW.LogicalHostname:4
SUNW.SharedAddress:2
SUNW.HAStoragePlus:10
bash-3.2#


Add the dataset resource to the cluster to make the ZFS pool highly available.

bash-3.2# clrs create -g oradb-rg -t SUNW.HAStoragePlus -p zpools=oradbp1 oradbp1-rs


bash-3.2# clrs status

=== Cluster Resources ===


Resource Name Node Name State Status Message
------------- --------- ----- --------------
oradbp1-rs uainfrazone1 Offline Offline
uainfrazone2 Offline Offline

oralsn-ip-rs uainfrazone1 Offline Offline


uainfrazone2 Offline Offline

bash-3.2#

5. Bring the resource group online.

bash-3.2# clrg online -eM oradb-rg


bash-3.2# clrs status

=== Cluster Resources ===

Resource Name Node Name State Status Message


------------- --------- ----- --------------
oradbp1-rs uainfrazone1 Online Online
uainfrazone2 Offline Offline

oralsn-ip-rs uainfrazone1 Online Online - LogicalHostname online.


uainfrazone2 Offline Offline

bash-3.2# uname -a
SunOS uainfrazone2 5.10 Generic_147148-26 i86pc i386 i86pc
bash-3.2#

6. Verify the resource status in uainfrazone1.

bash-3.2# clrg status


=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
oradb-rg uainfrazone1 No Online
uainfrazone2 No Offline

bash-3.2# clrs status


=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
oradbp1-rs uainfrazone1 Online Online
uainfrazone2 Offline Offline

oralsn-ip-rs uainfrazone1 Online Online - LogicalHostname online.


uainfrazone2 Offline Offline
bash-3.2#
bash-3.2# ifconfig -a
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone uainfrazone
inet 127.0.0.1 netmask ff000000
e1000g0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
inet 192.168.2.90 netmask ffffff00 broadcast 192.168.2.255
groupname sc_ipmp0
e1000g0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone uainfrazone
inet 192.168.2.101 netmask ffffff00 broadcast 192.168.2.255
e1000g0:2: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
zone uainfrazone
inet 192.168.2.102 netmask ffffff00 broadcast 192.168.2.255
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 172.16.2.2 netmask ffffff00 broadcast 172.16.2.255
clprivnet0:3: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
zone uainfrazone


inet 172.16.3.66 netmask ffffffc0 broadcast 172.16.3.127


bash-3.2# zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradbp1 86.5K 2.91G 31K /oradbp1
bash-3.2#

You can see that the ZFS dataset “oradbp1” and the IP “192.168.2.102” are up on uainfrazone1.

7. Switch the resource group to uainfrazone2 and check the resource status.

bash-3.2# clrg switch -n uainfrazone2 oradb-rg


bash-3.2# clrg status
=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
oradb-rg uainfrazone1 No Offline
uainfrazone2 No Online

bash-3.2# clrs status


=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
oradbp1-rs uainfrazone1 Offline Offline
uainfrazone2 Online Online

oralsn-ip-rs uainfrazone1 Offline Offline - LogicalHostname offline.


uainfrazone2 Online Online - LogicalHostname online.

bash-3.2#
bash-3.2#

Verify the result at the OS level. Log in to uainfrazone2 and check the following to confirm the switchover.

bash-3.2# ifconfig -a
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone uainfrazone
inet 127.0.0.1 netmask ff000000
e1000g0: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
inet 192.168.2.91 netmask ffffff00 broadcast 192.168.2.255
groupname sc_ipmp0
e1000g0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone uainfrazone
inet 192.168.2.103 netmask ffffff00 broadcast 192.168.2.255
e1000g0:2: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
zone uainfrazone
inet 192.168.2.102 netmask ffffff00 broadcast 192.168.2.255
clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 172.16.2.1 netmask ffffff00 broadcast 172.16.2.255
clprivnet0:3: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
zone uainfrazone
inet 172.16.3.65 netmask ffffffc0 broadcast 172.16.3.127
bash-3.2# df -h /oradbp1/
Filesystem size used avail capacity Mounted on
oradbp1 2.9G 31K 2.9G 1% /oradbp1
bash-3.2# zfs list
NAME USED AVAIL REFER MOUNTPOINT
oradbp1 86.5K 2.91G 31K /oradbp1
bash-3.2#

We have successfully configured the resource group and made the ZFS dataset and IP highly available (HA) on Oracle Solaris zones via the zone cluster concept. Hope this article is informative to you. In the next article, we will see how to add and remove nodes from the zone cluster.

The post Sun Cluster – Configuring Resource Group in Zone Cluster appeared first on UnixArena.

Managing Zone Cluster – Oracle Solaris


April 13, 2016, 9:09 am


This article will talk about managing the Zone Cluster on oracle Solaris. The clzonecluster command supports all zone cluster administrative activity, from creation through
modification and control to final destruction. The clzonecluster command supports single point of administration, which means that the command can be executed from any
node and operates across the entire cluster. The clzonecluster command builds upon the Oracle Solaris zonecfg and zoneadm commands and adds support for cluster
features. We will see that how to add/remove cluster nodes,checking the resource status and listing the resources from the global zone.

Each zone cluster has its own notion of membership. The system maintains membership information for zone clusters. Each machine hosts a component, called the Zone
Cluster Membership Monitor (ZCMM), that monitors the status of all cluster brand zones on that machine. The ZCMM knows which zones belong to which zone
clusters.Naturally, a zone of a zone cluster can only become operational after the global zone on the hosting machine becomes operational. A zone of a zone cluster will
not boot when the global zone is not booted in cluster mode. A zone of a zone cluster can be configured to automatically boot after the machine boots, or the administrator
can manually control when the zone boots. A zone of a zone cluster can fail or an administrator can manually halt or reboot a zone. All of these events result in the zone
cluster automatically updating its membership.

Viewing the cluster status:


1. Check the zone cluster status from the global zone.

To check a specific zone cluster’s status,

UASOL1:#clzc status -v uainfrazone


=== Zone Clusters ===
--- Zone Cluster Status ---
Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Online Running
UASOL2 uainfrazone2 Online Running
UASOL1:#

To check the status of all zone clusters,

UASOL1:#clzc status -v
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Online Running
UASOL2 uainfrazone2 Online Running

2. Check the resource group status of the zone cluster.

UASOL1:#clrg status -Z uainfrazone

=== Cluster Resource Groups ===


Group Name Node Name Suspended Status
---------- --------- --------- ------
uainfrazone:oradb-rg uainfrazone1 No Online
uainfrazone2 No Offline

UASOL1:#

To check all zone clusters’ resource group status from the global zone,

UASOL1:#clrg status -Z all

=== Cluster Resource Groups ===

Group Name Node Name Suspended Status


---------- --------- --------- ------
uainfrazone:oradb-rg uainfrazone1 No Online
uainfrazone2 No Offline

UASOL1:#


3. Let’s check the zone cluster resources from the global zone.

For a specific zone cluster,

UASOL1:#clrs status -Z uainfrazone

=== Cluster Resources ===

Resource Name Node Name State Status Message


------------- --------- ----- --------------
oradbp1-rs uainfrazone1 Online Online
uainfrazone2 Offline Offline

oralsn-ip-rs uainfrazone1 Online Online - LogicalHostname online.


uainfrazone2 Offline Offline

For all zone clusters,

UASOL1:#clrs status -Z all


=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
oradbp1-rs uainfrazone1 Online Online
uainfrazone2 Offline Offline

oralsn-ip-rs uainfrazone1 Online Online - LogicalHostname online.


uainfrazone2 Offline Offline
UASOL1:#
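For quick health checks, this tabular status output can be filtered with standard text tools. A sketch (the sample rows mirror the `clzc status -v` table above and are embedded in a function so the filter can be shown standalone; on the cluster you would pipe the real command output instead):

```shell
# Simulated `clzc status -v` data rows (header lines omitted).
sample_status() {
cat <<'EOF'
uainfrazone UASOL1 uainfrazone1 Online Running
            UASOL2 uainfrazone2 Offline Installed
EOF
}

# Print any zone-cluster node whose zone is not in the Running state.
sample_status | awk '$NF != "Running" {print $(NF-2), "is", $NF}'
# → uainfrazone2 is Installed
```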

Stop & Start the zone cluster:


1. Login to the global zone and stop the zone cluster “uainfrazone”.

UASOL1:#clzc halt uainfrazone


Waiting for zone halt commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL1:#clzc status -v
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Offline Installed
UASOL2 uainfrazone2 Offline Installed

UASOL1:#

2. Start the zone cluster “uainfrazone”.

UASOL1:#clzc boot uainfrazone


Waiting for zone boot commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL1:#clzc status -v
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Online Running
UASOL2 uainfrazone2 Online Running
UASOL1:#zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
3 uainfrazone running /export/zones/uainfrazone cluster shared
UASOL1:#

3. To reboot the zone cluster, use the following command.


UASOL1:#clzc reboot uainfrazone


Waiting for zone reboot commands to complete on all the nodes of the zone cluster "uainfrazone"...
UASOL1:#clzc status -v
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
uainfrazone UASOL1 uainfrazone1 Online Running
UASOL2 uainfrazone2 Online Running
UASOL1:#

How to add new node to the cluster ?


1. We assume that only one zone-cluster node is running, and we plan to add one more node to the zone cluster.

UASOL1:#clzonecluster status oraweb


=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
oraweb UASOL1 oraweb1 Online Running
UASOL1:#

2. Here the zone cluster is already operational and running. In order to add an additional node to this cluster, we need to add the node configuration to the zone cluster. (clzc and clzonecluster are identical commands; you can use either one.)

UASOL1:#clzonecluster configure oraweb


clzc:oraweb> add node
clzc:oraweb:node> set physical-host=UASOL2
clzc:oraweb:node> set hostname=oraweb2
clzc:oraweb:node> add net
clzc:oraweb:node:net> set physical=e1000g0
clzc:oraweb:node:net> set address=192.168.2.132
clzc:oraweb:node:net> end
clzc:oraweb:node> end
clzc:oraweb> exit
UASOL1:#clzonecluster status oraweb
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
oraweb UASOL1 oraweb1 Online Running
UASOL2 oraweb2 Offline Configured

UASOL1:#

3. Install the zone cluster node on UASOL2 (-n takes the physical hostname).

UASOL1:#clzonecluster install -n UASOL2 oraweb


Waiting for zone install commands to complete on all the nodes of the zone cluster "oraweb"...
UASOL1:#clzonecluster status oraweb

=== Zone Clusters ===

--- Zone Cluster Status ---

Name Node Name Zone Host Name Status Zone Status


---- --------- -------------- ------ -----------
oraweb UASOL1 oraweb1 Online Running
UASOL2 oraweb2 Offline Installed

UASOL1:#

4. Boot the zone cluster node “oraweb2” .


UASOL1:#clzonecluster boot -n UASOL2 oraweb


Waiting for zone boot commands to complete on all the nodes of the zone cluster "oraweb"...
UASOL1:#clzonecluster status oraweb

=== Zone Clusters ===

--- Zone Cluster Status ---

Name Node Name Zone Host Name Status Zone Status


---- --------- -------------- ------ -----------
oraweb UASOL1 oraweb1 Online Running
UASOL2 oraweb2 Offline Running

UASOL1:#

The zone status might show as “offline” and it will become online once the sys-config is done (via automatic reboot).

5. Check the zone status after few minutes.

UASOL1:#clzonecluster status oraweb


=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
oraweb UASOL1 oraweb1 Online Running
UASOL2 oraweb2 Online Running
UASOL1:#

How to remove the zone cluster node ?


1. Check the zone cluster status .

UASOL1:#clzonecluster status oraweb


=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
oraweb UASOL1 oraweb1 Online Running
UASOL2 oraweb2 Online Running
UASOL1:#

2. Stop the zone cluster node which needs to be decommissioned.

UASOL1:#clzonecluster halt -n UASOL1 oraweb


Waiting for zone halt commands to complete on all the nodes of the zone cluster "oraweb"...
UASOL1:#

3. Uninstall the zone.

UASOL1:#clzonecluster uninstall -n UASOL1 oraweb


Are you sure you want to uninstall zone cluster oraweb (y/[n])?y
Waiting for zone uninstall commands to complete on all the nodes of the zone cluster "oraweb"...
UASOL1:#clzonecluster status oraweb
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
oraweb UASOL1 oraweb1 Offline Configured
UASOL2 oraweb2 Online Running

UASOL1:#

4. Remove the zone configuration from cluster.

UASOL1:#clzonecluster configure oraweb


clzc:oraweb> remove node physical-host=UASOL1
clzc:oraweb> exit
UASOL1:#clzonecluster status oraweb
=== Zone Clusters ===

--- Zone Cluster Status ---


Name Node Name Zone Host Name Status Zone Status
---- --------- -------------- ------ -----------
oraweb UASOL2 oraweb2 Online Running
UASOL1:#

clzc / clzonecluster help output:


UASOL1:#clzc --help
Usage: clzc <subcommand> [<options>] [+ | <zoneclustername> ...]
       clzc [<subcommand>] -? | --help
       clzc -V | --version

Manage zone clusters for Oracle Solaris Cluster

SUBCOMMANDS:

boot Boot zone clusters


clone Clone a zone cluster
configure Configure a zone cluster
delete Delete a zone cluster
export Export a zone cluster configuration
halt Halt zone clusters
install Install a zone cluster
list List zone clusters
move Move a zone cluster
ready Ready zone clusters
reboot Reboot zone clusters
set Set zone cluster properties
show Show zone clusters
show-rev Show release version on zone cluster nodes
status Status of zone clusters
uninstall Uninstall a zone cluster
verify Verify zone clusters

UASOL1:#clzonecluster --help
Usage: clzonecluster <subcommand> [<options>] [+ | <zoneclustername> ...]
       clzonecluster [<options>] -? | --help
       clzonecluster -V | --version

Manage zone clusters for Oracle Solaris Cluster

SUBCOMMANDS:

boot Boot zone clusters
clone Clone a zone cluster
configure Configure a zone cluster
delete Delete a zone cluster
export Export a zone cluster configuration
halt Halt zone clusters
install Install a zone cluster
list List zone clusters
move Move a zone cluster
ready Ready zone clusters
reboot Reboot zone clusters
set Set zone cluster properties
show Show zone clusters
show-rev Show release version on zone cluster nodes
status Status of zone clusters
uninstall Uninstall a zone cluster
verify Verify zone clusters

UASOL1:#

Hope this article is informative to you. Share it! Comment it!! Be Sociable!!!

The post Managing Zone Cluster – Oracle Solaris appeared first on UnixArena.

