Veritas Storage Foundation and High Availability
Solutions Virtualization Guide
Solaris
5.1
The software described in this book is furnished under a license agreement and may be used
only in accordance with the terms of the agreement.
Legal Notice
Copyright © 2010 Symantec Corporation. All rights reserved.
Symantec, the Symantec Logo, Veritas, Veritas Storage Foundation are trademarks or
registered trademarks of Symantec Corporation or its affiliates in the U.S. and other
countries. Other names may be trademarks of their respective owners.
The product described in this document is distributed under licenses restricting its use,
copying, distribution, and decompilation/reverse engineering. No part of this document
may be reproduced in any form by any means without prior written authorization of
Symantec Corporation and its licensors, if any.
THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS,
REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT,
ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO
BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL
OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING,
PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED
IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, "Rights in
Commercial Computer Software or Commercial Computer Software Documentation", as
applicable, and any successor regulations. Any use, modification, reproduction release,
performance, display or disclosure of the Licensed Software and Documentation by the U.S.
Government shall be solely in accordance with the terms of this Agreement.
Symantec Corporation
350 Ellis Street
Mountain View, CA 94043
http://www.symantec.com
Technical Support
Symantec Technical Support maintains support centers globally. Technical
Support’s primary role is to respond to specific queries about product features
and functionality. The Technical Support group also creates content for our online
Knowledge Base. The Technical Support group works collaboratively with the
other functional areas within Symantec to answer your questions in a timely
fashion. For example, the Technical Support group works with Product Engineering
and Symantec Security Response to provide alerting services and virus definition
updates.
Symantec’s support offerings include the following:
■ A range of support options that give you the flexibility to select the right
amount of service for any size organization
■ Telephone and/or web-based support that provides rapid response and
up-to-the-minute information
■ Upgrade assurance that delivers automatic software upgrade protection
■ Global support purchased on a regional business hours or 24 hours a day, 7
days a week basis
■ Premium service offerings that include Account Management Services
For information about Symantec’s support offerings, you can visit our web site
at the following URL:
www.symantec.com/business/support/index.jsp
All support services will be delivered in accordance with your support agreement
and the then-current enterprise technical support policy.
Customer service
Customer service information is available at the following URL:
www.symantec.com/business/support/
Customer Service is available to assist with non-technical questions, such as the
following types of issues:
■ Questions regarding product licensing or serialization
■ Product registration updates, such as address or name changes
■ General product information (features, language availability, local dealers)
■ Latest information about product updates and upgrades
■ Information about upgrade assurance and support contracts
■ Information about the Symantec Buying Programs
■ Advice about Symantec's technical support options
■ Nontechnical presales questions
■ Issues that are related to CD-ROMs or manuals
Documentation feedback
Your feedback on product documentation is important to us. Send suggestions
for improvements and reports on errors or omissions. Include the title and
document version (located on the second page), and chapter and section titles of
the text on which you are reporting. Send feedback to:
sfha_docs@symantec.com
Managed Services
These services remove the burden of managing and monitoring security devices and events, ensuring rapid response to real threats.
Consulting Services
Symantec Consulting Services provide on-site technical expertise from Symantec and its trusted partners. Symantec Consulting Services offer a variety of prepackaged and customizable options that include assessment, design, implementation, monitoring, and management capabilities. Each is focused on establishing and maintaining the integrity and availability of your IT resources.
Education Services
Education Services provide a full array of technical training, security education, security certification, and awareness communication programs.
To access more information about enterprise services, please visit our web site
at the following URL:
www.symantec.com/business/services/
Select your country or language from the site index.
Overview
This document provides information about Veritas Storage Foundation and High
Availability Virtualization Solutions. Review this entire document before you
install Veritas Storage Foundation and High Availability products in zones, branded
zones, and logical domains.
This book provides high-level examples and information. It assumes that you are a skilled user of Veritas products and are knowledgeable about Sun's virtualization technologies.
This guide is divided into five main chapters, excluding this overview. Each chapter presents information on using a particular Sun virtualization technology with Veritas products. The chapters are:
■ Storage Foundation and High Availability Solutions support for Solaris Zones
■ Storage Foundation and High Availability Solutions support for Branded Zones
■ Storage Foundation and High Availability Solutions support for Solaris Logical
Domains
■ Using multiple nodes in a Logical Domain environment
■ Configuring Logical Domains for high availability
Reference documentation
The following documentation provides information on installing, configuring,
and using Veritas Cluster Server:
■ Veritas Cluster Server Release Notes
■ Veritas Cluster Server Installation Guide
■ Veritas Cluster Server Bundled Agents Reference Guide
■ Veritas Cluster Server Agent for DB2 Installation and Configuration Guide
■ Veritas Cluster Server Agent for Oracle Installation and Configuration Guide
■ Veritas Cluster Server Agent for Sybase Installation and Configuration Guide
The following documentation provides information on installing, configuring,
and using Veritas Storage Foundation products:
■ Veritas Storage Foundation Release Notes
■ Veritas Storage Foundation Installation Guide
■ Veritas Volume Manager Administrator's Guide
■ Veritas File System Administrator's Guide
The following documentation provides information on installing, configuring,
and using Veritas Storage Foundation Cluster File System:
■ Veritas Storage Foundation Cluster File System Release Notes
■ Veritas Storage Foundation Cluster File System Installation Guide
■ Veritas Storage Foundation Cluster File System Administrator's Guide
Note: Storage Foundation Cluster File System does not support branded zones.
For Solaris Logical Domains, Branded Zone, and Zone installation and configuration
information, refer to the Sun Microsystems site: www.sun.com.
Sun Microsystems provides regular updates and patches for the Solaris Logical
Domains, Branded Zones, and Zone features. Contact Sun Microsystems for details.
Reference online
For the latest information about this guide, see:
http://seer.entsupport.symantec.com/docs/vascont/278.html
Solaris Zones is a software partitioning technology that provides an isolated environment for running applications. This isolation prevents processes that are running in one zone from monitoring or affecting processes running in other zones.
See the Solaris operating environment document System Administration Guide: Solaris Containers--Resource Management and Solaris Zones.
Sun Microsystems, Inc. provides regular updates and patches for the Solaris Zones
feature. Contact Sun Microsystems, Inc. for more information.
VCS defines the zone information at the level of the service group so that you do
not have to define it for each resource. You need to specify a per-system value for
the ContainerInfo attribute.
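A minimal main.cf sketch of a per-system ContainerInfo definition, assuming a hypothetical service group zone_grp, systems sysA and sysB, and a zone named zone1:
group zone_grp (
SystemList = { sysA = 0, sysB = 1 }
ContainerInfo@sysA = { Name = zone1, Type = Zone, Enabled = 1 }
ContainerInfo@sysB = { Name = zone1, Type = Zone, Enabled = 1 }
)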
■ RunInContainer
When the value of the RunInContainer key is 1, the agent function (entry point) for that resource runs inside the local container. When the value of the RunInContainer key is 0, the agent function (entry point) for that resource runs outside the local container (in the global environment).
A limitation for the RunInContainer value is that only script agent functions (entry points) can run inside a container.
■ PassCInfo
When the value of the PassCInfo key is 1, the agent function receives the
container information that is defined in the service group’s ContainerInfo
attribute. An example use of this value is to pass the name of the container to
the agent.
Zone-aware resources
Table 2-1 lists the ContainerOpts attribute default values for resource types. Zone-aware resources have predefined values for the ContainerOpts attribute.
Note: Symantec recommends that you do not modify the value of the ContainerOpts attribute, with the exception of the Mount agent.
Table 2-1 ContainerOpts attribute default values for applications and resource types

Resource type    RunInContainer    PassCInfo
Apache           1                 0
Application      1                 0
ASMInst          1                 0
ASMDG            1                 0
Db2udb           1                 0
IP               0                 1
IPMultiNIC       0                 1
IPMultiNICB      0                 1
Process          1                 0
Zone             0                 1
Oracle           1                 0
Netlsnr          1                 0
Sybase           1                 0
SybaseBk         1                 0
Second Decide on the location of the zone root, which is either on local storage or shared storage.
Fourth Create the application service group and configure its resources.
See “Configuring the service group for the application” on page 27.
■ Use a loopback file system. All mounts that the application uses must be
part of the zone configuration and must be configured in the service group.
For example, you can create a zone, z-ora, and define the file system
containing the application’s data to have the mount point as /oradata.
When you create the zone, you can define a path in the global zone. An
example is /export/home/oradata, which the mount directory in the
non-global zone maps to. The MountPoint attribute of the Mount resource
for the application is set to /export/home/oradata. Confirm that
/export/home/oradata maps to /oradata.
■ Use a direct mount file system. All file system mount points that the
application uses that run in a zone must be set relative to the zone’s root.
For example, if the Oracle application uses /oradata, and you create the
zone with the zonepath as /z_ora, then the mount must be
/z_ora/root/oradata. The MountPoint attribute of the Mount resource must
be set to this path. The Mount resource depends on the Zone resource.
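For the direct mount example above, a sketch of the corresponding Mount resource definition, assuming a hypothetical disk group ora_dg, volume ora_vol, and a Zone resource named ora_zone:
Mount ora_mnt (
MountPoint = "/z_ora/root/oradata"
BlockDevice = "/dev/vx/dsk/ora_dg/ora_vol"
FSType = vxfs
FsckOpt = "-y"
)
ora_mnt requires ora_zone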
non-global zone. When you run the ifconfig command in the global zone with the zone option, it plumbs the IP address and makes it available to the zone that you specify. The need for the container's name comes from the use of this command, even though the command cannot run in the container.
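A hypothetical illustration of such a command, assuming the interface bge0, an example address, and a zone named z-ora:
global# ifconfig bge0 addif 192.168.1.10 netmask 255.255.255.0 zone z-ora up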
zonecfg -z newzone
zonecfg:newzone> create
2 Set the zonepath parameter to specify a location for the zone root.
3 If your application data resides on a loopback mount file system, create the
loopback file system in the zone.
4 Exit the zonecfg configuration.
zonecfg> exit
mkdir zonepath
11 If the application data is on a direct mount file system, mount the file system
from the global zone with the complete path that starts with the zone root.
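A sketch of such a mount, assuming a hypothetical zonepath /zones/newzone, disk group datadg, and volume datavol:
global# mount -F vxfs /dev/vx/dsk/datadg/datavol /zones/newzone/root/oradata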
zonecfg -z newzone
zonecfg:newzone> create
3 Set the zonepath parameter to specify a location for the zone root.
4 If your application data resides on a loopback mount file system, create the
loopback file system in the zone.
5 Exit the zonecfg configuration.
zonecfg> exit
mkdir zonepath
11 If the application data is on a loopback file system, mount the file system
containing the application’s data on shared storage.
13 If the application data is on a direct mount file system, mount the file system
from the global zone with the complete path that starts with the zone root.
Figure 2-1 Zone root on local disks with loopback file system
[Resource dependency diagram with Application, IP, Zone, NIC, Mount, and DiskGroup resources.]
Figure 2-2 depicts the dependency diagram when the zone root is set up on local
storage with a direct mount file system for the application. You can replace the
Mount resource with the CFSMount resource and the DiskGroup resource with
the CVMVolDg resource in the following diagram. In this configuration, decide if
you want the service group to be a parallel service group. If so, you may need to
localize certain attributes for resources in the service group. For example, you
have to change the IP resource's Address attribute for each node.
Figure 2-2 Zone root on local disks with direct mount file system
[Resource dependency diagram with Application, IP, Mount, NIC, Zone, and DiskGroup resources; the Mount resource manages mounting and umounting the application file system.]
attributes for resources in the service group. For example, you have to change the
IP resource's Address attribute for each node.
Figure 2-3 Zone root on shared storage with loopback file system
[Resource dependency diagram with Application, IP, and Zone resources, plus DiskGroup resources for the zone root and the application file system.]
Figure 2-4 depicts the dependency diagram when a zone root is set up on shared
storage with the direct mount file system for the application. You can replace the
Mount resource with the CFSMount resource and the DiskGroup resource with
the CVMVolDg resource in the following diagram. In this configuration, decide if
you want the service group to be a parallel service group. If so, you may need to
localize certain attributes for resources in the service group. For example, you
have to change the IP resource's Address attribute for each node.
Figure 2-4 Zone root on shared storage with a direct mount file system
[Resource dependency diagram with Application, IP, Mount, NIC, Zone, and DiskGroup resources; separate Mount and DiskGroup resources are used for the application disk group and file system and for the zone root file system.]
Use the following principles when you create the service group:
■ Set the MountPoint attribute of the Mount resource to the mount path.
■ If the application requires an IP address, configure the IP resource in the
service group.
■ If the zone root file system is on shared storage, you can configure separate
mounts for the zone and the application (as shown in the illustration), but you
can configure the same disk group for both.
If the application service group does not exist, the script creates a service
group with a resource of type Zone.
The script adds a resource of type Zone to the application service group. It
also creates a user account with group administrative privileges to enable
inter-zone communication.
2 Modify the resource dependencies to reflect your zone configuration. See the
resource dependency diagrams for more information.
3 Save the service group configuration and bring the service group online.
■ The systems hosting the service group have the required operating system to
run zones.
■ The service group does not have more than one resource of type Zone.
■ The dependencies of the Zone resource are correct.
To verify the zone configuration
1 If you use custom agents, make sure the resource type is added to the APP_TYPES or SYS_TYPES environment variable.
See “Using custom agents in zones” on page 23.
2 Run the hazoneverify command to verify the zone configuration.
# hazoneverify servicegroup_name
Troubleshooting zones
Use the following information to troubleshoot VCS and zones:
■ VCS HA commands do not work.
Recommended actions:
■ Verify the VCS packages are installed.
■ Run the halogin command from the zone.
For more information on the halogin command, refer to the Veritas Cluster
Server User's Guide.
■ Verify your VCS credentials. Make sure the password is not changed.
■ Verify the VxSS certificate is not expired.
Recommended actions:
■ Verify VCS and the agent packages are installed correctly.
■ Verify the application is installed in the zone.
■ Verify the configuration definition of the agent.
Figure 2-5 An application service group that can fail over into a zone and back
[Figure: the service group contains Application, Zone, Mount, and DiskGroup resources on sysA (Solaris 9) and sysB (Solaris 10). On sysA the Zone resource is needed but does not manage a real entity; on sysB the Application resource resides in the non-global zone.]
In the main.cf configuration file, define the container name, type of container,
and whether it is enabled or not.
On sysA, set the value of Enabled to 2 to ignore zones so that the application runs
on the physical system. When the service group fails over to sysB, the application
runs inside the zone after the failover because Enabled is set to 1 on sysB. The
application can likewise fail over to sysA from sysB.
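A sketch of the corresponding per-system ContainerInfo definitions for this scenario, assuming a hypothetical zone named z-app:
ContainerInfo@sysA = { Name = z-app, Type = Zone, Enabled = 2 }
ContainerInfo@sysB = { Name = z-app, Type = Zone, Enabled = 1 }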
One way to make a file system available in the non-global zone is to share access of this file system with one or more non-global zones. For example, if a configuration file is available in a particular file system and this configuration file is required by the non-global zone, then the file system can be shared with the non-global zone using a loopback file system mount.
The following commands share access of file system /mnt1 as a loopback file
system mount with the non-global zone myzone:
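A sketch of the zonecfg steps, assuming that /mnt1 is used as both the global zone directory (special) and the mount point inside the zone (dir); the lofs mount takes effect when the zone boots:
global# zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/mnt1
zonecfg:myzone:fs> set special=/mnt1
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit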
The value of dir is a directory in the non-global zone. The value of special is a
directory in the global zone to be mounted in the non-global zone.
Caution: Sharing file systems with non-global zones through a loopback file
system mount makes the file system available for simultaneous access from all
the non-global zones. This method should be used only when you want shared
read-only access to the file system.
The loopback file system mount mode of sharing file systems in the non-global
zones is supported in Veritas File System 4.1 and later.
Note: VxFS entries in the global zone /etc/vfstab file for non-global zone direct
mounts are not supported, as the non-global zone may not yet be booted at the
time of /etc/vfstab execution.
Once a file system has been delegated to a non-global zone through a direct mount,
the mount point will be visible in the global zone through the mount command,
but not through the df command.
3 Log in to the non-global zone and ensure that the file system is mounted:
At zone boot, the fsck command is run on the file system before the file system is mounted. If the fsck command fails, the zone fails to boot.
To add a direct mount to a zone's configuration
1 Check the status and halt the zone:
2 Add a new FS entry to the zone's configuration and set type to vxfs. For a
non-cluster file system, for example:
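A sketch of such an entry, assuming a hypothetical disk group dg1, volume vol1, and a mount point of /mnt1 inside the zone:
global# zonecfg -z zonename
zonecfg:zonename> add fs
zonecfg:zonename:fs> set dir=/mnt1
zonecfg:zonename:fs> set special=/dev/vx/dsk/dg1/vol1
zonecfg:zonename:fs> set raw=/dev/vx/rdsk/dg1/vol1
zonecfg:zonename:fs> set type=vxfs
zonecfg:zonename:fs> end
zonecfg:zonename> commit
zonecfg:zonename> exit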
4 Log in to the non-global zone and ensure that the file system is mounted:
# zonecfg -z zonename
# rm -rf zonepath/zonename
If the non-global zone is not in the installed state, then you might not have configured the non-global zone correctly. Refer to Sun Microsystems' documentation.
3 Repeat steps 1 and 2 for each system in the cluster, except the last system.
4 On the last system in the cluster, configure, install, and boot the non-global
zone.
■ Configure the non-global zone:
# zonecfg -z zonename
■ On the systems that were halted in step 1, boot the non-global zone:
fd=open(filename, oflag)
ioctl(fd, VX_SETCACHE, VX_CONCURRENT)
write(fd, buff, numofbytes)
To enable Oracle Disk Manager file access from non-global zones with Veritas File
System
1 Make global zone licenses visible to the non-global zone by exporting the
/etc/vx/licenses/lic directory to the non-global zone as a lofs:
2 Create the /dev/odm directory in the non-global zonepath from the global
zone:
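A sketch of the commands for steps 1 and 2, assuming the zone myzone with a hypothetical zonepath of /myzone:
global# zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/etc/vx/licenses/lic
zonecfg:myzone:fs> set special=/etc/vx/licenses/lic
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit
global# mkdir -p /myzone/dev/odm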
3 Log in to the non-global zone and mount /dev/odm either manually or use
the startup script. Use one of the following:
■ global# zlogin myzone
myzone# mount -F odm /dev/odm /dev/odm
Or:
■ global# zlogin myzone
myzone# /lib/svc/method/odm start
Caution: Exporting raw volumes to non-global zones has implicit security risks.
It is possible for the zone administrator to create malformed file systems that
could later panic the system when a mount is attempted. Writing directly to raw volumes that are exported to non-global zones, using utilities such as dd, can lead to data corruption in certain scenarios.
global# ls -l /dev/vx/rdsk/rootdg/vol1
crw------- 1 root root 301, 102000 Jun 3 12:54 /dev/vx/rdsk/rootdg/vol1
crw------- 1 root sys 301, 102000 Jun 3 12:54 /devices/pseudo/vxio@0:rootdg,vol1,102000,raw
4 Verify that /myzone/dev/vx contains the raw volume node and that the
non-global zone can perform I/O to the raw volume node.
The exported device can now be used for performing I/O or for creating file
systems.
Note: A VxFS file system can only be constructed and mounted from the global
zone.
vm240v1:/-> ls -l /devices/pseudo/vxio*vol1*
brw------- 1 root sys 302, 66000 Mar 25 17:21 /devices/pseudo/vxio@0:mydg,vol1,66000,blk
crw------- 1 root sys 302, 66000 Mar 25 17:21 /devices/pseudo/vxio@0:mydg,vol1,66000,raw
vm240v1:/-> ls -l /dev/vx/*dsk/mydg/vol1
brw------- 1 root root 302, 66000 Mar 25 17:21 /dev/vx/dsk/mydg/vol1
crw------- 1 root root 302, 66000 Mar 25 17:21 /dev/vx/rdsk/mydg/vol1
zonecfg:myzone1> add inherit-pkg-dir
zonecfg:myzone1:inherit-pkg-dir> set dir=/etc/fs/vxfs
zonecfg:myzone1:inherit-pkg-dir> end
Note: This issue applies to any Solaris device for which the /dev or /devices
device node is changed and has been configured in the non-global zone before the
change, including Solaris Volume Manager volumes.
Chapter 3
Storage Foundation and
High Availability Solutions
support for Branded Zones
This chapter includes the following topics:
■ System requirements
System requirements
Veritas Cluster Server (VCS) and Veritas Storage Foundation (SF) requirements
in a branded zone environment are as follows:
You can obtain the Containers software bundles from Sun Download
Center at: http://www.sun.com/software/solaris/containers. For
detailed information about the above requirements, read Sun
Microsystems' README files from the software bundle.
http://sunsolve.sun.com
Note: Solaris 8 is not supported in this release.
SF requirements
■ SF 5.1
Database support
The following Oracle versions are supported in a branded zone:
■ 9iR2
■ 10gR2
■ 11gR1
On Solaris 9 systems:
Uninstall VCS/SF
On Solaris 10 systems:
# zonecfg -z sol9-zone
sol9-zone: No such zone configured
Use 'create' to begin configuring a new zone.
Note that zone root for the branded zone can either be on the local storage
or the shared storage (raw volumes with VxFS).
■ Add a virtual network interface.
■ Verify the zone configuration for the zone and exit the zonecfg command
prompt.
zonecfg:sol9-zone> verify
zonecfg:sol9-zone> exit
6 Verify the zone information for the solaris9 zone you configured.
After the zone installation is complete, run the following command to list
the installed zones and to verify the status of the zones.
# /usr/lib/brand/solaris9/s9_p2v sol9-zone
After the zone booting is complete, run the following command to verify the
status of the zones.
# zoneadm list -v
■ For Solaris 10, edit the VCS startup script (/etc/init.d/vcs) to add the
following in the beginning of the file:
exit 0
10 If you configured Oracle to run in the branded zone, then install the VCS
agent for Oracle packages (VRTSvcsea) and the patch in the branded zone.
See the Veritas Cluster Server Agent for Oracle Installation and Configuration
Guide for installation instructions.
11 For ODM support, install the following additional packages and patches in
the branded zone:
■ Install the following 5.1 packages:
■ VRTSvlic
■ VRTSodm
12 If using ODM support, relink Oracle ODM library in Solaris 9 branded zones:
■ Log into Oracle instance.
■ Relink Oracle ODM library.
If you are running Oracle 9iR2:
$ rm $ORACLE_HOME/lib/libodm9.so
$ ln -s /opt/VRTSodm/lib/sparcv9/libodm.so \
$ORACLE_HOME/lib/libodm9.so
If you are running Oracle 10gR2:
$ rm $ORACLE_HOME/lib/libodm10.so
$ ln -s /opt/VRTSodm/lib/sparcv9/libodm.so \
$ORACLE_HOME/lib/libodm10.so
If you are running Oracle 11gR1:
$ rm $ORACLE_HOME/lib/libodm11.so
$ ln -s /opt/VRTSodm/lib/sparcv9/libodm.so \
$ORACLE_HOME/lib/libodm11.so
■ Ensure that you have the correct license to run ODM. If you are migrating from a host which has a license to run ODM, ensure that you have the correct license in the /etc/vx/licenses/lic directory.
Otherwise, make global zone licenses visible to the non-global zone by
exporting the /etc/vx/licenses/lic directory to the non-global zone as
a lofs:
Check if the /dev/odm directory exists, if not create the /dev/odm directory
in the non-global zone from the global zone:
■ If you are using ODM, first ensure that Storage Foundation works properly. If ODM is not started in the branded zone, start it: log in to the branded zone and either mount /dev/odm manually or run /lib/svc/method/odm start.
13 Configure the resources in the VCS configuration file in the global zone. For
example:
group g1 (
SystemList = { vcs_sol1 = 0, vcs_sol2 = 1 }
ContainerInfo@vcs_sol1 = { Name = zone1, Type = Zone, Enabled = 1 }
ContainerInfo@vcs_sol2 = { Name = zone1, Type = Zone, Enabled = 1 }
AutoStartList = { vcs_sol1 }
Administrators = { "z_z1@vcs_lzs@vcs_sol2.symantecexample.com" }
)
Process p1 (
PathName = "/bin/ksh"
Arguments = "/var/tmp/cont_yoyo"
)
Zone z1 (
)
p1 requires z1
See the Veritas Cluster Server Bundled Agents Reference Guide for VCS Zone
agent details.
Chapter 4
Storage Foundation and
High Availability Solutions
support for Solaris Logical
Domains
This chapter includes the following topics:
■ New features
■ System requirements
■ Product licensing
■ Using Veritas Volume Manager snapshots for cloning Logical Domain boot
disks
■ Software limitations
■ Known issues
Note: The SFCFS stack can be installed across multiple I/O domains within or
across physical servers.
See “Veritas Cluster Server limitations” on page 99.
Standardization of tools
Independent of how an operating system is hosted, consistent storage management
tools save an administrator time and reduce the complexity of the environment.
Storage Foundation in the control domain provides the same command set, storage
namespace, and environment as in a non-virtual environment.
Array migration
Data migration for Storage Foundation can be executed in a central location,
migrating all storage from an array utilized by Storage Foundation managed hosts.
This powerful, centralized data migration functionality is available with Storage
Foundation Manager 1.1 and later.
New features
This section describes the new features in Solaris Logical Domains (LDoms) using
the products in the Veritas Storage Foundation and High Availability Solutions.
The vxloadm utility provides access to the contents of a VxVM volume from the control domain by mapping all the partitions contained within that volume using the vxlo driver. The partitions can then be mounted if they contain valid file systems.
To use the vxloadm utility
1 Load the vxlo driver in memory:
# cd /kernel/drv/sparcv9
# add_drv -m '* 0640 root sys' vxlo
# modload vxlo
2 Run the vxloadm utility:
# /etc/vx/bin/vxloadm
3 To unload the vxlo driver when it is no longer needed:
# rem_drv vxlo
# modinfo | grep vxlo
226 7b3ec000 3870 306 1 vxlo (Veritas Loopback Driver 0.1)
# modunload -i 226
where 226 is the module ID from the modinfo | grep vxlo command.
This creates a device node entry for every slice or partition contained within the
volume in the /dev/vxlo/dsk/ and /dev/vxlo/rdsk/ directories.
# ls -l /dev/vxlo/dsk/
lrwxrwxrwx 1 root root 46 Sep 25 14:04 vol1s0 -> ../../../devices/pseudo/vxlo@0:vol1s0,1,blk
# ls -l /dev/vxlo/rdsk/
lrwxrwxrwx 1 root root 46 Sep 25 14:04 vol1s0 -> ../../../devices/pseudo/vxlo@0:vol1s0,1,raw
lrwxrwxrwx 1 root root 46 Sep 25 14:04 vol1s3 -> ../../../devices/pseudo/vxlo@0:vol1s3,2,raw
Use the vxloadm get command to display the list of all currently mapped
partition(s) created using the vxloadm utility. For example:
# /etc/vx/bin/vxloadm get
VxVM INFO V-5-1-0     NAME    FILENAME                 MOUNT  OFFSET   C/H/S
VxVM INFO V-5-1-15260 vol1s0  /dev/vx/dsk/testdg/vol1         6180     6787/1/618
VxVM INFO V-5-1-15260 vol1s3  /dev/vx/dsk/testdg/vol1         4326000  50902/1/618
Use the appropriate file system commands to access the file system(s). For example:
# fstyp /dev/vxlo/rdsk/vol1s0
ufs
# mount -F ufs /dev/vxlo/dsk/vol1s0 /mnt
Use the vxloadm delete command to remove the partition mappings of a volume. For example:
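A sketch, assuming the mapped partition names from the previous output (the exact arguments are an assumption):
# /etc/vx/bin/vxloadm delete vol1s0
# /etc/vx/bin/vxloadm delete vol1s3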
Note: This vxloadm utility should only be used on volumes that are currently not
in use or held open by a guest domain.
■ The relabel succeeds only if the number of available blocks is greater than the last sector of each and every non-s2 partition. Otherwise, the vxformat command displays an error message and exits. A successful relabel looks like the following example:
# /etc/vx/bin/vxformat c0d1s2
rawpath: /dev/rdsk/c0d1s2
Old disk capacity: 2097000 blocks
New disk capacity: 4194000 blocks
Device /dev/rdsk/c0d1s2 has been successfully re-labeled.
Please use prtvtoc(1) to obtain the latest partition table information
If the underlying device size has not changed, the vxformat command displays
the following message without changing the label. For example:
# /etc/vx/bin/vxformat c0d1s2
Old disk capacity: 2343678 blocks
New disk capacity: 2343678 blocks
size of device /dev/rdsk/c0d1s2 is unchanged
Note: For resizing a volume exported as a single slice: The new size should be
visible dynamically in the guest immediately.
For resizing a volume exported as a full disk: Even though the new size is visible
dynamically in the guest, the new space allocated in the volume cannot be utilized
unless the label in the vdisk has been adjusted to reflect the new sectors. This
adjustment of the label needs to be done carefully.
Symantec has developed a utility to automatically do the relabeling of the vdisk
in the guest that will be available in an upcoming release.
Figure 4-1 Split Storage Foundation stack model with Solaris Logical Domains
[Figure labels: Volume, Virtual Disk Client, VxVM/CVM, DMP, Domain Channel, Hypervisor, Server, Storage.]
See “Cluster Volume Manager in the control domain for providing high
availability” on page 94.
■ VxFS drivers in the guest domain cannot currently interact with the VxVM
drivers in the control domain. This renders some features, which require direct
VxVM-VxFS coordination, unusable in such a configuration.
See “Veritas Storage Foundation features restrictions” on page 63.
Note: VxFS can also be placed in the control domain, but there will be no
coordination between the two VxFS instances in the guest and the control
domain.
Shrinking a VxFS file system, on the other hand, requires you to first shrink
the file system in the guest LDom using the fsadm command, and then the
volume in the control domain using the vxassist command. Using the
vxassist command requires you to use the -f option of the command, as in
the following example.
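A minimal sketch of the sequence, assuming a hypothetical VxFS file system mounted at /mnt in the guest and a volume vol1 in disk group dg1; new_size is a placeholder for the new size (for fsadm, in sectors by default):
ldom1# fsadm -F vxfs -b new_size /mnt
primary# vxassist -g dg1 -f shrinkto vol1 new_size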
Caution: Do not shrink the underlying volume beyond the size of the VxFS file
system in the guest as this can lead to data loss.
[Figure 4-2 labels: Domain Channel, Hypervisor, Server, Path A, Path B, Storage.]
Figure 4-2 illustrates the guest-based Storage Foundation stack model with Solaris Logical Domains.
Note: Only full SCSI disks can be used under Veritas Volume Manager (VxVM)
and DMP in this model. Non-SCSI devices (volume, file, slice, etc) are not supported.
Veritas Storage Foundation and High Availability Solutions and Veritas Storage Foundation Cluster File System support running in the guest Logical Domains.
[Figure 4-3: an SFCFS cluster across two guest domains (Node 1 and Node 2) on a single T5240 server; each node uses IPMP, LLT heartbeat (HB) links, and DMP.]
Figure 4-3 illustrates that each guest domain gets network and disk storage
redundancy from the two I/O domains.
[Figure 4-4: a two-node SFCFS cluster with Guest Domain 1 on Server 1 and Guest Domain 2 on Server 2, connected through an interconnect and the public network to shared storage; each node uses IPMP, LLT heartbeat (HB) links, and DMP. Legend: IPMP = IP Multipath, VSW = Virtual Switch, VDS = Virtual Disk Service, HB = LLT Heartbeat Links, DMP = Dynamic Multipathing, SFCFS = Storage Foundation Cluster File System.]
Figure 4-4 illustrates that each guest domain gets network and disk storage
redundancy from the two I/O domains on that physical server. The guest cluster
spans across two physical servers.
[Figure 4-5: a four-node SFCFS cluster (Nodes 1 through 4 in Guest Domains 1 through 4) spread across Server 1 and Server 2, connected through an interconnect and the public network to shared storage; each node uses IPMP, LLT heartbeat (HB) links, and DMP. Legend: IPMP = IP Multipath, VSW = Virtual Switch, HB = LLT Heartbeat Links, VDS = Virtual Disk Service, DMP = Dynamic Multipathing, SFCFS = Storage Foundation Cluster File System.]
Figure 4-5 illustrates that each guest domain gets network and disk storage
redundancy from two I/O domains on that physical server. The guest cluster spans
across two physical servers.
[Figure 4-6: a four-node SFCFS cluster (Nodes 1 through 4 in Guest Domains 1 through 4) on a single T5440 server, connected through an interconnect and, through physical adapters, to storage and the public network; each node uses IPMP, LLT heartbeat (HB) links, and DMP. Legend: IPMP = IP Multipath, VSW = Virtual Switch, HB = LLT Heartbeat Links, VDS = Virtual Disk Service, DMP = Dynamic Multipathing, SFCFS = Storage Foundation Cluster File System.]
Figure 4-6 illustrates that each guest domain gets its disk storage redundancy from two of the four I/O domains. Each guest domain gets its network redundancy from all four I/O domains.
System requirements
This section describes the system requirements for this release.
Warning: Patch version and information are determined at the time of product
release. Contact your vendor for the most current patch version and information.
139562-02 (obsoleted by 138888-07): This patch fixes the following SUN bugs that affect SF functionality:
Product licensing
Customers running Veritas Storage Foundation or Veritas Storage Foundation
Cluster File System in a Solaris LDom environment are entitled to use an unlimited
number of logical domains on each licensed server or CPU.
# cd pkgs
# pkgadd -d VRTSvlic.pkg
# pkgadd -d VRTSvxfs.pkg
# pkgadd -d VRTSfssdk.pkg
Note: This section applies to only the Split Storage Foundation model.
In the following example, the control domain is named “primary” and the guest domain is named “ldom1.” The prompts in each step show in which domain to run the command.
To create virtual disks on top of the Veritas Volume Manager data volumes using
the ldm command
1 The VxVM diskgroup on the target LDom host is imported in the control
domain, after which volumes are visible from inside the control domain.
See the Veritas Volume Manager Administrator’s Guide to move disk groups
between systems.
2 In the control domain (primary), configure a service exporting the VxVM
volume containing a VxFS or UFS filesystem as a slice using the
options=slice option:
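A sketch of such an export, assuming a hypothetical disk group datadg, volume datavol, virtual disk service primary-vds0, and guest domain ldom1; the second command corresponds to the subsequent step that assigns the exported volume to the guest:
primary# ldm add-vdsdev options=slice /dev/vx/dsk/datadg/datavol datavol@primary-vds0
primary# ldm add-vdisk vdisk1 datavol@primary-vds0 ldom1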
Caution: With Solaris 10, Update 5 and LDoms 1.1, a volume by default shows
up as a full disk in the guest. The Virtual Disk Client driver writes a VTOC on
block 0 of the virtual disk, which will end up as a WRITE on block 0 of the
VxVM volume. This can potentially cause data corruption, because block 0
of the VxVM volume contains user data. Using options=slice exports a
volume as a slice to the guest and does not cause any writes to block 0,
therefore preserving user data.
4 Start the guest domain, and ensure that the new virtual disk is visible.
5 If the new virtual disk device node entries do not show up in the /dev/[r]dsk directories, then run the devfsadm command in the guest domain:
ldom1# devfsadm -C
ldom1# ls -l /dev/dsk/c0d1s0
6 Mount the file system on the disk to access the application data:
Note: This section applies to the Split Storage Foundation stack model only.
For the guest-based Storage Foundation model:
See “How Storage Foundation and High Availability Solutions works in the guest
Logical Domains” on page 65.
2 Create a VxVM volume of the desired layout (in this example, creating a simple
volume):
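A sketch of the volume creation, assuming a hypothetical disk group datadg and a 10g volume named datavol:
primary# vxassist -g datadg make datavol 10g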
5 Start the guest domain, and ensure that the new virtual disk is visible:
6 If the new virtual disk device node entries do not show up in the /dev/[r]dsk directories, then run the devfsadm command in the guest domain:
ldom1# devfsadm -C
7 Label the disk using the format command to create a valid label before trying
to access it.
See the format(1M) manual page.
8 Create the file system where c0d1s2 is the disk.
Figure 4-7 Example of using Veritas Volume Manager snapshots for cloning Logical Domain boot disks
[Figure: in the control domain, vdisk1 is created by exporting the large volume “bootdisk1-vol”, and vdisk2 is created by exporting the snapshot volume “SNAP-bootdisk1-vol”; each guest sees its boot disk as c0d0s2.]
Before this procedure, ldom1 has its boot disk contained in a large volume,
/dev/vx/dsk/boot_dg/bootdisk1-vol.
# ldm list-constraints -x
# ldm add-domain -i
If you specify the -b option to the vxsnap addmir command, you can use the
vxsnap snapwait command to wait for synchronization of the snapshot
plexes to complete, as shown in the following example:
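A sketch, assuming the boot_dg disk group and bootdisk1-vol volume from this example:
primary# vxsnap -b -g boot_dg addmir bootdisk1-vol
primary# vxsnap -g boot_dg snapwait bootdisk1-vol nmirror=1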
Caution: Shut down the guest domain before executing the vxsnap command
to take the snapshot.
Either of the following attributes may be specified to create the new snapshot
volume, snapvol, by breaking off one or more existing plexes in the original
volume:
plex Specifies the plexes in the existing volume that are to be broken off. This
attribute can only be used with plexes that are in the ACTIVE state.
nmirror Specifies how many plexes are to be broken off. This attribute can only be
used with plexes that are in the SNAPDONE state. Such plexes could have
been added to the volume by using the vxsnap addmir command.
Snapshots that are created from one or more ACTIVE or SNAPDONE plexes
in the volume are already synchronized by definition.
For backup purposes, a snapshot volume with one plex should be sufficient.
For example,
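A sketch of a single-plex break-off snapshot, assuming the volume names used in this example:
primary# vxsnap -g boot_dg make source=bootdisk1-vol/newvol=SNAP-bootdisk1-vol/nmirror=1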
5 Start ldom1 and boot ldom1 from its primary boot disk vdisk1.
6 If the new virtual disk device node entries do not show up in the /dev/[r]dsk directories, then run the devfsadm command in the guest domain:
ldom1# devfsadm -C
ldom1# ls /dev/dsk/c0d2s*
7 Mount the root file system of c0d2s0 and modify the /etc/vfstab entries
such that all c#d#s# entries are changed to c0d0s#. You must do this because
ldom2 is a new LDom and the first disk in the OS device tree is always named
as c0d0s#.
8 After you change the vfstab file, unmount the file system and unbind vdisk2
from ldom1:
After booting, ldom2 appears as ldom1 on the console because the other host-specific parameters, such as the hostname and IP address, are still those of ldom1.
10 To change the parameters bring ldom2 to single-user mode and run the
sys-unconfig command.
11 Reboot ldom2.
During the reboot, the operating system prompts you to configure the
host-specific parameters such as hostname and IP address, which you must
enter corresponding to ldom2.
12 After you have specified all these parameters, ldom2 boots successfully.
Software limitations
The following section describes some of the limitations of the Solaris Logical
Domains software and how those software limitations affect the functionality of
the Veritas Storage Foundation products.
These error messages are due to a kernel memory corruption occurring in the
Solaris kernel driver stacks (virtual disk drivers). This issue occurs when issuing
USCSICMD with the sense request enable (USCSI_RQENABLE) on a virtual disk
from the guest.
Symantec has an open escalation with Sun Microsystems and an associated SUN
bug id for this issue:
SUN Escalation number: 1-23915211
SUN bug id: 6705190 (ABS: uscsicmd on vdisk can overflow the sense buffer)
http://sunsolve.sun.com/search/document.do?assetkey=1-1-6705190-1
This SUN bug has been fixed in Sun patch 139562-02.
See “Solaris patch requirements” on page 72.
Workaround: Export VxVM volumes using their block device nodes instead. This
issue is under investigation by Sun Microsystems.
SUN bug id: 6716365 (disk images on volumes should be exported using the ldi
interface)
This SUN bug is fixed in Sun patch 139562-02.
See “Solaris patch requirements” on page 72.
Known issues
The following section describes some of the known issues of the Solaris Logical
Domains software and how those known issues affect the functionality of the
Veritas Storage Foundation products.
VxVM vxslicer ERROR V-5-1-599 Disk layout does not support swap shrinking
VxVM vxslicer ERROR V-5-1-5964 Unsupported disk layout.
Encapsulation requires atleast 0 sectors of unused space either at the
beginning or end of the disk drive.
This occurs because installing the OS on such a disk requires specifying the entire size of the backend device as the size of slice "s0", which leaves no free space on the disk. Boot disk encapsulation requires free space at the beginning or the end of the disk in order to proceed.
While performing I/O on a mirrored volume inside a guest, it was observed that
a vdisk would go offline intermittently even when at least one I/O domain which
provided a path to that disk was still up and running.
This issue is still under investigation. Symantec recommends that you install
Solaris 10 Update 7 that contains the fix for Sun bug id 6742587 (vds can ACK a
request twice). This fix possibly resolves this issue.
This SUN bug is fixed in Sun patch 139562-02 that has been obsoleted by
138888-07.
See “Solaris patch requirements” on page 72.
■ Cluster Volume Manager in the control domain for providing high availability
# haconf -makerw
# hastop -all
# hastart
# haconf -makerw
7 Modify the resource so that a failure of this resource does not bring down the
entire group:
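A sketch of the command, assuming a hypothetical resource name vol_res:
# hares -modify vol_res Critical 0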
[Figure: Host A and Host B each see the shared storage as c0d0s2.]
Caution: As such, applications running in the guests may resume or time out based on the individual application settings. The user must decide whether the application must be restarted on another guest on the failed-over control domain. There is a potential data corruption scenario if the underlying shared volumes are accessed from both guests simultaneously.
Shared volumes and their snapshots can be used as a backing store for guest
LDoms.
Note: The ability to take online snapshots is currently inhibited because the file
system in the guest cannot coordinate with the VxVM drivers in the control
domain.
Make sure that the volume whose snapshot is being taken is closed before the
snapshot is taken.
The following example procedure shows how snapshots of shared volumes are
administered in such an environment. In the example, datavol1 is a shared volume
being used by guest LDom ldom1 and c0d2s2 is the front end for this volume
visible from ldom1.
To take a snapshot of datavol1
1 Unmount any VxFS file systems that exist on c0d1s0.
2 Stop and unbind ldom1:
This ensures that all the file system metadata is flushed down to the backend
volume, datavol1.
3 Create a snapshot of datavol1.
See the Veritas Volume Manager Administrator's Guide for information on
creating and managing third-mirror break-off snapshots.
4 Once the snapshot operation is complete, rebind and restart ldom1.
Table 6-1 Veritas Cluster Server failover options for Logical Domain failure

Failure scenario: LDoms, their storage, or switches fail.
VCS response: VCS fails over the LDom from one node to the LDom on another node.
Typical VCS configuration: VCS is installed in the control domain of each node. See “Veritas Cluster Server setup to fail over a Logical Domain on a failure” on page 102.

Failure scenario: LDoms, their storage, or switches fail, or applications that run in LDoms fail.
VCS response: VCS fails over the LDom from one node to the LDom on another node. The application starts on the same LDom after the LDom failover.
Typical VCS configuration: VCS is installed in the control domain of each node, and single-node VCS is installed on each guest domain. See “Veritas Cluster Server setup to fail over a Logical Domain on a failure” on page 102.

Failure scenario: Applications that run in LDoms fail, or the LDom where the application is running fails.
VCS response: VCS fails over the application from one LDom to another.
Typical VCS configuration: VCS is installed in the guest domain of each node. See “Veritas Cluster Server setup to fail over an application on a failure” on page 105.
Unless otherwise noted, all references to other documents refer to the Veritas
Cluster Server documents version 5.1 for Solaris.
■ If you want to configure I/O fencing in a guest domain, then do not export physical devices to more than one guest domain on the same physical node. Otherwise, I/O fencing fences off the device whenever one of the guest domains dies. This situation causes the other guest domains also to lose access to the device.
Symantec recommends that you disable I/O fencing if you export the same physical device to multiple guest domains.
Shutting down the control domain may cause the guest domain
to crash (1631762)
Figure 6-1 Typical setup for Logical Domain high availability with VCS control domains
[Figure: Node1 and Node2 each run VCS in the control domain; ldom1 can fail over between the nodes across the virtual and physical layers.]
A typical two-node VCS configuration for LDom high availability has the following
software and hardware infrastructure:
■ Sun LDom software is installed on each system Node1 and Node2.
■ Shared storage is attached to each system.
■ An LDom ldom1 exists on both the nodes with a shared boot device.
■ VCS is installed in the control domains of each node.
Figure 6-2 Typical setup for application high availability with VCS in control domains
[Figure: VCS runs in the control domains of Node1 and Node2, connected by the VCS private network; one-node VCS runs inside ldom1 on each node to monitor the application.]
A typical two-node VCS configuration that fails over the LDoms to keep the
applications that run in LDoms highly available has the following infrastructure:
■ Sun LDom software is installed on each system Node1 and Node2.
■ Shared storage is attached to each system.
■ An LDom ldom1 with same configuration details exists on both the nodes with
a shared boot device.
■ Each LDom has an operating system installed.
■ VCS is installed in the control domains of each node.
■ Each guest domain has single-node VCS installed. VCS kernel components are
not required.
■ VCS service group exists for the application that VCS must manage.
■ A VCS RemoteGroup service group, with an online global firm dependency on the LDom service group, is created to monitor the Application service group.
Figure 6-3 Typical setup for application high availability with Veritas Cluster Server in guest domains
[Figure: VCS runs inside the guest domains on Node1 and Node2, connected by the VCS private network; the application fails over between the guest domains across the virtual and physical layers.]
A typical two-node configuration where VCS keeps the applications that run in LDoms highly available has the following software and hardware infrastructure:
■ Sun LDom software is installed on each system Node1 and Node2.
■ Shared storage is attached to each system.
■ LDoms are created on both the nodes that may have local boot devices.
■ VCS is installed in the guest domains of each node.
Note: If you create the RemoteGroup resource as part of the LDom service group,
then the RemoteGroup resource state remains as UNKNOWN if the LDom is down.
So, VCS does not probe the service group and cannot bring the LDom online. The
online global firm dependency between the service groups allows VCS to fail over
a faulted child LDom service group independent of the state of the parent
RemoteGroup service group.
Perform the following tasks to configure VCS to fail over an LDom on an LDom
failure:
■ Review the configuration scenarios
See “Configuration scenarios” on page 107.
■ Configure logical domains
See “Configuring logical domain” on page 109.
■ Install VCS on control domain
See “Installing Veritas Cluster Server inside the control domain” on page 110.
■ Create VCS service group for LDom
See “Creating the Veritas Cluster Server service groups for Logical Domain”
on page 110.
Perform the following additional tasks to configure VCS to fail over an LDom on
an application failure:
■ Install single-node VCS on guest domain
See “Installing single-node Veritas Cluster Server inside the guest domain”
on page 111.
■ Configure VCS in control domain to monitor the application in guest domain
See “Configuring Veritas Cluster Server to monitor the application in the guest
domain” on page 111.
Figure 6-4 depicts the workflow to configure VCS to manage the failure of an
LDom or the failure of an application that runs in an LDom.
Figure 6-4 Workflow to configure VCS to fail over a Logical Domain on a failure
Configuration scenarios
Figure 6-5 shows the basic dependencies for an LDom resource.
[Figure 6-5: the LDom resource depends on storage and network resources.]
Network configuration
Use the NIC agent to monitor the primary network interface, whether it is virtual
or physical. Use the interface that appears using the ifconfig command.
Figure 6-6 is an example of an LDom service group. The LDom resource requires
both network (NIC) and storage (Volume and DiskGroup) resources.
See the Veritas Cluster Server Bundled Agents Reference Guide for more information
about the NIC agent.
Storage configurations
Depending on your storage configuration, use a combination of the Volume,
DiskGroup, and Mount agents to monitor storage for LDoms.
Note: VCS in a control domain supports only volumes or flat files in volumes that
are managed by VxVM for LDom storage.
Figure 6-6 The Logical Domain resource can depend on many resources, or
just the NIC, Volume, and DiskGroup resources depending on the
environment
[Figure: the LDom resource depends on the Volume and NIC resources; the Volume resource depends on the DiskGroup resource.]
For more information about the Volume and DiskGroup agents, refer to the Veritas
Cluster Server Bundled Agents Reference Guide.
Image files
Use the Mount, Volume, and DiskGroup agents to monitor an image file.
Figure 6-7 shows how the Mount agent works with different storage resources.
Figure 6-7 The mount resource in conjunction with different storage resources
[Figure: the LDom resource depends on the Mount and NIC resources; the Mount resource depends on the Volume resource, which depends on the DiskGroup resource.]
See the Veritas Cluster Server Bundled Agents Reference Guide for more information
about the Mount agent.
Creating the Veritas Cluster Server service groups for Logical Domain
You can also create and manage service groups using the Veritas Cluster Server
Management Server, the Cluster Manager (Java Console), or through the command
line.
See the Veritas Cluster Server User’s Guide for complete information about using
and managing service groups, either through CLI or GUI.
[Figure: an example LDom service group for ldom1. The LDom resource depends on storage resources (Mount resource bootmnt, Volume resource vol1, DiskGroup resource dg1) and on the network resource (NIC resource vsw0).]
Note: RemoteGroup and Application service groups are required only when you
want to configure VCS to monitor the application in the guest domain.
RemoteGroup rsg1 (
GroupName = lsg1
IpAddress = <IP address of ldom1>
ControlMode = OnOff
Username = lsg1-admin
Password = <lsg1-admin's password>
)
See the Veritas Cluster Server Bundled Agents Reference Guide for more information
on the RemoteGroup agent.
Note: If you do not complete this step, VCS reports the status of the resource
on the source node as UNKNOWN after the migration is complete.
2 Before initiating migration, freeze the service group that contains LDom
resources.
3 Migrate the logical domain.
4 After the migration is complete, VCS takes about five minutes to detect the
online status of the service group. You may probe the resources on the target
node to verify status.
5 Unfreeze the service group.
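A sketch of the sequence, assuming a hypothetical service group ldom_sg, resource ldom_res, target system sys2, and target host target-host; the ldm migration syntax may vary with the LDoms software version:
primary# hagrp -freeze ldom_sg
primary# ldm migrate-domain ldom1 root@target-host
primary# hares -probe ldom_res -sys sys2
primary# hagrp -unfreeze ldom_sg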