ZFS Admin
Useful commands for diagnosing ZFS and device problems:
* zpool status
* zpool status -v
* fmdump
* format or rmformat
* Identify hardware problems with the zpool status commands. If a pool is in the DEGRADED state, use the zpool status command to determine whether a disk is unavailable. For example:
# zpool status -x
  pool: zeepool
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: resilver completed after 0h12m with 0 errors on Thu Aug 28 09:29:43 2008
config:
        NAME          STATE     READ WRITE CKSUM
        zeepool       DEGRADED     0     0     0
          mirror      DEGRADED     0     0     0
            c1t2d0    ONLINE       0     0     0
            spare     DEGRADED     0     0     0
              c2t3d0  ONLINE       0     0     0
        spares
          c1t3d0      AVAIL
* Identify potential data corruption with the zpool status -v command. If only one file is corrupted, then you might choose to deal with it directly, without needing to restore the entire pool. For example:
# zpool status -v rpool
  pool: rpool
 state: DEGRADED
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed after 0h2m with 1 errors on Tue Mar 11 13:12:42 2008
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     9
          c2t0d0s0    DEGRADED     0     0     9
errors: Permanent errors have been detected in the following files:
        /mnt/root/lib/amd64/libc.so.1
* Display the list of suspected faulty devices by using the fmdump command. It is also useful to know which diagnosis engines are available on your system and how busy they have been, which is obtained with the fmstat command. Similarly, fmadm shows the status of the diagnosis engines. Four diagnosis engines are relevant to devices and ZFS: disk-transport, io-retire, zfs-diagnosis, and zfs-retire. Check your OS release for the available FMA diagnosis engine capability.
# fmdump
TIME UUID SUNW-MSG-ID
# fmstat
# fmadm config
* Display more details about potential hardware problems by examining the error reports with
fmdump -ev. Display even more details with fmdump -eV.
# fmdump -eV
TIME CLASS
nvlist version: 0
        class = ereport.fs.zfs.vdev.open_failed
        ena = 0xd3229ac5100401
        detector = (embedded nvlist)
                version = 0x0
                scheme = zfs
                pool = 0x4540c565343f39c2
                vdev = 0xcba57455fe08750b
        (end detector)

        pool = whoo
        pool_guid = 0x4540c565343f39c2
        pool_context = 1
        pool_failmode = wait
        vdev_guid = 0xcba57455fe08750b
        vdev_type = disk
        vdev_path = /dev/ramdisk/rdx
        parent_guid = 0x4540c565343f39c2
        parent_type = root
        prev_state = 0x1
        __ttl = 0x1
* If expected devices can't be displayed with the format or rmformat utility, then those devices
won't be visible to ZFS.
* To be supplied.
* If the replaced disk is not visible in the zpool status output, make sure all cables are
reconnected properly.
* If this problem occurs while the system is booted under a virtualization product, such as Sun xVM, make sure the devices are accessible to ZFS outside of the virtualization product. Then, resolve the device configuration problems within the virtualization product.
During the boot process, each pool must be opened, which means that pool failures might cause
a system to enter into a panic-reboot loop. In order to recover from this situation, ZFS must be
informed not to look for any pools on startup.
ok boot -m milestone=none
This prevents ZFS from trying to access the pool that is causing the problem. If you have multiple pools on the system, do these additional steps (a combined sketch follows this list):
* Determine which pool might have issues by using the fmdump -eV command to display the
pools with reported fatal errors.
* Import the pools one-by-one, skipping the pools that are having issues, as described in the
fmdump output.
* If the system is back up, issue the svcadm milestone all command.
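As a rough combined sketch of the recovery flow above (the pool name goodpool is hypothetical, and moving the zpool.cache file aside is an assumption about how to keep ZFS from opening pools at startup; verify it for your release):
ok boot -m milestone=none
# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
# fmdump -eV
# zpool import goodpool
# svcadm milestone all
The fmdump -eV step identifies the pool that reports fatal errors; import only the healthy pools, then issue svcadm milestone all once the system is back up.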
If you are running a Solaris SXCE or Solaris 10 release, you might be able to boot from the
OpenSolaris Live CD and fix whatever is causing the pool import to fail.
* Resolve the issue that causes the pool import to fail, such as replacing a failed disk.
* Export the pool (?)
* If you resize a LUN from a storage array and the zpool status command doesn't display the
LUN's expected capacity, export and import the pool to see expected capacity. This is CR xxxxxxx.
* If zpool status doesn't display the array's LUN expected capacity, confirm that the expected
capacity is visible from the format utility. For example, the format output below shows that one
LUN is configured as 931.01 Gbytes and one is configured as 931.01 Mbytes.
2. c6t600A0B800049F93C0000030A48B3EA2Cd0 <SUN-LCSM100_F-0670-931.01GB>
/scsi_vhci/ssd@g600a0b800049f93c0000030a48b3ea2c
3. c6t600A0B800049F93C0000030D48B3EAB6d0 <SUN-LCSM100_F-0670-931.01MB>
/scsi_vhci/ssd@g600a0b800049f93c0000030d48b3eab6
* You will need to reconfigure the array's LUN capacity with the array sizing tool to correct this
sizing problem.
* When the LUN sizes are corrected, export and import the pool if the pool has already been created with these LUNs, as shown in the sketch below.
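A minimal sketch of that export/import cycle; the pool name tank is hypothetical:
# zpool export tank
# zpool import tank
# zpool list tank
The zpool list step confirms that the expected capacity is now reported.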
* The Solaris 10 10/08 release includes modifications to support the Solaris CIFS environment as
described in zfs.1m. However, the CIFS features are not supported in the Solaris 10 release.
Therefore, these properties are set to read-only values. If you attempt to reset the CIFS-related
properties, you will see a message similar to the following:
* The Solaris 10 10/08 release identifies cache device support as available when you review the zpool upgrade -v command output. For example:
# zpool upgrade -v
VER DESCRIPTION
--- --------------------------------------------------------
4 zpool history
8 Delegated administration
10 Cache devices
* If you attempt to add a cache device to a ZFS storage pool after the pool is created, the
following message is displayed:
* 768 Mbytes is the minimum amount of memory required to install a ZFS root file system
* Due to an existing boot limitation, disks intended for a bootable ZFS root pool must be created
with disk slices and must be labeled with a VTOC (SMI) disk label.
* If you relabel EFI-labeled disks with VTOC labels, be sure that the desired disk space for the
root pool is in the disk slice that will be used to create the bootable ZFS pool.
* For the OpenSolaris 2008.05 release, a ZFS root file system is installed by default and there is
no option to choose another type of root file system.
* For the SXCE and Solaris 10 10/08 releases, you can only install a ZFS root file system from the
text installer.
* You cannot use a Flash install or the standard upgrade option to install or migrate to a ZFS root
file system. Stay tuned, more work is in progress on improving installation.
* For the SXCE and Solaris 10 10/08 releases, you can use LiveUpgrade to migrate a UFS root file
system to a ZFS root file system.
* On a SPARC based system, use the following syntax from the Solaris installation DVD or the network:
* On an x86 based system, select the text-mode install option when presented.
# DISPLAY=
# export DISPLAY
# install-solaris
* During an initial installation, select two disks to create a mirrored root pool.
* Or, you can attach a disk to create a mirrored root pool after installation (see the sketch after this list). See the ZFS Administration Guide for details.
* Solaris VTOC labels are required for disks in the root pool, which should be configured using a slice specification. EFI-labeled disks do not work; several factors are at work here, including BIOS support for booting from EFI-labeled disks.
* Note: If you mirror the boot disk later, make sure you specify a bootable slice and not the
whole disk because the latter may try to install an EFI label.
* You cannot use a RAID-Z configuration for a root pool. Only single-disk pools or pools with
mirrored disks are supported. You will see the following message if you attempt to use an
unsupported pool for the root pool:
cannot add to 'rpool': root pool can not have multiple vdevs or separate logs
* The lzjb compression property is supported for root pools but the other compression types are
not supported.
* Keep a second ZFS BE for recovery purposes. You can boot from the alternate BE if the primary BE fails. For example:
# lucreate -n ZFS2BE
* Keep root pool snapshots on a remote system. See the steps below for details.
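A minimal sketch of attaching a second disk after installation; the device names are hypothetical, and the new disk must use a VTOC-labeled slice as noted above:
# zpool attach rpool c0t0d0s0 c0t1d0s0
# zpool status rpool
Wait for the resilver to complete, and remember to apply the boot blocks to the second disk, as described under CR 6668666 below.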
Solaris Live Upgrade Migration Scenarios
* You can use the Solaris Live Upgrade feature to migrate a UFS root file system to a ZFS root file
system.
* You can't use Solaris Live Upgrade to migrate a ZFS boot environment (BE) to a UFS BE.
* You can't use Solaris Live Upgrade to migrate non-root or shared UFS file systems to ZFS file systems.
Review LU Requirements
* You must be running the SXCE, build 90 release or the Solaris 10 10/08 release to use LU to
migrate a UFS root file system to a ZFS root file system.
* You must create a ZFS storage pool that contains disk slices before the LU migration.
* The pool must exist either on a disk slice or on disk slices that are mirrored, but not on a RAID-
Z configuration or on a nonredundant configuration of multiple disks. If you attempt to use an
unsupported pool configuration during a Live Upgrade migration, you will see a message similar
to the following:
* If you see this message, then either the pool doesn't exist or it is an unsupported configuration. (A hedged migration sketch follows.)
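As a rough illustration only (the device, pool, and BE names are hypothetical, not prescribed by this guide), the migration flow looks like this:
# zpool create rpool c1t0d0s0
# lucreate -c ufsBE -n zfsBE -p rpool
# luactivate zfsBE
# init 6
The pool is created on a slice, per the requirement above; lucreate -c names the current UFS BE, -n names the new ZFS BE, and -p places it in the ZFS pool.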
Live Upgrade Issues
* The Solaris installation GUI's standard-upgrade option is not available for migrating from a UFS
to a ZFS root file system. To migrate from a UFS file system, you must use Solaris Live Upgrade.
* You cannot use Solaris Live Upgrade to create a UFS BE from a ZFS BE.
* Do not rename your ZFS BEs with the zfs rename command because the Solaris Live Upgrade
feature is unaware of the name change. Subsequent commands, such as ludelete, will fail. In
fact, do not rename your ZFS pools or file systems if you have existing BEs that you want to
continue to use.
* Solaris Live Upgrade creates the datasets for the BE and ZFS volumes for the swap area and dump device, but does not account for any existing dataset property modifications. Thus, if you want a dataset property enabled in the new BE, you must set the property before the lucreate operation (for example, see the sketch after this list).
* When creating an alternative BE that is a clone of the primary BE, you cannot use the -f, -x, -y,
-Y, and -z options to include or exclude files from the primary BE. You can still use the inclusion
and exclusion option set in the following cases:
UFS -> UFS
UFS -> ZFS
ZFS -> ZFS (different pool)
* Although you can use Solaris Live Upgrade to upgrade your UFS root file system to a ZFS root
file system, you cannot use Solaris Live Upgrade to upgrade non-root or shared file systems.
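For instance, a property such as compression would be set on the relevant dataset before running lucreate; the dataset and BE names below are assumptions, not values from this guide:
# zfs set compression=on rpool/ROOT
# lucreate -n newBE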
Live Upgrade with Zones
Review the following supported ZFS and zones configurations. These configurations are
upgradeable and patchable.
Migrate a UFS Root File System with Zones Installed to a ZFS Root File System
1. Upgrade the system to the Solaris 10 10/08 release if it is running a previous Solaris 10
release.
3. Confirm that the zones from the UFS environment are booted.
# lucreate -n S10BE3
# luactivate S10BE3
# init 6
8. Resolve any potential mount point problems, due to a Solaris Live Upgrade bug.
1. Review the zfs list output and look for any incorrect temporary mount points. For example:
NAME                              MOUNTPOINT
rpool/ROOT/s10u6                  /.alt.tmp.b-VP.mnt/
rpool/ROOT/s10u6/zones            /.alt.tmp.b-VP.mnt//zones
rpool/ROOT/s10u6/zones/zonerootA  /.alt.tmp.b-VP.mnt/zones/zonerootA
2. Reset the mount points for the ZFS BE and its datasets (a sketch follows this procedure).
9. Reboot the system. When the option is presented to boot a specific boot environment, either
in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount
points were just corrected.
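A minimal sketch of resetting the mount points, assuming the BE dataset name shown above; verify the dataset names against your own zfs list output:
# zfs inherit -r mountpoint rpool/ROOT/s10u6
# zfs set mountpoint=/ rpool/ROOT/s10u6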
Set up a ZFS root file system and ZFS zone root configuration that can be upgraded or patched. In
this configuration, the ZFS zone roots are created as ZFS datasets.
1. Install the system with a ZFS root, either by using the interactive initial installation method or
the Solaris JumpStart installation method.
2. Boot the system from the newly-created root pool.
Setting the noauto value for the canmount property prevents the dataset from being mounted
other than by the explicit action of Solaris Live Upgrade and system startup code.
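A rough sketch of the dataset layout this describes, with hypothetical BE and zone names and an assumed /zones mount point (the complete steps are in the ZFS Administration Guide):
# zfs create -o canmount=noauto rpool/ROOT/S10be/zones
# zfs set mountpoint=/zones rpool/ROOT/S10be/zones
# zfs mount rpool/ROOT/S10be/zones
# zfs create rpool/ROOT/S10be/zones/zonerootA
# chmod 700 /zones/zonerootA
The zone's zonepath would then point at the mounted zone root dataset.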
Upgrade or Patch a ZFS Root File System With Zone Roots on ZFS
Upgrade or patch a ZFS root file system with zone roots on ZFS. These updates can either be a
system upgrade or the application of patches.
The existing boot environment, including all the zones, is cloned. New datasets are created for each dataset in the original boot environment. The new datasets are created in the same pool as the current root pool.
2. Select one of the following to upgrade the system or apply patches to the new boot environment (a hedged sketch follows this procedure).
3. Activate the new boot environment after the updates to the new boot environment are
complete.
# luactivate newBE
# init 6
5. Resolve any potential mount point problems, due to a Solaris Live Upgrade bug.
1. Review the zfs list output and look for any incorrect temporary mount points. For example:
NAME                              MOUNTPOINT
rpool/ROOT/newBE                  /.alt.tmp.b-VP.mnt/
rpool/ROOT/newBE/zones            /.alt.tmp.b-VP.mnt//zones
rpool/ROOT/newBE/zones/zonerootA  /.alt.tmp.b-VP.mnt/zones/zonerootA
2. Reset the mount points for the ZFS BE and its datasets, as described in the previous procedure.
10. Reboot the system. When the option is presented to boot a specific boot environment,
either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose
mount points were just corrected.
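As an illustration of step 2 (the install image path, BE name, and patch ID below are hypothetical):
# luupgrade -u -n newBE -s /net/install/export/s10u6
# luupgrade -t -n newBE -s /var/tmp/patches 139555-08
The -u form performs a full OS upgrade from an install image; the -t form applies one or more patches to the inactive BE.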
§ If you use ludelete to remove an unwanted BE and it fails with messages similar to the
following:
$ ludelete -f c0t1d0s0
ERROR: Failed to copy file </boot/grub/menu.lst> to top level dataset for BE <c0t1d0s0>
ERROR: Unable to delete GRUB menu entry for deleted boot environment <c0t1d0s0>.
§ You might be running into the following bugs: 6718038, 6715220, 6743529
§ The workaround is as follows:
Edit /usr/lib/lu/lulib and, in line 2934, replace the following text:
§ CR 6704717 – Do not place offline the primary disk in a mirrored ZFS root configuration. If you
do need to offline or detach a mirrored root disk for replacement, then boot from another
mirrored disk in the pool.
§ CR 6668666 - If you attach a disk to create a mirrored root pool after an initial installation, you will need to apply the boot blocks to the secondary disks. For example, see the sketch after this list.
§ CR 2164779 - Ignore the following krtld messages from the boot -Z command. They are
harmless:
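A hedged example of applying boot blocks to a second root pool disk, assuming the new disk is c0t1d0s0 (adjust the device to match your pool). On a SPARC system:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
On an x86 system:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0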
The best way to change the active boot environment is to use the luactivate command. If
booting the active environment fails, due to a bad patch or a configuration error, the only way to
boot a different environment is by selecting that environment at boot time. You can select an
alternate BE from the GRUB menu on an x86 based system or by booting it explicitly from the
PROM on a SPARC based system.
Due to a bug in the Live Upgrade feature, the non-active boot environment might fail to boot
because the ZFS datasets or the zone's ZFS dataset in the boot environment has an invalid mount
point.
The same bug also prevents the BE from mounting if it has a separate /var dataset.
1. Review the zfs list output after the pool is imported, looking for incorrect temporary mount points. For example:
rpool/ROOT/s10u6/zones/zonerootA  /.alt.tmp.b-VP.mnt/zones/zonerootA
2. Reset the mount points for the ZFS BE and its datasets, as described in the earlier procedures.
3. Reboot the system. When the option is presented to boot a specific boot environment, either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount points were just corrected.
You can boot from different devices in a mirrored ZFS root pool.
Identify the device pathnames for the alternate disks in the mirrored root pool by reviewing the
zpool status output. In the example output, disks are c0t0d0s0 and c0t1d0s0.
# zpool status
pool: rpool
state: ONLINE
scrub: resilver completed after 0h6m with 0 errors on Thu Sep 11 10:55:28 2008
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror ONLINE 0 0 0
c0t0d0s0 ONLINE 0 0 0
c0t1d0s0 ONLINE 0 0 0
§ If you attached the second disk in the mirror configuration after an initial installation, apply the bootblocks, as described under CR 6668666 above.
§ Depending on the hardware configuration, you might need to update the OpenBoot PROM
configuration or the BIOS to specify a different boot device. For example, on a SPARC system:
ok boot
ZFS root pool disks must contain a VTOC label. Starting in build 101a, you will be warned about
adding a disk with an EFI label to the root pool.
See the steps below to detach and relabel the disk with a VTOC label. These steps are also
applicable to the Solaris Nevada (SXCE) and Solaris 10 releases.
format> label
format> quit
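A minimal sketch of the detach/relabel/attach cycle, assuming the EFI-labeled disk is c0t1d0s0 and the remaining root pool disk is c0t0d0s0 (hypothetical devices):
# zpool detach rpool c0t1d0s0
# format -e
(select c0t1d0s0, relabel it with an SMI label, recreate the desired slice, then label and quit as shown above)
# zpool attach rpool c0t0d0s0 c0t1d0s0
# zpool status rpool
Wait for the resilver to complete, then apply the boot blocks as described under CR 6668666 above.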
This section describes how to create and restore root pool snapshots.
# share
rpool@1016                           0      -    94K  -
rpool/ROOT@1016                      0      -    18K  -
rpool/ROOT/s10s_u6wos_07a@1016       0      -  4.64G  -
rpool/dump@1016                      0      -  1.00G  -
rpool/export@1016                    0      -    20K  -
rpool/export/home@1016               0      -    18K  -
rpool/swap                        524M  27.6G  11.6M  -
rpool/swap@1016                      0      -  11.6M  -
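A hedged sketch of creating recursive root pool snapshots and storing them on a remote NFS-shared file system; the snapshot name, host, and path are hypothetical:
# zfs snapshot -r rpool@1016
# zfs send -Rv rpool@1016 > /net/remote-system/rpool/snaps/rpool.1016
To restore later, after booting from media and importing the pool, the stream would be received with something like zfs receive -Fd rpool < /mnt/rpool.1016 (adjust the paths for your setup).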
§ ZFS root pool snapshots are stored on a remote system and shared over NFS
ok boot net
or
ok boot cdrom
Restore the root pool snapshots. This step might take some time. For example:
rpool@1016                           0      -    94K  -
rpool/ROOT                       4.64G  27.6G    18K  legacy
rpool/ROOT@1016                      0      -    18K  -
rpool/export/home@1016               0      -    18K  -
# init 6
This procedure assumes that existing root pool snapshots are available. In this example, the root
pool snapshots are available on the local system.
# zfs list
rpool/ROOT@1013                      0      -    18K  -
Multiple OS instances were found. To check and mount one of them
read-write under /a, select it from the following list. To not mount
any, select 'q'.
Please select a device to be mounted (q for none) [?,??,q]: 2
# init 6
§ If the primary disk in the pool fails, you will need to boot from the secondary disk by
specifying the boot path. For example, on a SPARC system, a devalias is available to boot from
the second disk as disk1.
ok boot disk1
§ While booted from a secondary disk, physically replace the primary disk. For example,
c0t0d0s0.
§ Let ZFS know the primary disk was physically replaced at the same location.
# zpool replace rpool c0t0d0s0
§ If the zpool replace step fails, detach and attach the primary mirror disk:
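A minimal sketch of the detach/attach fallback, assuming c0t0d0s0 is the replaced primary disk and c0t1d0s0 is the disk you are currently booted from:
# zpool detach rpool c0t0d0s0
# zpool attach rpool c0t1d0s0 c0t0d0s0
# zpool status rpool
Wait for the resilver to complete, then apply the boot blocks to the new disk as described under CR 6668666 above.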
During an initial installation or a Solaris Live Upgrade from a UFS file system, a swap area and a dump device are created on ZFS volumes in the ZFS root pool. The size of each volume is based on half the size of physical memory, but no more than 2 Gbytes and no less than 512 Mbytes.
# zfs list
§ You can adjust the size of your swap and dump volumes during an initial installation.
§ You can create and size your swap and dump volumes before you do a Solaris Live Upgrade operation. ZFS dump volume performance is better when the volume is created with a 128-Kbyte block size. In SXCE build 102, ZFS dump volumes are automatically created with a 128-Kbyte block size (CR 6725698). For example, see the sketch after this list.
§ Solaris Live Upgrade does not resize existing swap and dump volumes. You can reset the
volsize property of the swap and dump devices after a system is installed. For example:
rpool/dump volsize 2G -
§ You can adjust the size of the swap and dump volumes in a JumpStart profile by using profile syntax similar to the following (a fuller hedged example follows this list):
install_type initial_install
cluster SUNWCXall
In this profile, the 2g and 2g entries set the size of the swap area and the dump device to 2 Gbytes each.
§ You can adjust the size of your dump volume, but it might take some time, depending on the
size of the dump volume. For example:
rpool/dump volsize 2G -
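Two hedged sketches related to the items above. First, pre-creating a dump volume with a 128-Kbyte block size before a Live Upgrade operation (the 2G size is an assumption):
# zfs create -V 2G -b 128k rpool/dump
Second, a JumpStart profile line that sets the swap and dump sizes; the pool name, sizing keyword, and device names are assumptions, so check the JumpStart documentation for your release:
pool newpool auto 2g 2g mirror c0t0d0s0 c0t1d0s0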
If you need to adjust the size of the swap volume after installation on an active system, review the following steps. See CR 6765386 for more information.
1. If your swap device is in use, then you might not be able to delete it. Check to see whether the swap area is in use. For example:
# swap -l
swapfile                  dev    swaplo   blocks     free
If blocks == free in the output, the swap device is not actually being used.
2. If the swap area is not in use, remove the swap area. For example:
# swap -d /dev/zvol/dsk/rpool/swap
3. Confirm that the swap area is removed.
# swap -l
4. Recreate the swap volume, resetting the size. For example, see the sketch below.
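A minimal sketch of recreating the swap area with a new size; 2G is an assumed size, and resetting volsize is one approach (alternatively, destroy the volume and recreate it with zfs create -V):
# zfs set volsize=2G rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
# swap -l
The final swap -l confirms that the new swap device is active.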
If you want to destroy a ZFS root pool that is no longer needed, but it still has an active dump
device and swap area, you'll need to use the dumpadm and swap commands to remove the
dump device and swap area. Then, use these commands to establish a new dump device and
swap area.
# dumpadm -d swap
# dumpadm -d none
# swap -a <device-name>
# dumpadm -d swap
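A hedged sketch of establishing a new dump device and swap area in another pool before the old root pool is destroyed; rpool2, the volume sizes, and the 8-Kbyte swap block size are assumptions:
# zfs create -V 2G rpool2/dump
# dumpadm -d /dev/zvol/dsk/rpool2/dump
# zfs create -V 2G -b 8k rpool2/swap
# swap -a /dev/zvol/dsk/rpool2/swap
# swap -d /dev/zvol/dsk/rpool/swap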