Disk Management
Chapter 4. disk devices
This chapter teaches you how to locate and recognise hard disk devices. This prepares you
for the next chapter, where we put partitions on these devices.
4.1. terminology
4.1.1. platter, head, track, cylinder, sector
Data is commonly stored on magnetic or optical disk platters. The platters are rotated (at
high speeds). Data is read by heads, which are very close to the surface of the platter, without
touching it! The heads are mounted on an arm (sometimes called a comb or a fork).
Data is written in concentric circles called tracks. Track zero is (usually) on the outside.
The time it takes to position the head over a certain track is called the seek time. Often
the platters are stacked on top of each other, hence the set of tracks accessible at a certain
position of the comb forms a cylinder. Tracks are divided into 512 byte sectors, with more
unused space (gap) between the sectors on the outside of the platter.
When you break down the advertised access time of a hard drive, you will notice that most
of that time is taken by movement of the heads (about 65%) and rotational latency (about
30%).
4.1.3. ata
An ata controller allows two devices per bus, one master and one slave. Unless your
controller and devices support cable select, you have to set this manually with jumpers.
With the introduction of sata (serial ata), the original ata was renamed to parallel ata.
Optical drives often use atapi, which is an ATA interface using the SCSI communication
protocol.
4.1.4. scsi
A scsi controller allows more than two devices. When using SCSI (small computer system interface), each device gets a unique scsi id. The scsi controller also needs a scsi id; do not use this id for a scsi-attached device.
Older 8-bit SCSI is now called narrow, whereas 16-bit is wide. When the bus speed was doubled to 10MHz, this was known as fast SCSI. Doubling to 20MHz made it ultra SCSI.
Take a look at http://en.wikipedia.org/wiki/SCSI for more SCSI standards.
A block device has the letter b to denote the file type in the output of ls -l.
[root@centos65 ~]# ls -l /dev/sd*
brw-rw----. 1 root disk 8, 0 Apr 19 10:12 /dev/sda
brw-rw----. 1 root disk 8, 1 Apr 19 10:12 /dev/sda1
brw-rw----. 1 root disk 8, 2 Apr 19 10:12 /dev/sda2
brw-rw----. 1 root disk 8, 16 Apr 19 10:12 /dev/sdb
brw-rw----. 1 root disk 8, 32 Apr 19 10:12 /dev/sdc
Old hard disks (and floppy disks) use cylinder-head-sector addressing to access a sector
on the disk. Most current disks use LBA (Logical Block Addressing).
In this book we will use the following pictograms for spindle disks (in brown) and solid
state disks (in blue).
It is possible to have only /dev/hda and /dev/hdd. The first one is a single ata hard disk, the
second one is the cdrom (by default configured as slave).
Below is a sample of how scsi devices on Linux can be named. Adding a scsi disk or raid controller with a lower scsi address will change the naming scheme (shifting the higher scsi addresses one letter further in the alphabet).
A modern Linux system will use /dev/sd* for scsi and sata devices, and also for sd-cards,
usb-sticks, (legacy) ATA/IDE devices and solid state drives.
And here is an example of sata and scsi disks on a server with CentOS. Remember that sata disks are also presented to you with the scsi /dev/sd* notation.
[root@centos65 ~]# fdisk -l | grep 'Disk /dev/sd'
Disk /dev/sda: 42.9 GB, 42949672960 bytes
Disk /dev/sdb: 77.3 GB, 77309411328 bytes
Disk /dev/sdc: 154.6 GB, 154618822656 bytes
Disk /dev/sdd: 154.6 GB, 154618822656 bytes
Here is an overview of disks on a RHEL4u3 server with two real 72GB scsi disks. This server is attached to a NAS with four NAS disks of half a terabyte. On the NAS disks, four software RAID devices (/dev/mdx) are configured.
[root@tsvtl1 ~]# fdisk -l | grep Disk
Disk /dev/sda: 73.4 GB, 73407488000 bytes
Disk /dev/sdb: 73.4 GB, 73407488000 bytes
Disk /dev/sdc: 499.0 GB, 499036192768 bytes
Disk /dev/sdd: 499.0 GB, 499036192768 bytes
Disk /dev/sde: 499.0 GB, 499036192768 bytes
Disk /dev/sdf: 499.0 GB, 499036192768 bytes
Disk /dev/md0: 271 MB, 271319040 bytes
Disk /dev/md2: 21.4 GB, 21476081664 bytes
Disk /dev/md3: 21.4 GB, 21467889664 bytes
Disk /dev/md1: 21.4 GB, 21476081664 bytes
You can also use fdisk to obtain information about one specific hard disk device.
[root@centos65 ~]# fdisk -l /dev/sdc
Later we will use fdisk to do dangerous stuff like creating and deleting partitions.
4.3.2. dmesg
Kernel boot messages can be seen after boot with dmesg. Since hard disk devices are
detected by the kernel during boot, you can also use dmesg to find information about disk
devices.
[root@centos65 ~]# dmesg | grep 'sd[a-z]' | head
sd 0:0:0:0: [sda] 83886080 512-byte logical blocks: (42.9 GB/40.0 GiB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support \
DPO or FUA
sda: sda1 sda2
sd 0:0:0:0: [sda] Attached SCSI disk
sd 3:0:0:0: [sdb] 150994944 512-byte logical blocks: (77.3 GB/72.0 GiB)
sd 3:0:0:0: [sdb] Write Protect is off
sd 3:0:0:0: [sdb] Mode Sense: 00 3a 00 00
sd 3:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support \
DPO or FUA
4.3.3. /sbin/lshw
The lshw tool will list hardware. With the right options lshw can show a lot of information
about disks (and partitions).
Redhat and CentOS do not have this tool (unless you add a repository).
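On distributions that do have it, a minimal sketch (using the -class and -short options from the lshw manual; actual output depends on your hardware):
lshw -class disk
lshw -short -class disk -class storage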
4.3.4. /sbin/lsscsi
The lsscsi command provides a nice readable output of all scsi (and scsi emulated) devices.
This first screenshot shows lsscsi on a SPARC system.
root@shaka:~# lsscsi
[0:0:0:0] disk Adaptec RAID5 V1.0 /dev/sda
[1:0:0:0] disk SEAGATE ST336605FSUN36G 0438 /dev/sdb
root@shaka:~#
Below is a screenshot of lsscsi on a QNAP NAS (which has four 750GB disks and boots from a usb stick).
root@debian6~# lsscsi
[0:0:0:0] disk SanDisk Cruzer Edge 1.19 /dev/sda
[1:0:0:0] disk ATA ST3750330AS SD04 /dev/sdb
[2:0:0:0] disk ATA ST3750330AS SD04 /dev/sdc
[3:0:0:0] disk ATA ST3750330AS SD04 /dev/sdd
[4:0:0:0] disk ATA ST3750330AS SD04 /dev/sde
4.3.5. /proc/scsi/scsi
Another way to locate scsi (or sd) devices is via /proc/scsi/scsi.
Here we run cat /proc/scsi/scsi on the QNAP from above (with Debian Linux).
root@debian6~# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: SanDisk Model: Cruzer Edge Rev: 1.19
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST3750330AS Rev: SD04
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST3750330AS Rev: SD04
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST3750330AS Rev: SD04
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST3750330AS Rev: SD04
Type: Direct-Access ANSI SCSI revision: 05
Note that some recent versions of Debian have this disabled in the kernel; the configuration entry below shows the disabled setting. You can enable it by setting CONFIG_SCSI_PROC_FS=y (followed by a kernel compile):
# CONFIG_SCSI_PROC_FS is not set
Redhat and CentOS have this by default (if there are scsi devices present).
[root@centos65 ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: VBOX HARDDISK Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: VBOX HARDDISK Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: VBOX HARDDISK Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 05
Although technically the /sbin/badblocks tool is meant to look for bad blocks, you can use
it to completely erase all data from a disk. Since this is really writing to every sector of the
disk, it can take a long time!
root@RHELv4u2:~# badblocks -ws /dev/sdb
Testing with pattern 0xaa: done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: done
Reading and comparing: done
Testing with pattern 0x00: done
Reading and comparing: done
The previous screenshot overwrites every sector of the disk four times. Erasing once with
a tool like dd is enough to destroy all data.
Warning, this screenshot shows how to permanently destroy all data on a block device.
[root@rhel65 ~]# dd if=/dev/zero of=/dev/sdb
hdparm can be used to display or set information and parameters about an ATA (or SATA)
hard disk device. The -i and -I options will give you even more information about the
physical properties of the device.
root@laika:~# hdparm /dev/sdb
/dev/sdb:
IO_support = 0 (default 16-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 12161/255/63, sectors = 195371568, start = 0
/dev/hdd:
multcount = 0 (off)
IO_support = 0 (default)
unmaskirq = 0 (off)
using_dma = 1 (on)
keepsettings = 0 (off)
readonly = 0 (off)
readahead = 256 (on)
geometry = 24321/255/63, sectors = 390721968, start = 0
It is advised to attach three 1GB disks and three 2GB disks to the virtual machine. This will allow for some freedom in the practices of this chapter as well as the next chapters (raid, lvm, iSCSI).
2. Use fdisk to find the total size of all hard disk devices on your system.
3. Stop a virtual machine, add three virtual 1 gigabyte scsi hard disk devices and one virtual
400 megabyte ide hard disk device. If possible, also add another virtual 400 megabyte ide
disk.
4. Use dmesg to verify that all the new disks are properly detected at boot-up.
6. Use fdisk (with grep and /dev/null) to display the total size of the new disks.
8. Look at /proc/scsi/scsi.
9. If possible, install lsscsi, lshw and use them to list the disks.
2. Use fdisk to find the total size of all hard disk devices on your system.
fdisk -l
3. Stop a virtual machine, add three virtual 1 gigabyte scsi hard disk devices and one virtual
400 megabyte ide hard disk device. If possible, also add another virtual 400 megabyte ide
disk.
This exercise happens in the settings of vmware or VirtualBox.
4. Use dmesg to verify that all the new disks are properly detected at boot-up.
See 1.
ATA: ls -l /dev/hd*
6. Use fdisk (with grep and /dev/null) to display the total size of the new disks.
root@rhel53 ~# fdisk -l 2>/dev/null | grep [MGT]B
Disk /dev/hda: 21.4 GB, 21474836480 bytes
Disk /dev/hdb: 1073 MB, 1073741824 bytes
Disk /dev/sda: 2147 MB, 2147483648 bytes
Disk /dev/sdb: 2147 MB, 2147483648 bytes
Disk /dev/sdc: 2147 MB, 2147483648 bytes
8. Look at /proc/scsi/scsi.
root@rhel53 ~# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 02 Lun: 00
Vendor: VBOX Model: HARDDISK Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi0 Channel: 00 Id: 03 Lun: 00
Vendor: VBOX Model: HARDDISK Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi0 Channel: 00 Id: 06 Lun: 00
Vendor: VBOX Model: HARDDISK Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 05
9. If possible, install lsscsi, lshw and use them to list the disks.
Debian,Ubuntu: aptitude install lsscsi lshw
root@rhel53 ~# lsscsi
[0:0:2:0] disk VBOX HARDDISK 1.0 /dev/sda
[0:0:3:0] disk VBOX HARDDISK 1.0 /dev/sdb
[0:0:6:0] disk VBOX HARDDISK 1.0 /dev/sdc
Chapter 5. disk partitions
This chapter continues on the hard disk devices from the previous one. Here we will put
partitions on those devices.
This chapter prepares you for the next chapter, where we put file systems on our partitions.
A partition's geometry and size is usually defined by a starting and ending cylinder (sometimes by sector). Partitions can be of type primary (maximum four), extended (maximum one) or logical (contained within the extended partition). Each partition has a type field that contains a code. This determines the computer's operating system or the partition's file system.
The picture below shows two (spindle) disks with partitions. Note that an extended partition
is a container holding logical drives.
5.2.2. /proc/partitions
The /proc/partitions file contains a table with the major and minor numbers of partitioned devices, their number of blocks and the device name in /dev. Verify with /proc/devices to link the major number to the proper device.
3 0 524288 hda
3 64 734003 hdb
8 0 8388608 sda
8 1 104391 sda1
8 2 8281507 sda2
8 16 1048576 sdb
8 32 1048576 sdc
8 48 1048576 sdd
253 0 7176192 dm-0
253 1 1048576 dm-1
The major number corresponds to the device type (or driver) and can be found in /proc/
devices. In this case 3 corresponds to ide and 8 to sd. The major number determines the
device driver to be used with this device.
The minor number is a unique identification of an instance of this device type. The
devices.txt file in the kernel tree contains a full list of major and minor numbers.
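For example, you can look up which driver owns major number 8 with a simple grep (a quick sketch):
grep sd /proc/devices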
parted is recommended by some Linux distributions for handling storage with gpt instead
of mbr.
We can now issue p again to verify our changes, but they are not yet written to disk. This
means we can still cancel this operation! But it looks good, so we use w to write the changes
to disk, and then quit the fdisk tool.
Command (m for help): p
root@RHELv4u2:~# fdisk -l
This example copies the master boot record from the first SCSI hard disk.
dd if=/dev/sda of=/SCSIdisk.mbr bs=512 count=1
The same tool can also be used to wipe out all information about partitions on a disk. This
example writes zeroes over the master boot record.
dd if=/dev/zero of=/dev/sda bs=512 count=1
5.4.2. partprobe
Don't forget that after restoring a master boot record with dd, you need to force the kernel to reread the partition table with partprobe. After running partprobe, the partitions can be used again.
[root@RHEL5 ~]# partprobe
[root@RHEL5 ~]#
This example shows how to backup all partition and logical drive information to a file.
sfdisk -d /dev/sda > parttable.sda.sfdisk
The following example copies the mbr and all logical drive info from /dev/sda to /dev/sdb.
sfdisk -d /dev/sda | sfdisk /dev/sdb
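Restoring is the reverse operation; a sketch, assuming the dump file created above:
sfdisk /dev/sda < parttable.sda.sfdisk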
Since 2010 gpt is a part of the uefi specification, but it is also used on bios systems.
Newer versions of fdisk work fine with gpt, but most production servers today (mid 2015) still have an older fdisk. You can use parted instead.
Each command also has built-in help. For example help mklabel will list all supported labels. Note that we only discussed mbr (msdos) and gpt in this book.
(parted) help mklabel
mklabel,mktable LABEL-TYPE create a new disklabel (partition table)
LABEL-TYPE is one of: aix, amiga, bsd, dvh, gpt, mac, msdos, pc98, sun, loop
(parted)
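On an empty disk you first need to create a disklabel; a minimal sketch that puts a gpt label on the disk:
(parted) mklabel gpt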
This example shows how to create two primary partitions of equal size.
(parted) mkpart primary 0 50%
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? I
(parted) mkpart primary 50% 100%
(parted)
Verify with print and exit with quit. Since parted works directly on the disk, there is no
need to w(rite) like in fdisk.
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 8590MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
(parted) quit
Information: You may need to update /etc/fstab.
[root@rhel71 ~]#
5. Create a 400MB primary partition and two 300MB logical drives on a big disk.
7. Compare the output again of fdisk and df. Do both commands display the new partitions ?
8. Create a backup with dd of the mbr that contains your 200MB primary partition.
9. Take a backup of the partition table containing your 400MB primary and 300MB logical
drives. Make sure the logical drives are in the backup.
10. (optional) Remove all your partitions with fdisk. Then restore your backups.
5. Create a 400MB primary partition and two 300MB logical drives on a big disk.
Choose one of the disks you added (this example uses /dev/sdb)
fdisk /dev/sdb
inside fdisk : n p 1 +400m enter --- n e 2 enter enter --- n l +300m (twice)
7. Compare the output again of fdisk and df. Do both commands display the new partitions ?
The newly created partitions are visible with fdisk, but df does not display them: they do not contain a file system yet, and they are not mounted.
8. Create a backup with dd of the mbr that contains your 200MB primary partition.
dd if=/dev/sdc of=bootsector.sdc.dd count=1 bs=512
9. Take a backup of the partition table containing your 400MB primary and 300MB logical
drives. Make sure the logical drives are in the backup.
sfdisk -d /dev/sdb > parttable.sdb.sfdisk
Chapter 6. file systems
When you are finished partitioning the hard disk, you can put a file system on each partition.
This chapter builds on the partitions from the previous chapter, and prepares you for the
next one where we will mount the filesystems.
The properties (length, character set, ...) of filenames are determined by the file system you choose. Directories are usually implemented as files; you will have to learn how this is implemented! Access control in file systems is tracked by user ownership (and group ownership and membership) in combination with one or more access control lists.
6.1.1. man fs
The manual page about filesystems is accessed by typing man fs.
[root@rhel65 ~]# man fs
6.1.2. /proc/filesystems
The Linux kernel will inform you about currently loaded file system drivers in /proc/
filesystems.
root@rhel53 ~# cat /proc/filesystems | grep -v nodev
ext2
iso9660
ext3
6.1.3. /etc/filesystems
The /etc/filesystems file contains a list of autodetected filesystems (in case the mount command is used without the -t option).
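You can simply display this file (when present) to see the probe order:
cat /etc/filesystems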
ext2 has been replaced by ext3 on most Linux machines. They are essentially the same, except for the journaling which is only present in ext3.
Journaling means that changes are first written to a journal on the disk. The journal is
flushed regularly, writing the changes in the file system. Journaling keeps the file system
in a consistent state, so you don't need a file system check after an unclean shutdown or
power failure.
You can convert an ext2 to ext3 with tune2fs -j. You can mount an ext3 file system as ext2,
but then you lose the journaling. Do not forget to run mkinitrd if you are booting from this
device.
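A minimal sketch of such a conversion, assuming an ext2 file system on a hypothetical /dev/sdb1:
tune2fs -j /dev/sdb1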
6.2.3. ext4
The newest incarnation of the ext file system is named ext4 and is available in the Linux
kernel since 2008. ext4 supports larger files (up to 16 terabyte) and larger file systems than
ext3 (and many more features).
Development started by making ext3 fully 64-bit capable. When it turned out the changes were significant, the developers decided to name it ext4.
6.2.4. xfs
Redhat Enterprise Linux 7 will have XFS as the default file system. This is a highly scalable
high-performance file system.
xfs was created for Irix and for a couple of years it was also used in FreeBSD. It is supported by the Linux kernel, but rarely used in distributions outside of the Redhat/CentOS realm.
6.2.5. vfat
The vfat file system exists in a couple of forms: fat12 for floppy disks, fat16 on ms-dos, and fat32 for larger disks. The Linux vfat implementation supports all of these, but vfat lacks a lot of features like security and links. fat disks can be read by every operating system, and are used a lot for digital cameras, usb sticks and to exchange data between different operating systems on a home user's computer.
6.2.7. udf
Most optical media today (including cd's and dvd's) use udf, the Universal Disk Format.
6.2.8. swap
All things considered, swap is not a file system. But to use a partition as swap space, it must be formatted (with mkswap) and activated (with swapon).
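A minimal sketch, assuming a hypothetical /dev/sdb2 partition destined for swap:
mkswap /dev/sdb2
swapon /dev/sdb2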
6.2.9. gfs
Linux clusters often use a dedicated cluster filesystem like GFS, GFS2, ClusterFS, ...
6.2.11. /proc/filesystems
The /proc/filesystems file displays a list of supported file systems. When you mount a file
system without explicitly defining one, then mount will first try to probe /etc/filesystems
and then probe /proc/filesystems for all the filesystems without the nodev label. If /etc/
filesystems ends with a line containing only an asterisk (*) then both files are probed.
paul@RHELv4u4:~$ cat /proc/filesystems
nodev sysfs
nodev rootfs
nodev bdev
nodev proc
nodev sockfs
nodev binfmt_misc
nodev usbfs
nodev usbdevfs
nodev futexfs
nodev tmpfs
nodev pipefs
nodev eventpollfs
nodev devpts
ext2
nodev ramfs
nodev hugetlbfs
iso9660
nodev relayfs
nodev mqueue
nodev selinuxfs
ext3
nodev rpc_pipefs
nodev vmware-hgfs
nodev autofs
paul@RHELv4u4:~$
It is time for you to read the manual pages of mkfs and mke2fs. In the example below,
you see the creation of an ext2 file system on /dev/sdb1. In real life, you might want to use
options like -m0 and -j.
root@RHELv4u2:~# mke2fs /dev/sdb1
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
28112 inodes, 112420 blocks
5621 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
14 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
This example changes this value to ten percent. You can use tune2fs while the file system
is active, even if it is the root file system (as in this example).
[root@rhel4 ~]# tune2fs -m10 /dev/sda1
tune2fs 1.35 (28-Feb-2004)
Setting reserved blocks percentage to 10 (10430 blocks)
[root@rhel4 ~]# tune2fs -l /dev/sda1 | grep -i "block count"
Block count: 104388
Reserved block count: 10430
[root@rhel4 ~]#
The last column in /etc/fstab is used to determine whether a file system should be checked
at boot-up.
[paul@RHEL4b ~]$ grep ext /etc/fstab
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
[paul@RHEL4b ~]$
A check on a mounted file system is dangerous, so the tool warns you and (when you answer no) leaves the check aborted. But after unmounting, fsck and e2fsck can be used to check an ext2 file system.
[root@RHEL4b ~]# fsck /boot
fsck 1.35 (28-Feb-2004)
e2fsck 1.35 (28-Feb-2004)
/boot: clean, 44/26104 files, 17598/104388 blocks
[root@RHEL4b ~]# fsck -p /boot
fsck 1.35 (28-Feb-2004)
/boot: clean, 44/26104 files, 17598/104388 blocks
[root@RHEL4b ~]# e2fsck -p /dev/sda1
/boot: clean, 44/26104 files, 17598/104388 blocks
5. Set the reserved space for root on the ext3 filesystem to 0 percent.
cat /proc/filesystems
5. Set the reserved space for root on the ext3 filesystem to 0 percent.
tune2fs -m 0 /dev/sdb5
Chapter 7. mounting
Once you've put a file system on a partition, you can mount it. Mounting a file system makes it available for use, usually as a directory. We say mounting a file system instead of mounting a partition because we will see later that we can also mount file systems that do not exist on partitions.
On all Unix systems, every file and every directory is part of one big file tree. To access
a file, you need to know the full path starting from the root directory. When adding a file
system to your computer, you need to make it available somewhere in the file tree. The
directory where you make a file system available is called a mount point.
7.1.2. mount
Once the mount point is created, and a file system is present on the partition, mount can mount the file system on the mount point directory.
root@RHELv4u2:~# mount -t ext2 /dev/sdb1 /home/project42/
7.1.3. /etc/filesystems
Actually the explicit -t ext2 option to set the file system type is not always necessary. The mount command is able to automatically detect a lot of file systems.
When mounting a file system without explicitly specifying the type, mount will first probe /etc/filesystems. Mount will skip the lines with the nodev directive.
paul@RHELv4u4:~$ cat /etc/filesystems
ext3
ext2
nodev proc
nodev devpts
iso9660
vfat
hfs
7.1.4. /proc/filesystems
When /etc/filesystems does not exist, or ends with a single * on the last line, then mount
will read /proc/filesystems.
[root@RHEL52 ~]# cat /proc/filesystems | grep -v ^nodev
ext2
iso9660
ext3
7.1.5. umount
You can unmount a mounted file system using the umount command.
root@pasha:~# umount /home/reet
7.2.1. mount
The simplest and most common way to view all mounts is by issuing the mount command
without any arguments.
root@RHELv4u2:~# mount | grep /dev/sdb
/dev/sdb1 on /home/project42 type ext2 (rw)
7.2.2. /proc/mounts
The kernel provides the info in /proc/mounts in file form, but /proc/mounts does not exist
as a file on any hard disk. Looking at /proc/mounts is looking at information that comes
directly from the kernel.
root@RHELv4u2:~# cat /proc/mounts | grep /dev/sdb
/dev/sdb1 /home/project42 ext2 rw 0 0
7.2.3. /etc/mtab
The /etc/mtab file is not updated by the kernel, but is maintained by the mount command.
Do not edit /etc/mtab manually.
root@RHELv4u2:~# cat /etc/mtab | grep /dev/sdb
/dev/sdb1 /home/project42 ext2 rw 0 0
7.2.4. df
A more user friendly way to look at mounted file systems is df. The df (diskfree) command
has the added benefit of showing you the free space on each mounted disk. Like a lot of
Linux commands, df supports the -h switch to make the output more human readable.
root@RHELv4u2:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
11707972 6366996 4746240 58% /
/dev/sda1 101086 9300 86567 10% /boot
none 127988 0 127988 0% /dev/shm
/dev/sdb1 108865 1550 101694 2% /home/project42
root@RHELv4u2:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
12G 6.1G 4.6G 58% /
/dev/sda1 99M 9.1M 85M 10% /boot
none 125M 0 125M 0% /dev/shm
/dev/sdb1 107M 1.6M 100M 2% /home/project42
7.2.5. df -h
In the df -h example below you can see the size, used and available space, use percentage and mount point of a partition.
root@laika:~# df -h | egrep -e "(sdb2|File)"
Filesystem Size Used Avail Use% Mounted on
/dev/sdb2 92G 83G 8.6G 91% /media/sdb2
7.2.6. du
The du command can summarize disk usage for files and directories. By using du on a
mount point you effectively get the disk space used on a file system.
While du can display each subdirectory recursively, the -s option will give you a total summary for the parent directory. This option is often used together with -h. This means du -sh on a mount point gives the total amount used by the file system in that partition.
root@debian6~# du -sh /boot /srv/wolf
6.2M /boot
1.1T /srv/wolf
(parted) quit
[root@centos65 ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
4702208 inodes, 18798592 blocks
939929 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
574 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
( output truncated )
...
[root@centos65 ~]# mount /dev/sdb1 /mnt
[root@centos65 ~]# mount | grep mnt
/dev/sdb1 on /mnt type ext4 (rw)
[root@centos65 ~]# df -h | grep mnt
/dev/sdb1 71G 180M 67G 1% /mnt
[root@centos65 ~]# du -sh /mnt
20K /mnt
[root@centos65 ~]# umount /mnt
7.4.1. /etc/fstab
The file system table located in /etc/fstab contains a list of file systems, with an option to automatically mount each of them at boot time.
By adding the following line, we can automate the mounting of a file system.
/dev/sdb1 /home/project42 ext2 defaults 0 0
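You can test the new line without rebooting: mount -a (re)mounts everything in /etc/fstab that is not yet mounted. A quick sketch:
mount -a
mount | grep project42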
7.5.1. ro
The ro option will mount a file system as read only, preventing anyone from writing.
root@rhel53 ~# mount -t ext2 -o ro /dev/hdb1 /home/project42
root@rhel53 ~# touch /home/project42/testwrite
touch: cannot touch `/home/project42/testwrite': Read-only file system
7.5.2. noexec
The noexec option will prevent the execution of binaries and scripts on the mounted file
system.
root@rhel53 ~# mount -t ext2 -o noexec /dev/hdb1 /home/project42
root@rhel53 ~# cp /bin/cat /home/project42
root@rhel53 ~# /home/project42/cat /etc/hosts
-bash: /home/project42/cat: Permission denied
root@rhel53 ~# echo echo hello > /home/project42/helloscript
root@rhel53 ~# chmod +x /home/project42/helloscript
root@rhel53 ~# /home/project42/helloscript
-bash: /home/project42/helloscript: Permission denied
7.5.3. nosuid
The nosuid option will make the kernel ignore the setuid bit on binaries on the mounted file system.
Note that you can still set the setuid bit on files.
root@rhel53 ~# mount -o nosuid /dev/hdb1 /home/project42
root@rhel53 ~# cp /bin/sleep /home/project42/
root@rhel53 ~# chmod 4555 /home/project42/sleep
root@rhel53 ~# ls -l /home/project42/sleep
-r-sr-xr-x 1 root root 19564 Jun 24 17:57 /home/project42/sleep
root@rhel53 ~# su - paul
[paul@rhel53 ~]$ /home/project42/sleep 500 &
[1] 2876
[paul@rhel53 ~]$ ps -f 2876
UID PID PPID C STIME TTY STAT TIME CMD
paul 2876 2853 0 17:58 pts/0 S 0:00 /home/project42/sleep 500
[paul@rhel53 ~]$
7.5.4. noacl
To prevent cluttering permissions with acl's, use the noacl option.
root@rhel53 ~# mount -o noacl /dev/hdb1 /home/project42
Connecting to a Samba server (or to a Microsoft computer) is also done with the mount
command.
This example shows how to connect to the 10.0.0.42 server, to a share named data2.
[root@centos65 ~]# mount -t cifs -o user=paul //10.0.0.42/data2 /home/data2
Password:
[root@centos65 ~]# mount | grep cifs
//10.0.0.42/data2 on /home/data2 type cifs (rw)
7.6.2. nfs
Unix servers often use nfs (aka the network file system) to share directories over the network.
Setting up an nfs server is discussed later. Connecting as a client to an nfs server is done
with mount, and is very similar to connecting to local storage.
This command shows how to connect to the nfs server named server42, which is sharing
the directory /srv/data. The mount point at the end of the command (/home/data) must
already exist.
[root@centos65 ~]# mount -t nfs server42:/srv/data /home/data
[root@centos65 ~]#
If this server42 has ip-address 10.0.0.42 then you can also write:
[root@centos65 ~]# mount -t nfs 10.0.0.42:/srv/data /home/data
[root@centos65 ~]# mount | grep data
10.0.0.42:/srv/data on /home/data type nfs (rw,vers=4,addr=10.0.0.42,clienta\
ddr=10.0.0.33)
The soft+bg options combined guarantee the fastest client boot if there are NFS problems.
retrans=X Try X times to connect (over udp).
tcp Force tcp (default and supported)
udp Force udp (unsupported)
2. Mount the big 400MB primary partition on /mnt, then copy some files to it (everything in /etc). Then umount, and mount the file system as read only on /srv/nfs/salesnumbers. Where are the files you copied ?
3. Verify your work with fdisk, df and mount. Also look in /etc/mtab and /proc/mounts.
5. What happens when you mount a file system on a directory that contains some files ?
6. What happens when you mount two file systems on the same mount point ?
7. (optional) Describe the difference between these commands: find, locate, updatedb,
makewhatis, whereis, apropos, which and type.
2. Mount the big 400MB primary partition on /mnt, then copy some files to it (everything in /etc). Then umount, and mount the file system as read only on /srv/nfs/salesnumbers. Where are the files you copied ?
mount /dev/sdb1 /mnt
cp -r /etc /mnt
ls -l /mnt
umount /mnt
ls -l /mnt
mkdir -p /srv/nfs/salesnumbers
mount /dev/sdb1 /srv/nfs/salesnumbers
3. Verify your work with fdisk, df and mount. Also look in /etc/mtab and /proc/mounts.
fdisk -l
df -h
mount
All three of the above commands should show your mounted partitions.
5. What happens when you mount a file system on a directory that contains some files ?
The files are hidden until umount.
6. What happens when you mount two file systems on the same mount point ?
Only the last mounted fs is visible.
7. (optional) Describe the difference between these commands: find, locate, updatedb,
makewhatis, whereis, apropos, which and type.
man find
man locate
...
Chapter 8. troubleshooting tools
This chapter introduces some tools that go beyond df -h and du -sh: tools that will enable you to troubleshoot a variety of issues with file systems and storage.
8.1. lsof
List open files with lsof.
When invoked without options, lsof will list all open files. You can see the command (init in this case), its PID (1) and the user (root) that has opened the root directory and /sbin/init. The FD (file descriptor) column shows that / is both the root directory (rtd) and the current working directory (cwd) for the /sbin/init command, and that /sbin/init itself is open as program text (txt, which includes both data and code).
root@debian7:~# lsof | head -4
COMMAND PID TID USER FD TYPE DEVICE SIZE/OFF NODE NAME
init 1 root cwd DIR 254,0 4096 2 /
init 1 root rtd DIR 254,0 4096 2 /
init 1 root txt REG 254,0 36992 130856 /sbin/init
Other options in the FD column, besides w for writing, are r for reading and u for both reading and writing. You can look at the open files of a process by typing lsof -p PID. For init this would look like this:
lsof -p 1
The screenshot below shows basic use of lsof to prove that vi keeps a .swp file open (even
when stopped in background) on our freshly mounted file system.
[root@RHEL65 ~]# df -h | grep sdb
/dev/sdb1 541M 17M 497M 4% /srv/project33
[root@RHEL65 ~]# vi /srv/project33/busyfile.txt
[1]+ Stopped vi /srv/project33/busyfile.txt
[root@RHEL65 ~]# lsof /srv/*
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
vi 3243 root 3u REG 8,17 4096 12 /srv/project33/.busyfile.txt.swp
Here we see that rsyslog has a couple of log files open for writing (the FD column).
root@debian7:~# lsof /var/log/*
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
rsyslogd 2013 root 1w REG 254,0 454297 1308187 /var/log/syslog
rsyslogd 2013 root 2w REG 254,0 419328 1308189 /var/log/kern.log
rsyslogd 2013 root 5w REG 254,0 116725 1308200 /var/log/debug
rsyslogd 2013 root 6w REG 254,0 309847 1308201 /var/log/messages
rsyslogd 2013 root 7w REG 254,0 17591 1308188 /var/log/daemon.log
rsyslogd 2013 root 8w REG 254,0 101768 1308186 /var/log/auth.log
You can specify a specific user with lsof -u. This example shows the current working
directory for a couple of command line programs.
[paul@RHEL65 ~]$ lsof -u paul | grep home
bash 3302 paul cwd DIR 253,0 4096 788024 /home/paul
lsof 3329 paul cwd DIR 253,0 4096 788024 /home/paul
grep 3330 paul cwd DIR 253,0 4096 788024 /home/paul
lsof 3331 paul cwd DIR 253,0 4096 788024 /home/paul
The -u switch of lsof also supports the ^ character meaning 'not'. To see all open files, but
not those open by root:
lsof -u^root
8.2. fuser
The fuser command will display the 'user' of a file system.
In this example we still have a vi process in background and we use fuser to find the process
id of the process using this file system.
[root@RHEL65 ~]# jobs
[1]+ Stopped vi /srv/project33/busyfile.txt
[root@RHEL65 ~]# fuser -m /srv/project33/
/srv/project33/: 3243
You can quickly kill all processes that are using a specific file (or directory) with the -k
switch.
[root@RHEL65 ~]# fuser -m -k -u /srv/project33/
/srv/project33/: 3243(root)
[1]+ Killed vi /srv/project33/busyfile.txt
[root@RHEL65 ~]# fuser -m -u /srv/project33/
[root@RHEL65 ~]#
This example shows all processes that are using the current directory (bash and vi in this
case).
root@debian7:~/test42# vi file42
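A sketch of the verbose fuser call that would list them (the -v switch adds USER, ACCESS and COMMAND columns):
fuser -v .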
The last example shows how to find the process that is accessing a specific file.
[root@RHEL65 ~]# vi /srv/project33/busyfile.txt
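A hedged sketch of such a lookup, reusing the file from above (-v again for readable output):
fuser -v /srv/project33/busyfile.txt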
8.3. chroot
The chroot command creates a shell with an alternate root directory. It effectively hides
anything outside of this directory.
In the example below we assume that our system refuses to start (maybe because there is a
problem with /etc/fstab or the mounting of the root file system).
We start a live system (booted from cd/dvd/usb) to troubleshoot our server. The live system will not use our main hard disk as root device.
root@livecd:~# df -h | grep root
rootfs 186M 11M 175M 6% /
/dev/loop0 807M 807M 0 100% /lib/live/mount/rootfs/filesystem.squashfs
root@livecd:~# mount | grep root
/dev/loop0 on /lib/live/mount/rootfs/filesystem.squashfs type squashfs (ro)
First we mount the root file system from the disk (which is on lvm so we use /dev/mapper
instead of /dev/sda5).
root@livecd:~# mount /dev/mapper/packer--debian--7-root /mnt
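Then we can chroot into it; a minimal sketch (the shell is an assumption, chroot starts /bin/sh when no command is given):
root@livecd:~# chroot /mnt /bin/bash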
Our test files (file42 and dir42) are not visible because they are outside of the chrooted environment.
Note that the hostname of the chrooted environment is identical to the existing hostname.
8.4. iostat
iostat reports IO statistics every given period of time. It also includes a small cpu usage summary. This example shows iostat running every ten seconds, three times, with /dev/sdc and /dev/sde showing a lot of write activity.
[root@RHEL65 ~]# iostat 10 3
Linux 2.6.32-431.el6.x86_64 (RHEL65) 06/16/2014 _x86_64_ (1 CPU)
[root@RHEL65 ~]#
Other options are to specify the disks you want to monitor (every 5 seconds here):
iostat sdd sde sdf 5
8.5. iotop
iotop works like the top command but orders processes by input/output instead of by CPU.
By default iotop will show all processes. This example uses iotop -o to only display
processes with actual I/O.
[root@RHEL65 ~]# iotop -o
Total DISK READ: 8.63 M/s | Total DISK WRITE: 0.00 B/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
15000 be/4 root 2.43 M/s 0.00 B/s 0.00 % 14.60 % tar cjf /srv/di...
25000 be/4 root 6.20 M/s 0.00 B/s 0.00 % 6.15 % tar czf /srv/di...
24988 be/4 root 0.00 B/s 7.21 M/s 0.00 % 0.00 % gzip
25003 be/4 root 0.00 B/s 1591.19 K/s 0.00 % 0.00 % gzip
25004 be/4 root 0.00 B/s 193.51 K/s 0.00 % 0.00 % bzip2
Use the -b switch to create a log of iotop output (instead of the default interactive view).
[root@RHEL65 ~]# iotop -bod 10
Total DISK READ: 12.82 M/s | Total DISK WRITE: 5.69 M/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND
25153 be/4 root 2.05 M/s 0.00 B/s 0.00 % 7.81 % tar cjf /srv/di...
25152 be/4 root 10.77 M/s 0.00 B/s 0.00 % 2.94 % tar czf /srv/di...
25144 be/4 root 408.54 B/s 0.00 B/s 0.00 % 0.05 % python /usr/sbi...
12516 be/3 root 0.00 B/s 1491.33 K/s 0.00 % 0.04 % [jbd2/sdc1-8]
12522 be/3 root 0.00 B/s 45.48 K/s 0.00 % 0.01 % [jbd2/sde1-8]
25158 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [flush-8:64]
25155 be/4 root 0.00 B/s 493.12 K/s 0.00 % 0.00 % bzip2
25156 be/4 root 0.00 B/s 2.81 M/s 0.00 % 0.00 % gzip
25159 be/4 root 0.00 B/s 528.63 K/s 0.00 % 0.00 % [flush-8:32]
This is an example of iotop tracking disk I/O every ten seconds for one user named vagrant (and only one process of this user, but this can be omitted). The -a switch accumulates I/O over time.
[root@RHEL65 ~]# iotop -q -a -u vagrant -b -p 5216 -d 10 -n 10
Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND
5216 be/4 vagrant 0.00 B 0.00 B 0.00 % 0.00 % gzip
Total DISK READ: 818.22 B/s | Total DISK WRITE: 20.78 M/s
5216 be/4 vagrant 0.00 B 213.89 M 0.00 % 0.00 % gzip
Total DISK READ: 2045.95 B/s | Total DISK WRITE: 23.16 M/s
5216 be/4 vagrant 0.00 B 430.70 M 0.00 % 0.00 % gzip
Total DISK READ: 1227.50 B/s | Total DISK WRITE: 22.37 M/s
5216 be/4 vagrant 0.00 B 642.02 M 0.00 % 0.00 % gzip
Total DISK READ: 818.35 B/s | Total DISK WRITE: 16.44 M/s
5216 be/4 vagrant 0.00 B 834.09 M 0.00 % 0.00 % gzip
Total DISK READ: 6.95 M/s | Total DISK WRITE: 8.74 M/s
5216 be/4 vagrant 0.00 B 920.69 M 0.00 % 0.00 % gzip
Total DISK READ: 21.71 M/s | Total DISK WRITE: 11.99 M/s
8.6. vmstat
While vmstat is mainly a memory monitoring tool, it is worth mentioning here for its
reporting on summary I/O data for block devices and swap space.
This example shows some disk activity (underneath the -----io---- column), without
swapping.
[root@RHEL65 ~]# vmstat 5 10
procs ----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 5420 9092 14020 340876 7 12 235 252 77 100 2 1 98 0 0
2 0 5420 6104 13840 338176 0 0 7401 7812 747 1887 38 12 50 0 0
2 0 5420 10136 13696 336012 0 0 11334 14 1725 4036 76 24 0 0 0
0 0 5420 14160 13404 341552 0 0 10161 9914 1174 1924 67 15 18 0 0
0 0 5420 14300 13420 341564 0 0 0 16 28 18 0 0 100 0 0
0 0 5420 14300 13420 341564 0 0 0 0 22 16 0 0 100 0 0
...
[root@RHEL65 ~]#
You can benefit from vmstat's ability to display memory in kilobytes, megabytes or even
kibibytes and mebibytes using -S (followed by k K m or M).
[root@RHEL65 ~]# vmstat -SM 5 10
procs ----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 5 14 11 334 0 0 259 255 79 107 2 1 97 0 0
0 0 5 14 11 334 0 0 0 2 21 18 0 0 100 0 0
0 0 5 15 11 334 0 0 6 0 35 31 0 0 100 0 0
2 0 5 6 11 336 0 0 17100 7814 1378 2945 48 21 31 0 0
2 0 5 6 11 336 0 0 13193 14 1662 3343 78 22 0 0 0
2 0 5 13 11 330 0 0 11656 9781 1419 2642 82 18 0 0 0
2 0 5 9 11 334 0 0 10705 2716 1504 2657 81 19 0 0 0
1 0 5 14 11 336 0 0 6467 3788 765 1384 43 9 48 0 0
0 0 5 14 11 336 0 0 0 13 28 24 0 0 100 0 0
0 0 5 14 11 336 0 0 0 0 20 15 0 0 100 0 0
[root@RHEL65 ~]#
1. Read the theory on fuser and explore its man page. Use this command to find files that
you open yourself.
2. Read the theory on lsof and explore its man page. Use this command to find files that
you open yourself.
3. Boot a live image on an existing computer (virtual or real) and chroot into to it.
4. Start one or more disk intensive jobs and monitor them with iostat and iotop (compare
to vmstat).
1. Read the theory on fuser and explore its man page. Use this command to find files that
you open yourself.
2. Read the theory on lsof and explore its man page. Use this command to find files that
you open yourself.
3. Boot a live image on an existing computer (virtual or real) and chroot into to it.
4. Start one or more disk intensive jobs and monitor them with iostat and iotop (compare
to vmstat).
Chapter 9. introduction to uuid's
A uuid or universally unique identifier is used to uniquely identify objects. This 128-bit standard allows anyone to create a unique uuid.
Red Hat Enterprise Linux 5 puts vol_id in /lib/udev/vol_id, which is not in the $PATH. The
syntax is also a bit different from Debian/Ubuntu/Mint.
root@rhel53 ~# /lib/udev/vol_id -u /dev/hda1
48a6a316-9ca9-4214-b5c6-e7b33a77e860
9.2. tune2fs
Use tune2fs to find the uuid of a file system.
[root@RHEL5 ~]# tune2fs -l /dev/sda1 | grep UUID
Filesystem UUID: 11cfc8bc-07c0-4c3f-9f64-78422ef1dd5c
[root@RHEL5 ~]# /lib/udev/vol_id -u /dev/sda1
11cfc8bc-07c0-4c3f-9f64-78422ef1dd5c
9.3. uuid
There is more information in the manual of uuid, a tool that can generate uuid's.
[root@rhel65 ~]# yum install uuid
(output truncated)
[root@rhel65 ~]# man uuid
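Running the tool without options prints one freshly generated uuid; a trivial example:
[root@rhel65 ~]# uuid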
Then we check that it is properly added to /etc/fstab; the uuid replaces the variable device name /dev/sdc1.
[root@RHEL5 ~]# grep UUID /etc/fstab
UUID=7626d73a-2bb6-4937-90ca-e451025d64e8 /home/pro42 ext3 defaults 0 0
Now we can mount the volume using the mount point defined in /etc/fstab.
[root@RHEL5 ~]# mount /home/pro42
[root@RHEL5 ~]# df -h | grep 42
/dev/sdc1 397M 11M 366M 3% /home/pro42
The real test now, is to remove /dev/sdb from the system, reboot the machine and see what
happens. After the reboot, the disk previously known as /dev/sdc is now /dev/sdb.
[root@RHEL5 ~]# tune2fs -l /dev/sdb1 | grep UUID
Filesystem UUID: 7626d73a-2bb6-4937-90ca-e451025d64e8
And thanks to the uuid in /etc/fstab, the mountpoint is mounted on the same disk as before.
[root@RHEL5 ~]# df -h | grep sdb
/dev/sdb1 397M 11M 366M 3% /home/pro42
The screenshot above contains only four lines. The line starting with root= is the
continuation of the kernel line.
2. Use this uuid in /etc/fstab and test that it works with a simple mount.
3. (optional) Test it also by removing a disk (so the device name is changed). You can edit
settings in vmware/Virtualbox to remove a hard disk.
4. Display the root= directive in /boot/grub/menu.lst. (We see later in the course how to
maintain this file.)
2. Use this uuid in /etc/fstab and test that it works with a simple mount.
tail -1 /etc/fstab
UUID=60926898-2c78-49b4-a71d-c1d6310c87cc /home/pro42 ext3 defaults 0 0
3. (optional) Test it also by removing a disk (so the device name is changed). You can edit
settings in vmware/Virtualbox to remove a hard disk.
4. Display the root= directive in /boot/grub/menu.lst. (We see later in the course how to
maintain this file.)
paul@deb503:~$ grep ^[^#] /boot/grub/menu.lst | grep root=
kernel /boot/vmlinuz-2.6.26-2-686 root=/dev/hda1 ro selinux=1 quiet
kernel /boot/vmlinuz-2.6.26-2-686 root=/dev/hda1 ro selinux=1 single
Chapter 10. introduction to raid
10.1. hardware or software
Redundant Array of Independent (originally Inexpensive) Disks or RAID can be set up using
hardware or software. Hardware RAID is more expensive, but offers better performance.
Software RAID is cheaper and easier to manage, but it uses your CPU and your memory.
Where ten years ago nobody was arguing about the best choice being hardware RAID, this has changed since technologies like mdadm, lvm and even zfs focus more on manageability. The workload on the cpu for software RAID used to be high, but cpu's have gotten a lot faster.
10.2.2. jbod
jbod uses two or more disks, and is often called concatenating (spanning, spanned set, or
spanned volume). Data is written to the first disk, until it is full. Then data is written to the
second disk... The main advantage of jbod (Just a Bunch of Disks) is that you can create
larger drives. JBOD offers no redundancy.
10.2.3. raid 1
raid 1 uses exactly two disks, and is often called mirroring (or mirror set, or mirrored
volume). All data written to the array is written on each disk. The main advantage of raid 1
is redundancy. The main disadvantage is that you lose at least half of your available disk
space (in other words, you at least double the cost).
10.2.5. raid 5
raid 5 uses three or more disks, each divided into chunks. Every time chunks are written
to the array, one of the disks will receive a parity chunk. Unlike raid 4, the parity chunk
will alternate between all disks. The main advantage of this is that raid 5 will allow for full
data recovery in case of one hard disk failure.
10.2.6. raid 6
raid 6 is very similar to raid 5, but uses two parity chunks. raid 6 protects against two hard
disk failures. Oracle Solaris zfs calls this raidz2 (and also has raidz3 with triple parity).
10.2.9. raid 50
raid 5+0 is a stripe(0) of raid 5 arrays. Suppose you have nine disks of 100GB, then you
can create three raid 5 arrays of 200GB each. You can then combine them into one large
stripe set.
The command below is split on two lines to fit this print, but you should type it on one line,
without the backslash (\).
[root@rhel6c ~]# mdadm --create /dev/md0 --chunk=64 --level=5 --raid-\
devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
We could use this software raid 5 array in the next topic: lvm.
10.3.5. /proc/mdstat
The status of the raid devices can be seen in /proc/mdstat. This example shows a raid 5
in the process of rebuilding.
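A quick sketch of how to look at it (the status lines themselves depend on your array):
cat /proc/mdstat
watch cat /proc/mdstat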
Layout : left-symmetric
Chunk Size : 64K
2. Create a software raid 5 on the three disks. (It is not necessary to put a filesystem on it)
2. Create a software raid 5 on the three disks. (It is not necessary to put a filesystem on it)
Chapter 11. logical volume management
Most lvm implementations support physical storage grouping, logical volume resizing
and data migration.
Physical storage grouping is a fancy name for grouping multiple block devices (hard disks,
but also iSCSI etc) into a logical mass storage device. To enlarge this physical group, block
devices (including partitions) can be added at a later time.
The size of lvm volumes on this physical group is independent of the individual size of the
components. The total size of the group is the limit.
One of the nice features of lvm is the logical volume resizing. You can increase the size of
an lvm volume, sometimes even without any downtime. Additionally, you can migrate data
away from a failing hard disk device, create mirrors and create snapshots.
In the example above, consider the options when you want to enlarge the space available
for /srv/project42. What can you do ? The solution will always force you to unmount the
file system, take a backup of the data, remove and recreate partitions, and then restore the
data and remount the file system.
The first thing to do is to create physical volumes that can join the volume group with pvcreate.
This command makes a disk or partition available for use in Volume Groups. The screenshot
shows how to present the SCSI Disk device to LVM.
root@RHEL4:~# pvcreate /dev/sdc
Physical volume "/dev/sdc" successfully created
Note: lvm will work fine when using the complete device, but another operating system on the
same computer (or on the same SAN) will not recognize lvm and will mark the block device
as being empty! You can avoid this by creating a partition that spans the whole device, then
run pvcreate on the partition instead of the disk.
Then vgcreate creates a volume group using one device. Note that more devices could be
added to the volume group.
root@RHEL4:~# vgcreate vg /dev/sdc
Volume group "vg" successfully created
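The volume itself would have been created with lvcreate; a hedged sketch (the 500MB size is an assumption matching the mke2fs output below; lvm names the first volume lvol0 by default):
root@RHELv4u2:~# lvcreate --size 500m vg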
The logical volume /dev/vg/lvol0 can now be formatted with ext3, and mounted for normal
use.
root@RHELv4u2:~# mke2fs -m0 -j /dev/vg/lvol0
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
128016 inodes, 512000 blocks
0 blocks (0.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
63 block groups
8192 blocks per group, 8192 fragments per group
2032 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
A logical volume is very similar to a partition: it can be formatted with a file system and mounted so users can access it.
The fdisk command shows us newly added scsi-disks that will serve our lvm volume. This
volume will then be extended. First, take a look at these disks.
[root@RHEL5 ~]# fdisk -l | grep sd[bc]
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdb: 1181 MB, 1181115904 bytes
Disk /dev/sdc: 429 MB, 429496320 bytes
You already know how to partition a disk; below, the first disk is partitioned (in one big primary partition) and the second disk is left untouched.
[root@RHEL5 ~]# fdisk -l | grep sd[bc]
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdb: 1181 MB, 1181115904 bytes
/dev/sdb1 1 143 1148616 83 Linux
Disk /dev/sdc: 429 MB, 429496320 bytes
You also know how to prepare disks for lvm with pvcreate, and how to create a volume
group with vgcreate. This example adds both the partitioned disk and the untouched disk
to the volume group named vg2.
[root@RHEL5 ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created
[root@RHEL5 ~]# pvcreate /dev/sdc
Physical volume "/dev/sdc" successfully created
[root@RHEL5 ~]# vgcreate vg2 /dev/sdb1 /dev/sdc
Volume group "vg2" successfully created
You can use pvdisplay to verify that both the disk and the partition belong to the volume
group.
[root@RHEL5 ~]# pvdisplay | grep -B1 vg2
PV Name /dev/sdb1
VG Name vg2
--
PV Name /dev/sdc
VG Name vg2
And you are familiar both with the lvcreate command to create a small logical volume and
the mke2fs command to put ext3 on it.
[root@RHEL5 ~]# lvcreate --size 200m vg2
Logical volume "lvol0" created
[root@RHEL5 ~]# mke2fs -m20 -j /dev/vg2/lvol0
...
As you see, we end up with a mounted logical volume that according to df is almost 200
megabyte in size.
[root@RHEL5 ~]# mkdir /home/resizetest
[root@RHEL5 ~]# mount /dev/vg2/lvol0 /home/resizetest/
[root@RHEL5 ~]# df -h | grep resizetest
194M 5.6M 149M 4% /home/resizetest
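The volume is then grown with lvextend; a sketch assuming we add 100MB (matching the 300MB shown by lvdisplay below):
[root@RHEL5 ~]# lvextend -L +100m /dev/vg2/lvol0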
But as you can see, there is a small problem: it appears that df is not able to display the extended volume in its full size. This is because the file system still has the size it had before the volume was extended.
[root@RHEL5 ~]# df -h | grep resizetest
194M 5.6M 149M 4% /home/resizetest
With lvdisplay however we can see that the volume is indeed extended.
[root@RHEL5 ~]# lvdisplay /dev/vg2/lvol0 | grep Size
LV Size 300.00 MB
To finish the extension, you need resize2fs to span the filesystem over the full size of the
logical volume.
[root@RHEL5 ~]# resize2fs /dev/vg2/lvol0
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/vg2/lvol0 is mounted on /home/resizetest; on-line re\
sizing required
Performing an on-line resize of /dev/vg2/lvol0 to 307200 (1k) blocks.
The filesystem on /dev/vg2/lvol0 is now 307200 blocks long.
Now we can use pvcreate to create the Physical Volume, followed by pvs to verify the
creation.
[root@RHEL5 ~]# pvcreate /dev/sde1
Physical volume "/dev/sde1" successfully created
[root@RHEL5 ~]# pvs | grep sde1
/dev/sde1 lvm2 -- 99.98M 99.98M
[root@RHEL5 ~]#
The next step is to use fdisk to enlarge the partition (actually deleting it and then recreating /
dev/sde1 with more cylinders).
[root@RHEL5 ~]# fdisk /dev/sde
When we now use fdisk and pvs to verify the size of the partition and the Physical Volume, there is a size difference: LVM is still using the old size.
[root@RHEL5 ~]# fdisk -l 2>/dev/null | grep sde1
/dev/sde1 1 200 204784 83 Linux
[root@RHEL5 ~]# pvs | grep sde1
/dev/sde1 lvm2 -- 99.98M 99.98M
[root@RHEL5 ~]#
Executing pvresize on the Physical Volume will make lvm aware of the size change of the
partition. The correct size can be displayed with pvs.
[root@RHEL5 ~]# pvresize /dev/sde1
Physical volume "/dev/sde1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
[root@RHEL5 ~]# pvs | grep sde1
/dev/sde1 lvm2 -- 199.98M 199.98M
[root@RHEL5 ~]#
Then we create the Volume Group and verify again with pvs. Notice how the three physical
volumes now belong to vg33, and how the size is rounded down (in steps of the extent size,
here 4MB).
[root@RHEL5 ~]# vgcreate vg33 /dev/sdb /dev/sdc /dev/sdd
Volume group "vg33" successfully created
[root@RHEL5 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VolGroup00 lvm2 a- 15.88G 0
/dev/sdb vg33 lvm2 a- 408.00M 408.00M
/dev/sdc vg33 lvm2 a- 408.00M 408.00M
/dev/sdd vg33 lvm2 a- 408.00M 408.00M
[root@RHEL5 ~]#
The last step is to create the Logical Volume with lvcreate. Notice the -m 1 switch to create
one mirror. Notice also the change in free space in all three Physical Volumes!
[root@RHEL5 ~]# lvcreate --size 300m -n lvmir -m 1 vg33
Logical volume "lvmir" created
[root@RHEL5 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VolGroup00 lvm2 a- 15.88G 0
/dev/sdb vg33 lvm2 a- 408.00M 108.00M
/dev/sdc vg33 lvm2 a- 408.00M 108.00M
/dev/sdd vg33 lvm2 a- 408.00M 404.00M
You can see the copy status of the mirror with lvs. It currently shows a 100 percent copy.
[root@RHEL5 ~]# lvs vg33/lvmir
LV VG Attr LSize Origin Snap% Move Log Copy%
lvmir vg33 mwi-ao 300.00M lvmir_mlog 100.00
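The snapshot itself is created with lvcreate -s; a hedged sketch using the names and size from the lvs output below:
[root@RHEL5 ~]# lvcreate --size 100m -s -n snapLV /dev/vg42/bigLV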
You can see with lvs that the snapshot snapLV is indeed a snapshot of bigLV. Moments
after taking the snapshot, there are few changes to bigLV (0.02 percent).
[root@RHEL5 ~]# lvs
LV VG Attr LSize Origin Snap% Move Log Copy%
bigLV vg42 owi-a- 200.00M
snapLV vg42 swi-a- 100.00M bigLV 0.02
[root@RHEL5 ~]#
But after using bigLV for a while, more changes are made. This means the snapshot volume has to keep more original data (10.22 percent).
[root@RHEL5 ~]# lvs | grep vg42
bigLV vg42 owi-ao 200.00M
snapLV vg42 swi-a- 100.00M bigLV 10.22
[root@RHEL5 ~]#
You can now use regular backup tools (dump, tar, cpio, ...) to take a backup of the snapshot
Logical Volume. This backup will contain all data as it existed on bigLV at the time the
snapshot was taken. When the backup is done, you can remove the snapshot.
[root@RHEL5 ~]# lvremove vg42/snapLV
Do you really want to remove active logical volume "snapLV"? [y/n]: y
Logical volume "snapLV" successfully removed
[root@RHEL5 ~]#
11.8.2. pvs
The easiest way to verify whether devices are known to lvm is with the pvs command. The screenshot below shows that /dev/sda2 is part of VolGroup00 and is almost 16GB in size. It also shows /dev/sdc and /dev/sdd as part of vg33. The device /dev/sdb is known to lvm, but not linked to any Volume Group.
[root@RHEL5 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VolGroup00 lvm2 a- 15.88G 0
/dev/sdb lvm2 -- 409.60M 409.60M
/dev/sdc vg33 lvm2 a- 408.00M 408.00M
/dev/sdd vg33 lvm2 a- 408.00M 408.00M
[root@RHEL5 ~]#
11.8.3. pvscan
The pvscan command will scan all disks for existing Physical Volumes. The information is
similar to pvs, plus you get a line with total sizes.
[root@RHEL5 ~]# pvscan
PV /dev/sdc VG vg33 lvm2 [408.00 MB / 408.00 MB free]
PV /dev/sdd VG vg33 lvm2 [408.00 MB / 408.00 MB free]
PV /dev/sda2 VG VolGroup00 lvm2 [15.88 GB / 0 free]
PV /dev/sdb lvm2 [409.60 MB]
Total: 4 [17.07 GB] / in use: 3 [16.67 GB] / in no VG: 1 [409.60 MB]
[root@RHEL5 ~]#
11.8.4. pvdisplay
Use pvdisplay to get more information about physical volumes. You can also use pvdisplay
without an argument to display information about all physical (lvm) volumes.
[root@RHEL5 ~]# pvdisplay /dev/sda2
--- Physical volume ---
PV Name /dev/sda2
VG Name VolGroup00
PV Size 15.90 GB / not usable 20.79 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 508
Free PE 0
Allocated PE 508
PV UUID TobYfp-Ggg0-Rf8r-xtLd-5XgN-RSPc-8vkTHD
[root@RHEL5 ~]#
11.9.2. vgscan
The vgscan command will scan all disks for existing Volume Groups. It will also update the
/etc/lvm/.cache file. This file contains a list of all current lvm devices.
[root@RHEL5 ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
[root@RHEL5 ~]#
LVM runs vgscan automatically at boot-up, so if you add (hot swap) devices after boot, you
will need to run vgscan manually to update /etc/lvm/.cache with the new devices.
11.9.3. vgdisplay
The vgdisplay command will give you more detailed information about a volume group (or
about all volume groups if you omit the argument).
[root@RHEL5 ~]# vgdisplay VolGroup00
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 15.88 GB
PE Size 32.00 MB
Total PE 508
Alloc PE / Size 508 / 15.88 GB
Free PE / Size 0 / 0
VG UUID qsXvJb-71qV-9l7U-ishX-FobM-qptE-VXmKIg
[root@RHEL5 ~]#
11.10.2. lvscan
The lvscan command will scan all disks for existing Logical Volumes.
[root@RHEL5 ~]# lvscan
ACTIVE '/dev/VolGroup00/LogVol00' [14.88 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [1.00 GB] inherit
[root@RHEL5 ~]#
11.10.3. lvdisplay
More detailed information about logical volumes is available through the lvdisplay(1)
command.
[root@RHEL5 ~]# lvdisplay VolGroup00/LogVol01
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol01
VG Name VolGroup00
LV UUID RnTGK6-xWsi-t530-ksJx-7cax-co5c-A1KlDp
LV Write Access read/write
LV Status available
# open 1
LV Size 1.00 GB
Current LE 32
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1
[root@RHEL5 ~]#
You can also pass multiple disks or partitions as targets to pvcreate. This example adds three
disks to lvm.
[root@RHEL5 ~]# pvcreate /dev/sde /dev/sdf /dev/sdg
Physical volume "/dev/sde" successfully created
Physical volume "/dev/sdf" successfully created
Physical volume "/dev/sdg" successfully created
[root@RHEL5 ~]#
11.11.2. pvremove
Use the pvremove command to remove physical volumes from lvm. The devices may not
be in use.
[root@RHEL5 ~]# pvremove /dev/sde /dev/sdf /dev/sdg
Labels on physical volume "/dev/sde" successfully wiped
Labels on physical volume "/dev/sdf" successfully wiped
Labels on physical volume "/dev/sdg" successfully wiped
[root@RHEL5 ~]#
11.11.3. pvresize
When you use fdisk to resize a partition on a disk, you must use pvresize to make lvm
recognize the new size of the Physical Volume that represents this partition.
[root@RHEL5 ~]# pvresize /dev/sde1
Physical volume "/dev/sde1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
11.11.4. pvchange
With pvchange you can prevent the allocation of space on a Physical Volume to new Volume
Groups or Logical Volumes. This can be useful if you plan to remove a Physical Volume.
[root@RHEL5 ~]# pvchange -xn /dev/sdd
Physical volume "/dev/sdd" changed
1 physical volume changed / 0 physical volumes not changed
[root@RHEL5 ~]#
To revert your previous decision, this example shows you how to re-enable the Physical
Volume to allow allocation.
[root@RHEL5 ~]# pvchange -xy /dev/sdd
Physical volume "/dev/sdd" changed
1 physical volume changed / 0 physical volumes not changed
[root@RHEL5 ~]#
11.11.5. pvmove
With pvmove you can move Logical Volumes from within a Volume Group to another
Physical Volume. This must be done before removing a Physical Volume.
[root@RHEL5 ~]# pvs | grep vg1
/dev/sdf vg1 lvm2 a- 816.00M 0
/dev/sdg vg1 lvm2 a- 816.00M 816.00M
[root@RHEL5 ~]# pvmove /dev/sdf
/dev/sdf: Moved: 70.1%
/dev/sdf: Moved: 100.0%
[root@RHEL5 ~]# pvs | grep vg1
/dev/sdf vg1 lvm2 a- 816.00M 816.00M
/dev/sdg vg1 lvm2 a- 816.00M 0
11.12.2. vgextend
Use the vgextend command to extend an existing volume group with a physical volume.
[root@RHEL5 ~]# vgextend vg42 /dev/sdg
Volume group "vg42" successfully extended
[root@RHEL5 ~]#
11.12.3. vgremove
Use the vgremove command to remove volume groups from lvm. The volume groups may
not be in use.
[root@RHEL5 ~]# vgremove vg42
Volume group "vg42" successfully removed
[root@RHEL5 ~]#
11.12.4. vgreduce
Use the vgreduce command to remove a Physical Volume from a Volume Group.
The following example adds Physical Volume /dev/sdg to the vg1 Volume Group using
vgextend, and then removes it again using vgreduce.
[root@RHEL5 ~]# pvs | grep sdg
/dev/sdg lvm2 -- 819.20M 819.20M
[root@RHEL5 ~]# vgextend vg1 /dev/sdg
Volume group "vg1" successfully extended
[root@RHEL5 ~]# pvs | grep sdg
/dev/sdg vg1 lvm2 a- 816.00M 816.00M
[root@RHEL5 ~]# vgreduce vg1 /dev/sdg
Removed "/dev/sdg" from volume group "vg1"
[root@RHEL5 ~]# pvs | grep sdg
/dev/sdg lvm2 -- 819.20M 819.20M
11.12.5. vgchange
Use the vgchange command to change parameters of a Volume Group.
This example shows how to prevent Physical Volumes from being added to or removed from
the Volume Group vg1.
[root@RHEL5 ~]# vgchange -xn vg1
Volume group "vg1" successfully changed
[root@RHEL5 ~]# vgextend vg1 /dev/sdg
Volume group vg1 is not resizable.
You can also use vgchange to change most other properties of a Volume Group. This
example changes the maximum number of Logical Volumes and maximum number of
Physical Volumes that vg1 can serve.
[root@RHEL5 ~]# vgdisplay vg1 | grep -i max
MAX LV 0
Max PV 0
[root@RHEL5 ~]# vgchange -l16 vg1
Volume group "vg1" successfully changed
[root@RHEL5 ~]# vgchange -p8 vg1
Volume group "vg1" successfully changed
[root@RHEL5 ~]# vgdisplay vg1 | grep -i max
MAX LV 16
Max PV 8
11.12.6. vgmerge
Merging two Volume Groups into one is done with vgmerge. The following example merges
vg2 into vg1, keeping all the properties of vg1.
[root@RHEL5 ~]# vgmerge vg1 vg2
Volume group "vg2" successfully merged into "vg1"
[root@RHEL5 ~]#
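The reverse operation is vgsplit, which moves Physical Volumes from an existing Volume
Group into a new one. A sketch (assuming no Logical Volume spans /dev/sdg and another
Physical Volume):
[root@RHEL5 ~]# vgsplit vg1 vg2 /dev/sdg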
As you can see, lvm automatically names the Logical Volume lvol0. The next example
creates a 200MB Logical Volume named MyLV in Volume Group vg42.
[root@RHEL5 ~]# lvcreate -L200M -nMyLV vg42
Logical volume "MyLV" created
[root@RHEL5 ~]#
The next example does the same thing, but with different syntax.
[root@RHEL5 ~]# lvcreate --size 200M -n MyLV vg42
Logical volume "MyLV" created
[root@RHEL5 ~]#
This example creates a Logical Volume that occupies 10 percent of the Volume Group.
[root@RHEL5 ~]# lvcreate -l 10%VG -n MyLV2 vg42
Logical volume "MyLV2" created
[root@RHEL5 ~]#
This example creates a Logical Volume that occupies 30 percent of the remaining free space
in the Volume Group.
[root@RHEL5 ~]# lvcreate -l 30%FREE -n MyLV3 vg42
Logical volume "MyLV3" created
[root@RHEL5 ~]#
11.13.2. lvremove
Use the lvremove command to remove Logical Volumes from a Volume Group. Removing
a Logical Volume requires the name of the Volume Group.
[root@RHEL5 ~]# lvremove vg42/MyLV
Do you really want to remove active logical volume "MyLV"? [y/n]: y
Logical volume "MyLV" successfully removed
[root@RHEL5 ~]#
Removing multiple Logical Volumes will request confirmation for each individual volume.
[root@RHEL5 ~]# lvremove vg42/MyLV vg42/MyLV2 vg42/MyLV3
Do you really want to remove active logical volume "MyLV"? [y/n]: y
Logical volume "MyLV" successfully removed
Do you really want to remove active logical volume "MyLV2"? [y/n]: y
Logical volume "MyLV2" successfully removed
Do you really want to remove active logical volume "MyLV3"? [y/n]: y
Logical volume "MyLV3" successfully removed
[root@RHEL5 ~]#
11.13.3. lvextend
Extending the volume is easy with lvextend. This example extends a 200MB Logical
Volume by 100MB (when no unit is given, lvextend assumes megabytes).
[root@RHEL5 ~]# lvdisplay /dev/vg2/lvol0 | grep Size
LV Size 200.00 MB
[root@RHEL5 ~]# lvextend -L +100 /dev/vg2/lvol0
Extending logical volume lvol0 to 300.00 MB
Logical volume lvol0 successfully resized
[root@RHEL5 ~]# lvdisplay /dev/vg2/lvol0 | grep Size
LV Size 300.00 MB
The next example creates a 100MB Logical Volume, and then extends it to 500MB.
[root@RHEL5 ~]# lvcreate --size 100M -n extLV vg42
Logical volume "extLV" created
[root@RHEL5 ~]# lvextend -L 500M vg42/extLV
Extending logical volume extLV to 500.00 MB
Logical volume extLV successfully resized
[root@RHEL5 ~]#
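Note that lvextend only grows the Logical Volume itself; a filesystem on it must be grown
separately. A minimal sketch, assuming extLV contains an ext3 filesystem:
[root@RHEL5 ~]# resize2fs /dev/vg42/extLV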
11.13.4. lvrename
Renaming a Logical Volume is done with lvrename. This example renames extLV to bigLV
in the vg42 Volume Group.
[root@RHEL5 ~]# lvrename vg42/extLV vg42/bigLV
Renamed "extLV" to "bigLV" in volume group "vg42"
[root@RHEL5 ~]#
2. Create two logical volumes (a small one and a bigger one) in this volumegroup. Format
them with ext3, mount them and copy some files to them.
3. Verify usage with fdisk, mount, pvs, vgs, lvs, pvdisplay, vgdisplay, lvdisplay and df. Does
fdisk give you any information about lvm?
4. Enlarge the small logical volume by 50 percent, and verify your work!
5. Take a look at other commands that start with vg* , pv* or lv*.
9. Create a snapshot of a Logical Volume, take a backup of the snapshot. Then delete some
files on the Logical Volume, then restore your backup.
10. Move your volume group to another disk (keep the Logical Volumes mounted).
11. If time permits, split a Volume Group with vgsplit, then merge it again with vgmerge.
2. Create two logical volumes (a small one and a bigger one) in this volumegroup. Format
them with ext3, mount them and copy some files to them.
root@rhel65:~# lvcreate --size 200m --name LVsmall VG42
Logical volume "LVsmall" created
root@rhel65:~# lvcreate --size 600m --name LVbig VG42
Logical volume "LVbig" created
root@rhel65:~# ls -l /dev/mapper/VG42-LVsmall
lrwxrwxrwx. 1 root root 7 Apr 20 20:41 /dev/mapper/VG42-LVsmall -> ../dm-2
root@rhel65:~# ls -l /dev/VG42/LVsmall
lrwxrwxrwx. 1 root root 7 Apr 20 20:41 /dev/VG42/LVsmall -> ../dm-2
root@rhel65:~# ls -l /dev/dm-2
brw-rw----. 1 root disk 253, 2 Apr 20 20:41 /dev/dm-2
3. Verify usage with fdisk, mount, pvs, vgs, lvs, pvdisplay, vgdisplay, lvdisplay and df. Does
fdisk give you any information about lvm?
Run all those commands (only two are shown below), then answer 'no'.
root@rhel65:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
6.7G 1.4G 5.0G 21% /
tmpfs 246M 0 246M 0% /dev/shm
/dev/sda1 485M 77M 383M 17% /boot
/dev/mapper/VG42-LVsmall
194M 30M 154M 17% /srv/LVsmall
/dev/mapper/VG42-LVbig
591M 20M 541M 4% /srv/LVbig
root@rhel65:~# mount | grep VG42
/dev/mapper/VG42-LVsmall on /srv/LVsmall type ext3 (rw)
/dev/mapper/VG42-LVbig on /srv/LVbig type ext3 (rw)
4. Enlarge the small logical volume by 50 percent, and verify your work!
root@rhel65:~# lvextend VG42/LVsmall -l+50%LV
Extending logical volume LVsmall to 300.00 MiB
Logical volume LVsmall successfully resized
root@rhel65:~# resize2fs /dev/mapper/VG42-LVsmall
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/VG42-LVsmall is mounted on /srv/LVsmall; on-line res\
izing required
old desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/mapper/VG42-LVsmall to 307200 (1k) blocks.
The filesystem on /dev/mapper/VG42-LVsmall is now 307200 blocks long.
5. Take a look at other commands that start with vg* , pv* or lv*.
9. Create a snapshot of a Logical Volume, take a backup of the snapshot. Then delete some
files on the Logical Volume, then restore your backup.
10. Move your volume group to another disk (keep the Logical Volumes mounted).
11. If time permits, split a Volume Group with vgsplit, then merge it again with vgmerge.
Chapter 12. iSCSI devices
This chapter teaches you how to set up an iSCSI target server and an iSCSI initiator client.
The computer holding the physical storage hardware is called the iSCSI Target. Each
individual addressable iSCSI device on the target server will get a LUN number.
The iSCSI client computer that is connecting to the Target server is called an Initiator. An
initiator will send SCSI commands over IP instead of directly to the hardware. The Initiator
will connect to the Target.
The standard local port for an iSCSI Target is 3260; in case of doubt you can verify this
with netstat.
[root@server1 tgt]# netstat -ntpl | grep tgt
tcp 0 0 0.0.0.0:3260 0.0.0.0:* LISTEN 1670/tgtd
tcp 0 0 :::3260 :::* LISTEN 1670/tgtd
The tgt-admin -s command should now give you a nice overview of the three LUNs (and
also LUN 0 for the controller).
[root@server1 tgt]# tgt-admin -s
Target 1: iqn.2014-04.be.linux-training:server1.target1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: VB9f23197b-af6cfb60
Size: 1074 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/sdb
Backing store flags:
LUN: 2
Type: disk
SCSI ID: IET 00010002
SCSI SN: VB8f554351-a1410828
Size: 1074 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/sdc
Backing store flags:
LUN: 3
Type: disk
SCSI ID: IET 00010003
SCSI SN: VB1035d2f0-7ae90b49
Size: 1074 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/sdd
Backing store flags:
Account information:
ACL information:
ALL
Then ask the iSCSI target server to send you the target names.
[root@server2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.95:3260
Starting iscsid: [ OK ]
192.168.1.95:3260,1 iqn.2014-04.be.linux-training:centos65.target1
We received iqn.2014-04.be.linux-training:centos65.target1.
We use this iqn to configure the username and the password (paul and hunter2) that we set
on the target server.
[root@server2 iscsi]# iscsiadm -m node --targetname iqn.2014-04.be.linux-tra\
ining:centos65.target1 --portal "192.168.1.95:3260" --op=update --name node.\
session.auth.username --value=paul
[root@server2 iscsi]# iscsiadm -m node --targetname iqn.2014-04.be.linux-tra\
ining:centos65.target1 --portal "192.168.1.95:3260" --op=update --name node.\
session.auth.password --value=hunter2
[root@server2 iscsi]# iscsiadm -m node --targetname iqn.2014-04.be.linux-tra\
ining:centos65.target1 --portal "192.168.1.95:3260" --op=update --name node.\
session.auth.authmethod --value=CHAP
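Instead of the service restart shown below, you can also log in to this target directly with
iscsiadm; a sketch:
[root@server2 iscsi]# iscsiadm -m node --targetname iqn.2014-04.be.linux-tra\
ining:centos65.target1 --portal "192.168.1.95:3260" --login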
A restart of the iscsi service will add three new devices to our system.
[root@server2 iscsi]# fdisk -l | grep Disk
Disk /dev/sda: 42.9 GB, 42949672960 bytes
Disk identifier: 0x0004f229
Disk /dev/sdb: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdc: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdd: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sde: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdf: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdg: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/VolGroup-lv_root: 41.4 GB, 41448112128 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/VolGroup-lv_swap: 973 MB, 973078528 bytes
Disk identifier: 0x00000000
[root@server2 iscsi]# service iscsi restart
Stopping iscsi: [ OK ]
Starting iscsi: [ OK ]
[root@server2 iscsi]# fdisk -l | grep Disk
Disk /dev/sda: 42.9 GB, 42949672960 bytes
Disk identifier: 0x0004f229
Disk /dev/sdb: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdc: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdd: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sde: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdf: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdg: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/VolGroup-lv_root: 41.4 GB, 41448112128 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/VolGroup-lv_swap: 973 MB, 973078528 bytes
Disk identifier: 0x00000000
Disk /dev/sdh: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdi: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdj: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
On Debian 6 you will also need to run aptitude install iscsitarget-dkms for the kernel
modules; on Debian 5 this is aptitude install iscsitarget-modules-`uname -r`. Ubuntu includes
the kernel modules in the main package.
This screenshot shows how to create three small files (100MB, 200MB and 300MB).
root@debby6:~# mkdir /iscsi
root@debby6:~# dd if=/dev/zero of=/iscsi/lun1.img bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.315825 s, 332 MB/s
root@debby6:~# dd if=/dev/zero of=/iscsi/lun2.img bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 1.08342 s, 194 MB/s
root@debby6:~# dd if=/dev/zero of=/iscsi/lun3.img bs=1M count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 1.36209 s, 231 MB/s
We need to declare these three files as iSCSI targets in /etc/iet/ietd.conf (used to be /etc/
ietd.conf).
root@debby6:/etc/iet# cp ietd.conf ietd.conf.original
root@debby6:/etc/iet# > ietd.conf
root@debby6:/etc/iet# vi ietd.conf
root@debby6:/etc/iet# cat ietd.conf
Target iqn.2010-02.be.linux-training:storage.lun1
IncomingUser isuser hunter2
OutgoingUser
Lun 0 Path=/iscsi/lun1.img,Type=fileio
Alias LUN1
Target iqn.2010-02.be.linux-training:storage.lun2
IncomingUser isuser hunter2
OutgoingUser
Lun 0 Path=/iscsi/lun2.img,Type=fileio
Alias LUN2
Target iqn.2010-02.be.linux-training:storage.lun3
IncomingUser isuser hunter2
OutgoingUser
Lun 0 Path=/iscsi/lun3.img,Type=fileio
Alias LUN3
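The iscsitarget daemon must reread this file before the new targets become available; a
restart is the simplest way (a sketch, assuming the standard Debian init script name):
root@debby6:/etc/iet# /etc/init.d/iscsitarget restart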
Now we can connect to the Target server and use iscsiadm to discover the devices it offers:
root@ubu1104:/etc/iscsi# iscsiadm -m discovery -t st -p 192.168.1.31
192.168.1.31:3260,1 iqn.2010-02.be.linux-training:storage.lun2
192.168.1.31:3260,1 iqn.2010-02.be.linux-training:storage.lun1
192.168.1.31:3260,1 iqn.2010-02.be.linux-training:storage.lun3
The targetcli tool is interactive and represents the configuration of the target in a structure
that resembles a directory tree with several files. Although this is explorable inside targetcli
with ls, cd and pwd, these are not files on the file system.
This tool also has tab-completion, which is very handy for the iqn names.
[root@centos7 ~]# targetcli
targetcli shell version 2.1.fb37
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/> cd backstores/
/backstores> ls
o- backstores ............................................................ [...]
o- block ................................................ [Storage Objects: 0]
o- fileio ............................................... [Storage Objects: 0]
o- pscsi ................................................ [Storage Objects: 0]
o- ramdisk .............................................. [Storage Objects: 0]
/backstores> cd block
/backstores/block> ls
o- block .................................................. [Storage Objects: 0]
/backstores/block> create server1.disk1 /dev/sdb
Created block storage object server1.disk1 using /dev/sdb.
/backstores/block> ls
o- block .................................................. [Storage Objects: 1]
o- server1.disk1 .................. [/dev/sdb (2.0GiB) write-thru deactivated]
/backstores/block> cd /iscsi
/iscsi> create iqn.2015-04.be.linux:iscsi1
Created target iqn.2015-04.be.linux:iscsi1.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> cd /iscsi/iqn.2015-04.be.linux:iscsi1/tpg1/acls
/iscsi/iqn.20...si1/tpg1/acls> create iqn.2015-04.be.linux:server2
Created Node ACL for iqn.2015-04.be.linux:server2
/iscsi/iqn.20...si1/tpg1/acls> cd iqn.2015-04.be.linux:server2
/iscsi/iqn.20...linux:server2> set auth userid=paul
Parameter userid is now 'paul'.
/iscsi/iqn.20...linux:server2> set auth password=hunter2
Parameter password is now 'hunter2'.
/iscsi/iqn.20...linux:server2> cd /iscsi/iqn.2015-04.be.linux:iscsi1/tpg1/luns
/iscsi/iqn.20...si1/tpg1/luns> create /backstores/block/server1.disk1
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.2015-04.be.linux:server2
/iscsi/iqn.20...si1/tpg1/luns> cd /iscsi/iqn.2015-04.be.linux:iscsi1/tpg1/portals
/iscsi/iqn.20.../tpg1/portals> create 192.168.1.128
Using default IP port 3260
Could not create NetworkPortal in configFS.
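The portal creation fails here because a default portal listening on 0.0.0.0:3260 was created
automatically together with the target (auto_add_default_portal=true); that existing portal
already covers every local IP address, including 192.168.1.128.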
/iscsi/iqn.20.../tpg1/portals> cd /
/> ls
o- / ..................................................................... [...]
o- backstores .......................................................... [...]
| o- block .............................................. [Storage Objects: 1]
| | o- server1.disk1 ................ [/dev/sdb (2.0GiB) write-thru activated]
| o- fileio ............................................. [Storage Objects: 0]
| o- pscsi .............................................. [Storage Objects: 0]
| o- ramdisk ............................................ [Storage Objects: 0]
o- iscsi ........................................................ [Targets: 1]
| o- iqn.2015-04.be.linux:iscsi1 ................................... [TPGs: 1]
| o- tpg1 ........................................... [no-gen-acls, no-auth]
| o- acls ...................................................... [ACLs: 1]
| | o- iqn.2015-04.be.linux:server2 ..................... [Mapped LUNs: 1]
| | o- mapped_lun0 ..................... [lun0 block/server1.disk1 (rw)]
| o- luns ...................................................... [LUNs: 1]
| | o- lun0 ............................. [block/server1.disk1 (/dev/sdb)]
| o- portals ................................................ [Portals: 1]
| o- 0.0.0.0:3260 ................................................. [OK]
o- loopback ..................................................... [Targets: 0]
/> saveconfig
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
[root@centos7 ~]#
Depending on your organisation's policy, you may need to configure the firewall and
SELinux. The screenshot below adds a firewall rule to allow all traffic over port 3260, and
disables SELinux.
[root@centos7 ~]# firewall-cmd --permanent --add-port=3260/tcp
[root@centos7 ~]# firewall-cmd --reload
[root@centos7 ~]# setenforce 0
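Note that setenforce 0 only switches SELinux to permissive mode until the next reboot; a
permanent change requires editing /etc/selinux/config.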
2. Set up an iSCSI Target and Initiator on two CentOS7/RHEL7 computers with the
following information:
This solution was done on Debian/Ubuntu/Mint. For RHEL/CentOS check the theory.
Decide (with a partner) on a computer to be the Target and another computer to be the
Initiator.
First install iscsitarget using the standard tools for installing software in your distribution.
Then use your knowledge from the previous chapter to set up a logical volume (/dev/vg/
lvol0) and use the RAID chapter to set up /dev/md0. Then perform the following step:
vi /etc/default/iscsitarget (set enable to true)
Now start the iscsitarget daemon and move over to the Initiator.
Then use iscsiadm -m discovery -t st -p 'target-ip' to see the iscsi devices on the Target.
Edit the files in /etc/iscsi/nodes/ as shown in the book. Then restart the iSCSI daemon and
run fdisk -l to see the iSCSI devices.
2. Set up an iSCSI Target and Initiator on two CentOS7/RHEL7 computers with the
following information:
/> cd /backstores/block
/backstores/block> ls
o- block .................................................. [Storage Objects: 0]
/backstores/block> create target.disk1 /dev/sdb
Created block storage object target.disk1 using /dev/sdb.
/backstores/block> create target.disk2 /dev/sdc
Created block storage object target.disk2 using /dev/sdc.
/backstores/block> create target.disk3 /dev/sdd
Created block storage object target.disk3 using /dev/sdd.
/backstores/block> ls
o- block .................................................. [Storage Objects: 3]
o- target.disk1 ................... [/dev/sdb (8.0GiB) write-thru deactivated]
o- target.disk2 ................... [/dev/sdc (8.0GiB) write-thru deactivated]
o- target.disk3 ................... [/dev/sdd (8.0GiB) write-thru deactivated]
/backstores/block> cd /iscsi
/iscsi> create iqn.2015-04.be.linux:target
Created target iqn.2015-04.be.linux:target.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> cd /iscsi/iqn.2015-04.be.linux:target/tpg1/acls
/iscsi/iqn.20...get/tpg1/acls> create iqn.2015-04.be.linux:initiator
Created Node ACL for iqn.2015-04.be.linux:initiator
/iscsi/iqn.20...get/tpg1/acls> cd iqn.2015-04.be.linux:initiator
/iscsi/iqn.20...nux:initiator> pwd
/iscsi/iqn.2015-04.be.linux:target/tpg1/acls/iqn.2015-04.be.linux:initiator
/iscsi/iqn.20...nux:initiator> set auth userid=paul
Parameter userid is now 'paul'.
/iscsi/iqn.20...nux:initiator> set auth password=hunter2
Parameter password is now 'hunter2'.
/iscsi/iqn.20...nux:initiator> cd /iscsi/iqn.2015-04.be.linux:target/tpg1/
/iscsi/iqn.20...x:target/tpg1> ls
o- tpg1 ................................................. [no-gen-acls, no-auth]
o- acls ............................................................ [ACLs: 1]
| o- iqn.2015-04.be.linux:initiator ......................... [Mapped LUNs: 0]
On the Initiator:
[root@centos7 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2015-04.be.linux:initiator
[root@centos7 ~]# vi /etc/iscsi/iscsid.conf
[root@centos7 ~]# grep ^node.session.au /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = paul
node.session.auth.password = hunter2
[root@centos7 ~]# fdisk -l 2>/dev/null | grep sd
Disk /dev/sda: 22.0 GB, 22038806528 bytes, 43044544 sectors
/dev/sda1 * 2048 1026047 512000 83 Linux
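What remains on the Initiator is discovery and login (a sketch; the Target IP address
192.168.1.150 is an assumption for this example):
[root@centos7 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.150
[root@centos7 ~]# iscsiadm -m node --login
After the login, fdisk -l should also list the disks offered by the Target.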
Chapter 13. introduction to
multipathing
13.3. network
This example uses three networks; make sure the iSCSI Target is connected to all three.
[root@server1 tgt]# ifconfig | grep -B1 192.168
eth1 Link encap:Ethernet HWaddr 08:00:27:4E:AB:8E
inet addr:192.168.1.98 Bcast:192.168.1.255 Mask:255.255.255.0
--
eth2 Link encap:Ethernet HWaddr 08:00:27:3F:A9:D1
inet addr:192.168.2.98 Bcast:192.168.2.255 Mask:255.255.255.0
--
eth3 Link encap:Ethernet HWaddr 08:00:27:94:52:26
inet addr:192.168.3.98 Bcast:192.168.3.255 Mask:255.255.255.0
Test the discovery over all three networks (this screenshot is newer than the one above).
[root@centos7 ~]# iscsiadm -m discovery -t st -p 192.168.1.150
192.168.1.150:3260,1 iqn.2015-04.be.linux:target1
[root@centos7 ~]# iscsiadm -m discovery -t st -p 192.168.2.150
192.168.2.150:3260,1 iqn.2015-04.be.linux:target1
[root@centos7 ~]# iscsiadm -m discovery -t st -p 192.168.3.150
192.168.3.150:3260,1 iqn.2015-04.be.linux:target1
This shows the fdisk output when leaving the user_friendly_names option at its default of
yes. The bottom three disks are the multipath devices to use.
[root@server2 ~]# fdisk -l | grep Disk
Disk /dev/sda: 42.9 GB, 42949672960 bytes
Disk identifier: 0x0004f229
Disk /dev/sdb: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdc: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdd: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sde: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdf: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdg: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/VolGroup-lv_root: 41.4 GB, 41448112128 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/VolGroup-lv_swap: 973 MB, 973078528 bytes
Disk identifier: 0x00000000
Disk /dev/sdh: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdi: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdj: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdl: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdn: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdk: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdm: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdp: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/sdo: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/mpathh: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/mpathi: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/mpathj: 1073 MB, 1073741824 bytes
Disk identifier: 0x00000000
[root@server2 ~]#
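You can list the paths behind each multipath device with multipath -ll (a sketch; the map
names will differ on your system):
[root@server2 ~]# multipath -ll mpathh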
The IET (iSCSI Enterprise Target) IDs should match the ones you see on the Target server.
[root@server1 ~]# tgt-admin -s | grep -e LUN -e IET -e dev
LUN information:
LUN: 0
SCSI ID: IET 00010000
LUN: 1
SCSI ID: IET 00010001
Backing store path: /dev/sdb
LUN: 2
SCSI ID: IET 00010002
Backing store path: /dev/sdc
LUN: 3
SCSI ID: IET 00010003
Backing store path: /dev/sdd
2. Uncomment the big 'defaults' section in /etc/multipath.conf and disable friendly names.
Verify that multipath can work. You may need to check the manual for /lib/udev/scsi_id and
for multipath.conf.
2. Uncomment the big 'defaults' section in /etc/multipath.conf and disable friendly names.
Verify that multipath can work. You may need to check the manual for /lib/udev/scsi_id and
for multipath.conf.
vi multipath.conf
defaults {
udev_dir /dev
polling_interval 10
path_selector "round-robin 0"
path_grouping_policy multibus
getuid_callout "/lib/udev/scsi_id --whitelisted --replace\
-whitespace --device=/dev/%n"
prio const
path_checker readsector0
rr_min_io 100
max_fds 8192
rr_weight priorities
failback immediate
no_path_retry fail
user_friendly_names no
}
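After editing the file, restart the multipath daemon and verify the result (a sketch):
[root@server2 ~]# service multipathd restart
[root@server2 ~]# multipath -ll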