CHAPTER 3. Disk Management
• # mount
• Options of the mount command
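A few illustrative invocations (the device names and mount points below are examples, not prescriptions):
# mount                                    # list all mounted filesystems
# mount -t ext4 /dev/sdb1 /mnt/data        # mount an ext4 filesystem explicitly
# mount -o ro,noexec /dev/sdb1 /mnt/data   # mount read-only, disallow executables
# mount -o remount,rw /mnt/data            # change options on a mounted filesystem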
3.1.3 Some Unix FS types
• Read the functionalities and limitations of:
• Ext2
• Ext3
• Ext4
• XFS
• Btrfs
• JFS
• ReiserFS
3.3 Disk Controllers and Interfaces
A few popular standards:
IDE (integrated device electronics)
ATA (AT attachment interface)
SCSI (small computer systems interface)
Fibre Channel
Differences: performance, parallelism
3.3.1 SCSI
Many versions:
SCSI-1 (1986): 8-bit, 5 MB/s
SCSI-2 (1990): added command queuing, DMA, more
Fast SCSI-2: 8-bit, 10 MB/s
Fast/Wide SCSI-2: 16-bit, 20 MB/s
Ultra SCSI: 8-bit, 20 MB/s
Wide Ultra SCSI: 16-bit, 40 MB/s
Wide Ultra2 SCSI: 16-bit, 80 MB/s
Wide Ultra3 SCSI: 16-bit, 160 MB/s
Ultra-320 and Ultra-640 SCSI: 320 and 640 MB/s
3.3.2 IDE a.k.a. ATA
Integrated Drive Electronics / AT Attachment
Very short cable lengths (18 in!)
ATA-2 added DMA and LBA (to get beyond the BIOS 504 MB limit)
ATA-3 added power management and self-monitoring (16 MB/s)
Ultra-ATA added Ultra DMA/33, /66, and /133 modes (33-133 MB/s)
Hard disks with this interface were last produced in 2013
The ATAPI interface allows non-ATA devices, such as CD-ROMs, to connect
3.3.2 SATA
Now standard equipment
Fast: 150-600 MB/s (16 Gbit/s now available)
Software compatible with parallel ATA
One drive per controller
Thin cables
3.3.3 SCSI vs. SATA
SCSI traditionally beats SATA technically, but may not be worth the price premium
In single-user systems, SATA cheaply provides roughly 85% of SCSI performance
For the best possible performance, SCSI is often better
e.g., in servers and multi-user systems
handles multiple simultaneous requests and more devices better
higher-end equipment (faster, better warranty, etc.)
SATA technology is quite good
Usually better price/performance than SCSI
Still subject to much debate
3.3.4 Controller cards
A controller card is a device that sits between a host system and the storage network and allows the two to communicate with each other. Host adapters can be integrated on the motherboard or provided on a separate expansion card.
There are two types of controller cards: Host Bus Adapters (HBAs) and RAID controller cards.
An HBA is an expansion card that plugs into a slot (PCI-e) and provides fast, reliable, non-RAID I/O between the host and storage devices.
HBAs offer low cost, high connectivity, limited functionality, and the best raw performance.
A RAID controller card is similar to an HBA but adds redundancy, optimizes performance, and reduces latency.
3.3.4 Controller cards
RAID (Redundant Array of Independent Disks) is a data storage structure that allows a system to combine two or more physical storage devices into a logical unit that the attached system sees as a single drive.
RAID can be hardware based or software implemented.
Hardware RAID resides on a PCI-X or PCI-e card, or on the motherboard as an integrated RAID-on-Chip (ROC).
• Better performance than software RAID
• RAID cards can easily be swapped out for replacement and upgrades
• Cached data can be preserved (typically via a battery-backed cache) to prevent loss in a power failure
• More expensive than software RAID
Software RAID runs entirely on the CPU of the host system.
• Lower cost
• Lower RAID performance, since the CPU also runs the OS and applications
• No protection against data loss in a power failure
3.3.5 RAID levels
Different levels of RAID implementation.
RAID 0, also known as a striped volume, spreads data over multiple drives to enhance performance. There is no redundancy and hence no data protection. It provides the highest performance.
3.3.5 RAID levels
RAID 1 creates an exact copy (mirror) of some data or a disk.
This layout is useful when read performance or reliability is more
important than write performance or the resulting data storage
capacity.
3.3.5 RAID levels
RAID 2 stripes data at the bit level (rather than block level)
and uses Hamming codes for error correction.
A separate disk is used for parity.
It is rarely used in practice
3.3.5 RAID levels
RAID 3 stripes data at the byte level with a dedicated parity
disk.
It is rarely used in practice
3.3.5 RAID levels
RAID 4 stripes data at the block level with a dedicated parity disk.
It is rarely used in practice
3.3.5 RAID levels
RAID 5 combines data striping, for enhanced performance, with distributed parity, for data protection, to provide a recovery path in case of failure.
Best cost/performance balance for multi-drive environments.
3.3.5 RAID levels
RAID 6 provides double redundancy and the ability to sustain
two drive failures. Data is striped across at least 4 physical
drives. A second parity scheme is used to store and recover
data.
3.3.5 RAID Combinations
RAID 10 combines RAID 0 (data striping) and RAID 1 (disk mirroring). It has the highest performance with the highest data protection.
RAID 50 combines multiple RAID 5 sets with data striping (RAID 0) to increase capacity and performance without adding disks to each RAID 5 array. Increased capacity and performance for multi-array RAID 5 environments.
RAID 60 combines multiple RAID 6 sets with data striping (RAID 0) to increase capacity and performance without adding disks to each RAID 6 array. It has the highest fault tolerance and the highest data protection.
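These combinations can be built with the same mdadm tool covered in 3.3.5 below. As a sketch, a four-drive RAID 10 array (the device names are placeholders):
# mdadm -C /dev/md1 -l 10 -n 4 \
> /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1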
3.3.5 Configuring RAID on Linux
1. Check your version of Linux (# uname -r)
2. Check to see if your system contains the /proc/mdstat file
ls /proc/mdstat
3. Use the modprobe command to check that you can load the different RAID kernel modules
modprobe raid6
4. Check to see if your system has the Multiple Disk or Device Administration (mdadm) utility installed
dpkg -s mdadm
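Taken together, the checks in steps 1-4 might look like this on a Debian-style system (output varies; if mdadm is missing it can typically be installed with apt):
$ uname -r                      # kernel version
$ cat /proc/mdstat              # lists active md arrays, if any
$ sudo modprobe raid6           # try loading the RAID 6 module
$ dpkg -s mdadm                 # shows package status if installed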
3.3.5 Configuring RAID on Linux
5. Getting a drive ready for RAID membership
• Partition the disk first using fdisk or parted
• Use the lsblk command to view the drives on the system
• Implement the RAID array with mdadm:
# mdadm [--mode] raid-device [options] component-devices
6. Create the RAID array
# mdadm -C /dev/md0 -l 6 -n 4 \
> /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
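After creation, it is worth verifying the array and making it persistent. A minimal sketch (the path /etc/mdadm/mdadm.conf is the Debian/Ubuntu convention; other distributions may use /etc/mdadm.conf):
# cat /proc/mdstat                                  # watch the array build
# mdadm --detail /dev/md0                           # detailed array status
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # record the array config
# mke2fs -t ext4 /dev/md0                           # create a filesystem on it
# mount /dev/md0 /mnt/raid                          # example mount point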
3.4 Adding a disk to Linux
Install new hardware
Verify that the hardware is recognized by the BIOS or controller
– Boot, and make certain the device files already exist in /dev, e.g., /dev/sdc
Use fdisk/parted (or similar) to partition the drive
Verify the system type of each partition
Use mke2fs (-t ext4) on each regular partition to create (an ext4) filesystem
Use mkswap to initialize swap partitions
Add entries to /etc/fstab
Mount by hand, then reboot to verify everything
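As a concrete sketch, for a new disk appearing as /dev/sdc (device name, partition layout, and mount point are illustrative):
# fdisk /dev/sdc                  # create, e.g., /dev/sdc1 (data) and /dev/sdc2 (swap)
# mke2fs -t ext4 /dev/sdc1        # ext4 filesystem on the data partition
# mkswap /dev/sdc2                # initialize the swap partition
# swapon /dev/sdc2                # enable it now
# mkdir /data
# mount /dev/sdc1 /data           # mount by hand
Then add entries like these to /etc/fstab and reboot to verify:
/dev/sdc1  /data  ext4  defaults  0 2
/dev/sdc2  none   swap  sw        0 0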
hdparm: test/set hd params
hdparm will do simple performance tests
sudo /sbin/hdparm -Tt /dev/sda
(-T times cached reads; -t times buffered device reads)
Read man hdparm
3.4 Disk partitions
A partition is a logical subset of the physical disk. Information about partitions is stored in a partition table, which records the first and last sectors of each partition, its type, and further details.
3.4 Disk partitions
Drives are divided into one or more partitions that are treated
independently
Partitions make backups easier, confine damage
Typically have at least two or three
root partition (one)
everything needed to bring system up in single-user mode
(often copied onto another disk for emergencies)
swap partition (at least one)
stores virtual memory when physical memory is insufficient
user partition(s)
home directories, data files, etc.
boot partition - boot loader, kernel, etc.
3.4 Disk partitions (MBR and GPT)
There are two main ways of storing partition information on
hard disks. The first one is MBR (Master Boot Record), and
the second one is GPT (GUID Partition Table).
MBR
The partition table is stored on the first sector of a disk, called
the Boot Sector, along with a boot loader, which on Linux
systems is usually the GRUB bootloader.
But MBR has a series of limitations that hinder its use on
modern systems, like the inability to address disks of more
than 2 TB in size, and the limit of only 4 primary partitions per
disk.
3.4 Disk partitions (MBR and GPT)
GUID Partition Table (GPT)
There is no practical limit on disk size, and the maximum number of partitions is limited only by the operating system itself.
It is commonly found on modern machines that use UEFI instead of the old PC BIOS.
3.4 Disk partitions (MBR and GPT)
Managing MBR partitions using fdisk
The standard utility for managing MBR partitions on Linux is fdisk. It is an interactive, menu-driven utility. To use it, type fdisk followed by the device name corresponding to the disk you want to edit, e.g., sudo fdisk /dev/sda
You can create, edit, or delete partitions at will, but nothing will be written to disk until you use the write (w) command; the quit (q) command exits without saving changes
3.4 Disk partitions (MBR and GPT)
Managing MBR partitions using fdisk
fdisk commands
p: print the partition table
n: create a partition
F: show unallocated space
d: delete a partition
t: change the partition type (e.g., to Linux swap)
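A minimal interactive session creating a single partition on an empty disk might look like this (assuming a spare disk at /dev/sdb):
$ sudo fdisk /dev/sdb
Command (m for help): n     (new partition; accept the defaults to use the whole disk)
Command (m for help): p     (review the resulting partition table)
Command (m for help): w     (write the changes to disk and exit)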
3.4 Disk partitions (MBR and GPT)
Managing GUID partitions using gdisk
gdisk commands
p: print the partition table
n: create a partition
F: show unallocated space
d: delete a partition
s: sort the partitions on a disk to avoid gaps in the numbering sequence
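Usage mirrors fdisk; for example, to print a GPT disk's partition table non-interactively (same example disk as above):
$ sudo gdisk -l /dev/sdb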
3.5 Tools to manage partitions
Tools for manipulating partitions are:
fdisk and its derivatives like cfdisk, sfdisk
parted and its variants like qtparted and gparted
3.5 Logical Volumes
Partitions are static, but sometimes you will want to change them
LVM (Linux Logical Volume Manager) lets you combine partitions
and drives to present an aggregate volume as a regular block
device (just like a disk or partition)
Use and allocate storage more efficiently
Move logical volumes among different physical devices
Grow and shrink logical volume sizes dynamically
Take “snapshots” of whole filesystems
Replace on-line drives without interrupting service
Similar systems are available for other OSes
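A minimal LVM workflow, as a sketch (the names vg0 and lv_data are arbitrary examples):
# pvcreate /dev/sdb1 /dev/sdc1          # mark partitions as physical volumes
# vgcreate vg0 /dev/sdb1 /dev/sdc1      # combine them into a volume group
# lvcreate -L 100G -n lv_data vg0       # carve out a 100 GB logical volume
# mke2fs -t ext4 /dev/vg0/lv_data       # put a filesystem on it
# mount /dev/vg0/lv_data /mnt/data
# lvextend -L +50G -r /dev/vg0/lv_data  # later: grow by 50 GB, resizing the fs too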
3.6 LVM
Sample organization of LVM volumes (figure not reproduced)
3.6 LVM configuration
The configuration file of LVM is found in the directory /etc/lvm.
The file /etc/lvm/lvm.conf contains the global parameters.
3.7 Backup
Assignment:
Read about the following backup types:
Full
Incremental
Differential
Snapshot
3.7 Backup
GUI and/or web-based solutions
Amanda
Bacula
Bareos
Duplicity
BackupPC
3.7 Backup
Command line utilities
cpio
dd
dump/restore
rsync
star
tar
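As a sketch of how these tools map onto the backup types above: GNU tar implements full and incremental backups via a snapshot file, while rsync is often used for simple mirrors (all paths are examples):
# tar -cf /backup/home-full.tar --listed-incremental=/backup/home.snar /home
(first run with a fresh .snar file: a full backup)
# tar -cf /backup/home-incr1.tar --listed-incremental=/backup/home.snar /home
(later runs: incremental, only files changed since the last run)
# rsync -a --delete /home/ backuphost:/backups/home/
(mirror /home to another host)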