Create Centralized Secure Storage Using iSCSI Target

The key takeaways are that iSCSI can be used to combine local storage from multiple servers into a single storage domain in Red Hat Virtualization. It involves setting up an iSCSI target on RHEL to export a logical volume, and clients can connect to the target to access the storage.

To set up an iSCSI target on RHEL, a logical volume is used to back the target. TargetCLI is used to create the target, backstores, portals and expose the target. This makes the logical volume available over the network as an iSCSI target.

To connect an iSCSI client to a target, the target name and portal are discovered. iscsiadm is used in node mode to login to the target, which establishes a session and makes the target's LUNs available as SCSI devices on the client.

Setting up iSCSI Export on Red Hat Enterprise Linux 7
April 24, 2019 – Kedar Vijay Kulkarni

Recently, when I was working with Red Hat Virtualization, I wanted to try to
combine the local storage of more than one server system into a single
Storage Domain in Red Hat Virtualization. After a lot of pondering, I came
across the fact that for an Internet Small Computer Systems Interface (iSCSI)
datastore I can use multiple backend block storage devices.

So I decided to set up our Red Hat Enterprise Linux (RHEL) server to expose
about 80% of its local disk over iSCSI to be used as the storage domain
backend in Red Hat Virtualization. In this post, I will go over how I set up
iSCSI on RHEL. The steps in this article may apply to CentOS (and perhaps
Fedora) as well.

In my experience, users tend to be more familiar with NFS than iSCSI. If you
haven’t worked with, or heard of it, we have more information on iSCSI in the
Red Hat Enterprise Linux 7 Installation Guide Appendix B. Ready? Let’s move
on to setup. Note that if you’re doing this on Fedora instead of RHEL, you
need to replace "yum" with "dnf".

Setup:
1. To start, you will need a hard disk partition or a logical volume
that you can use. This post assumes you already have a logical volume
that is unused and can be used for iSCSI. If you want to know more about
how to set up logical volumes, see "A Linux user's guide to Logical
Volume Management" on OpenSource.com; a minimal sketch also follows below.
Let's assume the path to your logical volume is /dev/vg1/lv_iscsi_1.
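If you still need to create such a logical volume, a minimal, hedged sketch looks like the following. The spare disk /dev/sdX is a placeholder, and carving out roughly 80% of the volume group mirrors the sizing mentioned earlier; adjust the device name and size to your environment.

# pvcreate /dev/sdX
# vgcreate vg1 /dev/sdX
# lvcreate -n lv_iscsi_1 -l 80%VG vg1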
2. The next thing you will want to do is install targetcli, the package
needed to set up iSCSI. To install it, you may run the following command:

# yum install -y targetcli

For RHEL you may need a valid subscription to the relevant repositories. Also,
be sure to run it as root or with sudo access.

3. Once it is installed, you need to run targetcli to get the CLI:


[root@server1 ~]# targetcli

targetcli shell version 2.1.fb46

Copyright 2011-2013 by Datera, Inc and others.

For help on commands, type 'help'.

/iscsi>

4. Now, using the logical volume, we will create the block storage for iSCSI.

/iscsi> cd /backstores/block

/backstores/block> create iscsi_block_store_1 /dev/vg1/lv_iscsi_1

Created block storage object iscsi_block_store_1 using /dev/vg1/lv_iscsi_1.

5. Create an iSCSI Target:

/backstores/block> cd /iscsi

/iscsi> create iqn.2019-03.com.redhat:target1

Created target iqn.2019-03.com.redhat:target1.

Created TPG 1.

Global pref auto_add_default_portal=true

Created default portal listening on all IPs (0.0.0.0), port 3260.

/iscsi>

If required, you may add an additional portal with a different IP and port as follows:

/iscsi> cd iqn.2019-03.com.redhat:target1/tpg1/portals/

/iscsi/iqn.20.../tpg1/portals> ls

o- portals ............................................... [Portals: 1]
  o- 0.0.0.0:3260 ................................................ [OK]

/iscsi/iqn.20.../tpg1/portals> create ip_port=3333

Binding to INADDR_ANY (0.0.0.0)

Created network portal 0.0.0.0:3333.

/iscsi/iqn.20.../tpg1/portals> ls

o- portals ............................................... [Portals: 2]
  o- 0.0.0.0:3260 ................................................ [OK]
  o- 0.0.0.0:3333 ................................................ [OK]

/iscsi/iqn.20.../tpg1/portals>

Specifying ip_address= in the create command above binds the portal to the
specified IP address instead of the default 0.0.0.0:

/iscsi/iqn.20.../tpg1/portals> create ip_address=10.8.197.253 ip_port=5555

Created network portal 10.8.197.253:5555.

/iscsi/iqn.20.../tpg1/portals> ls

o- portals ............................................... [Portals: 3]
  o- 0.0.0.0:3260 ................................................ [OK]
  o- 0.0.0.0:3333 ................................................ [OK]
  o- 10.8.197.253:5555 ........................................... [OK]

6. Create an Access Control List (ACL) for client machines. This means
that you need to get the iSCSI initiator name and map it to this target.
Once that is done, your client machine will be able to connect to this
iSCSI target.
For this part, go to your client machine. Usually the initiator name can be
found in /etc/iscsi/initiatorname.iscsi if the iscsi-initiator-utils package is
installed. If it is not installed, it can be installed by running:
yum install -y iscsi-initiator-utils

[root@client1 ~]# cat /etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.1994-05.com.redhat:39ee68f3cf5e

[root@client1 ~]#

Copy the InitiatorName from the client machine.

Once we have that, go back to the server machine, where we can create the ACL as
follows:

/iscsi> cd /iscsi/iqn.2019-03.com.redhat:target1/

/iscsi/iqn.20...edhat:target1> cd tpg1/acls

/iscsi/iqn.20...et1/tpg1/acls> create iqn.1994-05.com.redhat:39ee68f3cf5e

Created Node ACL for iqn.1994-05.com.redhat:39ee68f3cf5e

/iscsi/iqn.20...et1/tpg1/acls>

7. Now we need to create a LUN (Logical Unit Number) under this target:

/iscsi/iqn.20...et1/tpg1/acls>cd ../luns

/iscsi/iqn.20...et1/tpg1/luns> create /backstores/block/iscsi_block_store_1

Created LUN 0.

Created LUN 0->0 mapping in node ACL iqn.1994-05.com.redhat:39ee68f3cf5e

/iscsi/iqn.20...et1/tpg1/luns>cd /

8. Now that it is created, we can verify that the target is configured correctly:


/> ls /iscsi/iqn.2019-03.com.redhat:target1/

The output should be similar to the screenshot in the original post.
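Since the screenshot is not reproduced here, a rough sketch of the expected layout, based on the objects created in the steps above, looks like this (exact dots, counts, and sizes will vary on your system):

o- iqn.2019-03.com.redhat:target1 ............................ [TPGs: 1]
  o- tpg1 ........................................ [no-gen-acls, no-auth]
    o- acls ................................................. [ACLs: 1]
    | o- iqn.1994-05.com.redhat:39ee68f3cf5e ......... [Mapped LUNs: 1]
    o- luns ................................................. [LUNs: 1]
    | o- lun0 ........ [block/iscsi_block_store_1 (/dev/vg1/lv_iscsi_1)]
    o- portals ........................................... [Portals: 3]
      o- 0.0.0.0:3260 ............................................ [OK]
      o- 0.0.0.0:3333 ............................................ [OK]
      o- 10.8.197.253:5555 ....................................... [OK]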


9. You should then save the config and exit out:

/> saveconfig

Configuration saved to /etc/target/saveconfig.json

/> exit

Global pref auto_save_on_exit=true

Last 10 configs saved in /etc/target/backup/.

Configuration saved to /etc/target/saveconfig.json

[root@server1 ~]#

10. Once this is done, we need to start the target service and make
sure we enable it so that it keeps running across reboots.

[root@server1 ~]# systemctl start target

[root@server1 ~]# systemctl enable target

And check the status using:

[root@server1 ~]# systemctl status target

11. If you are running firewalld or iptables, you need to make sure you
add port 3260/tcp as an exception (allow it through the firewall) so that
communication between the client and the iSCSI datastore is not blocked. With
firewalld you can do that as follows:
[root@server1 ~]# firewall-cmd --add-port=3260/tcp --permanent

success

[root@server1 ~]# firewall-cmd --reload

success

[root@server1 ~]# firewall-cmd --list-ports

3260/tcp

[root@server1 ~]#

12. If you have set up iSCSI correctly on your server, then you can
go to your client and run the following command to discover the iSCSI
targets on the server, as shown here:

[root@server1 ~]# iscsiadm -m discovery -t st -p 10.8.197.253

10.8.197.253:3260,1 iqn.2019-03.com.redhat:target1

In this command, we use -m to specify the mode in which the command is being
executed. In discovery mode we discover the available targets at the portal
(which can be specified in IP[:port] format) given with -p, and -t corresponds
to the type used for this discovery. The st argument stands for sendtargets.
SendTargets is a native iSCSI protocol which allows each iSCSI target to
send a list of available targets to the initiator.

Note: You may install iscsi-initiator-utils on the same machine where you
have set up targetcli and still be able to perform the previous step. You can
use the IP address or localhost in the discovery and login commands.
13. Now that we discovered the target, we can log into it as follows:

[root@server1 ~]# iscsiadm -m node -T iqn.2019-03.com.redhat:target1 -p 10.8.197.253 -l

Logging in to [iface: default, target: iqn.2019-03.com.redhat:target1, portal: 10.8.197.253,3260] (multiple)

Login to [iface: default, target: iqn.2019-03.com.redhat:target1, portal: 10.8.197.253,3260] successful.

In this command, -T stands for the target name, and -l stands for login, which,
in node mode, will only log in to the specified record, while in discovery mode
it will log in to all discovered targets.
To find out which device name the iSCSI disk is attached as (on RHEL
or CentOS), you can do:

[root@server1 ~]# cat /var/log/messages | grep Attached

Mar 11 21:33:14 dhcp-8-197-253 kernel: scsi 3:0:0:0: alua: Attached

Mar 11 21:33:14 dhcp-8-197-253 kernel: sd 3:0:0:0: Attached scsi generic sg3 type 0

Mar 11 21:33:14 dhcp-8-197-253 kernel: sd 3:0:0:0: [sdb] Attached SCSI disk

As you can see above, the iSCSI disk is attached as sdb, which means that if you
run fdisk -l on that device it should be listed.

[root@server1 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 1073 MB, 1073741824 bytes, 2097152 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 33550336 bytes

[root@server1 ~]#
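As a hedged aside (not part of the original steps), you can also usually spot the new disk with lsblk or with iscsiadm's session view:

# lsblk                      # the iSCSI LUN shows up as a new disk, e.g. sdb
# iscsiadm -m session -P 3   # prints the attached SCSI devices for each session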

Now you can create a filesystem on it and mount it on your system. You may
want to add the mount information to /etc/fstab so that the mount remains
persistent across reboots. For more on creating a filesystem, my article on
OpenSource.com covers the steps.
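As a hedged example (the filesystem type and mount point are arbitrary choices for illustration, and /dev/sdb is the device discovered above), that could look like:

# mkfs.xfs /dev/sdb
# mkdir -p /mnt/iscsi
# mount /dev/sdb /mnt/iscsi
# blkid /dev/sdb     # note the UUID to reference in /etc/fstab
# echo 'UUID=<uuid-from-blkid>  /mnt/iscsi  xfs  _netdev  0 0' >> /etc/fstab

Using the filesystem UUID and the _netdev mount option (which defers mounting until the network is up) is generally recommended for iSCSI-backed filesystems.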

Create Centralized Secure Storage using iSCSI Target / Initiator on RHEL/CentOS 7

Create Centralized Secure Storage using iSCSI Target on RHEL/CentOS/Fedora – Part I
Babin Lonston, April 4, 2015, Categories: Storage

iSCSI is a block-level protocol for sharing raw storage devices over TCP/IP
networks. Sharing and accessing storage over iSCSI can be done with existing
IP and Ethernet infrastructure such as NICs, switches, and routers. An iSCSI
target is a remote hard disk presented from a remote iSCSI server (the target).

Install iSCSI Target in Linux

We don't need many resources on the client side for stable connectivity and
performance. The iSCSI server is called the target; it shares storage from the
server. The iSCSI client is called the initiator; it accesses the storage shared
by the target server. There are also dedicated iSCSI adapters available on the
market for large storage deployments such as SAN storage.
Why do we need an iSCSI adapter for a large storage area?

Ethernet adapters (NICs) are designed to transfer packetized, file-level data
among systems, servers and storage devices such as NAS; they were not designed
for transferring block-level data over the network.

Features of iSCSI Target

- It is possible to run several iSCSI targets on a single machine.
- A single machine can make multiple iSCSI targets available on the iSCSI SAN.
- The target is the storage server and makes the storage available to the
initiator (client) over the network.
- These storage devices are pooled together and made available on the network
as iSCSI LUNs (Logical Unit Numbers).
- iSCSI supports multiple connections within the same session.
- The iSCSI initiator discovers the targets on the network, then authenticates
and logs in to the LUNs to access the remote storage locally.
- We can install any operating system on those locally mounted LUNs, just as we
would on our base systems.
Why the need for iSCSI?

In virtualization we need storage with high redundancy and stability, and iSCSI
provides all of that at low cost. It lets us build SAN storage at a low price
compared to Fibre Channel SANs, using standard equipment we already have, such
as NICs and Ethernet switches.

Let's start installing and configuring centralized secure storage using an iSCSI
target. For this guide, I've used the following setup.

- We need separate systems to set up the iSCSI target server and the
initiator (client).
- Multiple hard disks can be added in a large storage environment, but here we
are using only one additional drive besides the base installation disk.
- We are using only two drives: one for the base server installation, and the
other for the storage (LUNs) which we are going to create in Part II of this series.

Master Server Setup
- Operating System – CentOS release 6.5 (Final)
- iSCSI Target IP – 192.168.0.200
- Ports Used: TCP 860, 3260
- Configuration file: /etc/tgt/targets.conf

This series, Preparation for Setting up Centralized Secure Storage using iSCSI,
runs through Parts 1-3 and covers the following topics:
Part 1: Create Centralized Secure Storage using iSCSI Target
Part 2: How to Create and Setup LUNs using LVM in "iSCSI Target Server"
Part 3: Centralized Secure Storage (iSCSI) – "Initiator Client" Setup

Installing iSCSI Target


Open a terminal and use the yum command to search for the package that needs
to be installed for the iSCSI target.

# yum search iscsi

Sample Output

========================== N/S matched: iscsi =======================

iscsi-initiator-utils.x86_64 : iSCSI daemon and utility programs

iscsi-initiator-utils-devel.x86_64 : Development files for iscsi-initiator-utils

lsscsi.x86_64 : List SCSI devices (or hosts) and associated information

scsi-target-utils.x86_64 : The SCSI target daemon and utility programs

From the search results above, choose the target package and install it to play around:
# yum install scsi-target-utils -y

Install iSCSI Utils


List the files of the installed package to find the default configuration file,
service, and man page locations.

# rpm -ql scsi-target-utils.x86_64

List All iSCSI Files


Let's start the iSCSI service and check that it is up and running. The iSCSI
target service is named tgtd.
# /etc/init.d/tgtd start

# /etc/init.d/tgtd status

Start iSCSI Service


Now we need to configure it to start automatically at system start-up.

# chkconfig tgtd on

Next, verify that the runlevels are configured correctly for the tgtd service.

# chkconfig --list tgtd

Enable iSCSI on Startup


Let's use tgtadm to list the targets and LUNs currently configured on our server.
# tgtadm --mode target --op show

The tgtd service is installed and running, but there is no output from the above
command because we have not yet defined any LUNs on the target server. For the
manual page, run the man command.

# man tgtadm

iSCSI Man Pages

Finally, we need to add iptables rules for iSCSI if iptables is deployed on your
target server. First, find the port number of the iSCSI target using the following
netstat command. The target always listens on TCP port 3260.

# netstat -tulnp | grep tgtd

Find iSCSI Port


Next, add the following rules to allow iSCSI target discovery traffic through
iptables.

# iptables -A INPUT -i eth0 -p tcp --dport 860 -m state --state NEW,ESTABLISHED -j ACCEPT

# iptables -A INPUT -i eth0 -p tcp --dport 3260 -m state --state NEW,ESTABLISHED -j ACCEPT

Open iSCSI Ports

Add iSCSI Ports to Iptables


Note: The rules may vary according to your default chain policy. Then save the
iptables rules and restart iptables.

# iptables-save

# /etc/init.d/iptables restart

Restart iptables

Here we have deployed a target server that shares LUNs with any initiator that
authenticates with the target over TCP/IP. This is suitable for small to
large-scale production environments.

In my upcoming articles, I will show you how to create LUNs using LVM on the
target server and how to share LUNs with client machines; till then, stay tuned
to TecMint for more such updates, and don't forget to leave your valuable comments.

iSCSI is a block-level protocol for managing storage devices over TCP/IP
networks, especially over long distances. An iSCSI target is a remote hard disk
presented from a remote iSCSI server (the target). On the other hand, the iSCSI
client is called the initiator, and it accesses the storage that is shared by
the target machine.
The following machines have been used in this article:

Server (Target):
Operating System – Red Hat Enterprise Linux 7
iSCSI Target IP – 192.168.0.29
Ports Used: TCP 860, 3260

Client (Initiator):
Operating System – Red Hat Enterprise Linux 7
IP Address – 192.168.0.30
Ports Used: TCP 3260

Step 1: Installing Packages on iSCSI Target


To install the packages needed for the target (we will deal with the client later),
do:

# yum install targetcli -y

When the installation completes, we will start and enable the service as follows:

# systemctl start target

# systemctl enable target

Finally, we need to allow the service in firewalld:

# firewall-cmd --add-service=iscsi-target

# firewall-cmd --add-service=iscsi-target --permanent

And last but not least, we must not forget to allow the iSCSI target discovery:

# firewall-cmd --add-port=860/tcp
# firewall-cmd --add-port=860/tcp --permanent

# firewall-cmd --reload

Step 2: Defining LUNs in Target Server


Before proceeding to define LUNs on the Target, we need to create two logical
volumes as explained in Part 6 of the RHCSA series ("Configuring system storage").
This time we will name them vol_projects and vol_backups and place them inside
a volume group called vg00, as shown in Fig. 1. Feel free to choose the space
allocated to each LV:
Fig 1: Two Logical Volumes Named vol_projects and vol_backups
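If you still need to create them, a hedged sketch could look like the following; /dev/sdb is a placeholder for a spare disk, and the sizes are arbitrary examples:

# pvcreate /dev/sdb
# vgcreate vg00 /dev/sdb
# lvcreate -n vol_projects -L 10G vg00
# lvcreate -n vol_backups -L 10G vg00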

After creating the LVs, we are ready to define the LUNs in the Target in order to
make them available to the client machine.
As shown in Fig. 2, we will open a targetcli shell and issue the following
commands, which will create two block backstores (local storage resources that
represent the LUNs the initiator will actually use) and an iSCSI Qualified
Name (IQN), a method of addressing the target server.
Please refer to Page 32 of RFC 3720 for more details on the structure of the IQN.
In particular, the text after the colon character (:tgt1) specifies the name of the
target, while the text before (server:) indicates the hostname of the target inside
the domain.
# targetcli

# cd backstores

# cd block

# create server.backups /dev/vg00/vol_backups

# create server.projects /dev/vg00/vol_projects

# cd /iscsi

# create iqn.2016-02.com.tecmint.server:tgt1

Fig 2: Define LUNs in Target Server

With the above step, a new TPG (Target Portal Group) was created along with
the default portal (a pair consisting of an IP address and a port which is the way
initiators can reach the target) listening on port 3260 of all IP addresses.
If you want to bind your portal to a specific IP (the Target’s main IP, for example),
delete the default portal and create a new one as follows (otherwise, skip the
following targetcli commands. Note that for simplicity we have skipped them
as well):

# cd /iscsi/iqn.2016-02.com.tecmint.server:tgt1/tpg1/portals

# delete 0.0.0.0 3260

# create 192.168.0.29 3260

Now we are ready to proceed with the creation of LUNs. Note that we are using
the backstores we previously created (server.backups and server.projects). This
process is illustrated in Fig. 3:

# cd iqn.2016-02.com.tecmint.server:tgt1/tpg1/luns

# create /backstores/block/server.backups

# create /backstores/block/server.projects

Fig 3: Create LUNs in iSCSI Target Server

The last part in the Target configuration consists of creating an Access Control
List to restrict access on a per-initiator basis. Since our client machine is
named “client”, we will append that text to the IQN. Refer to Fig. 4 for details:

# cd ../acls
# create iqn.2016-02.com.tecmint.server:client

Fig 4: Create Access Control List for Initiator

At this point we can use the targetcli shell to show all configured resources, as we
can see in Fig. 5:

# targetcli

# cd /

# ls

Fig 5: Use targetcli to Check Configured Resources


To quit the targetcli shell, simply type exit and press Enter. The configuration
will be saved automatically to /etc/target/saveconfig.json.
As you can see in Fig. 5 above, we have a portal listening on port 3260 of all IP
addresses as expected. We can verify that using netstat command (see Fig. 6):

# netstat -npltu | grep 3260

Fig 6: Verify iSCSI Target Server Port Listening

This concludes the Target configuration. Feel free to restart the system and
verify that all settings survive a reboot. If not, make sure to open the necessary
ports in the firewall configuration and to start the target service on boot. We are
now ready to set up the Initiator and connect it to the target.
Step 3: Setting up the Client Initiator
In the client we will need to install the iscsi-initiator-utils package, which
provides the daemon for the iSCSI protocol (iscsid) as well as iscsiadm,
the administration utility:

# yum update && yum install iscsi-initiator-utils

Once the installation completes, open /etc/iscsi/initiatorname.iscsi and replace
the default initiator name (commented in Fig. 7) with the name that was
previously set in the ACL on the server (iqn.2016-02.com.tecmint.server:client).
Then save the file and run iscsiadm in discovery mode pointing to the target. If
successful, this command will return the target information as shown in Fig. 7:

# iscsiadm -m discovery -t st -p 192.168.0.29


Fig 7: Setting Up Client Initiator

The next step consists in restarting and enabling the iscsid service:

# systemctl start iscsid

# systemctl enable iscsid

and contacting the target in node mode. This should result in kernel-level
messages, which, when captured through dmesg, show the device identification
that the remote LUNs have been given in the local system (sde and sdf in Fig. 8):

# iscsiadm -m node -T iqn.2016-02.com.tecmint.server:tgt1 -p 192.168.0.29 -l

# dmesg | tail
Fig 8: Connecting to iSCSI Target Server in Node Mode

From this point on, you can create partitions, or even LVs (and filesystems on top
of them) as you would do with any other storage device. For simplicity, we will
create a primary partition on each disk that will occupy its entire available space,
and format it with ext4.
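A hedged sketch of that partitioning and formatting step, assuming the remote LUNs appeared as /dev/sde and /dev/sdf as in Fig. 8 (parted is used here instead of an interactive fdisk session):

# parted -s /dev/sde mklabel msdos mkpart primary 1MiB 100%
# parted -s /dev/sdf mklabel msdos mkpart primary 1MiB 100%
# mkfs.ext4 /dev/sde1
# mkfs.ext4 /dev/sdf1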

Finally, let's mount /dev/sde1 and /dev/sdf1 on /projects and /backups,
respectively (note that these directories must be created first):

# mount /dev/sde1 /projects

# mount /dev/sdf1 /backups

Additionally, you can add two entries in /etc/fstab in order for both filesystems to
be mounted automatically at boot using each filesystem’s UUID as returned
by blkid.
Note that the _netdev mount option must be used in order to defer the mounting
of these filesystems until the network service has been started:
Fig 9: Find Filesystem UUID
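For illustration, the resulting /etc/fstab entries could look roughly like the following; the UUIDs are placeholders for the values blkid returns on your system:

UUID=<uuid-of-sde1>  /projects  ext4  _netdev  0 0
UUID=<uuid-of-sdf1>  /backups   ext4  _netdev  0 0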

You can now use these devices as you would with any other storage media.

Summary
In this article we have covered how to set up and configure an iSCSI Target and
an Initiator in RHEL/CentOS 7 distributions. Although the first task is not part of
the required competencies of the EX300 (RHCE) exam, it is needed in order to
implement the second topic.
Don’t hesitate to let us know if you have any questions or comments about this
article – feel free to drop us a line using the comment form below.

In my last article I shared the steps to configure LVM based HA cluster without GFS2 file
system. Now let me share the steps to configure iSCSI target and initiator on
RHEL/CentOS 7 and 8 Linux node. I am using Virtual Machines running on Oracle
VirtualBox installed on my Linux Server

iSCSI is an acronym for Internet Small Computer System Interface. We can consider
iSCSI a block storage protocol, since storage is accessed at the block layer. So
basically iSCSI is a block-level protocol for sharing raw storage devices over an
IP network. We also call it a SAN technology, i.e. iSCSI SAN. Since it operates
over an IP network, do not mix it up or confuse it with NAS technologies such as
NFS or SMB; they also work over an IP network, but they operate at the filesystem
layer, whereas in iSCSI we work with raw blocks. In this article I will share the
steps to configure an iSCSI target and initiator on RHEL/CentOS 7 and 8.

iSCSI SAN Architecture


When setting up an iSCSI SAN, you configure one server as the iSCSI target. This
is the server that offers access to the shared storage devices. When you configure
RHEL or CentOS 7 as an iSCSI target, the shared storage devices are typically LVM
logical volumes, but they can be complete disks or partitions as well.

The other server is used as the iSCSI initiator. This is the server that connects
to the SAN. After connecting to the SAN, the iSCSI initiator sees an additional
disk device. The iSCSI initiator goes through the process of discovering targets
on the network, authenticating, and logging in, and eventually accesses the iSCSI
LUNs as local devices.

IMPORTANT NOTE:
When using a redundant network connection, the iSCSI initiator will see the SAN device
twice, once over each different path to the SAN. This might lead to a situation where the
same shared disk device is presented twice as well. To make sure that in such a setup where
redundant paths are available the SAN device is addressed correctly, the iSCSI initiator
should be configured to run the multipath driver.
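As a hedged aside (package and command names as commonly shipped on RHEL/CentOS 7; verify on your distribution), enabling the multipath driver on the initiator could look like:

# yum -y install device-mapper-multipath
# mpathconf --enable --with_multipathd y
# systemctl start multipathd
# multipath -ll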

iSCSI SAN Terminology

Item – Description

IQN – The iSCSI Qualified Name. A unique name that is used for identifying targets as well as initiators.

Backend Storage – The storage devices on the iSCSI target that the iSCSI target component is providing access to.

Target – The service on an iSCSI server that gives access to backend storage devices.

Initiator – The iSCSI client that connects to a target and is identified by an IQN.

ACL – The access control list that is based on the iSCSI initiator IQN and used to provide access to a specific initiator.

LUN – A Logical Unit Number. A backend storage device that is shared through the target. This can be any device that supports read/write operations, such as a disk, partition, logical volume, file or tape drive.

Portal – The IP address and port that a target or initiator uses to establish connections.

TPG – The Target Portal Group. This is the collection of IP addresses and TCP ports to which a specific iSCSI target will listen.

Discovery – The process whereby an initiator finds the targets that are configured on a portal and stores the information locally for future reference. Discovery is done by using the iscsiadm command.

Login – Authentication that gives an initiator access to LUNs on the target. After successful login, the login information is stored on the initiator automatically. Login is performed using the iscsiadm command.

 
My Setup Details

Properties     node1 (Initiator)       storage1 (Target)
OS             CentOS 7                CentOS 7
vCPU           2                       2
Memory         4 GB                    4 GB
Disk           20 GB                   20 GB
Hostname       node1                   storage1
FQDN           node1.example.com       storage1.example.com
IP Address     10.0.2.20               10.0.2.13
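If DNS is not set up for these hosts, a minimal /etc/hosts sketch on both nodes (using the addresses and names from the table above) could be:

10.0.2.20   node1.example.com      node1
10.0.2.13   storage1.example.com   storage1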

Setting Up the iSCSI Target on RHEL/CentOS 7/8


Throughout different versions of Linux, different iSCSI target packages have been used. In
Red Hat Enterprise Linux 7 and 8, the LIO (Linux I/O) target is used. LIO has been the
standard iSCSI target solution since Linux kernel 2.6.38, and it has become an attractive
storage solution that has rapidly replaced alternative iSCSI target solutions in many Linux
distributions. The default interface to manage the LIO target is the targetcli command.
This command uses familiar Linux commands, such as cd, ls, pwd, and set, to configure the target.

Steps to setup iSCSI target


1. Create the backing storage devices.
2. Create the IQN and default target portal group (TPG).
3. Configure one or more ACLs for the TPG.
4. Create LUNs to provide access to the backing storage devices.
5. Create a portal to provide a network interface that iSCSI initiators can connect to.
6. Verify and commit the configuration.

 
Advertisement

1. Create backing storage device

Before we start working on our iSCSI target, we need backend storage. On my node I have
added an additional disk mapped to /dev/sdc. Below, using fdisk, I am creating a new
1 GB partition /dev/sdc1, which will be used to create my iSCSI target.

[root@storage1 ~]# fdisk /dev/sdc

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.

Command (m for help): n

Partition type:

p primary (0 primary, 0 extended, 4 free)

e extended
Select (default p): p

Partition number (1-4, default 1):

First sector (2048-8388607, default 2048):

Using default value 2048

Last sector, +sectors or +size{K,M,G} (2048-8388607, default 8388607): +1G

Partition 1 of type Linux and of size 1 GiB is set

Command (m for help): p

Disk /dev/sdc: 4294 MB, 4294967296 bytes, 8388608 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk label type: dos

Disk identifier: 0x243b95e3


Device Boot Start End Blocks Id System

/dev/sdc1 2048 2099199 1048576 83 Linux

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

Update the partition table:

[root@storage1 ~]# partprobe

Validate the new partition

[root@storage1 ~]# ls -l /dev/sd*

brw-rw----. 1 root disk 8, 0 Dec 29 10:13 /dev/sda


brw-rw----. 1 root disk 8, 1 Dec 29 10:13 /dev/sda1

brw-rw----. 1 root disk 8, 2 Dec 29 10:13 /dev/sda2

brw-rw----. 1 root disk 8, 16 Dec 29 10:13 /dev/sdb

brw-rw----. 1 root disk 8, 17 Dec 29 10:13 /dev/sdb1

brw-rw----. 1 root disk 8, 32 Dec 29 10:13 /dev/sdc

brw-rw----. 1 root disk 8, 33 Dec 29 10:13 /dev/sdc1

2. Install targetcli rpm

To manage the kernel-based iSCSI Target service on RHEL/CentOS 7/8, we will need to
install the  targetcli  package, as shown in the following command:

NOTE:

On a RHEL system you must have an active subscription to RHN, or you can configure a local
offline repository from which the "yum" package manager can install the provided rpm and its
dependencies.

[root@storage1 ~]# yum -y install targetcli

Once it is successfully installed, proceed with the steps to configure the iSCSI target on
your RHEL or CentOS Linux node.

3. Managing iSCSI targets with targetcli

The  targetcli  command is a shell to view, edit, save, and load the iSCSI target
configuration. When you look at the configuration, you will see that  targetcli  provides a
hierarchical structure in a similar way to a filesystem.
To invoke the  targetcli  shell, we will run this command as  root . You will see that on the
first run of the command, a preferences file is created. This is illustrated in the following
snippet
Advertisement

[root@storage1 ~]# targetcli

targetcli shell version 2.1.fb46

Copyright 2011-2013 by Datera, Inc and others.

For help on commands, type 'help'.

/>

As you can see in the preceding output, you can enter help to display a list of commands
that can be entered. To view the available configuration objects, we can use the ls
command. The output is shown in the following screenshot:

/> ls

o- / ......................................................... [...]
  o- backstores .............................................. [...]
  | o- block .................................. [Storage Objects: 0]
  | o- fileio ................................. [Storage Objects: 0]
  | o- pscsi .................................. [Storage Objects: 0]
  | o- ramdisk ................................ [Storage Objects: 0]
  o- iscsi ............................................ [Targets: 0]

We will start by working with the backstores objects so that we can add the block device
to the configuration. Besides block backstores, there is also the fileio backstore; as the
name suggests, that is a file within the filesystem which can be shared over the network
as a virtual disk.

4. Create block backstores

We will work from the root of the  targetcli  configuration; this should be exactly where
we are, but we can always use the  pwd  command to display our working directory. If
required, we can change it to the root of the configuration with  cd / .

TIP:
While using the  targetcli  command, we can use  CTRL + L  to clear the screen as we
would in Bash, but most importantly, the  Tab key  completion works, so we do not need to
type the complete name or path to objects and properties.

To create a new block backstore on the partition that we created earlier in this section:

[root@storage1 ~]# targetcli


targetcli shell version 2.1.fb46

Copyright 2011-2013 by Datera, Inc and others.

For help on commands, type 'help'.

/> cd /backstores/block

/backstores/block> create dev=/dev/sdc1 name=sdc1

Created block storage object sdc1 using /dev/sdc1.

This will create the block backstore with a name called  sdc1 . Using the  ls  command
again will list the additional object within the hierarchy. In the following screenshot, we see
the creation of the backstore and the subsequent listing:

/backstores/block> ls

o- block ........................................ [Storage Objects: 1]
  o- sdc1 ................ [/dev/sdc1 (0 bytes) write-thru deactivated]
    o- alua ........................................ [ALUA Groups: 1]
      o- default_tg_pt_gp .............. [ALUA state: Active/optimized]
To go back to the root of the configuration hierarchy:

/backstores/block> cd /

5. Creating iSCSI targets

The iSCSI objects that we see in the main list represent iSCSI targets and their properties.
First, we will create a simple iSCSI target with default names.

/> cd iscsi

Here we will now create an iSCSI target by supplying a custom IQN. To perform this, we
create the object and specify the name that is usually written to contain the date and the
reversed DNS name. Here we have used a sample IQN

/iscsi> create wwn=iqn.2018-12.com.example:servers

Created target iqn.2018-12.com.example:servers.

Created TPG 1.

Global pref auto_add_default_portal=true

Created default portal listening on all IPs (0.0.0.0), port 3260.

NOTE:
IQN starts with  iqn,  which is followed by the year and month it was created and the
reverse DNS name. If you specify the month as one digit instead of two, for instance, you’ll
get a “ WWN not valid ” message, and creation will fail.

We can add the description of the target with the  :servers  at the end, indicating that this
is a target for the servers.
We can filter what is displayed using the ls command by adding the object hierarchy that we
want to list. For example, to list targets, we will use the  ls iscsi  command.
The output of this command is shown in the following screenshot:

/iscsi> ls

o- iscsi ............................................. [Targets: 2]
  o- iqn.2018-12.com.example:servers ..................... [TPGs: 1]
    o- tpg1 .................................. [no-gen-acls, no-auth]
      o- acls ............................................ [ACLs: 0]
      o- luns ............................................ [LUNs: 0]
      o- portals ...................................... [Portals: 1]
        o- 0.0.0.0:3260 ....................................... [OK]
Now we have our customized name for the target, but we still have to add the LUNS or
logical units to make the SAN (Storage Area Network) effective.

6. Adding ACLs

To create an ACL, we limit access to the LUN to a given initiator name or names that we
list in the Access Control List (ACL). The initiator is the iSCSI client and will have a unique
client IQN configured on the initiator in the /etc/iscsi/initiatorname.iscsi file.

NOTE:
If this file is not present, you will need to install the  iscsi-initiator-utils  package on
the initiator node.

The filename used to configure the initiator name will be consistent for Linux clients, but will
differ for other operating systems. To add an ACL, we will remain with the current
configuration hierarchy:  /iscsi/iqn….:servers/tpg1  and issue the following command,
again written as a single line:

/iscsi> cd iqn.2018-12.com.example:servers/tpg1/acls

/iscsi/iqn.20...ers/tpg1/acls> create wwn=iqn.2018-12.com.example:node1

Created Node ACL for iqn.2018-12.com.example:node1

/iscsi/iqn.20...ers/tpg1/acls> cd /

Using the  ls  command from this location in the configuration hierarchy, we see the output
similar to the following screenshot, which also includes the command to create the ACL:

/> ls

o- / .......................................................... [...]
  o- backstores ............................................... [...]
  | o- block ................................... [Storage Objects: 1]
  | | o- sdc1 ............. [/dev/sdc1 (0 bytes) write-thru deactivated]
  | |   o- alua ..................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ........... [ALUA state: Active/optimized]
  | o- fileio .................................. [Storage Objects: 0]
  | o- pscsi ................................... [Storage Objects: 0]
  | o- ramdisk ................................. [Storage Objects: 0]
  o- iscsi ............................................. [Targets: 2]
  | o- iqn.2018-12.com.example:servers ................... [TPGs: 1]
  |   o- tpg1 ................................ [no-gen-acls, no-auth]
  |     o- acls .......................................... [ACLs: 1]
  |     | o- iqn.2018-12.com.example:node1 ........ [Mapped LUNs: 0]
  |     o- luns .......................................... [LUNs: 0]
  |     o- portals .................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ..................................... [OK]
  o- loopback .......................................... [Targets: 0]

IMPORTANT NOTE:
This ACL restricts access to the initiator listed within the ACL. Be careful if you ever change
the initiator name because the ACL will also need to be updated. The initiator is the iSCSI
client.

7. Adding LUNs to the iSCSI target

Staying with the  targetcli  shell, we will now move on to our target and TPG (Target
Portal Group) object. Similar to the filesystem, this is achieved using the  cd  command, as
shown in the following command:

/> cd iscsi/iqn.2018-12.com.example:servers/tpg1/luns

We have one portal that listens on all IPv4 interfaces on TCP port 3260, but no LUNs yet.
To add a LUN, we will use the following command, which will utilize the block backstore
created earlier:

/iscsi/iqn.20...ers/tpg1/luns> create /backstores/block/sdc1

Created LUN 0.

Created LUN 0->0 mapping in node ACL iqn.2018-12.com.example:node1

The iSCSI target is now configured. Once you exit, the configuration will be saved
to /etc/target/saveconfig.json, or you can optionally also run saveconfig in the shell.

/iscsi/iqn.20...ers/tpg1/luns> exit

Global pref auto_save_on_exit=true


Configuration saved to /etc/target/saveconfig.json

[root@storage1 ~]#

8. Update firewall

Now that the iSCSI target has been configured, you need to make sure that it can be
accessed through the firewall and that the service is started automatically.
To open port 3260 in the firewall, execute below commands

[root@storage1 ~]# firewall-cmd --add-port=3260/tcp --permanent

[root@storage1 ~]# firewall-cmd --reload

9. Start and enable target service

Now that the iSCSI target has been configured, we need to start and enable the target
service

[root@storage1 ~]# systemctl start target

[root@storage1 ~]# systemctl enable target

Setting Up the iSCSI Initiator


The iSCSI Initiator or client on RHEL/CentOS 7/8 is installed with the iscsi-initiator-utils
package; you can verify that it is installed on your system using the rpm command, as shown
in the following example:

[root@node1 ~]# rpm -q iscsi-initiator-utils

iscsi-initiator-utils-6.2.0.874-7.el7.x86_64
and if not available you can install it using  yum

NOTE:

On a RHEL system you must have an active subscription to RHN, or you can configure a local
offline repository from which the "yum" package manager can install the provided rpm and its
dependencies.

[root@node1 ~]# yum -y install iscsi-initiator-utils

1. Setting the iSCSI Initiatorname

For the purpose of this exercise, we will use a separate RHEL/CentOS 7 or 8 system as our
initiator and connect it to the existing target. We will need to edit the
/etc/iscsi/initiatorname.iscsi file on the new system to ensure that the name is set to
match the name we added to the ACL in the earlier section of this article.

[root@node1 ~]# vi /etc/iscsi/initiatorname.iscsi

[root@node1 ~]# cat /etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.2018-12.com.example:node1

So here we have manually updated the file with the ACL name we used on the iSCSI target.

Next restart the  iscsid  daemon

[root@node1 ~]# systemctl restart iscsid

2. Discover the LUNs

When using iSCSI discovery, you need three different arguments:


- --type sendtargets: This tells the discovery mode how to find the iSCSI targets.
- --portal: This argument tells the iscsiadm command which IP address and port to
address to perform the discovery. You can use an IP address or node name as the
argument, and optionally, you can specify a port as well. If no port is specified, the
default port 3260 is used.
- --discover: This argument tells the iscsid service to perform a discovery.

We will use the main client tool  iscsiadm  to discover the iSCSI LUNs on the target.

[root@node1 ~]# iscsiadm --mode discovery --type sendtargets --portal 10.0.2.13 --discover

10.0.2.13:3260,1 iqn.2018-12.com.example:servers

After the discovery, the local node database shown below is updated:

[root@node1 ~]# ls -l /var/lib/iscsi/nodes

total 8

drw------- 3 root root 4096 Dec 29 19:56 iqn.2018-12.com.example:servers

[root@node1 ~]# ls -l /var/lib/iscsi/send_targets/10.0.2.13,3260/

total 12

lrwxrwxrwx 1 root root 69 Dec 29 19:56 iqn.2018-12.com.example:servers,10.0.2.13,3260,1,default -> /var/lib/iscsi/nodes/iqn.2018-12.com.example:servers/10.0.2.13,3260,1
-rw------- 1 root root 547 Dec 29 19:56 st_config

 
3. Making the connection

Now we have seen that we can connect to the iSCSI target and have it send us the
configured LUNs. We should now connect to this LUN using the same command with the
following options:

[root@node1 ~]# iscsiadm --mode node --targetname iqn.2018-12.com.example:servers --login

Logging in to [iface: default, target: iqn.2018-12.com.example:servers, portal: 10.0.2.13,3260] (multiple)

Login to [iface: default, target: iqn.2018-12.com.example:servers, portal: 10.0.2.13,3260] successful.

In this command, a few options are used:


- --mode node: This specifies that iscsiadm enters "node" mode. This is the mode in
which the actual connection with the target can be established.
- --targetname: This specifies the name of the target as discovered when using the
iSCSI discovery process.
- --portal: This is the IP address and port on which the target is listening.
- --login: This authenticates to the target and will store credentials as well to ensure
that on reboot the connection can be reestablished again.

After logging in, a session with the iSCSI target is established. Both the session and the
node connection can be monitored, using the  -P  option

[root@node1 ~]# iscsiadm --mode node -P 1

Target: iqn.2018-12.com.example:servers

Portal: 10.0.2.13:3260,1
Iface Name: default

After making the connection to the iSCSI target, you'll see the new SCSI devices offered
by the target. A convenient command to list these devices is lsscsi:

[root@node1 ~]# lsscsi

[1:0:0:0] cd/dvd VBOX CD-ROM 1.0 /dev/sr0

[2:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sda

[3:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sdb

[11:0:0:0] disk LIO-ORG sdc1 4.0 /dev/sdc

4. Managing iSCSI Connection Persistence

After logging in to an iSCSI target server, the connections are persistent automatically. That
means that on reboot, the  iscsid  and iscsi services are started on the iSCSI client, and
these services will read the iSCSI configuration that is locally stored to automatically
reconnect.

Therefore, there is no need to put anything in configuration files if you have
successfully connected once to the iSCSI server.

5. Removing the iSCSI connection

If you do not want an iSCSI connection to be restored after a reboot, you first have to
log out to disconnect the actual session, using the command below:

[root@node1 ~]# iscsiadm --mode node --targetname iqn.2018-12.com.example:servers --portal 10.0.2.13 -u

Logging out of session [sid: 1, target: iqn.2018-12.com.example:servers, portal: 10.0.2.13,3260]

Logout of [sid: 1, target: iqn.2018-12.com.example:servers, portal: 10.0.2.13,3260] successful.

Next, you need to delete the corresponding IQN subdirectory and all of its contents. You
can do this with the command below:

[root@node1 ~]# iscsiadm --mode node --targetname iqn.2018-12.com.example:servers --portal 10.0.2.13 -o delete

TIP:
Stop the iscsi service and remove all files under /var/lib/iscsi/nodes to clean up all of
the current configuration. After doing that, restart the iscsi service and perform the
discovery and login again.
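A hedged sketch of that cleanup, using the service and directory names from the tip above:

[root@node1 ~]# systemctl stop iscsi
[root@node1 ~]# rm -rf /var/lib/iscsi/nodes/*
[root@node1 ~]# systemctl start iscsi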

6. Mounting iSCSI Devices

To mount an iSCSI device, you need to take care of a few things. First, the iSCSI disk that
now appears as  /dev/sdc  might appear as a different device name the next time it is
connected due to a topology change in your SAN configuration. For that reason, it is not a
smart idea to put a reference to  /dev/sdc  in the  /etc/fstab  file. You should instead use
a file system UUID. Every file system automatically gets a UUID.

To request the value of that UUID, you can use the  blkid  command

[root@node1 ~]# blkid /dev/sdc

/dev/sdc: UUID="f87DLO-DXDO-jjJ5-3vgO-RfCE-oOCA-VGploa"
TYPE="LVM2_member"
