Create Centralized Secure Storage Using iSCSI Target
April 24, 2019 | Kedar Vijay Kulkarni
Recently, when I was working with Red Hat Virtualization, I wanted to combine the local storage of more than one server system into a single storage domain in Red Hat Virtualization. After a lot of pondering, I came across the fact that an Internet Small Computer Systems Interface (iSCSI) datastore can use multiple backend block storage devices.
So I decided to set up our Red Hat Enterprise Linux (RHEL) server to expose about 80% of its local disk over iSCSI, to be used as the storage domain backend in Red Hat Virtualization. In this post, I will go over how I set up iSCSI on RHEL. The steps in this article may apply to CentOS (and, maybe, Fedora) as well.
In my experience, users tend to be more familiar with NFS than iSCSI. If you haven't worked with iSCSI, or even heard of it, you can find more information in the Red Hat Enterprise Linux 7 Installation Guide, Appendix B. Ready? Let's move on to the setup. Note that if you're doing this on Fedora instead of RHEL, you need to replace "yum" with "dnf".
Setup:
1. To start, you will need a hard disk partition or a logical volume that you can use. This post assumes you already have an unused logical volume that can be used for iSCSI. If you want to know more about how to set up logical volumes, see "A Linux user's guide to Logical Volume Management" on OpenSource.com. Let's assume the path to your logical volume is /dev/vg1/lv_iscsi_1.
2. The next thing you will want to do is install targetcli, the package needed to set up iSCSI. To install it, run the following command:
# yum install -y targetcli
For RHEL you may need a valid subscription to the relevant repositories. Also,
be sure to run it as root or with sudo access.
3. Launch the targetcli administration shell and change to the /iscsi directory:
# targetcli
/> cd /iscsi
/iscsi>
4. Now, using the logical volume, we will create the block storage for iSCSI (let's name the backstore block1):
/iscsi> cd /backstores/block
/backstores/block> create block1 /dev/vg1/lv_iscsi_1
Created block storage object block1 using /dev/vg1/lv_iscsi_1.
5. Create the iSCSI target with an iSCSI Qualified Name (IQN):
/backstores/block> cd /iscsi
/iscsi> create iqn.2019-03.com.redhat:target1
Created target iqn.2019-03.com.redhat:target1.
Created TPG 1.
/iscsi>
If required, you may add additional portals with different IP:port pairs as follows:
/iscsi> cd iqn.2019-03.com.redhat:target1/tpg1/portals/
/iscsi/iqn.20.../tpg1/portals> ls
o- portals ............................................ [Portals: 1]
  o- 0.0.0.0:3260 ............................................ [OK]
/iscsi/iqn.20.../tpg1/portals> create ip_port=3333
/iscsi/iqn.20.../tpg1/portals> ls
o- portals ............................................ [Portals: 2]
  o- 0.0.0.0:3260 ............................................ [OK]
  o- 0.0.0.0:3333 ............................................ [OK]
You can also add a portal on a specific IP address and port:
/iscsi/iqn.20.../tpg1/portals> create ip_address=10.8.197.253 ip_port=5555
Created network portal 10.8.197.253:5555.
/iscsi/iqn.20.../tpg1/portals> ls
o- portals ............................................ [Portals: 3]
  o- 0.0.0.0:3260 ............................................ [OK]
  o- 0.0.0.0:3333 ............................................ [OK]
  o- 10.8.197.253:5555 ....................................... [OK]
6. Create an Access Control List (ACL) for client machines. This means you need to get the iSCSI initiator name from the client and map it to this target. Once that is done, your client machine will be able to connect to this iSCSI target.
For this part, go to your client machine. The initiator name can usually be found in /etc/iscsi/initiatorname.iscsi if the iscsi-initiator-utils package is installed. If it is not installed, it can be installed by running:
yum install -y iscsi-initiator-utils
[root@client1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:39ee68f3cf5e
[root@client1 ~]#
Once we have that, go back to the server machine, where we can create the ACL as follows:
/iscsi> cd /iscsi/iqn.2019-03.com.redhat:target1/
/iscsi/iqn.20...edhat:target1> cd tpg1/acls
/iscsi/iqn.20...et1/tpg1/acls> create iqn.1994-05.com.redhat:39ee68f3cf5e
Created Node ACL for iqn.1994-05.com.redhat:39ee68f3cf5e
/iscsi/iqn.20...et1/tpg1/acls>
7. Now we need to create a LUN (Logical Unit Number) under this target, backed by the block storage we created in step 4:
/iscsi/iqn.20...et1/tpg1/acls> cd ../luns
/iscsi/iqn.20...et1/tpg1/luns> create /backstores/block/block1
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.1994-05.com.redhat:39ee68f3cf5e
8. Save the configuration and exit the targetcli shell:
/iscsi/iqn.20...et1/tpg1/luns> cd /
/> saveconfig
Configuration saved to /etc/target/saveconfig.json
/> exit
[root@server1 ~]#
9. Once this is done, we need to start the target service and enable it so that it keeps running across reboots:
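On RHEL 7, the LIO kernel target service is called target:
[root@server1 ~]# systemctl start target
[root@server1 ~]# systemctl enable target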
10. If you are running firewalld or iptables, make sure you add port 3260/tcp as an exception (allow it through the firewall) so that communication between the client and the iSCSI datastore is not blocked. With firewalld, you can do that as follows:
[root@server1 ~]# firewall-cmd --add-port=3260/tcp --permanent
success
[root@server1 ~]# firewall-cmd --reload
success
[root@server1 ~]# firewall-cmd --list-ports
3260/tcp
[root@server1 ~]#
11. If you have set up iSCSI correctly on your server, you can go to your client and run the following command to discover the iSCSI targets on the server, as shown here:
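[root@client1 ~]# iscsiadm -m discovery -t st -p 10.8.197.253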
10.8.197.253:3260,1 iqn.2019-03.com.redhat:target1
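To actually attach the disk, log in to the discovered target:
[root@client1 ~]# iscsiadm -m node -T iqn.2019-03.com.redhat:target1 -p 10.8.197.253 --login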
After logging in, the iSCSI disk is connected as sdb on the client, which means that if you run fdisk -l, the device should be listed.
Now you can create a filesystem on it and mount it on your system. You may want to specify the mount information in /etc/fstab so that the mount remains persistent across reboots. For more on creating a filesystem, my article on OpenSource.com covers the steps.
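For example, assuming the iSCSI disk came up as /dev/sdb and /mnt/iscsi is your mount point (note the _netdev option, which defers mounting until the network is up):
[root@client1 ~]# mkfs.xfs /dev/sdb
[root@client1 ~]# mkdir -p /mnt/iscsi
[root@client1 ~]# mount /dev/sdb /mnt/iscsi
[root@client1 ~]# echo "UUID=$(blkid -s UUID -o value /dev/sdb) /mnt/iscsi xfs _netdev 0 0" >> /etc/fstab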
Install iSCSI Target in Linux
The client side does not need many resources for stable connectivity and performance. The iSCSI server is called the Target; it shares its storage over the network. The iSCSI client is called the Initiator; it accesses the storage shared by the Target server. There are also dedicated iSCSI adapters on the market for large storage services such as SAN storage.
Why do we need an iSCSI adapter for a large storage area? Ethernet adapters (NICs) are designed to transfer packetized, file-level data among systems, servers, and storage devices such as NAS appliances; they are not capable of transferring block-level data over the Internet.
Let's start installing and configuring centralized secure storage using an iSCSI target. For this guide, I've used the following setup.
# /etc/init.d/tgtd status
# chkconfig tgtd on
Next, verify that the run levels are configured correctly for the tgtd service:
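# chkconfig --list tgtd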
# man tgtadm
iSCSI Man Pages
Finally, we need to add iptables rules for iSCSI if iptables is deployed on your target server. First, find the port number of the iSCSI target using the following netstat command; the target always listens on TCP port 3260.
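# netstat -tulnp | grep tgtd
Then allow that port through iptables (assuming the default INPUT chain; adjust to your rule set):
# iptables -A INPUT -p tcp --dport 3260 -j ACCEPT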
# iptables-save
# /etc/init.d/iptables restart
Restart iptables
Here we have deployed a target server that shares LUNs to any initiator that authenticates with the target over TCP/IP. This is suitable for small to large scale production environments.
In my upcoming articles, I will show you how to create LUNs using LVM on the target server and how to share LUNs on client machines. Until then, stay tuned to TecMint for more such updates, and don't forget to give your valuable comments.
iSCSI is a block-level protocol for managing storage devices over TCP/IP networks, especially over long distances. An iSCSI target is a remote hard disk presented by a remote iSCSI server (or target). On the other hand, the iSCSI client is called the initiator, and it accesses the storage that is shared by the target machine.
The following machines have been used in this article:
Server (Target):
When the installation completes, we will start and enable the service as follows:
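# systemctl start target
# systemctl enable target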
# firewall-cmd --add-service=iscsi-target
And last but not least, we must not forget to allow the iSCSI target discovery:
# firewall-cmd --add-port=860/tcp
# firewall-cmd --add-port=860/tcp --permanent
# firewall-cmd --reload
# targetcli
# cd backstores
# cd block
# cd /iscsi
# create iqn.2016-02.com.tecmint.server:tgt1
Fig 2: Define LUNs in Target Server
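Note that the block backstores used for the LUNs below (server.backups and server.projects) must be created first, from within backstores/block; a sketch, assuming two dedicated block devices (the actual device paths are not preserved in this article, so /dev/sdb1 and /dev/sdb2 are placeholders):
# cd /backstores/block
# create server.backups /dev/sdb1
# create server.projects /dev/sdb2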
With the above step, a new TPG (Target Portal Group) was created along with
the default portal (a pair consisting of an IP address and a port which is the way
initiators can reach the target) listening on port 3260 of all IP addresses.
If you want to bind your portal to a specific IP (the Target’s main IP, for example),
delete the default portal and create a new one as follows (otherwise, skip the
following targetcli commands. Note that for simplicity we have skipped them
as well):
# cd /iscsi/iqn.2016-02.com.tecmint.server:tgt1/tpg1/portals
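For example (192.168.0.29 is a placeholder for the Target's main IP):
# delete 0.0.0.0 3260
# create 192.168.0.29 3260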
Now we are ready to proceed with the creation of LUNs. Note that we are using
the backstores we previously created (server.backups and server.projects). This
process is illustrated in Fig. 3:
# cd iqn.2016-02.com.tecmint.server:tgt1/tpg1/luns
# create /backstores/block/server.backups
# create /backstores/block/server.projects
Fig 3: Create LUNs in iSCSI Target Server
The last part in the Target configuration consists of creating an Access Control
List to restrict access on a per-initiator basis. Since our client machine is
named “client”, we will append that text to the IQN. Refer to Fig. 4 for details:
# cd ../acls
# create iqn.2016-02.com.tecmint.server:client
Fig 4: Create Access Control List for Initiator
# targetcli
# cd /
# ls
We can then log in by contacting the target in node mode, as sketched below. This should result in kernel-level messages which, when captured through dmesg, show the device identification that the remote LUNs have been given in the local system (sde and sdf in Fig. 8).
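A sketch of the client-side discovery and login (192.168.0.29 is a placeholder for the Target's IP, as above):
# iscsiadm -m discovery -t st -p 192.168.0.29
# iscsiadm -m node -T iqn.2016-02.com.tecmint.server:tgt1 -p 192.168.0.29 --login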
# dmesg | tail
Fig 8: Connecting to iSCSI Target Server in Node Mode
From this point on, you can create partitions, or even LVs (and filesystems on top
of them) as you would do with any other storage device. For simplicity, we will
create a primary partition on each disk that will occupy its entire available space,
and format it with ext4.
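For example, assuming the device names from Fig. 8:
# fdisk /dev/sde          (create one primary partition spanning the whole disk; repeat for /dev/sdf)
# mkfs.ext4 /dev/sde1
# mkfs.ext4 /dev/sdf1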
Additionally, you can add two entries in /etc/fstab in order for both filesystems to
be mounted automatically at boot using each filesystem’s UUID as returned
by blkid.
Note that the _netdev mount option must be used in order to defer the mounting
of these filesystems until the network service has been started:
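For example (the mount points and UUIDs are placeholders; use the values returned by blkid, as in Fig. 9):
UUID=<uuid-of-sde1>  /mnt/backups   ext4  _netdev  0 0
UUID=<uuid-of-sdf1>  /mnt/projects  ext4  _netdev  0 0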
Fig 9: Find Filesystem UUID
You can now use these devices as you would with any other storage media.
Summary
In this article we have covered how to set up and configure an iSCSI Target and an Initiator in RHEL/CentOS 7 distributions. Although the first task is not part of the required competencies of the EX300 (RHCE) exam, it is needed in order to implement the second topic.
Don’t hesitate to let us know if you have any questions or comments about this
article – feel free to drop us a line using the comment form below.
In my last article I shared the steps to configure an LVM-based HA cluster without the GFS2 file system. Now let me share the steps to configure an iSCSI target and initiator on RHEL/CentOS 7 and 8 Linux nodes. I am using virtual machines running on Oracle VirtualBox installed on my Linux server.
The other server is going to be used as the iSCSI initiator. This is the server that connects to the SAN. After connecting to the SAN, the iSCSI initiator sees an additional disk device. The iSCSI initiator then goes through the process of discovering targets on the network, authenticating, and logging in, eventually accessing the iSCSI LUNs locally.
IMPORTANT NOTE:
When using a redundant network connection, the iSCSI initiator will see the SAN device
twice, once over each different path to the SAN. This might lead to a situation where the
same shared disk device is presented twice as well. To make sure that the SAN device is addressed correctly in such a setup with redundant paths, the iSCSI initiator should be configured to run the multipath driver.
Item              Description
IQN               The iSCSI Qualified Name. A unique name that is used for identifying targets as well as initiators.
Backend Storage   The storage devices on the iSCSI target that the iSCSI target component is providing access to.
Target            The service on an iSCSI server that gives access to backend storage devices.
Initiator         The iSCSI client that connects to a target, identified by an IQN.
ACL               The Access Control List, based on the iSCSI initiator IQN and used to provide access to a specific initiator.
LUN               A Logical Unit Number. A backend storage device that is shared through the target. This can be any device that supports read/write operations, such as a disk, partition, logical volume, file, or tape drive.
Portal            The IP address and port that a target or initiator uses to establish connections.
TPG               The Target Portal Group. The collection of IP addresses and TCP ports to which a specific iSCSI target will listen.
Discovery         The process whereby an initiator finds the targets that are configured on a portal and stores the information locally for future reference. Discovery is done using the iscsiadm command.
Login             Authentication that gives an initiator access to LUNs on the target. After a successful login, the login information is stored on the initiator automatically. Login is performed using the iscsiadm command.
My Setup Details
          Target      Initiator
OS        CentOS 7    CentOS 7
vCPU      2           2
Memory    4 GB        4 GB
Before we start working on our iSCSI target, we need backend storage. On my node I have added an additional disk mapped to /dev/sdc. Below, using fdisk, I am creating a new 1GB partition, /dev/sdc1, which will be used to create my iSCSI target.
# fdisk /dev/sdc
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type:
   p   primary
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-..., default 2048):
Last sector, +sectors or +size{K,M,G} (default ...): +1G

Command (m for help): w
The partition table has been altered!
Syncing disks.
To manage the kernel-based iSCSI Target service on RHEL/CentOS 7/8, we will need to
install the targetcli package, as shown in the following command:
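# yum install -y targetcli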
NOTE:
On RHEL systems you must have an active subscription to RHN, or you can configure a local offline repository from which the "yum" package manager can install the provided rpm and its dependencies.
Once it is successfully installed, proceed with the steps to configure the iSCSI target on your RHEL or CentOS 7 Linux node.
The targetcli command is a shell to view, edit, save, and load the iSCSI target
configuration. When you look at the configuration, you will see that targetcli provides a
hierarchical structure in a similar way to a filesystem.
To invoke the targetcli shell, we will run this command as root . You will see that on the
first run of the command, a preferences file is created. This is illustrated in the following
snippet
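[root@storage1 ~]# targetcli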
/>
As you can see in the preceding output, you can enter help to display a list of commands
that can be entered. To view the available configuration objects, we can use the ls
command. The output is shown in the following listing:
/> ls
o- / ........................................................... [...]
  o- backstores ................................................ [...]
  | o- block .................................... [Storage Objects: 0]
  | o- fileio ................................... [Storage Objects: 0]
  | o- pscsi .................................... [Storage Objects: 0]
  | o- ramdisk .................................. [Storage Objects: 0]
  o- iscsi .............................................. [Targets: 0]
To start with, we will work with the backstores objects so that we can add our block device to the configuration. In addition to the block backstore, there is the fileio backstore; as the name suggests, this is a file within the filesystem that we can share on a network as a virtual disk.
We will work from the root of the targetcli configuration; this should be exactly where
we are, but we can always use the pwd command to display our working directory. If
required, we can change it to the root of the configuration with cd / .
TIP:
While using the targetcli command, we can use CTRL + L to clear the screen as we
would in Bash, but most importantly, the Tab key completion works, so we do not need to
type the complete name or path to objects and properties.
Next, we create a new block backstore on the partition that we created earlier in this section. This will create the block backstore with a name called sdc1. Using the ls command again will list the additional object within the hierarchy. In the following listing, we see the creation of the backstore and the subsequent listing:
/> cd /backstores/block
/backstores/block> create sdc1 /dev/sdc1
Created block storage object sdc1 using /dev/sdc1.
/backstores/block> ls
o- block ........................................ [Storage Objects: 1]
  o- sdc1 ................. [/dev/sdc1 (0 bytes) write-thru deactivated]
    o- alua ......................................... [ALUA Groups: 1]
      o- default_tg_pt_gp .............. [ALUA state: Active/optimized]
To go back to the root of the configuration tree:
/backstores/block> cd /
The iSCSI objects that we see in the main list represent iSCSI targets and their properties. First, we will create a simple iSCSI target with default names.
/> cd iscsi
Here we will now create an iSCSI target by supplying a custom IQN. To perform this, we create the object and specify the name, which is usually written to contain the date and the reversed DNS name. Here we have used a sample IQN:
/iscsi> create iqn.2018-12.com.example:servers
Created target iqn.2018-12.com.example:servers.
Created TPG 1.
NOTE:
IQN starts with iqn, which is followed by the year and month it was created and the
reverse DNS name. If you specify the month as one digit instead of two, for instance, you’ll
get a “ WWN not valid ” message, and creation will fail.
We added a description to the target with :servers at the end, indicating that this is a target for the servers.
We can filter what is displayed by the ls command by adding the object hierarchy that we want to list. For example, to list targets, we will use the ls iscsi command. The output of this command is shown in the following listing:
/iscsi> ls
o- iscsi ................................................ [Targets: 2]
  o- iqn.2018-12.com.example:servers ...................... [TPGs: 1]
    o- tpg1 ................................. [no-gen-acls, no-auth]
      o- acls ........................................... [ACLs: 0]
      o- luns ........................................... [LUNs: 0]
      o- portals ..................................... [Portals: 1]
        o- 0.0.0.0:3260 ..................................... [OK]
Now we have our customized name for the target, but we still have to add the LUNs, or logical units, to make the SAN (Storage Area Network) effective.
6. Adding ACLs
To create an ACL, we limit access to the LUNs to a given initiator name or names that we mention in the Access Control List (ACL). The initiator is the iSCSI client and will have a unique client IQN configured on the initiator in the /etc/iscsi/initiatorname.iscsi file.
NOTE:
If this file is not present, you will need to install the iscsi-initiator-utils package on
the initiator node.
The filename used to configure the initiator name will be consistent for Linux clients, but will
differ for other operating systems. To add an ACL, we will remain with the current
configuration hierarchy: /iscsi/iqn….:servers/tpg1 and issue the following command,
again written as a single line:
/iscsi> cd iqn.2018-12.com.example:servers/tpg1/acls
/iscsi/iqn.20...ers/tpg1/acls> create iqn.2018-12.com.example:node1
Created Node ACL for iqn.2018-12.com.example:node1
/iscsi/iqn.20...ers/tpg1/acls> cd /
Using the ls command from the root of the configuration hierarchy, we see output similar to the following listing:
/> ls
o- / ........................................................... [...]
  o- backstores ................................................ [...]
  | o- block .................................... [Storage Objects: 1]
  | | o- sdc1 ............... [/dev/sdc1 (0 bytes) write-thru deactivated]
  | |   o- alua ....................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ............ [ALUA state: Active/optimized]
  | o- fileio ................................... [Storage Objects: 0]
  | o- pscsi .................................... [Storage Objects: 0]
  | o- ramdisk .................................. [Storage Objects: 0]
  o- iscsi .............................................. [Targets: 2]
  | o- iqn.2018-12.com.example:servers .................... [TPGs: 1]
  |   o- tpg1 ............................... [no-gen-acls, no-auth]
  |     o- acls ......................................... [ACLs: 1]
  |     | o- iqn.2018-12.com.example:node1 ........ [Mapped LUNs: 0]
  |     o- luns ......................................... [LUNs: 0]
  |     o- portals ................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ................................... [OK]
  o- loopback ........................................... [Targets: 0]
IMPORTANT NOTE:
This ACL restricts access to the initiator listed within the ACL. Be careful if you ever change
the initiator name because the ACL will also need to be updated. The initiator is the iSCSI
client.
Staying with the targetcli shell, we will now move on to our target and TPG (Target
Portal Group) object. Similar to the filesystem, this is achieved using the cd command, as
shown in the following command:
/> cd iscsi/iqn.2018-12.com.example:servers/tpg1/luns
We have one portal that listens on all IPv4 interfaces on TCP port 3260. Currently, we have no LUNs. To add a LUN, we will use the following command, which utilizes the block backstore we created earlier:
/iscsi/iqn.20...ers/tpg1/luns> create /backstores/block/sdc1
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.2018-12.com.example:node1
The iSCSI target is now configured. When you exit, the configuration will be saved to /etc/target/saveconfig.json; optionally, you can also run saveconfig at the targetcli prompt.
/iscsi/iqn.20...ers/tpg1/luns> exit
[root@storage1 ~]#
8. Update firewall
Now that the iSCSI target has been configured, you need to make sure that it can be
accessed through the firewall and that the service is started automatically.
To open port 3260 in the firewall, execute the following commands:
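# firewall-cmd --add-port=3260/tcp --permanent
# firewall-cmd --reload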
Now that the iSCSI target has been configured, we need to start and enable the target service:
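# systemctl start target
# systemctl enable target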
On the initiator node, check whether the iscsi-initiator-utils package is already installed:
# rpm -q iscsi-initiator-utils
iscsi-initiator-utils-6.2.0.874-7.el7.x86_64
If it is not available, you can install it using yum:
# yum install -y iscsi-initiator-utils
NOTE:
On RHEL systems you must have an active subscription to RHN, or you can configure a local offline repository from which the "yum" package manager can install the provided rpm and its dependencies.
For the purpose of this exercise, we will use a separate RHEL 7 or 8 system as our initiator and connect it to the existing target. We will need to edit the /etc/iscsi/initiatorname.iscsi file on the new system to ensure that the name is set to match the name we added to the ACL in the earlier section of this article:
InitiatorName=iqn.2018-12.com.example:node1
So here we have manually updated the file with the ACL name we used on the iSCSI target.
We will use the main client tool iscsiadm to discover the iSCSI LUNs on the target.
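# iscsiadm -m discovery -t st -p 10.0.2.13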
10.0.2.13:3260,1 iqn.2018-12.com.example:servers
3. Making the connection
Now we have seen that we can connect to the iSCSI target and have it send us the configured LUNs. We should now connect to this LUN, using the same command with the following options:
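# iscsiadm -m node -T iqn.2018-12.com.example:servers -p 10.0.2.13 --login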
After logging in, a session with the iSCSI target is established. Both the session and the node connection can be monitored using the -P option:
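# iscsiadm -m session -P 1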
Target: iqn.2018-12.com.example:servers
Portal: 10.0.2.13:3260,1
Iface Name: default
After making the connection to the iSCSI target, you'll see the new SCSI devices as offered by the target. A convenient command to list these devices is lsscsi:
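# lsscsi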
After logging in to an iSCSI target server, the connections are persistent automatically. That
means that on reboot, the iscsid and iscsi services are started on the iSCSI client, and
these services will read the iSCSI configuration that is locally stored to automatically
reconnect.
If you need an iSCSI connection not to be restored after reboot, you first have to log out to disconnect the actual session, using the below command:
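# iscsiadm -m node -T iqn.2018-12.com.example:servers -p 10.0.2.13 --logout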
Next, you need to delete the corresponding IQN subdirectory and all of its contents. You can do this with the below command:
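# iscsiadm -m node -T iqn.2018-12.com.example:servers -p 10.0.2.13 -o delete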
TIP:
Stop the iscsi.service and remove all files under /var/lib/iscsi/nodes to clean up all current configuration. After doing that, restart the iscsi.service and start the discovery and login again.
To mount an iSCSI device, you need to take care of a few things. First, the iSCSI disk that
now appears as /dev/sdc might appear as a different device name the next time it is
connected due to a topology change in your SAN configuration. For that reason, it is not a
smart idea to put a reference to /dev/sdc in the /etc/fstab file. You should instead use
a file system UUID. Every file system automatically gets a UUID.
To request the value of that UUID, you can use the blkid command:
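# blkid /dev/sdc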
/dev/sdc: UUID="f87DLO-DXDO-jjJ5-3vgO-RfCE-oOCA-VGploa" TYPE="LVM2_member"