
In this tutorial, you will learn how to install and configure an iSCSI storage server on Ubuntu 24.04. iSCSI, an acronym for Internet Small Computer System Interface, is a Storage Area Network protocol that organizations use to facilitate online storage management. It relies on TCP/IP networks to send SCSI commands between the initiator (client) and the target (server), providing block-level access to storage devices, which can be LVM logical volumes, whole disks, files or partitions.
Install and Configure iSCSI Storage Server on Ubuntu 24.04
Key Concepts in iSCSI
Below are key concepts used in iSCSI network storage;
- iSCSI Initiator: The client-side software that enables a device to connect to an iSCSI storage target and use its resources.
- iSCSI Target: The server-side software that presents the storage resources to the initiator as if they were local disks.
- LUN (Logical Unit Number): A unique identifier that represents a specific logical volume or portion of a physical disk on the iSCSI target.
- iSCSI Portal: The IP address and TCP port number combination that the iSCSI initiator uses to connect to the iSCSI target.
- Initiator IQN (iSCSI Qualified Name): The unique identifier assigned to the iSCSI initiator to establish a connection with the iSCSI target.
- CHAP (Challenge-Handshake Authentication Protocol): A security mechanism used for authentication between the iSCSI initiator and target to ensure that only authorized initiators can access the storage.
- SCSI (Small Computer System Interface): A standard protocol used by the operating system to communicate with storage devices, including iSCSI storage.
- MPIO (Multipath I/O): A technique used to create redundant paths between the initiator and target to ensure high availability and load balancing.
- Jumbo Frames: A technique used to increase the packet size in iSCSI networks to improve performance.
- Portal: A portal is a network interface on a target that listens for iSCSI initiator connection requests.
- TPG (Target Portal Group): a group of portals on the target side that share the same target portal group tag (TPGT). By grouping portals into a TPG, the target can present a single iSCSI target to initiators, even if there are multiple interfaces or network paths to the target. TPGs can be used to provide load balancing, failover, and increased throughput.
- ACL: Access Control List that lists iSCSI clients to be granted access to the storage device.
Read more on man targetcli.
In our deployment, we will be using Ubuntu 24.04 server as the iSCSI target and Ubuntu 24.04 Desktop as the iSCSI initiator.
Install iSCSI Required Packages on Ubuntu 24.04
To set up an iSCSI target, we need to install an administration tool called targetcli which provides the default interface for managing the target.
sudo apt update
sudo apt -y install targetcli-fb
Configure iSCSI Target on Ubuntu 24.04
After installing targetcli, let us configure iSCSI target.
Create the backend storage devices
In our storage server, we have attached two disks and created two logical volumes;
lsblk
vdb 253:16 0 5G 0 disk
└─vol01-lv_lun01 252:0 0 5G 0 lvm
vdc 253:32 0 5G 0 disk
└─vol02-lv_lun02 252:1 0 5G 0 lvm
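In case you need to create similar logical volumes, below is a minimal sketch assuming the attached disks are /dev/vdb and /dev/vdc (adjust the device names to match your environment);
sudo pvcreate /dev/vdb /dev/vdc
sudo vgcreate vol01 /dev/vdb
sudo vgcreate vol02 /dev/vdc
sudo lvcreate -l 100%FREE -n lv_lun01 vol01
sudo lvcreate -l 100%FREE -n lv_lun02 vol02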
Create iSCSI Backstore/Block Storage
An iSCSI backstore is a virtual disk or LUN (Logical Unit Number) that represents the storage space exported to the iSCSI initiators. There are several types of backstores that can be used in iSCSI, such as file-based backstores (fileio) or block-based backstores like LVM logical volumes or physical disks.
To create the iSCSI target backstore, launch the targetcli utility by typing targetcli on the terminal;
sudo targetcli
This will open an interactive prompt;
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/>
Next, create a new backstore for the iSCSI disk using the logical volumes created above as the backend storage devices.
/backstores/block create iscsi-lun-001 /dev/vol01/lv_lun01
What does the command do exactly? It creates a new block backstore (LUN) named iscsi-lun-001 that maps to the logical volume /dev/vol01/lv_lun01. The /dev/vol01/lv_lun01 logical volume will be presented as a block device to iSCSI initiators (clients) under the name iscsi-lun-001.
You can add an additional LUN;
/backstores/block create iscsi-lun-002 /dev/vol02/lv_lun02
If you run the ls command, you should now be able to see the created block storage;
ls
/> ls
o- / ......................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block .................................................................................................. [Storage Objects: 2]
| | o- iscsi-lun-001 ....................................................... [/dev/vol01/lv_lun01 (5.0GiB) write-thru deactivated]
| | | o- alua ................................................................................................... [ALUA Groups: 1]
| | | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
| | o- iscsi-lun-002 ....................................................... [/dev/vol02/lv_lun02 (5.0GiB) write-thru deactivated]
| | o- alua ................................................................................................... [ALUA Groups: 1]
| | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
| o- fileio ................................................................................................. [Storage Objects: 0]
| o- pscsi .................................................................................................. [Storage Objects: 0]
| o- ramdisk ................................................................................................ [Storage Objects: 0]
o- iscsi ............................................................................................................ [Targets: 0]
o- loopback ......................................................................................................... [Targets: 0]
o- vhost ............................................................................................................ [Targets: 0]
As you can see:
- We have two block storage objects:
  - iscsi-lun-001: A 5.0 GiB LUN located at /dev/vol01/lv_lun01 (deactivated)
  - iscsi-lun-002: A 5.0 GiB LUN located at /dev/vol02/lv_lun02 (deactivated)
- Both LUNs:
- Are configured with write-through caching, meaning data is written simultaneously to both cache and storage device. This ensures data integrity but may provide slower performance than write-back caching (which writes data to cache first before flushing it to the disk) since each write operation must complete on the physical disk before being acknowledged.
- Are currently deactivated (not available to clients)
- Have ALUA (Asymmetric Logical Unit Access) configured with default settings in “Active/optimized” state. ALUA allows storage systems to communicate path preferences to clients. The “Active/optimized” state indicates these paths would provide the best performance for accessing the LUNs, typically through the storage controller that owns the LUN. This helps in multi-path environments where clients might have multiple possible routes to storage.
- The system has no active iSCSI targets configured yet, which is why the LUNs are deactivated
Create iSCSI File-based Backstore
If you want, you can also create a file-backed block device. To do this, navigate to the fileio directory and create, for example, a 1GiB file under the /home directory.
/backstores/fileio create iscsi_file01 /home/lun_file 1GiB
Confirm;
ls /backstores/fileio
o- fileio ..................................................................................................... [Storage Objects: 1]
o- iscsi_file01 ................................................................ [/home/lun_file (1.0GiB) write-back deactivated]
o- alua ....................................................................................................... [ALUA Groups: 1]
o- default_tg_pt_gp ........................................................................... [ALUA state: Active/optimized]
/>
Create an IQN for the iSCSI target
Next, you need to create an IQN for the iSCSI targets. It takes the format;
iqn.YYYY-MM.<reversed-domain>[:<optional-identifier>]
For example:
/iscsi create iqn.2025-04.com.kifarunix-demo:target00
Where:
- /iscsi create: The base command to create a new iSCSI target.
- iqn.2025-04.com.kifarunix-demo:target00: The IQN (iSCSI Qualified Name) for the target, which includes:
  - iqn: Standard prefix for iSCSI names
  - 2025-04: Date code (April 2025)
  - com.kifarunix-demo: Reverse domain name of the organization
  - target00: Specific target identifier. This is just an arbitrary identifier and can be anything that suits the description of the target.
When the IQN is created, a Target Portal Group (TPG) is created by default.
Created target iqn.2025-04.com.kifarunix-demo:target00.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
As you can see, iSCSI target creates a portal that listens on all interfaces on port 3260/tcp.
Configure ACLs for the TPG
You can technically configure a working iSCSI target without either ACL or CHAP security features, but this would leave your storage completely exposed to unauthorized access. For any production environment or environment with sensitive data, implementing both ACLs and CHAP authentication is considered a security best practice. If you don’t want to enable these features, then I suggest you control access to Target portal via the system/network firewall.
Target Portal Group (TPG) Access Control List (ACL) defines which initiators (clients) are allowed to access the iSCSI storage resources exposed by the target (server). The TPG ACL is used to provide access control at the Target level. It specifies the initiator names that are allowed or denied access to the target.
When a new session is established, the initiator’s name is checked against the TPG ACL. If the initiator name is found in the ACL, access is granted, and the session is established. If the initiator name is not found in the ACL, access is denied, and the session is terminated.
To create an ACL for the TPG1 above;
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/acls create iqn.2025-04.com.kifarunix-demo:poc
The command:
- Creates an Access Control List (ACL) rule for the iSCSI target:
  - Target Name: iqn.2025-04.com.kifarunix-demo:target00
  - Target Portal Group (TPG): tpg1
- Allows only a specific iSCSI initiator (client) to connect:
  - Permitted Initiator IQN: iqn.2025-04.com.kifarunix-demo:poc
  - The client must exactly match this IQN in its configuration (/etc/iscsi/initiatorname.iscsi on Linux).
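You can confirm the ACL has been created by listing the acls node from within the targetcli shell;
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/acls ls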
If you want to add multiple clients, specify them comma separated;
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/acls create iqn.2025-04.com.kifarunix-demo:poc,iqn.2025-04.com.kifarunix-demo:another-server-ID
If you want to use IP addresses as the IQN identifiers instead;
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/acls create iqn.2025-04.com.kifarunix-demo:192.168.1.100,iqn.2025-04.com.kifarunix-demo:192.168.1.101
Configure CHAP Authentication
As already mentioned, this is optional.
But if you want to, you can configure CHAP authentication by creating usernames and passwords for the initiators that will be allowed to access the backend storage.
There are two types of iSCSI CHAP authentication:
- One-Way CHAP: This is where only the initiator is authenticated by the target. The target does not authenticate itself to the initiator. This is the default configuration.
- Mutual CHAP: Both the target and initiator authenticate each other. The initiator proves its identity to the target (as in one-way CHAP). The target also proves its identity to the initiator (preventing man-in-the-middle attacks).
Enable CHAP Authentication
Verify whether CHAP authentication is enabled:
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1 get attribute authentication
It should return 1 if CHAP is enabled. Sample output for my setup;
authentication=0
This means CHAP authentication is disabled. You can enable it by running;
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1 set attribute authentication=1
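You can then re-run the check to confirm that the attribute now reads authentication=1;
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1 get attribute authentication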
Configure CHAP Authentication: One-Way
To set up the default one-way CHAP authentication, for example:
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/acls/iqn.2025-04.com.kifarunix-demo:poc set auth userid=kifarunix-admin
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/acls/iqn.2025-04.com.kifarunix-demo:poc set auth password=password
This means that any client that defines its initiator name as iqn.2025-04.com.kifarunix-demo:poc must provide the username and password configured above to be able to access the storage.
Configure CHAP Authentication: Mutual
To set up mutual CHAP authentication, for example:
Create the client ACL:
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/acls create iqn.2025-04.com.kifarunix-demo:client-2
Set initiator credentials (as done for one-way auth above):
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/acls/iqn.2025-04.com.kifarunix-demo:client-2 set auth userid=client-2-user
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/acls/iqn.2025-04.com.kifarunix-demo:client-2 set auth password=client-2-pass
Set target credentials (for mutual auth):
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/acls/iqn.2025-04.com.kifarunix-demo:client-2 set auth mutual_userid=target-user
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/acls/iqn.2025-04.com.kifarunix-demo:client-2 set auth mutual_password=target-pass
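To verify the CHAP credentials assigned to an ACL, you can query its auth group from within the targetcli shell, for example;
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/acls/iqn.2025-04.com.kifarunix-demo:client-2 get auth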
Create LUNs for the iSCSI disk
Create the LUNs needed to associate a backstore with a specific TPG. In our case, we will use the iscsi-lun-001 block backstore and the iscsi_file01 fileio backstore created above to create the LUNs.
Any new LUN created will be mapped to each ACL that is associated with the TPG.
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/luns create /backstores/block/iscsi-lun-001
Output;
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.2025-04.com.kifarunix-demo:client-2
Created LUN 0->0 mapping in node ACL iqn.2025-04.com.kifarunix-demo:poc
File based LUN;
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/luns create /backstores/fileio/iscsi_file01
Output;
Created LUN 1.
Created LUN 1->1 mapping in node ACL iqn.2025-04.com.kifarunix-demo:client-2
Created LUN 1->1 mapping in node ACL iqn.2025-04.com.kifarunix-demo:poc
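To confirm the LUNs and their ACL mappings, you can list the luns node;
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/luns ls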
Create iSCSI Target Portal
Optionally, to configure a target to offer its services on a specific address, create a portal for that address. Remember that the IP address used must be static.
As you saw above, the portal is configured to listen on all interfaces (0.0.0.0) on the target host on port 3260/tcp by default.
So, to configure it to listen on a specific interface IP on the target, navigate to portals and create the portal.
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/portals create <TARGET-IP>
E.g
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/portals create 192.168.122.100
If you get the error, Could not create NetworkPortal in configFS, it is because you already have a portal that listens on all interfaces.
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/portals ls
o- portals ............................................................................................................ [Portals: 1]
o- 0.0.0.0:3260 ............................................................................................................. [OK]
/>
Thus, to change this, delete the portal;
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/portals delete 0.0.0.0 3260
And re-create the portal;
/iscsi/iqn.2025-04.com.kifarunix-demo:target00/tpg1/portals create 192.168.122.100
Output;
Using default IP port 3260
Created network portal 192.168.122.100:3260.
Open iSCSI Portal on Firewall
Exit the targetcli utility
/> exit
Global pref auto_save_on_exit=true
Configuration saved to /etc/target/saveconfig.json
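If you want to review the saved configuration later without entering the interactive shell, you can run targetcli with the ls command directly;
sudo targetcli ls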
Check whether port 3260 is listening;
sudo ss -altnp | grep 3260
LISTEN 0 256 192.168.122.100:3260 *:*
Open iSCSI portal on firewall;
sudo ufw allow 3260/tcp
Or use iptables or firewalld, whichever you are using on your target host.
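For example, if your target host uses firewalld instead of UFW, the equivalent would be something like;
sudo firewall-cmd --add-port=3260/tcp --permanent
sudo firewall-cmd --reload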
Running iSCSI Target Service
Start iSCSI target and enable it to run when the system boots.
sudo systemctl enable --now target
Check status;
systemctl status target
Configure the iSCSI Initiator
Follow these simple steps to configure an iSCSI Initiator.
Install iSCSI Initiator Utilities
Run the command below to install the iSCSI initiator utilities;
sudo apt -y install open-iscsi
Set the iSCSI Initiator Name
Edit the /etc/iscsi/initiatorname.iscsi configuration file and add the name of the initiator;
sudo vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2025-04.com.kifarunix-demo:poc
Save and exit the file
Configure Authentication: One-Way
Open the /etc/iscsi/iscsid.conf configuration file and update the iSCSI credentials created before, under the CHAP settings section;
sudo vim /etc/iscsi/iscsid.conf
- Enable CHAP Authentication:
node.session.auth.authmethod = CHAP
- Configure One-Way authentication:
node.session.auth.username = USERNAME
node.session.auth.password = password
# *************
# CHAP Settings
# *************
# To enable CHAP authentication set node.session.auth.authmethod
# to CHAP. The default is None.
node.session.auth.authmethod = CHAP
# To configure which CHAP algorithms to enable, set
# node.session.auth.chap_algs to a comma separated list.
# The algorithms should be listed with the most preferred first.
# Valid values are MD5, SHA1, SHA256
# The default is MD5.
#node.session.auth.chap_algs = SHA256,SHA1,MD5
# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
node.session.auth.username = kifarunix-admin
node.session.auth.password = password
Save the file and exit.
Configure Authentication: Mutual
For mutual authentication, you need to use these configurations:
- Enable CHAP Authentication:
node.session.auth.authmethod = CHAP
- Configure the initiator credentials (as in one-way authentication):
node.session.auth.username = USERNAME
node.session.auth.password = password
- Configure the CHAP username and password for the target:
node.session.auth.username_in = target_username
node.session.auth.password_in = target_password
...
node.session.auth.authmethod = CHAP
# To configure which CHAP algorithms to enable, set
# node.session.auth.chap_algs to a comma separated list.
# The algorithms should be listed in order of decreasing
# preference — in particular, with the most preferred algorithm first.
# Valid values are MD5, SHA1, SHA256, and SHA3-256.
# The default is MD5.
#node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5
# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
node.session.auth.username = client-2-user
node.session.auth.password = client-2-pass
# To set a CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
node.session.auth.username_in = target-user
node.session.auth.password_in = target-pass
...
Save and exit the file.
Restart the iscsid and open-iscsi services;
sudo systemctl restart iscsid open-iscsi
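You can confirm the services are running;
systemctl status iscsid open-iscsi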
Perform iSCSI Target Discovery
You can discover available targets using the iscsiadm command. When iscsiadm is operating in discovery mode, three arguments are passed:
- sendtargets type — specifies how to find the targets.
- portal — tells iscsiadm the IP address and port to contact in order to perform the discovery. The default port is 3260.
- discover — tells the iscsid service to perform a discovery.
sudo iscsiadm -m discovery -t st -p [IP address of the iSCSI server]
So, to perform an iSCSI discovery, from the initiator run the command:
sudo iscsiadm -m discovery -t sendtargets -p 192.168.122.100
Sample output;
192.168.122.100:3260,1 iqn.2025-04.com.kifarunix-demo:target00
Login to the iSCSI Target
To log in to the discovered iSCSI target, run the following command:
sudo iscsiadm -m node -T [target IQN] -p [IP address of the iSCSI server] --login
Replace [target IQN]
with the IQN of the target and [IP address of the iSCSI server]
with the IP address of the iSCSI server.
sudo iscsiadm -m node -T iqn.2025-04.com.kifarunix-demo:target00 -p 192.168.122.100 --login
Logging in to [iface: default, target: iqn.2025-04.com.kifarunix-demo:target00, portal: 192.168.122.100,3260]
Login to [iface: default, target: iqn.2025-04.com.kifarunix-demo:target00, portal: 192.168.122.100,3260] successful.
Once the connection is established, both session and node details can be checked as follows.
iscsiadm -m session -o show
Output;
tcp: [7] 192.168.122.100:3260,1 iqn.2025-04.com.kifarunix-demo:target00 (non-flash)
To get more details:
iscsiadm --mode node -P 1
Target: iqn.2025-04.com.kifarunix-demo:target00
Portal: 192.168.122.100:3260,1
Iface Name: default
More verbose details:
iscsiadm --mode node -P 3
In case you want to log out of an active session;
iscsiadm -m session -u
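If you only want to log out of a specific target rather than all sessions, you can use node mode instead, for example;
sudo iscsiadm -m node -T iqn.2025-04.com.kifarunix-demo:target00 -p 192.168.122.100 --logout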
Mounting the iSCSI Devices
List the available iSCSI devices using the lsscsi command;
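If the lsscsi command is not available on the initiator, install it first;
sudo apt -y install lsscsi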
lsscsi
[6:0:0:0] disk LIO-ORG iscsi-lun-001 4.0 /dev/sda
[6:0:0:1] disk LIO-ORG iscsi_file01 4.0 /dev/sdb
Our iSCSI devices are presented as /dev/sda and /dev/sdb.
Create Filesystem on iSCSI Disk
As you can see, the block device and fileio targets shared are now available to the initiator as sda and sdb respectively, and can now be used as if they were locally attached disks.
To make these devices usable, we need to partition them, create filesystems on them and mount them.
To partition the devices, you can use any partitioning system you are comfortable with. In our case we used parted in a scripted format as shown below.
parted -s /dev/sda "mklabel msdos"
parted -s /dev/sda "mkpart primary 0% 100%"
Create an EXT4 filesystem on the new iSCSI disk.
mkfs.ext4 /dev/sda1
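You can partition and format the second device in the same way. Below is a quick sketch, assuming the file-backed LUN appeared as /dev/sdb on your initiator;
parted -s /dev/sdb "mklabel msdos"
parted -s /dev/sdb "mkpart primary 0% 100%"
mkfs.ext4 /dev/sdb1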
Mount iSCSI Disk on Client
Create a mount point, say under the /mnt/ directory.
mkdir /mnt/iscsi_disk
Mount the backstore;
mount -t ext4 /dev/sda1 /mnt/iscsi_disk/
df -hT -P /dev/sda1
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 4.9G 24K 4.6G 1% /mnt/iscsi_disk
To be able to mount it on boot, add this entry to /etc/fstab.
echo "/dev/sda1 /mnt/iscsi_disk ext4 _netdev 0 2" >> /etc/fstab
Big up! You have successfully configured an iSCSI target (server) and shared a block device to an iSCSI client.
That concludes our guide on installing and configuring an iSCSI storage server on Ubuntu.
More information on;
- man iscsiadm
- man iscsi-target