iSCSI
it is possible to tunnel SCSI through TCP/IP – allowing you to attach the storage resources of a storage-server directly, as if they were a local harddisk or partition.
The iscsi-initiator (client) can partition-and-format-over-network the iscsi-target (harddisks, or files acting as partitions on the server) as if it was a local harddisk or partition – and even combine those network-attached harddisks into raid-sets.
iSCSI Enterprise Target kernel module source – dkms version
iscsitarget package (server) currently (DATE: 2017-07-04 TIME: 12:04:16) only works on debian8 (jessie), not debian9 (stretch).
iSCSI Enterprise Target is for building an iSCSI storage system on Linux.
It is aimed at developing an iSCSI target satisfying enterprise requirements.
This package provides the source code for the iscsitarget kernel module.
The iscsitarget package is also required in order to make use of this module.
Kernel source or headers are required to compile this module.
This package contains the source to be built with dkms.
src: https://packages.debian.org/en/jessie/iscsitarget
which also needs: https://packages.debian.org/en/jessie/iscsitarget-dkms
there seems to be something wrong with the official debian repository – no stretch and no sid version of this package exists, only the jessie (debian8) versions… and those won’t compile/install on debian9.
setup and install iscsi-target (server)
uname -a; # tested with Linux debian8 3.16.0-4-686-pae #1 SMP Debian 3.16.43-2+deb8u2 (2017-06-26) i686 GNU/Linux

root@debian8:/home/user# apt-cache search iscsi|grep target
iscsitarget - iSCSI Enterprise Target userland tools
iscsitarget-dkms - iSCSI Enterprise Target kernel module source - dkms version
istgt - iSCSI userspace target daemon for Unix-like operating systems
tgt - Linux SCSI target user-space daemon and tools
tgt-dbg - Linux SCSI target user-space daemon and tools - debug symbols
tgt-glusterfs - Linux SCSI target user-space daemon and tools - GlusterFS support
tgt-rbd - Linux SCSI target user-space daemon and tools - RBD support

apt-get install iscsitarget; # will automatically install iscsitarget-dkms as well
it is about 150MByte of stuff – it should go through without an error….
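to double-check that the dkms build actually succeeded and the kernel module loads (a quick sanity check – assuming the module name iscsi_trgt, which is what the IET dkms package builds):

dkms status|grep iscsitarget; # should report the module as installed for the running kernel
modprobe iscsi_trgt; # load the IET kernel module manually
lsmod|grep iscsi_trgt; # verify it is loaded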
now config:
vim /etc/default/iscsitarget; # edit and set this to true
ISCSITARGET_ENABLE=true

mkdir /targets;
dd if=/dev/zero of=/targets/storage-lun0 count=0 obs=1 seek=10G; # prepare first target (sparse 10GByte file)
dd if=/dev/zero of=/targets/storage-lun1 count=0 obs=1 seek=10G; # prepare second target
dd if=/dev/zero of=/targets/storage-lun2 count=0 obs=1 seek=10G; # prepare third target

echo "
Target iqn.2017-07.tld.domain:lun0
    Lun 0 Path=/targets/storage-lun0,Type=fileio,ScsiId=lun0,ScsiSN=lun0
Target iqn.2017-07.tld.domain:lun1
    Lun 1 Path=/targets/storage-lun1,Type=fileio,ScsiId=lun1,ScsiSN=lun1
Target iqn.2017-07.tld.domain:lun2
    Lun 2 Path=/targets/storage-lun2,Type=fileio,ScsiId=lun2,ScsiSN=lun2
" >> /etc/iet/ietd.conf;
# lun0, lun1 and lun2 are just arbitrarily chosen identifiers

/etc/init.d/iscsitarget start; # start the iscsi target service
[ ok ] Starting iscsitarget (via systemctl): iscsitarget.service.

apt-get install open-iscsi; # for testing purposes also install the client on the server

# testing locally on the server
root@debian8:/targets# iscsiadm -m discovery -t st -p localhost
[::1]:3260,1 iqn.2017-07.tld.domain:lun2
[fe80::215:5dff:fe00:712]:3260,1 iqn.2017-07.tld.domain:lun2
[::1]:3260,1 iqn.2017-07.tld.domain:lun1
[fe80::215:5dff:fe00:712]:3260,1 iqn.2017-07.tld.domain:lun1
[::1]:3260,1 iqn.2017-07.tld.domain:lun0
[fe80::215:5dff:fe00:712]:3260,1 iqn.2017-07.tld.domain:lun0
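on the server side, IET exposes its runtime state under /proc/net/iet – handy for a quick check that the LUNs and sessions are really there:

cat /proc/net/iet/volume; # lists the configured LUNs and their backing files
cat /proc/net/iet/session; # lists currently connected initiators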
install iscsi-initiator (client)
apt-get install open-iscsi

vim /etc/iscsi/iscsid.conf; # change those lines
# To request that the iscsi initd scripts startup a session set to "automatic".
node.startup = automatic
#
# To manually startup the session set to "manual". The default is manual.
# node.startup = manual
ESC :wq (write and quit)

root@debian9:~# iscsiadm -m discovery -t st -p 172.20.0.5; # ip of the server
172.20.0.5:3260,1 iqn.2017-07.tld.domain:lun1
172.20.0.5:3260,1 iqn.2017-07.tld.domain:lun0

root@debian9:/# iscsiadm -m node --targetname "iqn.2017-07.tld.domain:lun0" --portal "172.20.0.5:3260" --login
Logging in to [iface: default, target: iqn.2017-07.tld.domain:lun0, portal: 172.20.0.5,3260] (multiple)
Login to [iface: default, target: iqn.2017-07.tld.domain:lun0, portal: 172.20.0.5,3260] successful.

root@debian9:~# iscsiadm -m node --targetname "iqn.2017-07.tld.domain:lun1" --portal "172.20.0.5:3260" --login
Logging in to [iface: default, target: iqn.2017-07.tld.domain:lun1, portal: 172.20.0.5,3260] (multiple)
Login to [iface: default, target: iqn.2017-07.tld.domain:lun1, portal: 172.20.0.5,3260] successful.

root@debian9:~# iscsiadm -m node --targetname "iqn.2017-07.tld.domain:lun2" --portal "172.20.0.5:3260" --login
Logging in to [iface: default, target: iqn.2017-07.tld.domain:lun2, portal: 172.20.0.5,3260] (multiple)
Login to [iface: default, target: iqn.2017-07.tld.domain:lun2, portal: 172.20.0.5,3260] successful.

lsblk; # you should now see three new 10G disks (sdb, sdc, sdd)
root@debian9:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0      2:0    1    4K  0 disk
sda      8:0    0  127G  0 disk
├─sda1   8:1    0  126G  0 part /
├─sda2   8:2    0    1K  0 part
└─sda5   8:5    0 1021M  0 part [SWAP]
sdb      8:16   0   10G  0 disk
sdc      8:32   0   10G  0 disk
sdd      8:48   0   10G  0 disk

# if things go wrong check out those log files
==> /var/log/kern.log <==
==> /var/log/syslog <==
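instead of (or in addition to) the global setting in iscsid.conf, auto-login can also be switched on per already-discovered target record – a sketch using standard iscsiadm options:

iscsiadm -m node --targetname "iqn.2017-07.tld.domain:lun0" --portal "172.20.0.5:3260" -o update -n node.startup -v automatic; # this single target will now login automatically on service start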
manpage of the tool: iscsiadm.man.txt
building raid1 with 3 (iscsi) disks
software-raid has almost no disadvantages anymore compared to hardware-raid – CPUs have become so fast that they can easily handle the extra load of parity calculation for RAID5 and RAID6.
RAID1 (mirroring) does not even need parity calculation.
advantage of software raid: you can combine any partition available to the system, no matter which controller it hangs on.
now on the client (debian9), where the three iscsi disks are attached…
root@debian9:~# apt-get install mdadm; # install the software raid tools
root@debian9:~# lsmod|grep md_mod
md_mod                135168  0
to automate the creation of partition tables and partitions (and make it less manual), let’s create a new script:
root@debian9:~# vim /scripts/create_partition_table.sh; # with that content

#!/bin/bash
# to create the partitions programmatically (rather than manually)
# we're going to simulate the manual input to fdisk
# The sed script strips off all the comments so that we can
# document what we're doing in-line with the actual commands
# Note that a blank line (commented as "default") will send an empty
# line terminated with a newline to take the fdisk default.
sed -e 's/\s*\([\+0-9a-zA-Z]*\).*/\1/' << EOF | fdisk $1
  o # clear the in memory partition table
  n # new partition
  p # primary partition
  1 # partition number 1
    # default - start at beginning of disk
    # default - end at maximum - use whole disk
  t # change the type of the partition
  fd # fd = LINUX RAID autodetect
  p # print the in-memory partition table
  w # write the partition table
  q # and we're done
EOF
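a shorter alternative to scripting fdisk – assuming sfdisk (part of util-linux) is installed – is a one-liner that creates a single whole-disk partition of type fd:

echo ',,fd' | sfdisk /dev/sdb; # start: default, size: rest of disk, type: fd (Linux RAID autodetect)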
what you can do now is:
chmod +x /scripts/create_partition_table.sh; # mark it runnable
/scripts/create_partition_table.sh /dev/sdb; # create new partition table and partition of type raid
/scripts/create_partition_table.sh /dev/sdc; # create new partition table and partition of type raid
/scripts/create_partition_table.sh /dev/sdd; # create new partition table and partition of type raid

root@debian9:~# lsblk; # 3 new partitions
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0      2:0    1    4K  0 disk
sda      8:0    0  127G  0 disk
├─sda1   8:1    0  126G  0 part /
├─sda2   8:2    0    1K  0 part
└─sda5   8:5    0 1021M  0 part [SWAP]
sdb      8:16   0   10G  0 disk
└─sdb1   8:17   0   10G  0 part
sdc      8:32   0   10G  0 disk
└─sdc1   8:33   0   10G  0 part
sdd      8:48   0   10G  0 disk
└─sdd1   8:49   0   10G  0 part

mdadm --create --verbose /dev/md0 --auto=yes --level=1 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: /dev/sdc1 appears to contain an ext2fs file system
       size=10484736K  mtime=Tue Jul  4 15:08:52 2017
mdadm: size set to 10476544K
Continue creating array?
Continue creating array? (y/n) y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

root@debian9:~# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
fd0       2:0    1    4K  0 disk
sda       8:0    0  127G  0 disk
├─sda1    8:1    0  126G  0 part  /
├─sda2    8:2    0    1K  0 part
└─sda5    8:5    0 1021M  0 part  [SWAP]
sdb       8:16   0   10G  0 disk
└─sdb1    8:17   0   10G  0 part
  └─md0   9:0    0   10G  0 raid1
sdc       8:32   0   10G  0 disk
└─sdc1    8:33   0   10G  0 part
  └─md0   9:0    0   10G  0 raid1
sdd       8:48   0   10G  0 disk
└─sdd1    8:49   0   10G  0 part
  └─md0   9:0    0   10G  0 raid1

mkfs.ext4 /dev/md0; # format the newly created raid1-partition
mkdir /mnt/md0;
mount /dev/md0 /mnt/md0;
cd /mnt/md0;

root@debian9:/mnt/md0# touch 1 2 3; # test out the raid
root@debian9:/mnt/md0# ll
total 24K
drwxr-xr-x 3 root root 4.0K Jul  4 16:40 .
drwxr-xr-x 4 root root 4.0K Jul  4 16:40 ..
-rw-r--r-- 1 root root    0 Jul  4 16:40 1
-rw-r--r-- 1 root root    0 Jul  4 16:40 2
-rw-r--r-- 1 root root    0 Jul  4 16:40 3
drwx------ 2 root root  16K Jul  4 16:39 lost+found
root@debian9:/mnt/md0# rm 1 2 3
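the initial resync of the mirror runs in the background – you can watch its progress like this:

watch -n1 cat /proc/mdstat; # live view of the resync progress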
what is going on in the logs? what logs to watch?
==> /var/log/messages <==
Jul  4 16:42:26 debian9 kernel: [ 8573.961863] md: md0: resync done.

==> /var/log/kern.log <==
Jul  4 16:42:26 debian9 kernel: [ 8573.961863] md: md0: resync done.

==> /var/log/syslog <==
Jul  4 16:42:26 debian9 kernel: [ 8573.961863] md: md0: resync done.
get status information about the raid:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdd1[2] sdc1[1] sdb1[0]
      10476544 blocks super 1.2 [3/3] [UUU]

mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jul  4 16:37:54 2017
     Raid Level : raid1
     Array Size : 10476544 (9.99 GiB 10.73 GB)
  Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Tue Jul  4 16:42:26 2017
          State : active
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           Name : debian9:0  (local to host debian9)
           UUID : a5eafc5b:bca7253d:75e11b58:a2eba753
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
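since this is RAID1, it is worth testing that the mirror actually survives a disk failure – a test run with standard mdadm commands (test data only! here taking sdb1 out and re-adding it):

mdadm /dev/md0 --fail /dev/sdb1; # mark one disk as failed
mdadm --detail /dev/md0; # array should now show up as degraded with 2 of 3 active devices
mdadm /dev/md0 --remove /dev/sdb1; # remove the failed disk from the array
mdadm /dev/md0 --add /dev/sdb1; # re-add it – triggers a resync (watch /proc/mdstat)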
save the raid config to file
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
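on debian the initramfs keeps its own copy of mdadm.conf – rebuilding it avoids the array coming up under a different name (e.g. md127) after reboot:

update-initramfs -u; # rebuild the initramfs so it picks up the new mdadm.conf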
correct fstab entry
once you have tested that manually mounting the device works:
mount /dev/md0 /mnt/md0
md0 can then be auto-mounted on boot via /etc/fstab.
you need the _netdev option in the fstab entry – without it the mount is attempted before the network (and thus the iscsi session) is up, and the boot hangs and drops into maintenance mode.
/dev/md0 /mnt/md0 ext4 defaults,auto,_netdev 0 0
(note: the filesystem type must be ext4 here – md0 was formatted with mkfs.ext4; with ext3 the mount fails, see the "couldn't mount as ext3" line in the boot log below)
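before rebooting, the fstab entry can be tested without a reboot:

umount /mnt/md0;
mount -a; # mounts everything listed in fstab – errors show up here instead of during boot
df -h|grep md0; # verify md0 is mounted again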
what is cool
the iscsi attached disks are automatically re-attached during boot – and the raid assembles on top of them:
[   19.370550] scsi host4: iSCSI Initiator over TCP/IP
[   19.374031] scsi host5: iSCSI Initiator over TCP/IP
[   19.377198] scsi 4:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
[   19.378042] scsi host6: iSCSI Initiator over TCP/IP
[   19.384072] scsi 5:0:0:2: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
[   19.385126] scsi 6:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
[   19.408591] sd 4:0:0:0: Attached scsi generic sg2 type 0
[   19.409944] sd 4:0:0:0: [sdb] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)
[   19.410124] sd 4:0:0:0: [sdb] Write Protect is off
[   19.410126] sd 4:0:0:0: [sdb] Mode Sense: 77 00 00 08
[   19.411183] sd 4:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[   19.412710] sd 5:0:0:2: Attached scsi generic sg3 type 0
[   19.414376] sd 5:0:0:2: [sdc] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)
[   19.414630] sd 5:0:0:2: [sdc] Write Protect is off
[   19.414631] sd 5:0:0:2: [sdc] Mode Sense: 77 00 00 08
[   19.414667] sdb: sdb1
[   19.415028] sd 5:0:0:2: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[   19.416608] sd 4:0:0:0: [sdb] Attached SCSI disk
[   19.417570] sdc: sdc1
[   19.419580] sd 5:0:0:2: [sdc] Attached SCSI disk
[   19.432517] sd 6:0:0:1: Attached scsi generic sg4 type 0
[   19.437049] sd 6:0:0:1: [sdd] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)
[   19.437374] sd 6:0:0:1: [sdd] Write Protect is off
[   19.437376] sd 6:0:0:1: [sdd] Mode Sense: 77 00 00 08
[   19.437779] sd 6:0:0:1: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[   19.439139] sdd: sdd1
[   19.440795] sd 6:0:0:1: [sdd] Attached SCSI disk
[   19.501576] md/raid1:md0: active with 3 out of 3 mirrors
[   19.501657] md0: detected capacity change from 0 to 10727981056
[   19.614915] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[   64.480218] hv_utils: KVP IC version 4.0
encountered errors
iscsiadm: default: 1 session requested, but 1 already present.

this simply means the session is already logged in. to start over cleanly:

iscsiadm -m node -u all; # logout of all targets
/etc/init.d/open-iscsi restart; # and restart the service
iscsiadm -m node -o delete; # delete all stored target records (note: this is not a logout)
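related: to inspect which iscsi sessions are currently open (standard iscsiadm session mode):

iscsiadm -m session; # list active sessions
iscsiadm -m session -P 3; # verbose – also shows which sdX device belongs to which target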