Slow iSCSI performance on ZFS Volumes (zvol)

TL;DR: Don’t use ZVOLs to back iSCSI LUNs; in my testing they were painfully slow. Just back the LUN with a plain file instead.

I’ve been reorganizing my lab a bit to consolidate some storage and wanted to experiment with iSCSI. I thought “wow, what a great use-case for ZFS ZVOLS…”.

If you recall, ZFS has the ability to create block devices called ZVOLs. When you do this, you get a new device presented on the machine under /dev/zvol/<poolname>/ that you can use as you would any other disk. As part of my consolidation effort, I decided to use one and present it over iSCSI to my workstation. To my surprise, the performance was dismal, maxing out at around 30MB/s when writing to it over iSCSI.
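
For example (the pool and volume names here are just illustrative), creating a small ZVOL immediately gives you a device node you can partition or format like any physical disk:

# zfs create -V 10G zroot/testvol
# ls -l /dev/zvol/zroot/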

Here are the steps I took to create the ZVOL and present it over iSCSI. Note, I’m using FreeBSD as my storage server.

# zfs create -V 500gb zroot/luns/backup
# cat > /etc/ctl.conf <<EOF
portal-group pg0 {
	discovery-auth-group no-authentication
	listen 0.0.0.0
}
target iqn.2020-01.life.shaner:target0 {
	auth-group no-authentication
	portal-group pg0
	lun 0 {
		path /dev/zvol/zroot/luns/backup
		size 500G
	}
}
EOF
# sysrc ctld_enable=YES
# service ctld start
# ctladm lunlist
(7:1:0/0): <FREEBSD CTLDISK 0001> Fixed Direct Access SPC-5 SCSI device
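
As an extra sanity check (this wasn’t part of my original notes), you can also confirm ctld is listening on the default iSCSI port, 3260:

# sockstat -4l | grep 3260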

With this in place, we can move to the (Linux) client machine (the initiator, in iSCSI parlance), connect to the iSCSI target, and then format it.

# iscsiadm --mode discovery -t sendtargets --portal 192.168.1.10
192.168.1.10:3260,-1 iqn.2020-01.life.shaner:target0

# iscsiadm --mode node --targetname  iqn.2020-01.life.shaner:target0 --portal 192.168.1.10 --login
Logging in to [iface: default, target: iqn.2020-01.life.shaner:target0, portal: 192.168.1.10,3260]
Login to [iface: default, target: iqn.2020-01.life.shaner:target0, portal: 192.168.1.10,3260] successful.

# dmesg |tail
[117514.525034] sd 9:0:0:0: Attached scsi generic sg5 type 0
[117514.525245] sd 9:0:0:0: Power-on or device reset occurred
[117514.527424] sd 9:0:0:0: [sdg] 1048576000 512-byte logical blocks: (537 GB/500 GiB)
[117514.527428] sd 9:0:0:0: [sdg] 131072-byte physical blocks
[117514.527706] sd 9:0:0:0: [sdg] Write Protect is off
[117514.527709] sd 9:0:0:0: [sdg] Mode Sense: 7f 00 10 08
[117514.528159] sd 9:0:0:0: [sdg] Write cache: enabled, read cache: enabled, supports DPO and FUA
[117514.528750] sd 9:0:0:0: [sdg] Optimal transfer size 8388608 bytes
[117514.675486] sd 9:0:0:0: [sdg] Attached SCSI disk

Let’s format it, then mount it. Note this drive will eventually be mounted by a Windows machine, so we’re formatting it with NTFS.

# mkfs.ntfs -Q /dev/sdg
# mount -t ntfs3 /dev/sdg /mnt
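
To put a number on write speed rather than eyeballing a file copy, a quick dd run against the mount works; the file name below is arbitrary, and conv=fsync makes dd flush before reporting so the figure reflects the iSCSI path rather than the client’s page cache:

# dd if=/dev/zero of=/mnt/ddtest bs=1M count=2048 conv=fsync status=progress
# rm /mnt/ddtest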

At this point I proceeded to copy data onto the drive, where write speeds maxed out at around 35MB/s. Abysmal. So, I decided to switch from the ZVOL to a plain file on disk and use that instead.

# zfs destroy zroot/luns/backup
# zfs create -o mountpoint=/luns/backup zroot/luns/backup
# cd /luns/backup
# truncate -s 500G disk.img
# sed -i '' 's|/dev/zvol/zroot/luns/backup|/luns/backup/disk.img|' /etc/ctl.conf
# service ctld restart
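
If the substitution worked, the lun stanza in /etc/ctl.conf now points at the image file rather than the zvol device:

# grep -A3 'lun 0' /etc/ctl.conf
	lun 0 {
		path /luns/backup/disk.img
		size 500G
	}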

After setting it up this way I was maxing out my 1Gb connection with write speeds of over 100MB/s, more than double what I saw with the ZVOL.

Lesson learned.

Using zxfer to backup ZFS pools

I was recently looking for an easy way to back up some FreeBSD jails I have running various services. With the jails residing on top of ZFS (using iocage), a quick Google search turned up the usual zfs ‘send’ and ‘receive’ mixed with miscellaneous pipes and redirection. Having written several backup scripts in the past, I knew they always end up feeling hackish and rushed (which they were). After thinking to myself, “surely someone has dealt with this problem before,” I finally came across zxfer.

I’m unsure who the original author was, and the project was apparently abandoned several years ago, around FreeBSD 8.2. Huge thanks to Allan Jude for maintaining the current port.

You can tell a lot of thought went into not just the program itself, but the supporting documentation as well. I’m typically not one to judge a book by its cover, but with documentation like this, I felt it was a safe bet. It doesn’t just throw command-line switches at you and set you on your way. Instead, nearly every option explains why and when you might use it.

Goal: Back up iocage jails to a remote server (also running ZFS).

Solution: Use iocage’s built-in snapshot management and zxfer to send those snapshots off-server and/or off-site.

Note, I’ll assume we’ve already got iocage set up and we’re running some jails. Also note, zxfer doesn’t perform any snapshotting itself. It’s up to you to set up a sensible snapshotting regimen.

On the jail host, take a snapshot of all running jails:

for j in $(iocage list | awk '/up/{print $4}'); do iocage snapshot ${j}; done 
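
To turn that one-off into a regimen, the same loop can go into the jail host’s /etc/crontab; the schedule below (nightly at 01:00) is only an example, and iocage is called by full path because cron’s default PATH may not include /usr/local/bin:

0  1  *  *  *  root  for j in $(/usr/local/bin/iocage list | awk '/up/{print $4}'); do /usr/local/bin/iocage snapshot ${j}; done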

Note, zxfer can be used in either a push or pull method, wherein the connection is initiated from the jail host or the backup server respectively. Here, I’ve decided to use the pull method.
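
For reference, the push form of the same transfer would run on the jail host and name the backup server with -T instead of -O. I haven’t tested that variant, so treat it as a sketch (the backup server address is a placeholder):

zxfer -dFkPv -g31 -T root@<backup-server> -R zroot/iocage/jails zroot/backups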

On the backup server:

zxfer -dFkPv -g31 -O root@172.16.0.7 -R zroot/iocage/jails zroot/backups

Assuming you’ve already set up SSH key authentication, from the backup server we’re recursively sending all dataset snapshots under zroot/iocage/jails on the jail host (172.16.0.7) to our local ZFS pool (zroot/backups), keeping the last 30 days of snapshots (on both servers).
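
To spot-check what actually landed, list the snapshots under the destination on the backup server; the dataset names will mirror the jail layout on the source:

zfs list -r -t snapshot zroot/backups | tail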

After the initial sync, any further runs of the above command will send just the difference between the last two snapshots of the given datasets!