TrueNAS Scale / Proxmox / iSCSI Primer

Introduction

Notes:

On 4 April 2024 this article was updated to change all references of TrueNAS Core to TrueNAS Scale.

Original Introduction

I’ve recently become a fan of TrueNAS Scale working in conjunction with Proxmox and iSCSI. Getting to this point has been a long and somewhat hard journey: about a year of toying around and figuring out what will work for virtual machines in our data centre. The journey started by evaluating Synology and QNAP. QNAP makes incredible products and so does Synology, but in the end I spent a lot of time on Synology because we have a local distributor that sells the equipment.

Eventually I gave up on Synology. I watched so many videos about it, and I made a number of enquiries, but none of my enquiries were ever completely satisfied. QNAP was also an option, but at some point I learnt that QNAP is somewhat proprietary (e.g. its 10Gbit/s interface), so I stopped liking it.

The breakthrough came when I found TrueNAS Scale and TrueNAS Core. I’m a huge fan of open source and I have 100s of servers running Ubuntu derivatives. I also adore Proxmox and my idea was to combine both for the best of both worlds.

Before you read too far I must point out that I have a very specific application – I want super fast virtual machines. I’m not a home user who wants to store my movies. Unfortunately, once you start watching videos you quickly learn that the most popular YouTube videos are aimed at this “I want big storage” crowd. My application is different – our business has a few data centres and I needed something very specific for our environment. In all honesty, if I had $10K or more, I would probably have bought something off the shelf. The problem is that even buying something off the shelf is hard, and if you spend that kind of money, you probably don’t want to make too many mistakes. It’s incredibly hard to find a specialist in this field, and of course you’ll learn a lot via YouTube, but doing it yourself is the ultimate learning school.

In the end I’m thankful I discovered TrueNAS because I could utilize my fairly advanced systems administration skills to get going.

One of the final hurdles I faced was whether I should use NFS or iSCSI. My first attempt at iSCSI was a dismal failure, but I got NFS working. What I found, however, is that NFS seemed a bit slow, and when I enquired about the difference some users pointed out that iSCSI is better. There you go, apparently it is:

GitHub reply: iSCSI versus NFS

Proxmox VE Storage Reference

Most of the rest of this article is a mashup of all the technical information I found on this journey. There is a lot of publicly available information on the internet, and the goal of this article is to document as much as possible about running your own TrueNAS Scale using Proxmox and iSCSI. I’m particularly thankful to the Proxmox Forum and @thegrandwazoo for the information provided via those avenues.

A big shoutout to NAS Compares as well. These guys are incredible and they have every conceivable YouTube video about NAS stuff when you start Googling. They also have a free / donation-based, user-friendly “Contact Us” line, and twice I sent them personal questions in my quest and twice I got great replies. It’s just that their information is overwhelming, and I think it’s mostly aimed at the mainstream. My specific use case seemed too narrow for their information. Also, their pages are very text heavy, for example this one:

https://nascompares.com/tag/synology-vs-qnap/

Or this video: https://www.youtube.com/watch?v=3cdzP8YvWt8

The information is useful, but it’s just too much, too quick. I guess “NAS” is a complex topic so that’s the way it will be with ‘getting advice’.

Why TrueNAS Scale and not TrueNAS Core?

I started off using TrueNAS Core because it seemed like the default choice. Eventually I discovered that Scale is also free and seems a lot more powerful:

> TrueNAS SCALE is the latest member of the TrueNAS family and provides Open Source HyperConverged Infrastructure (HCI) including Linux containers and VMs. TrueNAS SCALE includes the ability to cluster systems and provide scale-out storage with capacities of up to hundreds of Petabytes.

Reference

Well, since we’re running a data centre, having great scalability seemed wise. But the huge drawcard for me was that it’s Debian based. TrueNAS Core is built on FreeBSD, and although I’m all for “let’s have a gazillion distros”, life is short and there was a lot to learn. Working with an unfamiliar operating system on basic things like networking or firewalls is a pain if your actual goal is something entirely different.

Why ZFS?

This was another mystery question that eventually got solved by just pushing through. From what I can see, the combination of TrueNAS and ZFS gives you a very advanced ‘network disk operating environment’ with many great redundancy and performance features. To be honest, a lot of the file system information is totally overwhelming, but with time it became certain that ZFS is the way to go.

So without much further ado, here goes:

Install Proxmox

Install Proxmox as per usual. In my instance I had to revert to version 7.x because I needed cluster integration.

7.x installer:

https://www.proxmox.com/en/downloads/proxmox-virtual-environment/iso/proxmox-ve-7-4-iso-installer

8.x installer:

https://www.proxmox.com/en/downloads/proxmox-virtual-environment/iso/proxmox-ve-8-0-iso-installer

Install TrueNAS Scale

One caveat I ran into is that you can install TrueNAS Scale on dedicated hardware, or you can actually install it as a virtual machine. The second option seemed quite interesting to me, but I couldn’t exactly grasp why. In general I believe virtualization is always better in spite of the small overhead of the hypervisor. I would go as far as using virtualization even when I’m going to be installing just one operating system. Anyhow, this tutorial takes the virtual machine route and is rather focussed on getting the iSCSI bits working.

https://www.truenas.com/download-truenas-scale/

Create a VM. I used a 10GB disk with all the defaults, 4 processors, and 32 GiB RAM.
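
For reference, roughly the same VM can be created from the Proxmox shell. This is a sketch only; the VM ID 124 (reused in the passthrough commands below), the storage names and the ISO filename are assumptions you’ll need to adjust for your environment:

qm create 124 --name truenas --memory 32768 --cores 4 \
  --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:10 --ostype l26
qm set 124 --ide2 local:iso/TrueNAS-SCALE.iso,media=cdrom --boot order='ide2;scsi0'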

Passthrough

lsblk -o +MODEL,SERIAL,WWN
sdb 8:16 0 7.3T 0 disk HGST_ VYJTWE 0x123
sdc 8:32 0 7.3T 0 disk HGST_ VYJTVD 0x456
sdd 8:48 0 7.3T 0 disk HGST_ VYJU0H 0x789

Then

# ls -l /dev/disk/by-id
total 0
lrwxrwxrwx 1 root root 9 Apr 3 21:39 ata-HGST_HUS728T8TALE6L4_X -> ../../sdc
lrwxrwxrwx 1 root root 9 Apr 3 21:39 ata-HGST_HUS728T8TALE6L4_Y -> ../../sdb
lrwxrwxrwx 1 root root 9 Apr 3 21:39 ata-HGST_HUS728T8TALE6L4_Z -> ../../sdd

In the command below, if you’ve already created the VM, you’ll have scsi1 onwards free to use:

I’ve re-ordered mine so that B, C, and D are sequential:

qm set 124 -scsi1 /dev/disk/by-id/ata-HGST_HUS728T8TALE6L4_Y
qm set 124 -scsi2 /dev/disk/by-id/ata-HGST_HUS728T8TALE6L4_X
qm set 124 -scsi3 /dev/disk/by-id/ata-HGST_HUS728T8TALE6L4_Z
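
A quick way to confirm the disks actually ended up attached to the VM:

qm config 124 | grep scsi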

According to WunderTech you also have to do this:

One important option is SSD Emulation. If it’s important for the VM to think the disk size is what is specified (rather than the full disk of the storage selected), enable SSD Emulation! You can also disable backups in this section (if you’re backing up your VMs to a NAS or other server).

To be honest, I believe this would require more research for our environment, but since I’ve heard about SSD Emulation before, I guess this will be fine.

For the Proxmox docs on this, see here (no mention of SSD Emulation):

https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)

From their forum though:

“SSD Emulation” only tells the guest OS to treat the disk as a non-spinning disk (afaik it just sets the rotation rate to 0). It really shouldn’t matter for most situations.

“Discard” on the other hand has a noticeable effect, read our documentation for more on that.
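
If you do decide you want those flags on the passed-through disks, they can be appended to the same qm set commands used earlier. This is a sketch, reusing VM 124 and the first disk from above; whether discard and SSD emulation make sense depends on your drives and setup:

qm set 124 -scsi1 /dev/disk/by-id/ata-HGST_HUS728T8TALE6L4_Y,discard=on,ssd=1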

EFI Boot

Stick with the default on Proxmox to allow EFI boot.

IP Configuration

After TrueNAS Scale is installed, there is a very cumbersome console menu for setting the IP address (called an alias) and the default gateway.

Also set the name servers.

ZFS

After TrueNAS Scale has been installed, we configure ZFS. Steps:

Storage => No Pools => Create Pool

Name: Pool

Type: RAIDZ1 (3 disks)

All defaults for the rest.

If you also have NVMe, you can use that as Cache VDEVs.

When you’re done, your Storage Dashboard will look similar to this:
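
If you prefer to double-check the result from the TrueNAS shell, the standard ZFS tools show the same thing (Pool being the pool name chosen above):

zpool status Pool
zfs list -r Pool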

Configure Proxmox to work with iSCSI

Kevin Scott Adams is your friend here. You can roll your own, or you can simply use the Perl helper Kevin has created. The fact is that Proxmox supports many different storage types, and although TrueNAS seems popular, there is no specific documentation on how to get iSCSI working with it. The forums are another story: many people have wanted the same thing, and a lot of the information in this blog post comes from there. I am incredibly thankful to the community for making this possible.

If you want to follow the Proxmox documentation route, here is where you would start:

Legacy documentation:
https://pve.proxmox.com/wiki/Legacy:_ZFS_over_iSCSI
points to:
https://pve.proxmox.com/wiki/Storage:_ZFS_over_ISCSI

In all honesty, in my opinion it would be good if Proxmox could incorporate Kevin’s work, or at least publish an official article with attribution to it. In the meanwhile, let’s get started:

On the Proxmox server

Install Kevin’s plugin:

https://github.com/TheGrandWazoo/freenas-proxmox

Set keyring location and load GPG key:

keyring_location=/usr/share/keyrings/ksatechnologies-truenas-proxmox-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/ksatechnologies/truenas-proxmox/gpg.284C106104A8CE6D.key' |  gpg --dearmor >> ${keyring_location}

Create repo:

cat << EOF > /etc/apt/sources.list.d/ksatechnologies-repo.list
# Source: KSATechnologies
# Site: https://cloudsmith.io
# Repository: KSATechnologies / truenas-proxmox
# Description: TrueNAS plugin for Proxmox VE - Production
deb [signed-by=${keyring_location}] https://dl.cloudsmith.io/public/ksatechnologies/truenas-proxmox/deb/debian any-version main

EOF
apt update
apt install freenas-proxmox
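
After the install it doesn’t hurt to confirm the package is in place and restart the Proxmox services so the patched files are picked up (a general Proxmox habit rather than something from the plugin’s documentation):

dpkg -l freenas-proxmox
systemctl restart pvedaemon pveproxy pvestatd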

Kevin’s package installs a number of JavaScript and Perl routines that give you a sexy new “FreeNAS-API” dropdown option when you select ZFS over iSCSI in the Proxmox user interface:

Get SSH login working to TrueNAS

The next step in getting iSCSI working is to allow the Proxmox server to communicate seamlessly with the TrueNAS server using SSH. SSH is used because the “…plugin uses TrueNAS APIs but still uses SSH keys due to the underlying Proxmox VE Perl modules that use the iscsiadm command.” In other words, Proxmox is just a UI on top of a lot of other smart things, and to get Kevin’s plugin working with TrueNAS some information exchange over SSH has to take place.

On the Hypervisor:

mkdir /etc/pve/priv/zfs

This directory might already exist if you’re running a Proxmox cluster and you’ve already installed the iSCSI plugin on another host.

In the code sample below, replace 192.168.1.22 with the IP address of your TrueNAS server.

ssh-keygen -f /etc/pve/priv/zfs/192.168.1.22_id_rsa
ssh-copy-id -i /etc/pve/priv/zfs/192.168.1.22_id_rsa.pub root@192.168.1.22
ssh -i /etc/pve/priv/zfs/192.168.1.22_id_rsa root@192.168.1.22

Note: When copying the ID, you may get the following error if SSH isn’t yet enabled on the TrueNAS side:

/usr/bin/ssh-copy-id: ERROR: ssh: connect to host a.b.c.d port 22: Connection refused

If so, go to System Settings => Services => SSH and make sure it’s running and set to start automatically.

Note: Even after starting SSH, you might run into this problem next:

root@192.168.1.22: Permission denied (publickey).

In this case, copy your hypervisor’s public key to the destination NAS using the System Settings => Shell menu.
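
In practice that means printing the public key on the Proxmox host and adding it to root’s authorized keys on TrueNAS, something like this (paths assumed; pasting the key into the root user’s SSH Public Key field under Credentials should achieve the same):

# On the Proxmox host: print the public key so it can be copied
cat /etc/pve/priv/zfs/192.168.1.22_id_rsa.pub
# In the TrueNAS Shell: append the copied key (placeholder shown) to root's authorized keys
echo 'ssh-rsa AAAA...copied-key... root@proxmox' >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys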

The last command is just for testing.

Configure Proxmox User Interface

In this section we’ll move over to configuration of the Proxmox VE user interface.

Here is a diagram to get you started:

Datacenter => Storage => Add => ZFS over iSCSI -> Choose FreeNAS-API

Here is a breakdown of the rest of the values that you must enter. Some super basic defaults are shown in the diagram, but Target is of note because some of the text is cut off. Let me tell you, entering the Proxmox values is easy – getting them to match up with what’s going on in TrueNAS is hard! It took quite a bit of trial and error to get this going.

ID => The name you’ll use to refer to this storage. Internal to Proxmox.
Portal => The IP address of the TrueNAS server.
Pool => The name of the pool as you’ve defined it in TrueNAS. We’ll include some screenshots for this.
Target => iqn.2005-10.org.freenas.ctl:target
API use SSL => Do not select. For some reason most tutorials suggest this, and since this is complex I would stick to not ticking it until you have the entire system working.
Thin provision => Tick. I have no idea why anyone wouldn’t want thin provisioning 🙂
API IPv4 Host => The IP address of the TrueNAS server. Same as Portal.
API Password => This should really be called SSH password, because that’s exactly what you must fill in here.
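
Whatever you enter here ends up in /etc/pve/storage.cfg on the Proxmox host, so inspecting that file (plus pvesm status) is a quick way to double-check what Proxmox actually stored:

cat /etc/pve/storage.cfg    # the ZFS over iSCSI entry should show the pool, portal and target you entered
pvesm status                # the new storage should report as active once everything is wired up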

The unfortunate thing is that you can get very far filling in these values and Proxmox will accept them. It’s only once you start using the disk that you discover that some of the settings might be wrong. Usually the place to look then is the TrueNAS host. One quick way to see if you’re up and running is to run this command with your TrueNAS IP address:

iscsiadm --mode discovery --op update --type sendtargets --portal 192.168.1.22



Warning: You’ll get “No portals found” even though Proxmox and TrueNAS are talking to each other; the portal still needs to be set up on the TrueNAS side (covered below).

After following this whole procedure, you might still get this problem:

Warning: volblocksize (4096) is less than the default minimum block size (16384).
To reduce wasted space a volblocksize of 16384 is recommended.
TASK ERROR: unable to create VM 137 - Unable to connect to the FreeNAS API service at 'a.b.c.d' using the 'http' protocol at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 380.

That means your iSCSI is not set up to allow this host, and you have to go through setting up a portal (see below).

Also check that password authentication is allowed!

For starters, under Shares => Block (iSCSI) Shares Targets, the service will show as stopped. Start it.

Also this is NB:

System Settings => Services => iSCSI must be running and set to start automatically!
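
If nc is installed, a quick way to confirm from the Proxmox host that the iSCSI service is actually listening (192.168.1.22 being the TrueNAS IP used earlier):

nc -zv 192.168.1.22 3260    # iSCSI listens on TCP 3260; "succeeded" or "open" means the service is reachable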

Configuring the TrueNAS Host for iSCSI

This was hard. If you’ve never used iSCSI, it’s difficult to figure out what must go where. There are all kinds of checkboxes and ticks, and if you miss a single one you’re out of business. Luckily a Proxmox forum community member made good documentation with screenshots. I love the community forum, but editing there is hard, so I’ve attempted to redo it here. From time to time I’ll come back and edit this as new information comes to light.

Many of the screenshots presented here seem obvious, and they are.

SSH

System Settings => Services => SSH: make sure it’s running and set to start automatically.

Please note! Password must be enabled in Credentials!
Edit the SSH service and enable:

  • Log in as Root with Password
  • Allow Password Authentication

iSCSI

System Settings => Services => iSCSI: make sure it’s running and set to start automatically.
Next, click Edit.

Target Global Configuration

Base Name

Leave this at default:

Base Name => iqn.2005-10.org.freenas.ctl

Portals

Description

I called mine ‘portal’.

I didn’t select a discovery authentication method, nor a discovery authentication group.

It’s important to add the IP address of the TrueNAS server below:

Initiators Groups

Allow all initiators

Authorized Access

Leave as “No Authorized Access”, so don’t change anything.

Targets

In Targets, I called my target simply target. Together with the base name it forms the IQN used when configuring the Proxmox UI (iqn.2005-10.org.freenas.ctl:target).

Specify the local network /24

Choose the Portal group ID 1

Authentication Method None

Extents

The Extents seemed to populate themselves on their own. On my working system I see this:

Associated Targets

This also populated itself and this is what I see:

Datasets

pool

The assumption is that you have already created a pool. The most important step here, under Datasets, is to create a Dataset under the pool.
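
For what it’s worth, the same thing can be done from the TrueNAS shell instead of the UI, though the UI route described above is what the rest of this guide assumes (vm-storage is just an example name):

zfs create Pool/vm-storage
zfs list -r Pool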

Connecting from Debian/Ubuntu

We’ve documented some steps for when you want to create an EXT4 filesystem on an iSCSI disk backed by ZFS.

The steps are:

  1. Install the open-iscsi package
  2. Run discovery to identify your portal
  3. Log in to your portal
  4. Use lsblk to identify the drives made available by the portal
  5. Use fdisk to create a partition
  6. Create an EXT4 filesystem on the partition with mkfs.ext4
  7. Make a mount point
  8. Mount the drive

Here is a transcript:

# apt install open-iscsi

# iscsiadm --mode discovery --type sendtargets --portal 192.168.1.25
192.168.1.25:3260,1 iqn.2005-10.org.freenas.ctl:target

# iscsiadm --mode node --targetname iqn.2005-10.org.freenas.ctl:target --portal 192.168.1.25 --login
Logging in to [iface: default, target: iqn.2005-10.org.freenas.ctl:target, portal: 192.168.1.25,3260]
Login to [iface: default, target: iqn.2005-10.org.freenas.ctl:target, portal: 192.168.1.25,3260] successful.

If you’re already logged in to the portal, you’ll get this. It’s possible to simply change login to logout, but don’t do this while you are already using the drives!

# iscsiadm --mode node --targetname iqn.2005-10.org.freenas.ctl:target --portal 172.168.1.25 --login
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: Could not log into all portals
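
Identifying the new disk (step 4 in the list above) is just lsblk. The LUN shows up as an extra disk with no partitions; in the commands below it turned out to be /dev/sdd:

lsblk
# the iSCSI LUN appears as a new disk (here sdd), sized to match the extent shared from TrueNAS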

Create a partition, make an EXT4 filesystem, and mount the disk:

fdisk /dev/sdd
  (press n to create a new partition, accept the defaults, then w to write and exit)
mkfs.ext4 /dev/sdd1
mkdir /mnt/iscsi-disk
mount /dev/sdd1 /mnt/iscsi-disk
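
None of this persists across a reboot by default. If you want the login and the mount to come back automatically, something along these lines should work (the UUID is a placeholder you can read off blkid, and _netdev makes the mount wait for the network):

# Mark the node record so open-iscsi logs in automatically at boot
iscsiadm --mode node --targetname iqn.2005-10.org.freenas.ctl:target \
  --portal 192.168.1.25 --op update -n node.startup -v automatic
# Find the UUID of the new filesystem, then add an fstab entry for it
blkid /dev/sdd1
echo 'UUID=<uuid-from-blkid> /mnt/iscsi-disk ext4 _netdev 0 0' >> /etc/fstab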

You can print information about your current iSCSI session once you’ve completed this procedure. Note: on a Proxmox VE ZFS over iSCSI setup, the session won’t print anything; instead you’ll see iscsiadm: No active sessions.

# iscsiadm --mode session --print=1
Target: iqn.2005-10.org.freenas.ctl:target (non-flash)
Current Portal: 172.168.1.25:3260,1
Persistent Portal: 172.168.1.25:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.2004-10.com.ubuntu:01:649f58d5b124
Iface IPaddress: 172.168.1.26
Iface HWaddress: default
Iface Netdev: default
SID: 3
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE

iscsid.conf

For authentication information, see this file:

> /etc/iscsi/iscsid.conf

The relevant section in this file looks like this (it only matters if you enabled CHAP on the target; with Authentication Method set to None, as configured above, these lines can stay commented out):

node.session.auth.authmethod = CHAP
node.session.auth.username = root
node.session.auth.password = secret

For the official documentation on adding an iSCSI share, go here:

https://www.truenas.com/docs/core/coretutorials/sharing/iscsi/addingiscsishare/

Troubleshooting

Troubleshooting this setup can be complex. One can make a mistake anywhere along the way and be pretty screwed. Through lots of trial and error I eventually discovered that the iscsiadm command is your friend, but only once things are working.

After Node Restart, no iSCSI

qm start 147
WARN: iothread is only valid with virtio disk or virtio-scsi-single controller, ignoring
Unable to connect to the FreeNAS API service at '192.168.1.21' using the 'http' protocol at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 380.

Solution: run discovery again and retry:

Warning: You may get “No portals found” even though Proxmox and TrueNAS are talking to each other.

iscsiadm --mode discovery --op update --type sendtargets --portal 172.168.1.21

SSH

Use this command to see if your SSH setup is working:

ssh -i /etc/pve/priv/zfs/192.168.1.22_id_rsa root@192.168.1.22

Don’t continue until SSH is working.

Using iscsiadm discovery

This command can be run on the Proxmox host, but only _after_ iSCSI has properly connected:

Working

# iscsiadm --mode discovery --op update --type sendtargets --portal 192.168.1.22
192.168.1.22:3260,1 iqn.2005-10.org.freenas.ctl:target

Not working

# iscsiadm --mode discovery --op update --type sendtargets --portal 192.168.1.22
iscsiadm: No portals found

What stands out about the working reply above: iSCSI’s TCP port is 3260, an ID of 1 is returned, the base name is iqn.2005-10.org.freenas.ctl, and finally there is a colon separating the base name from the name of the target. The target in this case has simply been called target.

If you click around in the user interface and the new disk just hangs, you might also see this output:

/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/portal_id_rsa root@portal zfs list -o name,volsize,origin,type,refquota -t volume,filesystem -d1 -Hp pool

Using iscsiadm loginall

During troubleshooting I was advised to use this command:

# /sbin/iscsiadm -m node --loginall=automatic
iscsiadm: No records found

I never got that working in spite of having a working system. If anyone can comment what on earth this command is supposed to do I would be really appreciative.

Block Size Warning During Moves

You may see this warning:

create full clone of drive scsi0 (local-lvm:vm-101-disk-0)
Warning: volblocksize (4096) is less than the default minimum block size (8192).
To reduce wasted space a volblocksize of 8192 is recommended.
<snip>

To get some insight into this error, and also make your head hurt, go here:

https://github.com/openzfs/zfs/issues/14771
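
One way to quiet the warning (a sketch, not something from the article above; replace the storage ID with whatever you called yours, and note that it only affects newly created disks) is to raise the block size on the Proxmox storage definition so new zvols are created with a matching volblocksize:

pvesm set <your-storage-id> --blocksize 16k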
