The Raspberry Pi 4 is a super neat little device with a whole bunch of uses, and if there isn’t one for what you’re looking for, you can make one! With each newer generation of the Raspberry Pi, the hardware gets better and faster, and the capabilities greatly improve.
With the newer, more powerful Raspberry Pi 4, I decided it was time to try turning it into an iSCSI SAN! Yes, you heard that right!
With its powerful quad-core processor, mighty 4GB of RAM, and USB 3.0 ports, there’s no reason this device couldn’t act as a SAN (in the literal sense). You could even use mdadm and configure it as a SAN that performs RAID across multiple drives.
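As a quick sketch of that RAID idea (assuming two USB drives show up as /dev/sda and /dev/sdb; your device names may differ), a mirrored array could be built with mdadm like this:

```shell
# Create a RAID 1 (mirror) array from two USB drives.
# /dev/sda and /dev/sdb are examples -- check lsblk for your actual devices.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Persist the array definition so it assembles on boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

The resulting /dev/md0 device could then be used as the backing block device in place of the single drive used later in this guide.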
In this article, I’m going to explain what, why, and how to (with full instructions) configure your Raspberry Pi 4 as an iSCSI SAN, an iSCSI Target.
Please Note: these instructions also apply to standard Linux PCs and servers, but I’m putting emphasis on the fact that you can do this on SBCs like the Raspberry Pi.
Over the years on the blog, I’ve written numerous posts pertaining to virtualization, iSCSI, storage, and other topics because of my work in IT. On the side, as a hobby, I’ve also done a lot of work with SBCs (Single-Board Computers) and storage.
Some of the most popular posts, while extremely old, are:
You’ll notice I put a lot of effort specifically into “Lio-Target”…
When deploying or using Virtualization workloads and using shared iSCSI storage, the iSCSI Target must support something called SPC-3/SPC-4 Reservations.
SPC-3 and SPC-4 reservations allow a host to set a “SCSI reservation” and reserve the blocks on the storage it’s working with. By reserving storage blocks, numerous hosts can safely share the same storage. Ultimately, this is what allows you to have multiple hosts accessing the same volume. Please keep in mind that both the iSCSI target and the filesystem must support multiple hosts (i.e., a clustered filesystem).
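If you’re curious whether a target actually honors these reservations, the sg3-utils package provides the sg_persist tool for querying SPC-3 persistent reservations on a connected SCSI/iSCSI device (the /dev/sdb path below is an example; substitute your own device):

```shell
# Install sg3-utils (Debian/Ubuntu package name).
apt install sg3-utils

# Query the registered reservation keys on a device.
sg_persist --in --read-keys /dev/sdb

# Query the active reservation itself (type and holder).
sg_persist --in --read-reservation /dev/sdb
```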
Originally, most of the open-source iSCSI targets, including the one built into the Linux kernel, did not support SCSI reservations. This resulted in volume and disk corruption when someone deployed a target and connected multiple hosts.
Lio-Target specifically supported these reservations, which is why it had my focus. Deploying a Lio-Target iSCSI target fully worked with VMware vSphere and VMware ESXi.
Ultimately, on January 15th, 2011, the iSCSI target in Linux kernel 2.6.38 was replaced with Lio-Target. All newer Linux kernels use Lio-Target as their iSCSI target.
An iSCSI target is a server that presents LUNs which you connect to with an iSCSI initiator.
The target is the server, and the initiator is the client. Once connected to a target, you can directly access volumes and LUNs using iSCSI (SCSI over TCP/IP).
iSCSI is mostly used as shared storage for virtual environments like VMware vSphere (and VMware ESXi), as well as Hyper-V and other hypervisors.
It can also be used for containers, file storage, remote access to drives, etc…
Some users are turning their Raspberry Pis into NAS devices, so why not turn one into a SAN?
With the powerful processor, 4GB of RAM, and USB 3.0 ports (for external storage), this is a perfect platform to act as a testbed or homelab for shared storage.
For virtual environments, if you wanted to learn about shared storage you could deploy the Raspberry Pi iSCSI target and connect to it with one or more ESXi hosts.
Or you could use this to remotely connect to a disk at the raw block level, although I’d highly recommend doing this over a VPN.
As mentioned above, you normally connect to an iSCSI Target and volume or LUN using an iSCSI initiator.
Using VMware ESXi, you’d most likely use the “iSCSI Software Adapter” under storage adapters. To use it, you must first enable and configure it under Host -> Configure -> Storage Adapters.
Using Windows 10, you could use the iSCSI Initiator app. To use it, simply search for “iSCSI Initiator” in your search bar, or open it from “Administrative Tools” in the “Control Panel”.
There is also a Linux iSCSI initiator that you can use if you want to connect from a Linux host.
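On Linux, the open-iscsi package provides the iscsiadm tool. A typical discovery-and-login sequence looks something like this (192.168.1.50 is a placeholder for your target’s IP, and the IQN shown is an example; use the values from your own setup):

```shell
# Install the initiator tools (Debian/Ubuntu).
apt install open-iscsi

# Discover targets advertised by the portal.
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to a discovered target (substitute the IQN reported by discovery).
iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.ubuntu.aarch64:sn.example -p 192.168.1.50 --login
```

Once logged in, the LUN appears as a local block device (check dmesg or lsblk).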
To get started using this guide, you’ll need the following:
In this guide, we’re assuming that you have already installed and configured Linux on the Raspberry Pi (set up accounts and configured networking).
The Ubuntu Server image for Raspberry Pi comes ready to go out of the box, as the kernel includes pre-built modules for the iSCSI target. This is the easier way to set it up.
These instructions can also apply to Raspbian Linux for Raspberry Pi; however, Raspbian doesn’t include the pre-built kernel modules for the iSCSI target, and there are minor naming differences in the packages. This path is more complex and requires additional steps (including building a custom kernel).
If you’re running Raspbian, you need to compile a custom kernel and build the iSCSI Target Core Modules. Please follow my instructions (click here) to compile a custom kernel on Raspbian/Raspberry Pi. When following my custom kernel build guide, additionally enable the following after running “make menuconfig”:
<M> Generic Target Core Mod (TCM) and ConfigFS Infrastructure
--- Generic Target Core Mod (TCM) and ConfigFS Infrastructure
<M> TCM/IBLOCK Subsystem Plugin for Linux/BLOCK
<M> TCM/FILEIO Subsystem Plugin for Linux/VFS
<M> TCM/pSCSI Subsystem Plugin for Linux/SCSI
<M> TCM/USER Subsystem Plugin for Linux
<M> TCM Virtual SAS target and Linux/SCSI LDD Fabric loopback module
<M> Linux-iSCSI.org iSCSI Target Mode Stack
If you’re running Ubuntu Server, the Linux kernel was already built with these modules, so the steps above are not needed.
We’re going to assume that the USB drive or USB stick you’ve installed is available on the system as “/dev/sda” for the purposes of this guide. Also, please note that when running the create commands below, the system will generate its own unique identifiers, different from mine, so please adjust your commands accordingly.
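To confirm which device name your USB drive actually received before assuming /dev/sda, you can list the block devices first:

```shell
# List block devices with size, model, and transport (USB drives show "usb"
# in the TRAN column) to identify the right device.
lsblk -o NAME,SIZE,MODEL,TRAN
```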
Let’s start configuring the Raspberry Pi iSCSI Target!
As root (or using sudo), run the following command if you’re running Ubuntu:
apt install targetcli-fb
As root (or using sudo), run the following command if you’re running Raspbian:
apt install targetcli
targetcli
cd iscsi/
create
cd /backstores/block
create block0 /dev/sda
cd /iscsi/iqn.2003-01.org.linux-iscsi.ubuntu.aarch64:sn.eadcca96319d/tpg1/acls
create iqn.1991-05.com.microsoft:your.iscsi.initiator.iqn.com
cd /iscsi/iqn.2003-01.org.linux-iscsi.ubuntu.aarch64:sn.eadcca96319d/tpg1/luns
create /backstores/block/block0
cd /
ls
saveconfig
exit
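For reference, the interactive session above can also be run non-interactively, since targetcli accepts commands as arguments. In this sketch, the target IQN and initiator IQN are examples: use the IQN that targetcli auto-generates on your system and your initiator’s real IQN.

```shell
# Create a block backstore backed by the USB drive.
targetcli /backstores/block create block0 /dev/sda

# Create an iSCSI target (this prints an auto-generated target IQN).
targetcli /iscsi create

# Substitute the IQN printed by the previous command.
TARGET=iqn.2003-01.org.linux-iscsi.ubuntu.aarch64:sn.eadcca96319d

# Allow your initiator's IQN, and export the backstore as a LUN.
targetcli /iscsi/$TARGET/tpg1/acls create iqn.1991-05.com.microsoft:your.iscsi.initiator.iqn.com
targetcli /iscsi/$TARGET/tpg1/luns create /backstores/block/block0

# Persist the configuration across reboots.
targetcli saveconfig
```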
That’s it! You can now connect to the iSCSI target via an iSCSI initiator on another machine.
For a quick example of how to connect, please see below.
To connect to the new iSCSI Target on your Raspberry Pi, open up the configuration for your iSCSI Software Initiator on ESXi, go to the targets tab, and add a new iSCSI Target Server to your Dynamic Discovery list.
Once you do this, rescan your HBAs and the disk will now be available to your ESXi instance.
To connect to the new iSCSI Target on Windows, open the iSCSI Initiator app, go to the “Discovery” tab, and click on the “Discover Portal” button.
In the new window, add the IP address of the iSCSI Target (your Raspberry Pi), and hit ok, then apply.
Now on the “Targets” tab, you’ll see an entry for the discovered target. Select it, and hit “Connect”.
You’re now connected! The disk will show up in “Disk Management” and you can now format it and use it!
Here’s what an active connection looks like.
That’s all folks!
There you have it, you now have a beautiful little Raspberry Pi 4 acting as a SAN and iSCSI Target providing LUNs and volumes to your network!
Leave a comment and let me know how you made out or if you have any questions!
View Comments
I have completed the install as detailed above and am able to connect from a VMware host as the initiator. It connects and is online, but it does not see any devices. Is there a secret to setting up the disks? I fdisk'd 2 disks, then pvcreate, vgcreate, and lvcreate. I did not make any filesystem since it should be a block device. Both external drives are spindle drives. Is the raspberry
As long as you pointed at the proper devices while following the instructions, it should work.
If you want, copy/paste your config and I'll take a quick peek.
Also, keep in mind that you should only be using this for testing/learning with VMware, as the Pi probably won't have the power or performance to be a datastore for VMware.
Cheers,
Stephen
/> ls
o- / ......................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- fileio ................................................................................................... [0 Storage Object]
| o- iblock .................................................................................................. [2 Storage Objects]
| | o- disk1 ................................................................................... [/dev/mapper/vg01-lvol01, in use]
| | o- disk2 ................................................................................... [/dev/mapper/vg02-lvol01, in use]
| o- pscsi .................................................................................................... [0 Storage Object]
| o- rd_mcp ................................................................................................... [0 Storage Object]
o- iscsi .............................................................................................................. [1 Target]
| o- iqn.2020-05.net.silverwolf78621:raspberrypi ......................................................................... [1 TPG]
| o- tpg1 ............................................................................................................ [enabled]
| o- acls ........................................................................................................... [2 ACLs]
| | o- iqn.1998-01.com.vmware:elgin6-18a8ced0 ................................................................ [2 Mapped LUNs]
| | | o- mapped_lun0 ............................................................................................. [lun0 (rw)]
| | | o- mapped_lun1 ............................................................................................. [lun1 (rw)]
| | o- iqn.1998-01.com.vmware:esxi6-08eed7d1 ................................................................. [2 Mapped LUNs]
| | o- mapped_lun0 ............................................................................................. [lun0 (rw)]
| | o- mapped_lun1 ............................................................................................. [lun1 (rw)]
| o- luns ........................................................................................................... [2 LUNs]
| | o- lun0 ......................................................................... [iblock/disk1 (/dev/mapper/vg01-lvol01)]
| | o- lun1 ......................................................................... [iblock/disk2 (/dev/mapper/vg02-lvol01)]
| o- portals ...................................................................................................... [1 Portal]
| o- 0.0.0.0:3260 ...................................................................................... [OK, iser disabled]
o- loopback .......................................................................................................... [0 Targets]
Hi Richard,
That looks like it should be working.
On the ESXi side, when you rescan the HBA it doesn't show the disks? Does it show any LUNs on the HBA?
Stephen
No, it does not. It shows no paths and no devices. I have tried to connect using both the static and dynamic method. Both work as far as connecting, but it still shows no devices. I have even tried connecting directly from the Pi to the ESX host. When opening the iSCSI initiator window, under the network config tab, it shows the port group as not used.
Do you have an example connecting from another Pi running Ubuntu? Going through open-iscsi, it seems to want a username/password.
Hello there.
I've followed the steps, but I am stuck because I can't find the 'Device Drivers' section in the menuconfig. Is there a specific version to use? Any pointers on what I'm doing wrong?
Hi Alisio,
It should be right there once the big blue menu opens up!
Cheers,
Stephen
Were you able to run an entire lab, say 2 ESXi hosts, with this configuration? I was planning a 2-node setup to boot via USB and use an RPi with iSCSI for storage.
Hi gcp,
I've done this in the past way back with the 1st generation Raspberry Pi. You should be able to do this no problem! :)
Cheers
Morning,
I wanted to say thank you for this tutorial, I got this working first try thanks to your instructions!
But now I've run into a snag. I booted up my Raspberry Pi 4 this morning, and tried to get my Windows 2016 servers to connect to it, but I keep getting 'Target Error'. I know the machine is live because I can ping it and I've got a monitor connected. The IPs are all correct.
When I run the command systemctl status open-iscsi I get this:
open-iscsi.service - Login to default iSCSI targets
Loaded: loaded (/lib/systemd/system/open-iscsi.service; enabled; vendor preset: enabled)
Active: inactive (dead)
Condition: start condition failed at Sun 2020-07-26 08:49:39 BST; 33min ago
├─ ConditionDirectoryNotEmpty=|/etc/iscsi/nodes was not met
└─ ConditionDirectoryNotEmpty=|/sys/class/iscsi_session was not met
Docs: man:iscsiadm(8)
man:iscsid(8)
Jul 26 08:49:39 ubuntu systemd[1]: Condition check resulted in Login to default iSCSI targets being skipped.
can you advise on how I can get this working again please?
Sincere thanks
David Kernaghan
Hi David,
Glad the post helped. Did the issues start on restart, or did they occur after updates?
If it occurred on restart, either the config wasn't saved, or the kernel modules may not be loaded.
If it occurred after updating your Pi, there's a chance that the Pi installed an update for the kernel and is booting the new kernel instead of the custom one with the iSCSI target modules.
I'm thinking it's probably an update and different kernel, so you'll need to specify to boot the custom one you built instead.
Cheers,
Stephen
Stephen,
it happened on a reboot, or rather a power-down.
I did all the updates on Ubuntu Server LTS 20.04 before starting the iscsi instructions.
I'll try again and see where it gets to.
Thanks
David
I had the same issue where the LUN could not be detected and listed by ESXi. I finally found there were 2 reasons: one is CHAP failure, the other is LUN write-protection. To resolve it, make sure to double-check the following 4 attributes under the /iscsi/iqn...../tpg1 folder.
/iscsi/iqn.20...213234f2/tpg1> set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1
Parameter authentication is now '0'.
Parameter demo_mode_write_protect is now '0'.
Parameter generate_node_acls is now '1'.
Parameter cache_dynamic_acls is now '1'.
Furthermore, the above settings might not take effect via targetcli; in that case you have to save the config first and then edit the configuration file for the 4 settings as below.
# vi /etc/rtslib-fb-target/saveconfig.json  (navigate: targets --> tpgs --> attributes --> the 4 settings)
Last, you have to reload the new settings from the configuration file:
# targetctl restore /etc/rtslib-fb-target/saveconfig.json
Hope the above can help someone and save time, good luck!
Hi Tower,
If the instructions are followed and the ACL is created properly, full read/write should be provided.
However, I appreciate you posted those instructions on how to manually modify the ACLs and access. The more info, the better!!! :)
Thanks and Happy Holidays!
Hello,
I have a weird issue with VMware... every time my ESX tries to connect, it ends up timing out because the Raspberry Pi loses network (wireless AND cable). It doesn't happen if nothing is trying to connect over iSCSI.
What would cause this?
Hello,
I would check to make sure there's no networking service that's resetting the network connection.
Also, you mention wireless and wired? You should only be using wired for optimum speed. If you have both connections on the same subnet, it could be causing issues. I'd recommend disabling the wireless and making sure you don't have a network connection service managing your network connections.
Cheers,
Stephen
Hello Stephen,
Thanks for the reply.
I use wireless as a backup connection; it's not on the same subnet nor the same VLAN. However, it's OK to disable it.
Also, no network connection service is managing my network apart from the built-in netplan.
The weird thing is that it works with a Windows 10 computer...
That's good to know. You don't have the Windows 10 machine accessing the LUN at the same time as the ESXi hosts, do you?
Only the ESXi hosts should be connected, as having non-ESXi systems accessing it concurrently can cause corruption.
No, I don't; that was just for testing purposes.
Would it be possible to connect directly from the RPi ethernet to a free ESXi NIC? But then I wouldn't be able to set up the network, would I? Since there wouldn't be any router in between.
Is there a router between? For iSCSI, you shouldn't have anything between the storage device and the ESXi host, except for switching fabric.
Yes I have a router between.
It's like:
ESXis -- Switch (connected itself to a router) --RPI --- USB -- Storage
Do you mean I should have :
ESXis -- Switch (alone) --RPI --- USB -- Storage
How would I set up the network then?
Oh sorry, I misunderstood. I thought you meant the connection was routed. You should be fine.
I've set up a new distributed vSwitch to isolate it as much as possible (still going to the same hardware switch and router though), but it's still the same. When I run the format, it doesn't go through, and it ends up with an Operational State of "Dead or Error", and it's not possible to do anything else.
This is weird as it's supposed to be working fine..
Did you add the ESXi initiator ACL?
Yup, this permits me to see my iSCSI target before it crashes when I format to VMFS.
I've seen this log on rpi before it crashes:
Message from syslogd ... kernel:[ 135.261709] Internal error: Oops: 96000004 [#1] PREEMPT SMP Message from syslogd ... kernel:[ 135.477438] Code: 9278dc16 f27e001f 9a9f12d6 b94012a0 (f94006c1)
What distribution are you using? I'd recommend using Ubuntu server if possible.