In this tutorial, I will be showing you how to get Lio-Target (an iSCSI target that is compatible with persistent reservations required by both VMware and MS Clustering) running on CentOS 6.

While this tutorial is targeted at CentOS 6 users, I see no reason why it shouldn't work on other newer distributions.

Please note that while Lio-Target 4.x (and the required tcm_loop and iSCSI components) is available on newer/non-stable development kernels, Lio 3.x is stable and currently builds nicely on CentOS 6. I will be doing up a tutorial for Lio 4.x once I start using it myself.

One more note: in the past I have thrown up a few tutorials on how to get Lio-Target running on various Linux distributions. These tutorials have worked for some, and not for others. I myself had a few difficulties replicating my original success. I'm a technical guy, not a developer, and at the time I didn't understand some developer terms or development cycles, which is one of the reasons I had so many difficulties earlier. Since those earlier tutorials, I have caught up to speed and am familiar with what is required to get Lio-Target running.

Now on to the tutorial:

It is a good idea to start with a fresh install of CentOS 6. Make sure you do not have any of the iSCSI target packages installed that ship with CentOS. In my case I had to remove a package called something like “iSCSI-Target-utils” (this shipped with the CentOS 6 install).
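A quick way to check what's installed and remove it (the package name below is what I'd expect on a stock CentOS 6 install, so verify with the search first):

```shell
# List any installed packages that look like an iSCSI target implementation
yum list installed | grep -i "target"

# Remove the stock target package if present (the exact name may vary slightly)
yum remove scsi-target-utils
```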

1. Let’s download the software. We need to download both the 3.5 version of Lio-Target and Lio-utils, which was built for the 3.x branch of Lio-Target. (I chose the RisingTide Systems Git repo since Lio-related projects have been missing from kernel.org’s Git repo due to the issues kernel.org has been having recently.)

Issue the following commands:

git clone git://risingtidesystems.com/lio-core-backports.git lio-core-backports.git

git clone git://risingtidesystems.com/lio-utils.git lio-utils.git

cd lio-utils.git/

git checkout --track -b lio-3.5 origin/lio-3.5

cd ..

(You have now downloaded both the Lio-Target 3.5 backport and lio-utils for Lio-Target 3.x)

2. Build the kernel modules for your existing running CentOS kernel.

Change in to the lio-core-backports directory, then issue the following commands:

make

make install

(You have now built, and installed the kernel modules for Lio-Target)
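To sanity-check the build before moving on, you can confirm the new modules landed under the running kernel (the module file names below are the standard Lio 3.x ones, so adjust if yours differ):

```shell
# The Lio core and iSCSI fabric modules should now live under the running kernel
find /lib/modules/$(uname -r) -name "target_core_mod*" -o -name "iscsi_target_mod*"

# Refresh module dependencies so modprobe can find the new modules
depmod -a
```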

3. Build and install lio-utils. This is one of the tasks I had difficulties with: for some reason the install scripts were calling out to the incorrect Python directory. I found a fix for this myself.

Apply the fix first:

Go into the tcm-py and lio-py directories inside of the lio-utils directory. Open install.sh in both the tcm-py and lio-py directories and change the “SITE_PACKAGES” line to read as follows:

SITE_PACKAGES=/usr/lib/python2.6/site-packages

Remember to do this in both the install.sh files for lio-py and tcm-py. Now on to building and installing lio-utils.
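If you'd rather not edit the files by hand, the same change can be applied with a quick sed loop (a sketch; run it from the top of the lio-utils.git directory):

```shell
# Point SITE_PACKAGES at the CentOS 6 Python 2.6 site-packages directory
# in both install scripts (quietly skips a file that isn't there)
for f in lio-py/install.sh tcm-py/install.sh; do
    if [ -f "$f" ]; then
        sed -i 's|^SITE_PACKAGES=.*|SITE_PACKAGES=/usr/lib/python2.6/site-packages|' "$f"
    fi
done
```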

Issue the following commands from the lio-utils directory:

make

make install

And you are now done!

Lio-Target and lio-utils have now successfully been installed. As you can see, this was way easier than my previous tutorials, and doesn't involve any rebuilding of kernels, etc. One of the pluses is that you actually build the kernel modules against your existing CentOS kernel.

One last thing. Start lio-target by issuing the command:

/etc/init.d/target start

And do a ‘dmesg’ to confirm that it started ok!
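A few extra checks I find handy at this point (the module name and port below are the usual Lio/iSCSI defaults; adjust if your setup differs):

```shell
# The Lio core module should be loaded once the target has started
lsmod | grep target_core_mod

# Look at the last few kernel messages for Lio-Target start-up output
dmesg | tail -n 20

# After a network portal is configured, iSCSI listens on TCP 3260 by default
netstat -tln | grep 3260
```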

As always, feel free to post any comments or questions. I’ll do my best to help!

### 46 Responses to “How To and Guide (Updated): Get Lio-Target running on CentOS 6”

1. […] VISIT http://www.stephenwagner.com/?p=300 for an updated tutorial on how to get Lio-Target running stable on CentOS 6! Posted by Stephen at […]

2. Just a side note,

Stress tested this setup for over 12 hours. No freezing or odd kernel panics (as was observed with CentOS 5 and my previous tutorial).

3. thanks to your website I have Lio running without problems on Scientific Linux 6.1.
Only targetcli is not working correctly: I can only access the backstores… any idea?

this is what i did to get it working:

yum install system-config-network-tui system-config-firewall-tui gcc make patch python-devel kernel-devel git net-snmp-devel epydoc python-setuptools

easy_install simpleparse
easy_install netifaces
easy_install configobj

cd /usr/src
git clone git://risingtidesystems.com/lio-core-backports.git lio-core-backports.git
git clone git://risingtidesystems.com/lio-utils.git lio-utils.git

cd lio-utils.git/
git checkout --track -b lio-3.5 origin/lio-3.5
cd ../lio-core-backports.git/
make
make install

cd ../lio-utils.git/

edit tcm-py/install.sh and lio-py/install.sh

#SITE_PACKAGES=$(python ../get-py-modules-path.py)
SITE_PACKAGES=/usr/lib/python2.6/site-packages

make
make install

cd /usr/src
git clone git://risingtidesystems.com/configshell.git configshell.git
cd configshell.git
python setup.py install
cd ../
git clone git://risingtidesystems.com/rtslib.git rtslib.git
cd rtslib.git
python setup.py install
cd ../
git clone git://risingtidesystems.com/targetcli.git targetcli.git
cd targetcli.git
python setup.py install

chkconfig target on

mkdir -p /var/target/fabric

!! REBOOT !!

4. Hi Frederik,

Unfortunately I haven’t played with targetcli at all. I’ve never even touched it…

Right now I’m just using lio-utils for configuration and operations… I’ll probably start playing with targetcli soon though… I’ll keep you posted on how I make out…

Stephen

5. Hi Stephen

This is a really good article. Although I do have some more questions:

1. have you played with multi-pathing with round-robin selection policy using 2 or more NICs on storage and ESXi hosts?
2. have you tried ESXi 5?
3. since you have a working configuration, would you mind posting the full Lio-Target configuration files of your target (where you’ve successfully run vMotion, etc.)?

Thanks and keep up the good work!

Kind regards, Marko.

6. Hey, Frederik,

Can you post your full Lio-Target configuration files of your target on SL6.1? I’m also interested in deploying a test SL6.1 with Lio-Target and would appreciate some help!

Kind regards, Marko.

7. Hi Marko,

Thanks, and I’m glad if the article helped at all!

1. I’ve done no testing with multi-pathing whatsoever. I just don’t have the equipment available, and my main target has just been to get Lio-Target running stable and properly to provide VMFS over iSCSI to multiple ESXi hosts in a cluster. Maybe one day in the future, no doubt I will post a blog entry if I do!

2. Unfortunately I haven’t tried ESXi 5 yet. My company is a VMware partner, unfortunately we didn’t sell enough licensing last year to make the cut for the “Solution Provider” tier. So I’m stuck with 4.x licenses for my testing/training environment. If I do make some sales, and hit the “Solution Provider” tier, upgrading my test/dev environment to 5.x will be the first thing I’ll do.

3. I’ll post the config below. Keep in mind that I configured Lio at first using lio-utils, then used tcm_dump to create the bootup config. I won’t post the bootup config (as it’s generated by using lio-utils and tcm_dump), but here’s what I typed to configure the target. Keep in mind I’m using an HP SmartArray 6402 controller, so that’s why you see /dev/cciss/c0d0 instead of /dev/sdb. Also, there’s no fancy configuration required to be able to use Storage vMotion or vMotion. Just remember that if you use SCSI pass-through, your SCSI device itself must support SCSI persistent reservations; if not, make sure you use tcm_loop and blockio, which handles reservation emulation. I don’t know if the SmartArray 6402 controller and MSA20 arrays support persistent reservations, so I just used blockio to be safe.

Here’s what I typed to configure the target:

# First I start the target

/etc/init.d/target start

# Then I configure tcm_loop and use blockio to set up the arrays to be accessible by Lio-Target. /dev/cciss/c0d0 is an HP MSA20 Storage Array disk. If this were just a normal SCSI disk you’d simply put /dev/sdb or the like.

tcm_node --block iblock_0/array1 /dev/cciss/c0d0
tcm_node --block iblock_1/array2 /dev/cciss/c0d1

# Now we configure lio, setup the LUNs, setup the Target Portals and setup LUN ACLs for security, and finally enable the portals. I’ve changed iqns and IPs below for my own privacy. 192.168.0.1 would be the IP of the iSCSI server/target itself.

lio_node --addlun iqn.2010.com.stephenwagner.iscsi:array1 1 0 iscsi00 iblock_0/array1
lio_node --addlun iqn.2010.com.stephenwagner.iscsi:array2 1 0 iscsi01 iblock_1/array2
lio_node --addnp iqn.2010.com.stephenwagner.iscsi:array1 1 192.168.0.1:3260
lio_node --addnp iqn.2010.com.stephenwagner.iscsi:array2 1 192.168.0.1:3260
lio_node --disableauth iqn.2010.com.stephenwagner.iscsi:array1 1
lio_node --disableauth iqn.2010.com.stephenwagner.iscsi:array2 1
lio_node --addlunacl iqn.2010.com.stephenwagner.iscsi:array1 1 iqn.1998-01.com.vmware:esx02 0 0
lio_node --addlunacl iqn.2010.com.stephenwagner.iscsi:array1 1 iqn.1998-01.com.vmware:esxi01 0 0
lio_node --addlunacl iqn.2010.com.stephenwagner.iscsi:array1 1 iqn.1998-01.com.vmware:esx03 0 0
lio_node --addlunacl iqn.2010.com.stephenwagner.iscsi:array2 1 iqn.1998-01.com.vmware:esx02 0 0
lio_node --addlunacl iqn.2010.com.stephenwagner.iscsi:array2 1 iqn.1998-01.com.vmware:esxi01 0 0
lio_node --addlunacl iqn.2010.com.stephenwagner.iscsi:array2 1 iqn.1998-01.com.vmware:esx03 0 0
lio_node --enabletpg iqn.2010.com.stephenwagner.iscsi:array1 1
lio_node --enabletpg iqn.2010.com.stephenwagner.iscsi:array2 1

Let me know if you need anything else 🙂

Stephen

8. i’m going to test the multipathing configuration today or tomorrow on ESX 5

i’ll provide my findings asap, including documentation

9. Oh and by the way…

Here’s just a performance note:
Lio-Target server is an older Xeon 3.2Ghz Single core with hyperthreading, 512MB of SDRAM
Storage Arrays are 2 X HP MSA20 (1 X RAID 5, 1 X RAID 10)

I’ve been getting 90-110MB/sec constant network throughput, CPU usage usually sits around 30% at those high speeds.

I REALLY want to set up trunking and see what kind of speeds this bad boy can do!

10. @Frederik
did you by any chance find a solution for the targetcli problem or are you just using the lio utils as well now? Because i am having the exact same issue atm where i can only access the backstores with it.

11. Hi there, I was wondering if you have had any chance to test LIO on Ubuntu. I have wanted to set it up for some time and each time I get nowhere.

You had talked about using LIO 4.x when you got it working; I was wondering if you have had a chance to mess with LIO 4.x. I am told it is supposed to be “baked in” to the 3.1.x Linux kernel, though it does not appear to be in my Ubuntu server. Is it possible it needs to be selected at the time you configure the kernel?

12. Hi Ian,

Originally my iSCSI production box was running Ubuntu with Lio 3.x, not 4.x. When it came time for me to upgrade, I spent a really long time trying to get 4.x running (especially since I thought it was incorporated into the Linux kernel).

It turns out that even though Lio 4.x was incorporated into the kernel, it was still under development. The kernel that I was using actually did not contain the iSCSI component. I took a look at the Lio-Target website, and it looks like it wasn’t put in place until 3.1, on November 24th, 2011.

Are you using kernel 3.1? Or 3.0? This could be one of your issues… I know that distribution kernel releases are usually behind kernel.org releases. There is also the possibility that they may not have compiled Lio into the standard release kernel (if indeed the kernel release is 3.1.x).

I’m actually still running Lio-Target 3.5 backport on my CentOS 6 box, and absolutely love it. One day I’ll give 4.x a try, but it probably won’t be until the new year.

Stephen

13. Thanks for replying. I actually thought that I was running kernel 3.1, but then found out that I am running 3.0.0.14. I also saw the same info you saw about the iSCSI component not being added until 3.1, so I went and tried to compile a 3.1 kernel. I got the compile done, but I’m still having trouble getting the target to work right. As far as the backports are concerned, would someone who is using the target for testing really notice a huge difference between the 3.5 and the new 4.0 target? I might just wait until Ubuntu releases 12.04, which is when they will supposedly move to kernel 3.2.

Also have you found a serious manual or anything for LIO, something that shows a layout of all of the commands and such?

14. I’m not too sure of any major differences, but I know there is added support for technologies that weren’t included in 3.x. But to be honest, I doubt you’d be using any of these.

Another thing: I could be wrong, but I think lio-utils is being deprecated in Lio 4.x in favour of another configuration application (possibly RTSadmin).

I haven’t found a full admin guide, but here are some of the pages that got me going on configuring Lio-Target:
http://linux-iscsi.org/wiki/Lio-utils
http://linux-iscsi.org/wiki/Lio-utils_HOWTO

15. Thanks for the info, it was very helpful (your entire blog).
I now have a fully working LIO 4.x target running on Ubuntu server 11.10 64-bit.

I went and compiled my own kernel based on Ubuntu’s 12.04 source, this is where I was able to get my hands on a 3.1 kernel so I have iSCSI support.

As far as the tools are concerned, as of right now they are not getting rid of “lio-utils”, but they are adding a “better” (it still needs a lot of work) config tool called “targetcli”. I have built it and found it a little confusing to work with, mainly when it comes to tweaking the parameters of the target. IMO lio-utils may be a little more complex, but it’s nothing a pen, paper and about 20 mins can’t fix.

Now if I can just get lio-utils to install in Python 2.7 correctly I will be all set. (Ubuntu 11.10 uses python2.7 by default)

16. Well, I’m glad to hear you have it up and running!

What type of issues are you having with Python? On CentOS I’ve had some issues with Lio-Utils and python, but found a work around for it. In my case the issue was placing tcm_node, lio_node, and all the other applications in to the wrong directory. I’m not sure if this is the same issue that is occurring on your side?

Stephen

17. Yeah, that is the same problem I am having. It keeps wanting to place the tcm_* and lio_* files into “/usr/local/python2.7/dist-packages”, which apparently does not exist. Ubuntu is using Python 2.7, so that much is correct.

Where it should be placing the files is “/usr/lib/python2.x/dist-packages” (I put 2.x because targetcli needs 2.6 as of now).

I even built a crude .deb package for lio-utils; it only works correctly if the default version of Python is 2.6, i.e. the “/usr/bin/python” -> “/usr/bin/python2.6” link exists.

Other than that I have got it working… might have spoken a little too soon on the kernel though. It works, but for some reason VMware-Tools is having some issues. I think it may be a bad version, so I am building a new kernel from the kernel.org git repo. This kernel is more up to date but will still include my iSCSI target module.

18. Well this is interesting…

I seem to have found a “bug” with LIO 3.5 / 4.0 (I say “bug” because I am not sure)

I installed ZFS on Linux v 0.6.0-rc6 on my target server and when I try to add the pseudo block device “/dev/zd0” or “/dev/zvol/tank/fish” (both are the same, the first is a symlink) targetcli says it can’t add it because it is not a “TYPE_DISK” device.

Unless there is a solution to this issue this does not make me happy, ZFS was something I was really psyched about using with LIO-target.

Have you tried anything like this yet?

Does LVM do the same thing?

19. Hi Ian,

I know with LVM, once a volume is created it should be available to lio-target as a block device. However, I know absolutely nothing about ZFS. But theoretically, as long as the device is being presented to Linux as a block device it should work.

Is /dev/zvol/tank/fish an LVM device, or a block device? Again, I don’t know anything about ZFS, but from the examples I have seen, the paths are usually a little longer for block devices? (Sorry if I’m incorrect.)

Stephen

20. I had similar problems building a Debian package of the most recent lio-utils git using Python 2.7.
Either it installed into /usr/local and the scripts couldn’t find it, or it installed into /usr/python…/site-packages, but the correct dir would be /usr/python…/dist-packages.
Well, apparently this is working as designed in Python 2.6+ to protect Python’s system packages.

I edited the rules file in the debian subdirectory and added --install-layout=deb in the following lines:
cd tcm-py ; $(setup) build --build-base build/tcm-py/ install --no-compile --install-layout=deb --root=$(CWD)/debian/lio-utils/
cd lio-py ; $(setup) build --build-base build/lio-py/ install --no-compile --install-layout=deb --root=$(CWD)/debian/lio-utils/

After that my dpkg-buildpackage -rfakeroot worked just fine and it installed into the correct directory and python found the appropriate files.

21. Hi Stephen,

Newly back to nix and am building a CentOS 6.2 server with LIO. Following your instructions above I am getting the following error running make in the lio-core-backports.git directory:

make -C kernel/drivers/target all
make[1]: Entering directory /root/lio-core-backports.git/kernel/drivers/target’
make -C /lib/modules/2.6.32-220.2.1.el6.centos.plus.x86_64/build SUBDIRS=/root/lio-core-backports.git/kernel/drivers/target modules CWD=/root/lio-core-backports.git/kernel/drivers/target ARCH=x86_64 KBUILD_VERBOSE=0
make: Entering an unknown directory
make: *** /lib/modules/2.6.32-220.2.1.el6.centos.plus.x86_64/build: No such file or directory. Stop.
make: Leaving an unknown directory
make[1]: *** [all] Error 2
make[1]: Leaving directory /root/lio-core-backports.git/kernel/drivers/target’
make: *** [all] Error 2

Any suggestions you may have would be greatly appreciated. I did git this from both repositories in the LIO wiki with the same result (thought one may have been corrupted). I also did some research in the CentOS fora with no real results.

TIA Pat

22. Hi Pat,

Looks like it’s looking for your kernel libraries.

Typically you need to have the following packages installed (usually using yum):
kernel
kernel-devel
kernel-firmware (this package really isn’t needed, but install it anyway to keep things clean)

These packages are your kernel, kernel libraries, and various other things needed to not only build a kernel, but also build separate kernel modules, which is exactly what we are doing with lio-target.

Normally I would just say run:
yum install kernel kernel-devel kernel-headers kernel-firmware

But, in your case, I notice that “centos.plus” is inside of the kernel name of the directory it is looking for. Did you install a custom kernel, or a kernel from another yum repository? Try running the command I put above, and see if it fixes it. If not, you’re going to have to find out what the package name is for the special kernel you are using for the development stuff, headers, etc…

Stephen

23. Stephen,

Thanks for these fantastic tutorials. Easily the best info I’ve found on TCM/LIO.

Have you worked with TCM FCoE? Wow, is there a lack of decent documentation. If I get it working I will send you info.

I have previously used Solaris/OpenIndiana COMSTAR, have you any experience with it? It does handle SCSI reservations properly (I’ve used it for the back end of an HA cluster) and is a very nice and easy to configure iSCSI target given the right combination of hardware (it’s picky about that).

However the FCoE implementation seems immature, and since Oracle bought Sun development has really stalled. 🙁

— Trey

24. No problem! Glad I have readers and I’m able to help!

I’ve actually never touched anything FiberChannel. Just haven’t had the opportunity.

If you get it running, let me know. I notice you have a blog, if you post the info, I’ll post the link, or you can post it here!

As far as iSCSI goes, all my work has been done on Linux. I did setup Solaris once and got the iSCSI target going, but that was about it. I’ve also got an iSCSI target running on BSD, but that was it. Most of the technical stuff I do is on Linux. At the same time, the box I was doing testing with iSCSI on, I was also using for other storage related things, so Linux seemed fit.

One more thing, I see a LOT of promise for Lio-Target. It’s already used in numerous storage appliances, and works beautifully with VMware. Now that it’s part of the Linux kernel, use of it is going to skyrocket. I just wish I had more of an opportunity to work with it!

Let me know how you make out with the fiberchannel stuff!

Stephen

25. Hi, Stephen! Thank you for the guide 🙂

By now, I seem to have finished configuring LIO-target, but I don’t really now how to operate it from the client.
My setup is: a block device and FCoE interface to access it. Well, I see you haven’t tried to configure FibreChannel, but I really need some point to start of. Could you please point out, what should I have/do on initiator to discover/use the target, at least if I was using iSCSI? – the core-iscsi and core-iscsi-tools that are mentioned on http://linux-iscsi.org/wiki/ (which I was using to configure target) seem to be very outdated anyways – something like 2005/2006, while the last commit to lio-utils.git was made 6 weeks ago.

Any help or guidance would be great. Thanks!

26. Ok, this is weird. The ostype.pm script is failing to parse the version information, getting the following error, when I try to make the backport…

# make
Unknown architecture: could not continue — at ostype.pm line 50.
make -C kernel/drivers/target all
Unknown architecture: could not continue — at /root/LIO/lio-core-backports.git/kernel/drivers/target/../../../ostype.pm line 50.
make[1]: Entering directory /root/LIO/lio-core-backports.git/kernel/drivers/target’
make -C SUBDIRS=/root/LIO/lio-core-backports.git/kernel/drivers/target modules CWD=/root/LIO/lio-core-backports.git/kernel/drivers/target ARCH= KBUILD_VERBOSE=0
make: Entering an unknown directory
make: *** SUBDIRS=/root/LIO/lio-core-backports.git/kernel/drivers/target: No such file or directory. Stop.
make: Leaving an unknown directory
make[1]: *** [all] Error 2
make[1]: Leaving directory /root/LIO/lio-core-backports.git/kernel/drivers/target’
make: *** [all] Error 2

uname -r returns…
# uname -r
2.6.32-220.2.1.el6.i686

So the ostype.pm script can’t understand that i686 is i386. I am running a default, non-customized CentOS 6.2 install of Linux. I had to add…

# yum install file

For some reason it was missing.

27. Schorschi,

There’s quite a few packages you need to be able to build properly. I always do custom installs (and I make them pretty beefy), but I still usually need to install a bunch of packages after CentOS has been installed to get a decent build environment. But the important thing is it’s working now right 🙂 Thanks for the post!

Pavel,

Sorry for the late response. I know absolutely NOTHING about FiberChannel, but I should be able to see if I can put something together… So just so I get an idea of what you’re doing, you have your Linux box, attached to a storage device over FiberChannel, and what your trying to do is setup that storage as an iSCSI target?

28. Stephen,

Your point is well taken, although I usually prototype environments with as lean a build as I can get away with; in this case I believe I had all required components, and only ‘file’ was missing beyond what was required.

I found the true issue: somewhere, somehow, when I updated from .2.1 to .4.1, the update was incomplete or failed. I had the sources for 2.6.32-220.2.1.el6.i686 and 2.6.32-220.4.1.el6.i686 mixed. I have never seen this quirk before, but even after forcing the system to go to 2.6.32-220.4.1.el6.i686, and explicitly forcing a reinstall of the .4.1 sources, I was still finding references to 2.6.32-220.2.1.el6.i686 popping up in the make results/logging. Weird.

Fortunately, I prototype systems in VMs (I architect/engineer Hyper-V, vSphere, KVM, RHEV, etc. for a living at a financial firm, and have since the early days of virtualization; I first worked on ESX 1.5.2 of all things, and Connectix Server while it was still beta, which was later reworked and became Microsoft Virtual Server). Lately I have been working on prototype solutions based on LXC and OpenVZ. Isn’t virtualization fun? But I digress…

I created a new VM with a clean CentOS 6.2 install, validated that the update to 2.6.32-220.4.1.el6.i686 was clean, installed the required components, and then everything compiled as expected. Once in a while you see something strange, and this was such a case. My reason for experimenting with LIO is to support SCSI-3 reservations for Hyper-V iSCSI, and VPD page 83h support, which Hyper-V has to have for clustering disks a-la Microsoft Cluster Shared Volumes, from a simple Linux-based filer.

Thanks for the blog entry on LIO.

29. By the way, on CentOS 6, I am having issues with targetcli as well, the entire /iscsi tree is missing, I too can only see the /backstores tree. Same basic issue as Frederik.

# targetcli
/> ls
o- / …………………………………………………………………………………………………………. […]
o- backstores ……………………………………………………………………………………………….. […]
| o- fileio ……………………………………………………………………………………… [1 Storage Object]
| | o- file01 ……………………………………………………………. [/test/file01.img deactivated]
| o- iblock ……………………………………………………………………………………… [0 Storage Object]
| o- pscsi ………………………………………………………………………………………. [0 Storage Object]
| o- rd_dr ………………………………………………………………………………………. [0 Storage Object]
| o- rd_mcp ……………………………………………………………………………………… [0 Storage Object]
o- loopback …………………………………………………………………………………………….. [0 Target]

Interesting, but also frustrating.

30. Schorschi

I believe that the iscsi kernel driver for use with targetcli is only available with linux kernel 3.1. You’ll have to use lio-utils until it becomes the standard kernel 🙁

31. Hi there,

you need definitely kernel 3.1.x or later.
If you see only the backstore – tree in targetcli, you need to copy the following files from the “specs”-directory of rtslib to /var/target/fabric:
iscsi.spec
loopback.spec

After that you will see a iscsi and loopback – tree in targetcli.
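For example (the rtslib path below assumes you cloned it under /usr/src as in the earlier comment; adjust to your checkout):

```shell
# Copy the fabric spec files from the rtslib source tree into place
mkdir -p /var/target/fabric
cp /usr/src/rtslib.git/specs/iscsi.spec /var/target/fabric/
cp /usr/src/rtslib.git/specs/loopback.spec /var/target/fabric/
```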

Greets,
SpeedCracker

32. You can save the config files with the tcm_dump --o command.

It saves the current config.

33. And then chkconfig target on and chkconfig iptables off

That way your target will start up automatically.

34. I have to update Lio-Target, but I can’t get the git from git://risingtidesystems.com/lio-core-backports.git. I always get an error that I don’t have access to the site. Can someone send it to me?

35. They have removed the 3.x backport repo.

This is from the target-devel mailing list at kernel.org

“It appears that your using an ancient version (3.5.3) of the target from
the now deprecated lio-core-backports.git tree with a RHEL 6.x kernel.

We are no longer maintaining this backport, and have not been
maintaining this version for the last ~18 months. I’d very strongly
recommend upgrading to Fedora 17 in order to use LIO v4.x, or use a
stable mainline kernel tree (v3.[2,4,5,6]) with RHEL 6.x that contains a
modern version of the mainline iscsi-target code.”

I got really excited when I found this post, only to have my dreams dashed when I learned they removed the repo.

36. Actually, to be honest, on the latest version of CentOS and/or RHEL, Lio-Target runs right out of the box. No more fighting, fussing, or issues.

Love it!

37. Stephen,
can you please elaborate on “Lio-Target runs right out of the box”? Which packages do I need to install to get LIO out of the box?

38. Stephen
I am trying to use the ISCSI Block LIO target on Centos 6.4
Really appreciate your help in figuring this out.

39. Hi Stephen

I hate trying to find an answer via a blog (nice stuff on here btw), but I seem to be stuck and forums aren’t helping.

I am on Centos 6.4
installed fcoe-target-utils
open up targetcli
and there is no iscsi module listed in the tree.

What you state in comment 37 is that Lio runs right out of the box. Lio does, but how the heck do you get the iscsi module going? I’m sure I am missing some mundane thing, but I’m just not finding the answer in my travels in the tubes. And unfortunately searching for LIO gets a lot of noise from Mac LION…

here is what I see in targetcli
/> ls
o- / ……………………………………………………………………………………………………. […]
o- backstores ………………………………………………………………………………………….. […]
| o- block …………………………………………………………………………………. [1 Storage Object]
| o- fileio ………………………………………………………………………………… [0 Storage Object]
| o- pscsi …………………………………………………………………………………. [0 Storage Object]
o- loopback ………………………………………………………………………………………. [0 Targets]

As you can see. Loopback and back stores, but no iscsi, or any of the others i would expect to see based on all the examples I have found online.

Any thoughts?
Thanks!
Dave

40. Hi Dave,

Hopefully I can help!

Have you tried configuring the Target in targetcli? I could be wrong, but there’s a chance it may load the module once you start configuring it. If not, try loading the kernel modules before running targetcli and see if it loads.

It’s been a while since I’ve played with this stuff. Try doing what I mentioned above and report back. If you don’t have any success I’ll look back in my notes and see if I can find out what I did.

Cheers,
Stephen

41. And Dave, just curious: did the targetcli package install the init script /etc/init.d/target?

If it did, try running:
/etc/init.d/target start

Do that before loading up targetcli.

42. I agree with above.. I’m having trouble getting ISCSI to work.. here’s what I did to get this far.. maybe you can help us Stephen..

yum install fcoe-target-utils
\rm /usr/sbin/tcm_node && ln -s /usr/lib/python2.6/site-packages/tcm_node.py /usr/sbin/tcm_node
\rm /usr/sbin/tcm_dump && ln -s /usr/lib/python2.6/site-packages/tcm_dump.py /usr/sbin/tcm_dump
\rm /usr/sbin/tcm_loop && ln -s /usr/lib/python2.6/site-packages/tcm_loop.py /usr/sbin/tcm_loop
\rm /usr/sbin/tcm_fabric && ln -s /usr/lib/python2.6/site-packages/tcm_fabric.py /usr/sbin/tcm_fabric
\rm /usr/sbin/lio_dump && ln -s /usr/lib/python2.6/site-packages/lio_dump.py /usr/sbin/lio_dump
\rm /usr/sbin/lio_node && ln -s /usr/lib/python2.6/site-packages/lio_node.py /usr/sbin/lio_node

chmod +x /usr/lib/python2.6/site-packages/tcm_node.py
chmod +x /usr/lib/python2.6/site-packages/tcm_dump.py
chmod +x /usr/lib/python2.6/site-packages/tcm_loop.py
chmod +x /usr/lib/python2.6/site-packages/tcm_fabric.py
chmod +x /usr/lib/python2.6/site-packages/lio_dump.py
chmod +x /usr/lib/python2.6/site-packages/lio_node.py

service target start
chkconfig target on

if I load up targetcli all I get is this. I’m not sure how we are supposed to configure a target without going to /iscsi

targetcli shell version 2.0rc1.fb16
Copyright 2011 by RisingTide Systems LLC and others.
For help on commands, type ‘help’.

/backstores> ls
o- backstores …………………………………………………………………………………………………. […]
o- block ………………………………………………………………………………………… [0 Storage Object]
o- fileio ……………………………………………………………………………………….. [1 Storage Object]
| o- md_block0 …………………………………………………. [/zfs-data/vstorage.img (2.0GiB) write-back deactivated]
o- pscsi ………………………………………………………………………………………… [0 Storage Object]
/backstores>

as you can see im trying to do a simple 2gb block file device to test a basic setup.

43. Hi Joe,

I don’t have a test box I can mess with right now. I did, however, load up fcoe-utils on my CentOS box. I think what’s happening is that the iSCSI target isn’t built into the generic CentOS kernel, and that’s why it’s not showing up.

44. Ok, I finally have the dirt. The backports have been removed. This means you need to use a newer Linux kernel with CentOS (higher than 2.6.38), which has the iSCSI target built in. My apologies for not catching this sooner!

45. Thanks for the quick response Stephen… that’s what I was afraid of. Everywhere I read, you need to update the kernel, which makes a mess of a box and makes future updates/management more challenging. I guess there’s no other option. :) Would you suggest going with 2.6.38 or going right into 3.x? I read elsewhere that iSCSI with LIO isn’t supported until 3.1 or 3.2 (can’t remember which). It may be wrong, of course.