iSCSI Woes

So if you're like me, you may have been getting emails from WD (Western Digital) saying your MyCloud device is going out of support. I had been largely ignoring these, but the deadline was fast approaching and I just wanted something that worked better. I knew the MyCloud just ran a software RAID with mdadm, and that the array could be assembled on any Linux system I could fit the drives in.

I ended up getting my hands on an older enterprise 1U server that could fit the 4 drives, and installed CentOS Linux on it. If you end up using enterprise hardware, make sure to get an HBA that can do JBOD / IT mode so it presents the disks directly to the OS. These instructions are geared towards Red Hat based distros but should work on most.

Step 1 (Move everything over):

  • Make sure to shut down your WD NAS gracefully.
  • Migrate the disks to the new system.

Step 2 (Install Packages):

  • Once the system is booted, verify that you see all the drives and that they are reported at the right size. (If your HBA is SAS 3Gbps, you will most likely be limited to 2TB per disk.)
  • If all drives show up correctly, you can move on to installing the required packages. mdadm should already be there in most installs, so you should only need "yum install targetcli" (and "yum install mdadm" if needed).
  • Once installed, run "systemctl enable target" and "systemctl start target" (a quick consolidated example follows this list).
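As a rough sketch, Step 2 boils down to the commands below (the lsblk check is just one way to verify the drives; device names and sizes will differ on your system):

    # Verify the drives show up with the expected sizes
    lsblk -o NAME,SIZE,MODEL

    # Install the packages (mdadm is usually already present)
    yum install -y targetcli mdadm

    # Enable and start the LIO target service
    systemctl enable target
    systemctl start target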

Step 3 (mdadm):

  • Now let's use mdadm to generate a config based on inspecting the disks you moved over. You can do this by running "mdadm --detail --scan >> /etc/mdadm/mdadm.conf" (on Red Hat based distros the config file usually lives at /etc/mdadm.conf, so use whichever path your install expects).
  • If you open that file now with an editor you should see something like below.

    ARRAY /dev/md0 level=raid5 num-devices=3 metadata=00.90 UUID=966552e4:0211e47f:5hjgce44:817d167c

  • Now you can run the assemble: "mdadm --assemble --scan /dev/md0". Use whatever your /dev device was in the mdadm.conf.
  • Once that completes, you should be able to mount this new device. For example, "mount /dev/md0 /mnt/raidvolume".
  • Next run "cat /proc/mdstat" and check the output to see if everything is active and without error. You should get output like the below:

    Personalities : [raid1] [raid6] [raid5] [raid4]
    md0 : active raid1 sdd1[0] sde1[3] sdf1[2] sdg1[1]
          2097088 blocks [4/4] [UUUU]
          bitmap: 0/128 pages [0KB], 8KB chunk

  • If all looks good, see if you can view the files. If you were running the built-in SMB, AFP, or one of the other services that was not iSCSI, you are good to go from here. If you were using iSCSI like me, move on to the next steps. (The whole sequence for this step is sketched just below.)
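Putting Step 3 together, the sequence looks roughly like this (/dev/md0, the config path, and the mount point are examples; use whatever your scan actually produced):

    # Append the detected array to the mdadm config
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf

    # Assemble the array described in the config
    mdadm --assemble --scan /dev/md0

    # Verify the array is active, then mount it
    cat /proc/mdstat
    mkdir -p /mnt/raidvolume
    mount /dev/md0 /mnt/raidvolume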

Step 4 (iSCSI Config):

First we need to set up the iSCSI target config. This is where I got tied up a bit.

  • First, let's browse the mount you created and go to the .systemfile folder. This is where WD stores its fileio and volume images for iSCSI. Gather the names of your volume images in the iscsi_images folder.
  • Run
    targetcli

    and then run

    cd /backstores/fileio/

    to navigate to the fileio backstores menu (the WD volumes are image files, so they belong under fileio rather than block). A full example targetcli session is sketched after this list.

  • For this next step you need to create the backstore for each volume image. Run
    create <volume_name> <path_to_mount>/.systemfile/iscsi_images/<name_of_volume_in_the_iscsi_images_folder> <size_of_image>

    It does not really matter what you set for a size: if the backing file already exists, the create will see that and apply the same settings as the current extent. Do this for all the volume images you have.

  • After the image backstores are created, you can set the target name by running "cd /iscsi" and then running "create" with no parameters, which sets a default name; that is fine. You can verify the target info by running "ls" in the /iscsi directory.
  • Next, set up the ACL by running
    cd /iscsi/<name_of_target_created_in_the_last_step>/tpg1/acls

    Gather the initiator name from the client you want to map the iSCSI LUN to and run

    create <initiator_client_name>

    You can verify the ACL by running "ls".

  • Now we need to add the volume(s) to the TPG to create the LUNs. Run
    cd /iscsi/<name_of_target>/tpg1/luns

    Next run

    create /backstores/fileio/<volume_name>

    Save the config by running "saveconfig", then run "exit" to leave targetcli.
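Here is a rough sketch of the whole targetcli session for a single volume, assuming the array is mounted at /mnt/raidvolume and using made-up names (mycloud_vol1, the image filename, and the initiator IQN are placeholders you will need to replace; the # lines are just annotations):

    # Create the fileio backstore from the existing WD image file
    cd /backstores/fileio/
    create mycloud_vol1 /mnt/raidvolume/.systemfile/iscsi_images/mycloud_vol1.img

    # Create the target with a default name, then add the client ACL
    cd /iscsi
    create
    cd /iscsi/<generated_target_iqn>/tpg1/acls
    create <initiator_client_name>

    # Map the backstore as a LUN, then save and exit
    cd /iscsi/<generated_target_iqn>/tpg1/luns
    create /backstores/fileio/mycloud_vol1
    saveconfig
    exit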

And that should be it. Now you can make sure all the services are running and point an initiator client at the system to try mapping the LUN.
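For example, on a Linux client with the open-iscsi tools (iscsi-initiator-utils) installed, discovery and login look roughly like this (the IP address and target IQN are placeholders; use your server's address and whatever IQN targetcli generated):

    # Discover the targets exported by the new server
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50

    # Log in to the discovered target
    iscsiadm -m node -T <target_iqn> -p 192.168.1.50 --login

    # The LUN should now show up as a new block device
    lsblk

On Windows, the built-in iSCSI Initiator tool does the same discovery and login through its GUI.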

 
