
VMware vSphere Data Protection – Upgrade 6.1.2 to 6.1.3 ISO not detected

In the process of prepping my test environment for the upgrade from vSphere 6.0 to 6.5, one of the prerequisites is to first upgrade your VDP appliances to version 6.1.3 (6.1.3 is the only version of VDP that supports vSphere 6.5). In my environment I'll be upgrading VDP from 6.1.2 to 6.1.3.

After downloading the ISO, changing my disks to dependent, creating a snapshot, and attaching the ISO to the VM, my VDP appliances would not recognize the ISO image, showing the dreaded: “To upgrade your VDP appliance, please connect a valid upgrade ISO image to the appliance.”

I tried a few things, including the old “patch” that was issued for 6.1 when it couldn’t detect the ISO, but unfortunately it didn’t help. I also tried to manually mount the virtual CD-ROM to the mount point, with no luck: the mount point /mnt/auto/cdrom is locked by the autofs service. If you try to modify these files (delete, create, etc.), you’ll just run into a string of errors (permission denied, file and/or directory doesn’t exist, etc.) and get nowhere.

Essentially the autofs service was not auto-mounting the virtual CD drive to the mount point.
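
If you want to confirm what autofs is doing before changing anything, you can poke at its configuration over SSH. A quick sketch, assuming the standard autofs layout where /etc/auto.master hands /mnt/auto off to the /etc/auto.mnt map:

cat /etc/auto.master          # the master map should point /mnt/auto at /etc/auto.mnt
cat /etc/auto.mnt             # the cdrom entry here names the device node autofs tries to mount
ls -l /dev/cdrom /dev/sr0     # check whether the /dev/cdrom alias exists and what it points to

In my case the map referenced /dev/cdrom, while the device node that actually worked was /dev/sr0, which is what the fix below changes.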

To fix this (the same commands are also collected into a short sketch after the list):

  1. SSH in to the VDP appliance.
  2. Run “sudo su” to run commands as root.
  3. Edit the auto.mnt file with vi: “vi /etc/auto.mnt”
  4. At the end of the first line in the file you will see “/dev/cdrom” (without quotation marks); change this to “/dev/sr0” (again, without quotation marks).
  5. Save the file: after editing the text, press Esc, then type “:w” and Enter to write the file, then “:q” and Enter to quit vi.
  6. Reload the autofs configuration with: “/etc/init.d/autofs reload”
  7. At the shell, run “mount” to show the active mount points; after a few seconds you’ll see the ISO is now mounted.
  8. You can now initiate the upgrade. Start it.
  9. At 71%, the installer updates autofs via an RPM and the change you made to the config is wiped out. IMMEDIATELY edit /etc/auto.mnt again, change “/dev/cdrom” to “/dev/sr0”, save the file, and run “/etc/init.d/autofs reload”. Do this as quickly as possible.
  10. You’re good to go; the install will continue and take some time. The web interface will fail and become unresponsive. Simply wait, and the VDP appliance will eventually shut down (in my high performance environment it took over 30 minutes after the web interface stopped responding for the VDP VM to shut down).
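
For reference, here is the same fix collected into a rough command sequence. This is just a sketch: it uses sed in place of the vi edit and assumes “/dev/cdrom” appears only in the cdrom entry of /etc/auto.mnt, so check the file before and after if you're unsure:

sudo su                                          # steps 1-2: become root on the VDP appliance
sed -i 's|/dev/cdrom|/dev/sr0|' /etc/auto.mnt    # steps 3-5: point the map at the working device node
/etc/init.d/autofs reload                        # step 6: reload the autofs configuration
mount | grep cdrom                               # step 7: the ISO should show up as mounted shortly
# when the upgrade hits 71% and the autofs RPM resets the map, repeat the edit and reload immediately:
sed -i 's|/dev/cdrom|/dev/sr0|' /etc/auto.mnt && /etc/init.d/autofs reload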

And done! Leave a comment!

 

Stephen Wagner

Stephen Wagner is President of Digitally Accurate Inc., an IT Consulting, IT Services and IT Solutions company. He is also a VMware vExpert, NVIDIA NGCA Advisor, and HPE Influencer, and specializes in a number of technologies including Virtualization and VDI.

Comments

  • Good afternoon

    I researched this problem and saw that, as of version 6.1, a new deployment is necessary to update VDP.
    I'm doing this right now.

  • Hi Rodrigo,

    Yes, some versions require a complete new install to upgrade to 6.1. However, the issue in this post occurs when upgrading from 6.1.2 to 6.1.3.

    Two separate problems, but thank you for posting, as I'm sure numerous others are experiencing what you are.

    Cheers,
    Stephen

  • Good afternoon

    The problem I referred to is exactly the migration from VDP 6.1.2 to 6.1.3.

    A new installation is required to succeed

  • Hi Rodrigo,

    Using the above steps in my post, you can upgrade and do not require a new installation.

    I did not require a new installation, and was able to upgrade my appliance.

  • Hello.

    Thank you for your help.

    It happened exactly as reported by you in the post above.

    Resolved, update executed successfully

    Thanks again.

  • Hello, many thanks, but I suggest creating a link /dev/cdrom -> /dev/sr0 instead of editing that file.
    This avoids having to edit the file "quickly" at 71%, so the process will be more comfortable :)
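
    A rough sketch of that approach, assuming the optical device shows up as /dev/sr0 (the link may not survive a reboot, so check it after the appliance restarts):

    ln -sf /dev/sr0 /dev/cdrom       # create (or repoint) the /dev/cdrom alias to the real optical device
    ls -l /dev/cdrom                 # confirm the link; /etc/auto.mnt can then stay untouched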

  • Hi Matthew,

    Did you try or verify if the linking worked?

    I was in a hurry to get my VDP appliance running, so I can't remember if I tried that or not. I do remember that the permissions on the mnt folder were odd (I think due to the automount service); I can't remember if they opened up once the service was stopped.

    It would be great to know if that worked though, as it may help out if this occurs in future upgrades.

    Cheers

  • Thank you for sharing this GREAT write-up for the VDP in-place upgrade process.

    The process is exactly as described, especially the step 9 @ 71% part!

    Thanks again Stephen.

  • Thanks for the hint. It got me on track to a more elegant solution for getting the automount working again:

    The issue with the missing CDR is actually in the udev configuration:
    /etc/udev/rules.d/70-persistent-cd.rules
    It assigns the wrong SCSI ID to /dev/cdrom

    root@vdp1:~/#: cat /etc/udev/rules.d/70-persistent-cd.rules
    # This file was automatically generated by the /lib/udev/write_cd_rules
    # program, run by the cd-aliases-generator.rules rules file.
    #
    # You can modify it, as long as you keep each rule on a single
    # line, and set the $GENERATED variable.

    # VMware_IDE_CDR00 (pci-0000:00:07.1-scsi-4:0:0:0)
    SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:07.1-scsi-4:0:0:0", SYMLINK+="cdrom", ENV{GENERATED}="1"
    SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:07.1-scsi-4:0:0:0", SYMLINK+="cdrw", ENV{GENERATED}="1"
    SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:07.1-scsi-4:0:0:0", SYMLINK+="dvd", ENV{GENERATED}="1"
    SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:07.1-scsi-4:0:0:0", SYMLINK+="dvdrw", ENV{GENERATED}="1"

    # VMware_IDE_CDR00 (pci-0000:00:07.1-scsi-1:0:0:0)
    SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:07.1-scsi-1:0:0:0", SYMLINK+="cdrom1", ENV{GENERATED}="1"
    SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:07.1-scsi-1:0:0:0", SYMLINK+="cdrw1", ENV{GENERATED}="1"
    SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:07.1-scsi-1:0:0:0", SYMLINK+="dvd1", ENV{GENERATED}="1"
    SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:07.1-scsi-1:0:0:0", SYMLINK+="dvdrw1", ENV{GENERATED}="1"

    You can simply get this fixed with

    # rm /etc/udev/rules.d/70-persistent-cd.rules
    # reboot

    This will detect your CDR upon reboot and re-generate the file with the correct content.

    root@vdp1:~/#: cat /etc/udev/rules.d/70-persistent-cd.rules
    # This file was automatically generated by the /lib/udev/write_cd_rules
    # program, run by the cd-aliases-generator.rules rules file.
    #
    # You can modify it, as long as you keep each rule on a single
    # line, and set the $GENERATED variable.

    # VMware_IDE_CDR00 (pci-0000:00:07.1-scsi-1:0:0:0)
    SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:07.1-scsi-1:0:0:0", SYMLINK+="cdrom", ENV{GENERATED}="1"
    SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:07.1-scsi-1:0:0:0", SYMLINK+="cdrw", ENV{GENERATED}="1"
    SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:07.1-scsi-1:0:0:0", SYMLINK+="dvd", ENV{GENERATED}="1"
    SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:07.1-scsi-1:0:0:0", SYMLINK+="dvdrw", ENV{GENERATED}="1"

    The only thing I noticed: when automounting is in effect, you won't be able to disconnect the ISO image while the appliance is running. Just shut down the VDP appliance after the update, disconnect the ISO, and boot it up again.

    Stefan

  • # rm /etc/udev/rules.d/70-persistent-cd.rules
    # reboot

    Thanks for the solution; it also fixed the issue during the 6.1.4 to 6.1.5 upgrade.
