Apr 17, 2018

With the news of VMware vSphere 6.7 being released today, a lot of you are looking for the download links (including vSphere 6.7, ESXi 6.7, etc…). I couldn't find them myself at first, but after scouring some alternative URLs, I came across the link.

VMware vSphere 6.7 Download

Here’s the link: https://my.vmware.com/web/vmware/info/slug/datacenter_cloud_infrastructure/vmware_vsphere/6_7

HPE-specific (HPE Customized image for ESXi) version 6.7 is available at: https://www.hpe.com/us/en/servers/hpe-esxi.html

Unfortunately, the page is blank at the moment; however, you can bet the download and product listing will be added shortly!

UPDATE 10:15AM MST: The Download link is now live!

More information on the release of vSphere 6.7 can be found here, here, here, here, here, and here.

An article on the upgrade can be found at: https://blogs.vmware.com/vsphere/2018/05/upgrading-vcenter-server-appliance-6-5-6-7.html

Happy Virtualizing!

  20 Responses to “VMware vSphere 6.7 released! Here’s the download link…”

  1. Would you know why the VMware compatibility tool is NOT showing the HPE MSA 2042 as compatible with ESXi 6.7?

    Cheers

  2. Hi IP,

    I have a feeling that this is because the MSA 204x is the previous generation of the storage product, whereas the focus is now on the MSA 205x line of products.

    VMware probably just hasn't gotten around to testing the MSA 2042 with VMware yet (it's virtually the same as the 2040, only it ships with SSD disks standard and comes with tiering licensing).

    Hope this helps.

    Cheers,
    Stephen

  3. Hi Stephen,

    ESXi 6.7 and the MSA 2040 are not compatible.
    This is the problem I have come across:

    The ESXi 6.7 datastores are sitting on Pool A.
    The datastores are presented as iSCSI (I'm using software iSCSI on the ESXi 6.7 host).

    I purchase new disks and add them to Pool A as a disk group.

    I reboot my ESXi 6.7 host and the datastores (located on Pool A) no longer load. I can still see the device, but the datastore will not mount. I look in vmkernel.log and I see the following errors:

    Warning: Vol3: 3102: : Invalid physDisksBlockSize 512

    I can solve the problem by removing the disk group from Pool A or by downgrading ESXi to 6.5.
    HP and VMware will simply point the finger at each other on this one, I guess!
    Got a call going with both at the moment.

    Have you come across this problem?

    regards
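For anyone else chasing this warning: the sector sizes ESXi has detected, and the warning itself, can be checked from the host shell. A minimal sketch, assuming SSH/shell access is enabled on the ESXi 6.x host (verify command availability on your build):

```shell
# Show the logical/physical block sizes and format type (512n/512e/4Kn)
# that ESXi has detected for each attached storage device:
esxcli storage core device capacity list

# Search the live vmkernel log for the sector-size warning seen above:
grep -i "physDisksBlockSize" /var/log/vmkernel.log
```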

  4. Hi AughIP,

    How are you adding new physical disks to the virtual pool? Technically on an MSA 2040, you cannot add new physical disks to an existing virtual disk group. Only with linear disk groups can you add new physical disks.

    Is there a chance that you're not adding new physical disks, but instead creating a new virtual disk from the pool? If so, are you correctly assigning LUNs when mapping to the hosts? Also, are all the disk types the same sector size (512n, 512e, 4K)?

    I don't think this is an incompatibility, but more of a configuration issue.

    Cheers,
    Stephen

  5. Hi Stephen,

    Thanks very much for taking the time to reply.

    There may be a mix-up of technologies at play here. You may be referring to the older MSA technology.

    At my site we are using the MSA 2040 with the GL225R003 firmware. This newer firmware encourages us to move away from the older technology ("vdisks") to the newer technology ("disk groups").

    I've come across one of your brilliant blogs that recommends the use of the newer technology:

    https://www.stephenwagner.com/2017/02/14/hpe-msa-2040-the-switch-from-linear-disk-groups-to-virtual-disk-groups/

    Does this clarify my situation?

    thanks again

  6. Hi AughIP,

    The newer firmware still utilizes virtual disk groups inside of virtual pools. I need to know which one you are referring to.

    Are you adding physical disks to the MSA unit and trying to expand a linear disk pool (and vdisk), or are you simply adding a virtual disk group to an existing virtual disk pool?

    With the terms you used in your first post, I just don't clearly understand what exactly you're doing. Once we clarify, I'm hoping I'll be able to help out.

    Stephen

  7. Hi Stephen,

    I'm adding a virtual disk group to an existing virtual disk pool.

    This existing virtual disk pool contains the volumes that are presented using iSCSI to my ESXi hosts, which mount them as datastores.

    After I add the second disk group, I reboot one of the ESXi 6.7 hosts. This results in the ESXi host being unable to mount the datastore (which it could do before the second disk group was added).

    many thanks

  8. I finally understand! Thanks for clarifying! 🙂

    When you add the new virtual disk group, you're not modifying or deleting the previous disk group, correct? Also, after you create the disk group, I'm assuming you're creating a volume and then mapping it to the hosts. When you do this, you're setting the LUN number to a different number than the previous volume, correct?

  9. Hi Stephen,

    That is correct – I do not modify the previous disk group.

    No – I do not create a new volume. I simply add the disk group and then reboot the ESXi 6.7 host.

    Regards,
    P.S. I first came across this problem when I added a "read cache" to the virtual disk pool. So even adding a read cache causes the same problem.

  10. This is interesting… What if you don’t add anything, and reboot the hosts, is there an issue?

    Technically the hosts should see absolutely no change on the MSA if you're not presenting anything new to the hosts. I'm wondering if you have a configuration issue either on the hosts or on the MSA.

    What type of physical disks do you have in the MSA? And what is the LUN number of the existing volume?

  11. Hi Stephen,

    The hosts reboot normally when I do not alter the virtual pool. I've tested many times on all the ESXi hosts.

    I'm using SAS 900GB 10k 12Gb disks (I have six of these in RAID 5) – model EG0900JFCKB. These make up the original disk group.

    I have four SSD 200GB 12Gb disks – model MO0200JEFNV. I add these to the virtual pool as a disk group or a read cache, and after the reboot I notice the datastore problem.

    I can solve the problem in two ways:
    1. On the MSA, remove the disk group I recently added to the pool and then, in vSphere, rescan the storage. The datastores are remounted immediately after the rescan.

    2. Downgrade ESXi 6.7 to ESXi 6.5. I did this (on one host) because I read that GL225R003 does not support ESXi 6.7 – it does support ESXi 6.5 (basically VMware/HP have not tested ESXi 6.7 with the MSA 2040 and probably never will).
    When I downgrade to ESXi 6.5 and run the test again (add a disk group), there are no problems – the datastores mount properly on reboot.

    These are the release notes for GL225R003 – the firmware on the MSA 2040 – it supports up to 6.5:
    https://downloads.hpe.com/pub/softlib2/software1/pubsw-linux/p1158764188/v144456/723983-006.html

    A new release of the firmware came out last week (26th March), GL225-P001 – however, again, its release notes only mention support for 6.5:
    https://support.hpe.com/hpsc/swd/public/detail?swItemId=MTX_6e1c6f98c76548179b486c166d
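The rescan in fix #1 above can also be done from the ESXi host shell rather than the vSphere client. A sketch using standard esxcli commands on ESXi 6.x:

```shell
# Rescan all storage adapters so the host re-reads the device layout:
esxcli storage core adapter rescan --all

# Confirm the VMFS datastores are mounted again:
esxcli storage filesystem list
```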

  12. This is very odd. I’m still leaning towards a configuration issue (just because of the sector size errors).

    What sector size are the HDDs on the array (512n, 512e, 4K)? And what sector size are the SSDs?

    Are you using all HPE-branded disks and SSDs?

  13. P.S. One other note: I recommend upgrading to the latest firmware. I'm reading the fixes and it sounds like it might be worthwhile for you to try upgrading.

  14. The disks are HP disks.

    The sector format on the SAS disks is 512n.
    The sector format on the SSDs is 512e.

    Hmmm… I'm not sure if that makes a difference.

  15. In vSphere 6.7 (looking at the properties of the storage device), I've noticed that the sector format size changes when I add a disk group to the virtual disk pool – it changes from 512n to 512e.

    ############# Before I add the disk group to the virtual disk pool – the sector format size is 512n
    General
    Name HP iSCSI Disk (naa.600c0ff00029475d021f915c01000000)
    Identifier naa.600c0ff00029475d021f915c01000000
    Type disk
    Location /vmfs/devices/disks/naa.600c0ff00029475d021f915c01000000
    Capacity 2.05 TB
    Drive Type HDD
    Hardware Acceleration Supported
    Transport iSCSI
    Owner NMP
    Sector Format 512n
    ###############

    ################### ESXi 6.7, after I add the disk group to the virtual disk pool – the sector format size is 512e
    General
    Name HP iSCSI Disk (naa.600c0ff00029475d021f915c01000000)
    Identifier naa.600c0ff00029475d021f915c01000000
    Type disk
    Location /vmfs/devices/disks/naa.600c0ff00029475d021f915c01000000
    Capacity 2.05 TB
    Drive Type HDD
    Hardware Acceleration Supported
    Transport iSCSI
    Owner NMP
    Sector Format 512e
    ###############################

    With ESXi 6.5 the sector format size does not change (after I add the disk group to the virtual disk pool) – it remains 512n (and the datastore mounts correctly).

    It appears that it's this mix of 512e and 512n that is causing the trouble (for 6.7).

    I guess I should replace the 512n SAS disks (they seem to be the older technology) with 512e disks?

  16. Hi AughIP,

    It looks like the SSDs are 4K disks using 512 emulation technology. Starting in vSphere 6.7, VMware started to support 4K native sector sizes. I'm wondering if, after adding the cache and with the change to 512e (emulated), the 4K sectors aren't aligned, or vSphere believes they're not aligned. There could still be a configuration issue on the array, though (which could either be causing the issue you're experiencing, or combining with the 4K issue).

    Is this unit being used in production? Can you redeploy the MSA and re-create all the pools and virtual disks, but configure everything first (also using the MSA best practices for vSphere document), and then add it to the hosts? If the VMFS volume is formatted for 512, but then the array starts to present it as a different format, this could be causing the issues.

    Stephen

  17. One other question, is the VMFS volume formatted with VMFS 5 or VMFS 6?
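That question can be answered quickly from the ESXi host shell by querying the volume header. A sketch – the datastore name "Datastore1" is a placeholder, substitute your own:

```shell
# Print the VMFS version, block size, and backing extent for a datastore
# ("Datastore1" is a hypothetical name -- substitute your own):
vmkfstools -P /vmfs/volumes/Datastore1
```

The first line of the output reports the filesystem version, e.g. VMFS-5.x or VMFS-6.x.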

  18. Right! I think we have found the problem – and a solution! (Thanks, Stephen.)

    I had four spare SSD disks – they are all 512e.
    At this stage I only had one virtual disk pool – Pool A (virtual disk pool, Pool B, did not exist)

    I created a disk group (RAID1 – two SSD 512e disks) and added it to Pool B
    I created a volume on Pool B and presented it to the esxi 6.7 hosts.
    I added the volume to a 6.7 host as a datastore. It mounted correctly.
    Great, all good.

    I then added a second disk group (RAID1 – two SSD 512e disks) to the virtual disk pool – Pool B.
    This meant that Pool B contained two disk groups – the disks in both disk groups were all 512e.

    I rebooted the ESXi 6.7 host.
    Normally at this stage, when the ESXi host comes back up, the datastores on the virtual disk pool that was amended will no longer load.
    However, on this occasion THE DATASTORE MOUNTS! SUCCESS!

    The problem seems to be mixing disks of different sector format sizes in the same pool, combined with ESXi 6.7.
    ESXi 6.7 does not like the mix (ESXi 6.5 does not mind).

    I have read in the HP literature that, and I quote, "A disk group can contain a mix of 512-byte native sector size (512n) disks and 512-byte emulated sector size (512e) disks. For consistent and predictable performance, do not mix disks of different sector size types (512n, 512e)." Now, that quote is referring to disk groups containing 512n and 512e disks – which is something I never did; my disk groups only ever contained 512n or 512e disks.

    However, I did notice, when I checked the virtual disk pool – when it contained two disk groups, one disk group being 512n and the other being 512e – that the pool "Sector Format" size had a value of "MIXED".
    Pool B, which I created containing two disk groups that were all 512e disks, displayed a "Sector Format" size value of 512e.

    It seems that whatever has changed in ESXi 6.7, it does not like the mix. Ensure your disks' sector format size is all the same.
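The pool-level "Sector Format" value mentioned above can also be read from the MSA's own CLI over SSH. A sketch – `show pools` and `show disk-groups` are standard MSA 2040 CLI commands, though the exact columns displayed depend on the firmware revision:

```shell
# Run over SSH on the MSA 2040 management controller (not the ESXi host).
# A pool reporting a "MIXED" sector format contains both 512n and 512e
# disk groups (column layout varies by firmware):
show pools
show disk-groups
```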

  19. I’m glad you figured it out!

    I still think you could mix if you wanted to. You just need to mix, configure, and get everything set up before presenting it to the ESXi host. I think what's causing the problem is that when adding the disk type and then restarting, the volume sector type changes, and this is what causes everything to go nuts. Typically a sector size change on a volume can lead to corruption, so this might be a safety mechanism.

    You could avoid this by configuring everything first, then presenting it to the ESXi host, then formatting, etc.

    But if everything is working, that’s great!

  20. Yes, Stephen, I think you are correct.
    Most organisations get the disk configuration correct BEFORE they present it to the ESXi hosts, and it then never changes! Most organisations also buy the MSA full of disks (and not 10 disks like we did $$$).

    Hopefully this helps anyone else who changes config after presenting to ESXi 6.7.

    Thanks again for your support – an excellent website – keep up the great work.
