
VMware vVols: Why you should never manually connect a vVol to a host or host group.

January 23, 2020 by jhop

It has been quite a while since I have blogged, and I am excited to be back! I think we can all appreciate and understand that the holidays get so crazy that everything else can fall by the wayside. 🙂

Today I would like to address a very common issue that we see with Pure Storage and vVols: manually connecting a virtual volume (vVol) to a host or host group.

Why do I want to manually connect a vVol to a host or host group?

The answer here is easy: habit! In the olden days, before vVols were invented, everything was done manually with volume/LUN connectivity. The workflow would look similar to the following (it could vary, but you'll get the idea), with a rough command-line sketch after the list:

1. Create a volume on the FlashArray for a datastore.
2. Map that volume to one or more hosts or host groups.
3. Rescan the ESXi hosts and find the newly presented volume.
4. Create a VMFS-3, VMFS-5 or VMFS-6 datastore.
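
In Purity CLI terms, that legacy flow looks roughly like this. This is only a sketch; the volume, host group, and host names are made up for illustration:

root@purearray-ct0:~# purevol create --size 4T vmfs-datastore-01
root@purearray-ct0:~# purevol connect --hgroup esxi-cluster vmfs-datastore-01
[root@esxi-host:~] esxcli storage core adapter rescan --all

The VMFS datastore itself would then be created on the newly discovered device, typically from the vSphere Client.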

Once the volume was presented it had to remain connected at all times because it could house hundreds of VMs at any time. If that volume was disconnected, every VM on the datastore would be impacted and SEV-1 bells would sound!

Thus, when a storage admin or vSphere admin sees a message about a volume being disconnected (with vVols), it can create a state of panic and a feeling that action needs to be taken quickly to resolve the issue.

Why should I never manually connect a vVol to a host or host group?

The answer here is easy too: because it won't accomplish anything but cause problems down the road! 🙂

Unfortunately, I am sure most people are not going to be okay with just a single-line answer, so for those of you that need more, keep reading.

The real answer is that the ESXi host cluster is "self-managing" when it comes to vVol connectivity (through the use of binds). The cluster determines which volumes should be connected and to which host they should be connected. The method it uses is quite simple: wherever the powered-on VM that owns those vVols resides is where the volumes will be connected.

Notice the emphasis on "powered on." The reason is that a powered-off VM will not have its underlying vVols mapped to any host in the cluster. And really, why should it? The host has no use for the vVols while the VM is powered off. There are a few exceptions to this (such as browsing a datastore), but it holds true 9 out of 10 times. Cody Hosterman's blog goes into this concept more in-depth; it is a great read and I would recommend it.
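
By the way, from the host's perspective the only "real" LUN is the protocol endpoint; the individual vVols are bound behind it using sub-LUN addressing. If you are curious which protocol endpoints a host knows about, the ESXi shell can list them. A minimal sketch; the prompt and host name are illustrative:

[root@elaine:~] esxcli storage vvol protocolendpoint list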

Now that the concept has been explained, I want to illustrate it so it makes a little more sense for those that are newer to vVols. I have a VM that is using vVols, and I have called it 'vVol-VM-Example':

root@purearray-ct0:~# purevol list |grep -E 'Name|Example'
 Name                                              Size  Source                                Created                  Serial
 vvol-vVol-VM-Example-c630ec8a-vg/Config-5ee8c42b  4G    -                                     2020-01-23 12:30:37 PST  8A75393BECAD4E430002F415
 vvol-vVol-VM-Example-c630ec8a-vg/Data-a3884348    256G  vvol-Fredo-7a0207a9-vg/Data-366d7c60  2020-01-23 12:30:39 PST  8A75393BECAD4E430002F416

Notice that we have two volumes associated with this VM (a Config and a Data vVol). If I look at which hosts they are connected to while the VM is powered off, this is what you will see:

root@purearray-ct0:~# purevol list --connect |grep -E 'Name|Example'
 Name                                              Size  LUN     Host Group         Host

None; they are not mapped to any host. Based on what we discussed above, this is expected and normal. Since the VM isn't powered on or in use, there is no need to have its vVols connected. It is usually at this point that someone sees this, panics, and takes steps to manually map the vVols to a host group because they think the VM won't be accessible otherwise.
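
For the record, that panicked manual mapping would look something like this on the array. The host group name here is hypothetical, and this is exactly the step you should not take:

root@purearray-ct0:~# purevol connect --hgroup esxi-cluster vvol-vVol-VM-Example-c630ec8a-vg/Config-5ee8c42b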

But, as we outlined above, that is not necessary. To show you what I mean, I am going to simply power on the VM (that owns these vVols) from the vCenter Server, and then we'll see what it looks like:

root@purearray-ct0:~# purevol list --connect |grep -E 'Name|Example'
 Name                                              Size  LUN     Host Group         Host
 vvol-vVol-VM-Example-c630ec8a-vg/Config-5ee8c42b  4G    254:8   -                  elaine
 vvol-vVol-VM-Example-c630ec8a-vg/Data-a3884348    256G  254:10  -                  elaine
 vvol-vVol-VM-Example-c630ec8a-vg/Swap-a0f80e79    16G   254:9   -                  elaine

Notice now that the Config and Data vVols have been connected to the ESXi host "elaine" (yes, my entire environment is named after Seinfeld). Additionally, a Swap vVol was automatically created now that the VM is powered on.
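
As an aside, the same power-on can be driven from the ESXi shell instead of the vCenter Server. A minimal sketch, where the VM ID returned by the first command is what you pass to the second:

[root@elaine:~] vim-cmd vmsvc/getallvms | grep vVol-VM-Example
[root@elaine:~] vim-cmd vmsvc/power.on <vmid>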

If we look at the audit trail it looks like this:

root@purearray-ct0:~# pureaudit list
 ID        Time                     User      Command     Subcommand   Name                                                        Arguments
 10144877  2020-01-23 12:37:55 PST  pureuser  purevol     connect                                                                   --protocol-endpoint pure-protocol-endpoint --host elaine vvol-vVol-VM-Example-c630ec8a-vg/Config-5ee8c42b
 10144878  2020-01-23 12:37:55 PST  pureuser  purevol     create                                                                    --size 16G Swap-a0f80e79
 10144879  2020-01-23 12:37:55 PST  pureuser  purevol     recover                                                                   vvol-vVol-VM-Example-c630ec8a-vg/Swap-a0f80e79
 10144880  2020-01-23 12:37:56 PST  pureuser  purevol     connect                                                                   --protocol-endpoint pure-protocol-endpoint --host elaine vvol-vVol-VM-Example-c630ec8a-vg/Swap-a0f80e79
 10144881  2020-01-23 12:37:56 PST  pureuser  purevol     connect                                                                   --protocol-endpoint pure-protocol-endpoint --host elaine vvol-vVol-VM-Example-c630ec8a-vg/Data-a3884348

All of those calls were made automatically from ESXi to our VASA provider. This can be confusing at times because the audit log says that "pureuser" performed these steps. That is simply the account that was used to register the VASA provider, so calls from the ESXi hosts will reflect that account. If you want to more accurately track calls from ESXi to VASA, you could set up a dedicated service account and use that to register VASA.
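
If you want to watch just the bind and unbind activity during a power cycle, you can filter the audit trail with the same kind of grep used throughout this post:

root@purearray-ct0:~# pureaudit list | grep -E 'connect|disconnect'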

Okay, we have seen that ESXi automatically takes the necessary steps when the VM is powered on. What happens if we power the VM off?

Let’s take a look:

root@purearray-ct0:~# pureaudit list
 ID        Time                     User      Command     Subcommand   Name                                                        Arguments
 10144884  2020-01-23 12:47:13 PST  pureuser  purehost    disconnect   elaine                                                      --vol vvol-vVol-VM-Example-c630ec8a-vg/Swap-a0f80e79
 10144885  2020-01-23 12:47:13 PST  pureuser  purevol     destroy                                                                  vvol-vVol-VM-Example-c630ec8a-vg/Swap-a0f80e79
 10144886  2020-01-23 12:47:13 PST  pureuser  purevol     eradicate                                                                vvol-vVol-VM-Example-c630ec8a-vg/Swap-a0f80e79
 10144887  2020-01-23 12:47:45 PST  pureuser  purehost    disconnect   elaine                                                      --vol vvol-vVol-VM-Example-c630ec8a-vg/Config-5ee8c42b
 10144888  2020-01-23 12:48:16 PST  pureuser  purehost    disconnect   elaine                                                      --vol vvol-vVol-VM-Example-c630ec8a-vg/Data-a3884348

Again, notice how the Swap vVol is disconnected, destroyed, and eradicated (swap isn't needed while the machine is powered off), and a short time later ESXi requests that the Config and Data volumes be disconnected (unbound). All of this happens without any intervention other than powering the VM off or on.
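
If you re-run the connection listing from earlier at this point, it should once again return only the header row, confirming the vVols are unmapped while the VM is powered off:

root@purearray-ct0:~# purevol list --connect |grep -E 'Name|Example'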

What is the takeaway?

The takeaway is that if you start making decisions on behalf of the vSphere cluster, you can cause confusion and lead to other failures. Certain calls from ESXi to the FlashArray require information that is only available if the volumes are connected through the protocol endpoint. If the volumes are connected any other way, those calls will fail and you will have problems with that virtual machine.

NOTE: You can review "VMware vVols: VMs on a Pure Storage vVol datastore report vSphere HA errors" for one such example of things that can go wrong.

Additionally, three of the main reasons to use vVols are simplicity, scalability, and performance (by utilizing a protocol endpoint and sub-LUN addressing). If you start manually mapping volumes to hosts or host groups, you have just defeated all three.

So, as my colleague so eloquently put it:

“WARNING: Directly connecting a vVol volume to a host group or host achieves nothing at all times!”

If you ever feel like manual intervention is required, please reach out to Pure Storage support; they are ready and willing to help!

-jhop
