Well it is that time again, time to spread some knowledge and hope it helps someone in need!
In Purity//FA 4.8.0 and later, Pure Storage introduced a feature that reports the number of connections (or mappings) a volume has. To clarify, I am not talking about connections in the sense of FC paths or iSCSI sessions. The type of connection I am referring to here is how many host group objects or individual host objects are associated with the volume(s).
When a volume is mapped (or connected) to a host group object or host object, you are providing access to that volume for the specified host or hosts, just as with any other storage vendor. Once this connection has been made, hosts are able to access the volume as needed to perform whatever tasks are required.
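If you manage these mappings programmatically, here is a minimal sketch using the purestorage Python REST client (1.x); the array address, API token, and object names are placeholders, so verify the calls against your Purity version:

```python
# Minimal sketch using the purestorage Python REST client (1.x).
# The array address, API token, and object names are placeholders.
from purestorage import FlashArray

array = FlashArray("flasharray.example.com", api_token="YOUR-API-TOKEN")

# Map a volume directly to a single host object...
array.connect_host("standalone-host-01", "app-vol-01")

# ...or to a host group object, which grants every member host access
# through a single connection on the volume.
array.connect_hgroup("esxi-cluster-01", "datastore-vol-01")
```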
Let’s look at an example below to show what I am referring to:
In the example above you will note, by reviewing the volume properties, that a single host group object is mapped to the volume. This is accurately reflected below by “# connections” being equal to “1”.
The part people often get confused about is that the “# Hosts” value is “2”, which means two host objects have been mapped to (have access to) this volume. If there are two hosts, how can there be only one connection? The reason is that the host objects are “contained within” the host group itself, and the ACL is thus passed down to the hosts. The actual “connection/mapping”, though, is associated directly with the host group itself.
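To make the arithmetic concrete, here is a purely conceptual Python sketch (not FlashArray code, and the object names are hypothetical) of how one host group connection translates into two hosts with access:

```python
# Conceptual only: one host group mapping counts as one connection,
# while every host inside the group inherits access to the volume.
hgroup_members = {"esxi-cluster-01": ["esxi-01", "esxi-02"]}  # hypothetical
volume_connections = ["esxi-cluster-01"]  # the volume's single hgroup mapping

num_connections = len(volume_connections)                            # -> 1
num_hosts = sum(len(hgroup_members[c]) for c in volume_connections)  # -> 2

print(f"# Connections: {num_connections}, # Hosts: {num_hosts}")
```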
In the description of a host group it states:
A Purity//FA host group is an abstraction that implements consistent connections between a set of hosts and one or more volumes. Connections are consistent in the sense that all hosts associated with a host group address a volume connected to the group by the same LUN. Host groups are typically used to provide a common view of storage volumes to the hosts in a clustered application.
Oftentimes you will find that the hosts defined in a host group are also in the same ESXi cluster, Windows Server Failover Clustering (WSFC) deployment, Oracle RAC configuration, etc. Associating them all with a single host group provides consistency (as mentioned), organization, and simplicity.
Since host groups are typically associated with clustered environments, we report the mapping as a single connection, as all hosts in that group should be “cluster aware”. As previously mentioned, this concept is really no different from other vendors’… just a different naming convention.
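As a sketch of that workflow with the same assumed Python client: create one host group for the cluster nodes, then connect the shared volume once. The names are placeholders, and the lun keyword is my assumption based on the REST 1.x API (it pins the LUN number every member host will use):

```python
# Sketch: one host group per cluster, one connection for the shared volume,
# so every node addresses it by the same LUN.
from purestorage import FlashArray

array = FlashArray("flasharray.example.com", api_token="YOUR-API-TOKEN")

# Group the cluster nodes (hypothetical host objects) into one host group.
array.create_hgroup("oracle-rac-01", hostlist=["rac-node-01", "rac-node-02"])

# One connection for the whole cluster; lun= is assumed from the REST 1.x
# API and pins the LUN number all member hosts will see.
array.connect_hgroup("oracle-rac-01", "rac-data-vol", lun=42)
```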
WARNING: Multiple connections reported for a volume.
Now that we understand what connections mean in this context, let’s look at the warning that has been asked about:
What this message is saying, in a very simple way, is that you now have multiple host groups, standalone hosts, or a mixture of both connected to a single volume.
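If you want to see exactly what that count is made of for a given volume, here is a hedged sketch; I am assuming the purestorage REST 1.x client’s list_volume_private_connections (direct host mappings) and list_volume_shared_connections (host group mappings) calls, with a placeholder volume name:

```python
# Sketch: enumerate what "# connections" is counting for one volume.
from purestorage import FlashArray

array = FlashArray("flasharray.example.com", api_token="YOUR-API-TOKEN")
volume = "shared-vol-01"  # placeholder

host_conns = array.list_volume_private_connections(volume)   # direct host mappings
hgroup_conns = array.list_volume_shared_connections(volume)  # via host groups

print(f"{volume}: {len(host_conns) + len(hgroup_conns)} connection(s)")
for conn in list(host_conns) + list(hgroup_conns):
    print(conn)
```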
With the release of vVols, there has been increased concern surrounding this warning. I will explain below why that is the case and why it is okay to ignore it (for vVols specifically).
Why report this warning?
It is important to understand the underlying reason this warning is being reported and why it was implemented.
The primary purpose is to ensure the end-user meant to connect (or map) a volume to multiple hosts or host groups.
We want to inform end-users of this because the consequences of a mistake in this area could be catastrophic.
Above, we discussed that host groups are typically used (and are recommended) for cluster configurations. Consistent presentation to a cluster (especially on older versions of ESXi, Windows, Oracle, etc.) is, and always has been, very important because of how identifiers are generated and used for each individual volume.
The benefit of clusters is that they often use a clustered filesystem (such as VMFS with ESXi or Cluster Shared Volumes (CSV) with Windows) or another mechanism that ensures all hosts with access to the volume/filesystem work in cooperation with one another: hosts write to specific locations, avoid overwriting critical information, and so on.
So what happens if you were to take a volume with a clustered filesystem on it and present it to a standalone host that doesn’t have the insight to know what is being written and why?
The answer: corruption. Even if the host recognized the filesystem format, it could overwrite sections of the LUN/volume that held critical information for a specific application. The next time the application tried to read that information, the checksum/data wouldn’t match, potentially causing problems that result in downtime or the need to restore.
Additionally, companies are often divided into different organizations and departments. If multiple orgs are waiting to have volumes provisioned and the storage admin mistakenly maps the wrong volume, the receiving org may unknowingly overwrite that volume completely. I have been on a couple of these calls in my time in the storage industry, and it is not a fun or easy call to have.
So, Pure Storage decided it was worth adding a verification step to be sure the end-user / storage admin meant to perform the action they took. There are scenarios where having multiple host groups or hosts connected to a single volume is actually needed and supported. If you are in one of those scenarios, feel free to ignore the warning; just understand it is reported for your sake and the sake of others in times of potential human error. As much as I wish it were true, no one is perfect, so having another verification in place makes me feel a little better.
If your environment does not call for this configuration and the warning is reported, please double-check that no mistakes were made while provisioning the volume(s).
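A hedged audit sketch along those lines, reusing the assumed connection-listing calls from the previous example, flags every volume with more than one connection so each mapping can be double-checked:

```python
# Sketch: flag volumes with more than one connection for manual review.
from purestorage import FlashArray

array = FlashArray("flasharray.example.com", api_token="YOUR-API-TOKEN")

for vol in array.list_volumes():
    name = vol["name"]
    total = (len(array.list_volume_private_connections(name))
             + len(array.list_volume_shared_connections(name)))
    if total > 1:
        print(f"{name}: {total} connections - verify each mapping is intended")
```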
Why are vVols not necessarily an issue?
One of the greatest benefits of vVols is the simplicity of it all. In reality, there is only one “volume” (the protocol endpoint) that the VMware or storage admin needs to map to the ESXi hosts on the FlashArray. If you are using the Pure Storage vSphere plugin, this process is even easier. With the exception of the protocol endpoint, everything else is done for you.
What I mean by that is, when you create a VM using the vVol architecture, all of the “configuration” needed for that VM is handled by the ESXi host and the VASA provider. This is true both for the VMs themselves and for the vSphere HA vVol that is created (when that feature is in use).
When a VM is created, powered on, powered off, etc., the ESXi host makes calls to the VASA provider, which translate into tasks on the FlashArray, and the appropriate actions are taken. Most of the time only a single host will have a “bind” on an individual VM’s vVols, which is obviously very different from VMFS. It is all handled dynamically by VMware, and you won’t often see multiple hosts needing access to the same vVol.
There are some scenarios, however, where you will. One such scenario is vMotioning a VM from one ESXi host to another. During the last phase of the vMotion (and for a short time after), you will notice that there are two connections listed (one for each ESXi host). This is all managed by VMware and is not only okay but expected, as toward the end both hosts need temporary access to the config vVol to make the necessary updates.
Another common scenario is when vSphere HA needs to make updates to a VM that doesn’t reside on the “master” HA node. You may see a temporary bind request (connection) sent from the master node to perform whatever update the task requires. Once the task is complete, the vVol is unbound and the connection is no longer present.
These are just two of several scenarios you might see. Since it is all managed by VMware and VASA, you do not need to worry about these warnings being reported in the GUI unless otherwise noted by Pure Storage support or employees.
NOTE: I want to reiterate: you should not need to manually configure vVols to have access to the ESXi hosts. If you do, something is wrong and you should contact support to determine what is happening and how to fix it. If you have manually configured anything for vVols (oftentimes it is the vSphere HA vVol people do this with), please remove that configuration and allow ESXi and VASA to maintain the configuration and mappings.
So what’s the “Quick n Dirty” of it all?
- This warning is to ensure end-users / storage admins are only mapping volumes to the proper host groups or hosts that the volumes belong to.
- It is only precautionary and can be ignored if your environment is meant to have a volume accessed by multiple host groups or individual hosts.
- vVols are expected to trigger this warning from time to time; as long as the extra connections do not persist for longer than 10-15 minutes, they are nothing to be concerned about (see the monitoring sketch below this list).
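As a rough illustration of that last point, here is a hedged monitoring sketch (same assumed purestorage client calls as above) that only flags a volume once it has kept multiple connections past a 15-minute grace period:

```python
# Sketch: ignore transient multi-connection states (vMotion, HA binds) and
# only flag volumes that stay multi-connected past a grace period.
import time

from purestorage import FlashArray

GRACE_SECONDS = 15 * 60  # the 10-15 minute window mentioned above


def connection_count(array, volume):
    """Total host + host group connections for one volume (assumed calls)."""
    return (len(array.list_volume_private_connections(volume))
            + len(array.list_volume_shared_connections(volume)))


array = FlashArray("flasharray.example.com", api_token="YOUR-API-TOKEN")
first_seen = {}  # volume name -> when multiple connections first appeared

while True:
    for vol in array.list_volumes():
        name = vol["name"]
        if connection_count(array, name) > 1:
            first_seen.setdefault(name, time.time())
            if time.time() - first_seen[name] > GRACE_SECONDS:
                print(f"{name}: multiple connections for >15 minutes - verify")
        else:
            first_seen.pop(name, None)
    time.sleep(60)
```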
If you have any questions surrounding this information please let me know. If you are ever in doubt, and need peace of mind, please open a ticket with Pure Storage support and they should be able to help you out in no time! 🙂