
VMware vVols: Pure Storage and thick provisioned vVols.

August 16, 2019 by jhop

The other day we got a question in the Pure Storage community forums that I thought was worth sharing. I didn’t go into much depth there, so I thought I would explain it more fully here.

We are going to be updating our official documentation as well to ensure this is better understood.

The Question

I attempted to create, or migrate, a thick provisioned vVol to the Pure Storage FlashArray and I received the following error:

Error creating disk Error creating VVol Object. This may be due to insufficient available space on the datastore or the datastore's inability to support the selected provisioning type

Why am I unable to create a thick provisioned vVol on a Pure Storage FlashArray?

This is actually a great question and does seem a little counter-intuitive. You are able to create a thick provisioned VM when using a VMFS datastore, so why not a vVol?

Since I am the type of person who feels it is important to have the “full picture” to understand things, I am going to provide some basic information about VMFS and vVols so that the comparison is better understood. If you already understand VMFS and vVols and don’t want to read all of the information here, feel free to skip to the “Bringing it home” section for the direct answer to that question. 🙂

Let’s provide some fairly basic information here just to make the point clearer at the end of this post.

What is a VMFS datastore?

Since most people reading this are likely already familiar with VMFS, I won’t go into too much detail; a high-level overview is enough to make clear why you can create a thick provisioned VM on VMFS with Pure Storage but not with vVols.

Simply stated, a datastore is a logical container that is used to store virtual machine files. The underlying structure of the datastore is a “disk”; disk meaning a physical disk inside the ESXi host, a LUN provided by backend storage, etc.

In our case it is a LUN (or volume) provided by the Pure Storage FlashArray to the ESXi host. Once the disk is available to the ESXi host, you have the option to format the LUN with the vSphere Virtual Machine File System (VMFS). Once the filesystem is written to the LUN, you are then able to provision VMs, templates, ISOs, etc. on this datastore.

For the sake of comparison let me explain just a little more (hang in there with me).

Let’s say you now provision a VM on the datastore, you choose thick provisioning, and the VM is created. When you choose thick provisioning you are effectively asking the filesystem to “guarantee” this space (i.e. reserve all of this space for the VM so it is always available for use). Thick provisioning is great because when you look at the filesystem, the used and available space is accurately reflected. You know that over time this space will not change (unless you add or remove objects) because the space is provisioned ahead of time. This can bring a lot of peace of mind to administrators because they don’t have to worry about running out of space unexpectedly (there are some rare cases, but let’s not rabbit hole).
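
To make that concrete, here is a minimal pyVmomi sketch (not from the original post) showing the disk backing flags that control thin versus thick provisioning when adding a VMDK. The controller key and unit number are placeholder assumptions.

    from pyVmomi import vim

    def make_disk_spec(capacity_gb, thick=True, eager_zero=True):
        """Build a device spec for a new VMDK; controller key / unit number are placeholders."""
        backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
        backing.diskMode = 'persistent'
        if thick:
            # Thick provisioning: ask the datastore to reserve the full capacity up front.
            backing.thinProvisioned = False
            backing.eagerlyScrub = eager_zero  # True = eager-zeroed thick
        else:
            # Thin provisioning: only blocks that are actually written consume space.
            backing.thinProvisioned = True

        disk = vim.vm.device.VirtualDisk()
        disk.backing = backing
        disk.capacityInKB = capacity_gb * 1024 * 1024
        disk.controllerKey = 1000  # assumed SCSI controller key
        disk.unitNumber = 1        # assumed free unit number

        spec = vim.vm.device.VirtualDeviceSpec()
        spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
        spec.device = disk
        return spec

A spec like this with thinProvisioned set to False is exactly the kind of request that produces the error quoted at the top of this post when it is aimed at a FlashArray vVol datastore.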

This is the complete opposite of thin provisioning. With thin provisioning you create a VM of a given size and then “hope” that the space will be available if you need it. Oftentimes datastores with thin provisioned VMs are over provisioned (hopefully strategically) so that more VMs can fit into one space. There are some great reasons to use thin provisioning (some obvious, some not) and I would recommend reading more about it if you aren’t familiar with it.

The primary thing to understand with thin provisioning is that if you create a 100GB VM you are not guaranteed that space in the future. You are only guaranteed the space you have already written to, and nothing more.
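
If you want to see how over provisioned a thin-heavy datastore really is, the datastore summary exposes the numbers. Here is a rough pyVmomi sketch that assumes ds is a vim.Datastore object you have already looked up in the inventory.

    def report_overprovisioning(ds):
        # capacity / freeSpace / uncommitted are reported in bytes on the summary.
        s = ds.summary
        gib = 1024 ** 3
        provisioned = (s.capacity - s.freeSpace) + (s.uncommitted or 0)
        print(f"{s.name}: capacity {s.capacity / gib:.0f} GiB, "
              f"free {s.freeSpace / gib:.0f} GiB, "
              f"provisioned {provisioned / gib:.0f} GiB "
              f"({provisioned / s.capacity:.0%} of capacity)")

Anything over 100% means the datastore is promising more space to thin provisioned VMs than it can actually back.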

The takeaway:

The main point to take away here is that, when using VMFS, the filesystem itself is able to guarantee that the space will be available for the VM(s) to use when / if it is needed. Obviously this depends on the backend storage being available, but you get the general idea: when VMFS is in use, VMware is responsible for space management / reservation and for keeping its filesystem up to date so that these reservations are honored.

Feel free to read more about VMFS datastores for more in-depth information if you so desire!

What are vVols?

Before I begin it is important to note that I am not going to be discussing a vVol datastore, just the vVol itself. If you want to understand the difference you can review my good friend Cody Hosterman’s blog: What is a vVol datastore?

At a basic level a vVol (also called a virtual volume) is a volume created, and dedicated, for each virtual disk (VMDK) associated with a virtual machine. So if you were to create a VM with 3 VMDKs, you would expect to see 4 volumes created directly on the FlashArray: a volume for the config data and then 3 volumes for the data disks. The VM then writes its data directly to these dedicated volumes.
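
If you want to see that mapping for yourself, here is a sketch using the purestorage Python REST client. The "vvol-<vm name>" prefix is an assumption about the FlashArray's default vVol volume naming convention and may differ between Purity versions, so treat it as illustrative only.

    import purestorage

    # Assumed array address and API token; replace with your own.
    array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

    vm_name = "my-vvol-vm"
    prefix = f"vvol-{vm_name.lower()}"  # assumed naming convention for this VM's vVols

    # For a VM with 3 data VMDKs you would expect a config volume plus 3 data volumes.
    for vol in array.list_volumes():
        if vol["name"].lower().startswith(prefix):
            print(f"{vol['name']}: {vol['size'] / 1024 ** 3:.0f} GiB provisioned")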

Again, since we are not going in-depth on vVols here, I will keep it simple. The thing I am pointing out is that the VMFS filesystem is removed from the equation and VMs are no longer logically stored there; rather, they are stored directly on the FlashArray. This means it is the FlashArray’s responsibility to “manage the space,” so to speak. So if a thick provisioned VM were created using vVols, it would be the FlashArray’s responsibility to reserve that space for the VM, because there is no longer a logical layer (the VMFS datastore) between the FlashArray and the VM.

If you are hungry for more information on vVols (which we hope you are!), please read Cody Hosterman’s blog: Introducing vSphere Virtual Volumes on the FlashArray.

Bringing it home

All right, now that we understand the difference between the two concepts, we can accurately explain why we do not allow thick provisioned VMs to be created on the FlashArray with vVols.

Applying what was explained above (with a thick provisioned vVol, the FlashArray would need to reserve that space), it is important to understand that the FlashArray cannot guarantee that space. That’s right, the FlashArray cannot reserve space for (i.e. thick provision) anything. Any volume created on the FlashArray is both thin and micro provisioned.

The best way to explain why it was designed this way can be found in the FlashArray User Guide (any version):

To optimize physical storage utilization, however, FlashArray volumes are thin and micro provisioned.

• Thin provisioning. Like conventional arrays that support thin provisioning, FlashArrays do not allocate physical storage for volume sectors that no host has ever written, or for trimmed (expressly deallocated by host or array administrator command) sector addresses.

• Micro provisioning. Unlike conventional thin provisioning arrays, FlashArrays allocate only the exact amount of physical storage required by each host-written block after reduction. In FlashArrays, there is no concept of allocating storage in “chunks” of some fixed size.
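
To see thin and micro provisioning in practice, you can compare a volume’s provisioned size with the physical space it actually consumes after data reduction. This sketch again uses the purestorage REST client; the exact keys returned by the space query are an assumption about the 1.x client and may vary slightly by Purity version.

    import purestorage

    array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

    # space=True asks the array for space metrics alongside the volume details.
    info = array.get_volume("my-vvol-data-volume", space=True)
    gib = 1024 ** 3

    provisioned = info["size"]              # what the host believes it has
    physical = info.get("volumes", 0)       # unique space actually written, post-reduction
    reduction = info.get("data_reduction", 1.0)

    print(f"provisioned {provisioned / gib:.0f} GiB, "
          f"physical {physical / gib:.2f} GiB, "
          f"data reduction {reduction:.1f}:1")

The gap between those two numbers is the whole point: the array never sets physical capacity aside just because a host asked for it.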

Since the FlashArray is not designed to guarantee that space, it was decided to reject any thick provisioning requests for vVols. This was done to ensure that the end user isn’t misled into thinking that space is always available to that vVol / VM. It is a matter of making sure both storage and vSphere administrators understand how space is being provisioned so that they can plan accordingly.

So there you go, you have your answer! Hopefully you found this helpful and, as always, please feel free to ask questions or leave comments!

-jhop


