For anyone who may be following this blog, I apologize for the delay in providing new content. I was out of the office for a period of time to take care of some important personal items that needed my undivided attention. Needless to say, it feels great to be back to work and back to doing the things I love!
Today’s topic is one that multiple customers (and internal folks here at Pure Storage) have asked about, and since a good amount of testing has been done, I thought it would make a good post to share some deeper information on what was found.
The question of the day: Can (or should) I use Adaptive Queueing (AQDA) or Storage I/O Control (SIOC) when QoS Rate Limiting is enabled on a FlashArray volume or vgroup?
Anyone who reads my blog posts knows by now that it is important to me that everyone is on the same page before a question is answered. It is frustrating when I navigate to a blog and find the answer but have no idea how or why they came to that decision, or how something works (at even a high level). When that happens I usually go on a wild goose chase to gather all of the information I may have missed from their article. As such, I like to provide the context as well as the answer, so that people are able to ask better questions, implement the right strategy for them, avoid searching all over the web for the information, or in some cases simply call me out on my BS. 😉
I began writing this article, found it getting a little lengthy, and decided to split it into a mini-series (only 2, maybe 3 blog posts). That way the information is easier to digest, and those who already know something can skip to the applicable article.
This is going to be Part 1 of the series and will explain each feature listed below. I will not go super in-depth about everything under the covers but will (try to) provide a comprehensive overview so that there is an understanding of what each technology is trying to accomplish.
So without further ado let’s first start off by discussing these 3 technologies mentioned in our subject:
– Adaptive Queue Depth Algorithm (AQDA)
– Storage I/O Control (SIOC)
– Quality of Service (QoS) Rate Limiting (on the Pure Storage FlashArray)
Note: If you are not familiar with Queue Depth, then reading my colleague’s blog post called “Understanding VMware ESXi Queueing and the FlashArray” will help you understand why I am referring to queue depth so often below.
What is the Adaptive Queue Depth Algorithm (AQDA)?
The first VMware feature I want to discuss today is called “Adaptive Queueing”, also known as the “Adaptive Queue Depth Algorithm”. Throughout this article I will refer to this feature as AQDA.
AQDA was first introduced back in ESX/ESXi 3.5 U4 (well over a decade ago!). This technology was released to help overloaded storage arrays (or storage ports) by throttling how much I/O is allowed to be sent at once. AQDA accomplishes this by dynamically changing (reducing) the queue depth on a per-datastore level for any device that is reporting congestion.
By reducing the queue on the impacted datastores, this limits the amount of outstanding I/O that can be sent to the storage array. In theory, slowing down incoming I/O requests should give the array time to “catch up” to the workload. If nothing else, it will hopefully at least create more consistent bandwidth / throughput / latency, meaning there “should be” less impact to the environment. The “should be” is in quotes because there are obviously some situations in which it may not help, or may not help enough. Hopefully you get the general idea of its purpose by this point though.
One of the most important takeaways with AQDA is that the congestion has to be reported to the ESXi host from the storage array before it kicks in. It is not simply based off of latency. The storage array informs the ESXi host of congestion by sending specific SCSI Status Codes back to the host as I/O requests are received.
The specific SCSI codes required to trigger AQDA are:
- TASK_SET_FULL (0x28)
- BUSY (0x8)
As the ESXi host monitors the I/O responses, it determines which datastore is reporting the congestion and keeps a count of how many of those requests are tagged as BUSY or FULL. Once a specific number of these congestion responses has been received (we’ll say 16), the ESXi host will begin to reduce the queue depth and throttle the I/O being sent to the impacted datastore.
AQDA does this by cutting the queue in half each time the congestion threshold is reached (e.g. 64 > 32 > 16 > 8 > 4 > 2 > 1). We have confirmed through testing that the queue depth can indeed drop all the way down to 1 with AQDA, though this only happened in the most extreme circumstances, when QoS was set very low on the volume and a very heavy workload was triggered.
As time goes on, and the storage array is able to successfully complete I/O requests, the ESXi host keeps track of those completions (0x00). Once a specific number of successful completions (we’ll say 8) is received from storage, the host will slowly start to increase the queue depth back to its original, pre-congestion value. These queue slots are returned one at a time; the queue does not double the way it was halved when being reduced. So if the queue dropped from 64 down to 32, AQDA would need a total of 256 successful I/O completions to get back to the original queue depth of 64 (32 * 8).
Important Note: If TASK_SET_FULL or BUSY responses are received while the queue is being replenished, the queue slots will stop filling until there are 8 successful completions in a row. And if the threshold for BUSY or TASK_SET_FULL responses is met, not only will the queue stop increasing, it will begin to decrease again to slow the flow of I/O.
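To make the halving and one-at-a-time replenishing behavior above more concrete, here is a toy simulation of it. The thresholds (16 congestion responses before halving, 8 successful completions per returned slot) and the starting depth of 64 match the examples in the text, but the real ESXi implementation is internal and more nuanced, so treat this purely as an illustration:

```python
# Toy simulation of the AQDA behavior described above. Thresholds and
# starting depth match the examples in the text; this is NOT VMware's
# actual implementation.

class AqdaSimulator:
    def __init__(self, max_depth=64, qfull_sample_size=16, qfull_threshold=8):
        self.max_depth = max_depth
        self.depth = max_depth
        self.qfull_sample_size = qfull_sample_size  # BUSY/TASK_SET_FULL count before halving
        self.qfull_threshold = qfull_threshold      # GOOD streak needed per returned slot
        self._congestion_count = 0
        self._good_count = 0

    def on_congestion(self):
        """Handle a BUSY (0x8) or TASK_SET_FULL (0x28) response."""
        self._good_count = 0  # a congestion response resets the GOOD streak
        self._congestion_count += 1
        if self._congestion_count >= self.qfull_sample_size:
            self.depth = max(1, self.depth // 2)  # cut the queue in half, floor of 1
            self._congestion_count = 0

    def on_good(self):
        """Handle a successful completion (0x00)."""
        self._congestion_count = 0
        self._good_count += 1
        if self._good_count >= self.qfull_threshold and self.depth < self.max_depth:
            self.depth += 1  # slots come back one at a time, not by doubling
            self._good_count = 0

sim = AqdaSimulator()
for _ in range(32):       # 32 congestion responses -> two halvings: 64 -> 32 -> 16
    sim.on_congestion()
print(sim.depth)          # 16
for _ in range(8 * 48):   # 48 slots * 8 GOODs each to climb from 16 back to 64
    sim.on_good()
print(sim.depth)          # 64
```

Notice the asymmetry: two bursts of congestion cut the queue from 64 to 16 almost immediately, but it takes 384 clean completions to earn those slots back.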
How do you enable this feature?
Adaptive Queue Depth is disabled by default. So in order to enable this feature you need to change the following settings from “0” to your preferred values:
– QFullSampleSize – The amount of TASK_SET_FULL or BUSY responses needed before AQDA kicks in and begins to throttle.
Maximum value that should be set here is: 64.
– QFullThreshold – The amount of GOOD (successful completion) responses needed before AQDA begins to increase the queue depth until back to its original value.
Maximum value that can be set here is: 16.
Enabling this feature does not require a reboot.
How do you disable this feature?
You can disable AQDA by simply setting both the QFullSampleSize and QFullThreshold back to a value of “0“.
Disabling this feature does not require a reboot.
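For reference, these settings can be changed per device with esxcli (this is the approach covered in VMware KB 1008113). The device identifier below is a placeholder, and the sample values are just examples within the maximums listed above:

```shell
# Check the current QFull settings for a device (substitute your own
# naa identifier, found via 'esxcli storage core device list'):
esxcli storage core device list -d naa.624a9370xxxxxxxxxxxxxxxx

# Enable AQDA on that device by setting both values to non-zero
# (QFullSampleSize max 64, QFullThreshold max 16):
esxcli storage core device set -d naa.624a9370xxxxxxxxxxxxxxxx \
    --queue-full-sample-size=32 --queue-full-threshold=8

# Disable AQDA again by returning both values to 0:
esxcli storage core device set -d naa.624a9370xxxxxxxxxxxxxxxx \
    --queue-full-sample-size=0 --queue-full-threshold=0
```

Remember that if you enable it, you should run the same commands for the relevant devices on every host in the cluster.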
Important Note: One issue that I have been able to reproduce in my environment (on 6.7U3) is that disabling AQDA may result in the ESXi host setting the Queue Depth to a value of “1”. This means that once you disable AQDA you may notice that your I/O throughput and latency gets worse! If you were to look in ‘esxtop’ you would see that the queue depth (DQLEN) would be reporting a value of “1” rather than the default of (likely) 64 or 32.
In order to resolve this you can either:
- Re-enable AQDA and then disable it again immediately after.
- Reboot the ESXi host
Obviously, re-enabling and disabling again is quicker and less impactful to the environment. The only caution to note here is that I don’t know if any other underlying problems are created as a result of that. I personally haven’t noticed any issues, but if you prefer to take the safe route, then a reboot may be best for your environment.
If you decide that AQDA should be used in your environment it is very important that you enable it on ALL hosts within the cluster. If you do not then you could make things even worse during times of congestion.
For more information you can review the following VMware KB:
Controlling LUN queue depth throttling in VMware ESX/ESXi (1008113)
What is Storage I/O Control (SIOC)?
Storage I/O Control (referred to as SIOC moving forward) is a feature created by VMware that is also triggered during times of increased congestion. It is actually quite a powerful feature (if used properly), and we could spend quite a long time explaining every detail. Since this post is more about helping you understand what it does and what conditions trigger it, I won’t give you all the details, but I will provide links to places where you can dig into the details further if you so desire.
Some people may ask, “If we already have AQDA, why create another feature for congestion?”. This is a fair and reasonable question, one that is relatively simple in concept but a little trickier under the hood.
We know that AQDA is engaged only when the storage array says, at a SCSI level,
“Hey, I am a little overloaded over here… mind slowing down your I/O requests so I can catch up?”.
Once that signal is received the ESXi host begins to reduce the queue and slow things down until the signal received from storage says,
“Okay, all caught up.”
at which point the queue slots are slowly replenished.
This is a great feature, right? But what about the times when the storage array isn’t reporting problems because it isn’t responsible for the congestion? Or maybe the storage array simply doesn’t send those notifications? What if congestion is happening within the environment due to other limitations? I know we all love to blame storage, but storage isn’t always the problem! 😉 Other causes of congestion include an oversaturated network, failing switches and/or paths, a rogue VM (or VMs), and the list goes on… and this is where SIOC comes into play and really shines.
Unlike AQDA, SIOC is engaged at times of increased latency on individual datastores rather than waiting for SCSI codes from an array. This means that SIOC doesn’t have to wait for storage to say there is a problem before it is able to kick in. It can engage and start trying to mitigate (or lessen the impact) of the problem without that signal.
In current versions of ESXi there are two options available for determining when SIOC should engage:
– Percentage of Peak Throughput – ESXi estimates the datastore’s peak throughput and derives the latency threshold from the point at which the datastore is using that percentage of its estimated peak. (Default is 90%)
– Manual – This allows the end-user to manually determine what the latency should be before SIOC engages (Default: 30ms)
Once engaged SIOC works very similarly to adaptive queueing (with slight differences), in the sense that it slows the rate of I/O by adaptively changing (reducing) the queue depth of the impacted datastore. As soon as latency subsides then the queue depth will be increased to allow for full rate of exchange between the hosts and storage.
Okay great, but if AQDA and SIOC both work by reducing queues, then what more does SIOC provide beyond simply engaging on increased latency rather than on SCSI-triggered events?
In a single (important) word: fairness. The best part about SIOC is that it takes into account all of the VMs residing on the impacted datastore and is cluster aware, which means it doesn’t matter which ESXi hosts are housing the VMs running on the impacted datastore. This means that when I/O throttling does need to be enforced, SIOC will ensure that every VM has the same opportunity to send I/O to the storage array. This will ensure that no single VM can “steal” all of the storage queues during the time of lessened storage resources (unless you specifically configure it to do so).
This is something that AQDA does not offer and is one of the big differentiators between the two.
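The "fairness" idea can be sketched in a few lines: when the datastore-wide queue has to shrink, each VM's slice of the remaining slots is proportional to its configured shares, regardless of which host it runs on. The VM names, share values, and allocation math below are illustrative only, not VMware's actual scheduler:

```python
# Rough sketch of SIOC-style fairness: split a throttled queue across VMs
# in proportion to their shares. Illustrative only.

def allocate_slots(total_slots, vm_shares):
    """Return each VM's share-proportional slice of the remaining queue slots."""
    total_shares = sum(vm_shares.values())
    return {vm: total_slots * shares // total_shares
            for vm, shares in vm_shares.items()}

# Three VMs on the same congested datastore; 'critical-db' has double shares:
print(allocate_slots(32, {"critical-db": 2000, "web-01": 1000, "web-02": 1000}))
# -> {'critical-db': 16, 'web-01': 8, 'web-02': 8}
```

The key point is that without shares (or with AQDA alone), a single noisy VM could consume most of those 32 slots; with SIOC, the split is enforced cluster-wide.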
Several other key distinctions about SIOC should be noted:
- SIOC statistics are evaluated every 4 seconds to determine what action should be taken based on the latency and incoming I/O.
- SIOC uses the average latency of a datastore across all connected hosts. This means that if a single host is seeing higher latency but the others are not, SIOC will not engage unless the cluster-wide average still exceeds the latency threshold.
- SIOC will not lower the queue depth below a value of 4.
As I stated previously, this is a somewhat simplified view of what SIOC does under the covers. There is more complexity around ensuring proper shares are set for more critical VMs, how it can be tuned, etc., and that can be found in the resources listed below (and many more, I am sure).
- Storage I/O Control, the basics – Duncan Epping Blog Post
- vSphere 6.5 what’s new – Storage IO Control – Duncan Epping Blog Post
- Managing I/O Resources – VMware Official Documentation
Quality of Service (QoS) on the Pure Storage FlashArray
Phew, now that we got through the sections on AQDA and SIOC we can talk a little bit about the FlashArray and why those distinctions above were necessary!
Before I go into discussions about this I want to make a quick clarification about the two different types of QoS that are available on the FlashArray:
- QoS Fairness (Always on QoS) – Array wide setting that ensures the FlashArray itself is not overrun by all attached hosts / initiators. This isn’t configured by end-users and is simply “Always On” as the name indicates.
- QoS Rate Limiting – This is configurable by end-users and is enabled on individual volumes or volume groups. Depending on the version of Purity these restrictions can be set based on bandwidth or IOPs limits.
I will be specifically referring to QoS Rate Limiting here but some of the concepts will apply to both.
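For a sense of what configuring QoS Rate Limiting looks like, here is roughly what it looks like from the Purity CLI. The volume name is a placeholder, and the exact flags and supported limit types depend on your Purity version, so check the CLI help on your own array before relying on this:

```shell
# Cap a volume's bandwidth at 2 GB/s (volume name is a placeholder):
purevol setattr --bandwidth-limit 2G myvol

# On Purity versions that support IOPS limits, cap IOPS instead (or as well):
purevol setattr --iops-limit 10000 myvol
```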
I am not sure we need to spend time going over what QoS is and why you may want to use it, as it is a pretty well-known feature at this point, so I will skip that for now. But if you are new to this concept and want to learn more, no worries! We all started somewhere, so if you want to know why it is beneficial you can read this Pure Storage KB for additional details.
Instead, I want to focus on a small, under-the-covers side of QoS, so it makes more sense why I took the time to outline the data points above about AQDA and SIOC.
When a bandwidth or IOPs limitation is set on a FlashArray volume or volume group it is up to the FlashArray to enforce and inform the connected host(s) that a limitation has been reached.
So how does the FlashArray do that? Well, it does it the way we discussed up above in the adaptive queue depth section, by sending SCSI code 0x28 (TASK_SET_FULL) responses to the host(s) to inform it that a limitation has been reached. The FlashArray continues to send these responses as long as the host(s) attempting to send I/O are surpassing the desired limitation.
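A toy model of that enforcement: I/O within the configured limit completes with GOOD (0x00), while anything that would push the volume past its limit in the current accounting interval is rejected with TASK_SET_FULL (0x28) so the host retries later. The interval-based accounting here is purely illustrative; the FlashArray's actual enforcement is internal to Purity:

```python
# Toy model of array-side rate limiting surfacing to the host as SCSI
# status codes. Illustrative only; not how Purity actually accounts for I/O.

GOOD, TASK_SET_FULL = 0x00, 0x28

class RateLimitedVolume:
    def __init__(self, bandwidth_limit_bytes_per_interval):
        self.limit = bandwidth_limit_bytes_per_interval
        self.used = 0

    def submit_io(self, size_bytes):
        if self.used + size_bytes > self.limit:
            return TASK_SET_FULL          # over the QoS limit: host must retry
        self.used += size_bytes
        return GOOD

    def next_interval(self):
        self.used = 0                     # limit accounting resets each interval

vol = RateLimitedVolume(1024 * 1024)      # 1 MiB per interval
print(hex(vol.submit_io(512 * 1024)))     # 0x0  (within the limit)
print(hex(vol.submit_io(768 * 1024)))     # 0x28 (would exceed the limit)
vol.next_interval()
print(hex(vol.submit_io(768 * 1024)))     # 0x0  (new interval, limit reset)
```

Note that TASK_SET_FULL (0x28) is the same status code that triggers AQDA on the ESXi side, which is exactly why the two features interact.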
One of the challenges here, specifically for VMFS datastores, is that the FlashArray has absolutely no way to specify which VMs or I/O requests should be prioritized over others. This is because the FlashArray has no real insight into the VMFS datastore and thus no way to make an educated decision about what should have priority. The I/O is handled in a “first come, first served” manner: the array simply processes what it can and rejects the rest with a TASK_SET_FULL response so the host can retry the request. This means it is up to the initiator (host) sending the data to determine what should be processed first and send that data accordingly.
With vVols, QoS on the FlashArray is both more granular and, in theory, simpler. Since each VMDK is a standalone volume on the FlashArray, you can not only determine which VMs should be throttled but even which VMDK(s) you want throttled for specific VMs. This can be done by setting restrictions on individual volumes (VMDK level) or on vgroups (VM level), depending on the level of granularity you need. This is yet another reason why we love vVols with vSphere and are continuing to drive this feature forward with VMware as much as possible!
Now that we have a (hopefully) better understanding of what each of these features provides we should be able to more accurately answer our original question:
Can (or should) I use Adaptive Queueing (AQDA) or Storage I/O Control (SIOC) when QoS Rate Limiting is enabled on a FlashArray volume or vgroup?
There is some foreshadowing in this information about what is to come and why it would be natural for someone to ask this question! So keep an eye out for Part 2 (coming soon) and we will do just that!
Questions, comments, concerns, etc are all welcome in the comments below! Thanks for reading and have a wonderful day!