VMware ESXi can take advantage of Flash/local SSDs in multiple ways:

  • Host swap cache (since 5.0): ESXi will use part of an SSD datastore as swap space shared by all VMs. This means that when there is ESXi memory swapping, the host will swap to the SSD drives, which is faster than HDD but still slower than RAM.
  • Virtual SAN (VSAN) (since 5.5 with VSAN licensing): You can combine the local HDDs and local SSDs on each host and basically create a distributed storage platform. I like to think of it as a RAIN (Redundant Array of Independent Nodes).
  • Virtual Flash/vFRC (since 5.5 with Enterprise Plus): With this method the SSD is formatted with VFFS and can be configured as a read and write-through cache for your VMs. It allows ESXi to locally cache virtual machine read I/O, and the cache survives VM migrations as long as the destination ESXi host also has Virtual Flash enabled. To use this feature, the VM's hardware version needs to be 10.

Check if the SSD drives were properly detected by ESXi

From vSphere Web Client

Select the ESXi host with Local SSD drives -> Manage -> Storage -> Storage Devices

See if it shows as SSD or Non-SSD, for example:

[Image: flash1]

 

From CLI:
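For example, from an SSH or local shell on the host (a minimal sketch; <device_id> is a placeholder for your drive's identifier, e.g. a naa.* name):

  esxcli storage core device list                  # shows "Is SSD" and "Is Local" for every device
  esxcli storage core device list -d <device_id>   # limit the output to a single device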

To enable the SSD option on the SSD drive

At this point you should put your host in maintenance mode because it will need to be rebooted.

If the SSD is not properly detected, you need to use storage claim rules to force it to be detected as SSD. (This is also useful if you want to make a regular drive appear as SSD for testing purposes.)

Add a PSA claim rule to mark the device as SSD (if it is not local, e.g. a SAN-attached LUN)

For example (in case this was a SAN attached LUN)
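A sketch of the commands (the SATP name and device ID are placeholders; first check which SATP currently claims the device, then add the rule against that SATP):

  esxcli storage nmp device list -d <device_id>                                  # note the SATP claiming the device
  esxcli storage nmp satp rule add -s <SATP_NAME> -d <device_id> -o enable_ssd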

 

Add a PSA claim rule to mark the device as Local and SSD at the same time (if the SSD drive is local)

For the device in my example it would be:
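A sketch using VMW_SATP_LOCAL (the usual SATP for local devices) and a placeholder device ID, since the actual device ID from the example is not reproduced here:

  esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d <device_id> -o "enable_local enable_ssd"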

Reboot your ESXi host for the changes to take effect.

 

To remove the rule (for whatever reason, including testing and going back)
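The remove command mirrors the add command; a sketch with the same placeholders (use whatever SATP, device ID and option string you added earlier):

  esxcli storage nmp satp rule remove -s VMW_SATP_LOCAL -d <device_id> -o "enable_local enable_ssd"
  esxcli storage core claiming reclaim -d <device_id>   # re-run claiming on the device, or simply reboot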

Once the ESXi server is back online, verify that the drive is now reported as SSD.

From vSphere Web Client

Select the ESXi host with Local SSD drives -> Manage -> Storage -> Storage Devices

See if it shows as SSD or Non-SSD, for example:

[Image: flash2]

From CLI:
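A quick check from an ESXi shell (placeholder device ID as before); the drive should now report "Is SSD: true", and "Is Local: true" if you tagged it as local:

  esxcli storage core device list -d <device_id> | grep -iE "Is SSD|Is Local"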

Exit Maintenance mode.

Do the same on ALL hosts in the cluster.

Configure Virtual Flash

Now that the ESXi server recognizes the SSD drives, we can enable Virtual Flash.

You need to perform the steps below from the vSphere Web Client on all ESXi hosts:

ESXi host -> Manage -> Settings -> Virtual Flash -> Virtual Flash Resource Management -> Add Capacity…

[Image: flash3]

You will see that the SSD device has been formatted with the VFFS filesystem. It can be used to allocate space for the virtual flash host swap cache or to configure Virtual Flash Read Cache for virtual disks.

[Image: flash4]
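If you also want to confirm this from the CLI, a sketch (available since 5.5; the output format can differ slightly between builds):

  esxcli storage vflash device list   # lists flash devices and whether they are eligible for / used by virtual flash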

 

Configure Virtual Flash Host Swap

One of the options you have is to use the Flash/SSD as host swap cache. To do this:

ESXi host -> Manage -> Settings -> Virtual Flash -> Virtual Flash Host Swap Cache Configuration -> Edit…

// Enable and select the size of the cache in GB

[Image: flash5]

 

Configure Flash Read Cache

Flash Read Cache is configured on a per-VM, per-VMDK basis. VMs need to be at virtual hardware version 10 in order to use vFRC.

To enable vFRC on a VM's hard disk:

VM -> Edit Settings -> Expand Hard Disk -> Virtual Flash Read Cache

Enter the size of the cache in GB (e.g. 20)

You can start conservatively and increase if needed; I start with 10% of the VMDK size. Below, in the "Monitor your vFRC" section, you will see tips to right-size your cache.

[Image: flash6]

 

If you click on Advanced, you can configure/change the specific block size (the default is 8 KB) for the Read Cache. This allows you to optimize the cache for the specific workload the VM is running.

[Image: flash7]

The default block size is 8 KB, but you may want to right-size this based on the application/workload to use the cache efficiently.

If you don't size the block-size of the cache correctly, you could be hurting the efficiency of the cache:

  • If the workload's I/O sizes are larger than the configured block-size, you will see increased cache misses.
  • If the workload's I/O sizes are smaller than the configured block-size, you will be wasting precious cache space.

Correctly size the block-size of your cache

To correctly size the block-size of your cache, you need to determine the dominant I/O length/size of the workload and use that as the cache block size:

Log in to the ESXi host running the workload/VM for which you want to enable vFRC.

 

Find world ID of each device
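vscsiStats ships with ESXi and is the usual tool for this; a minimal sketch:

  /usr/lib/vmware/bin/vscsiStats -l   # lists VMs with their world group IDs and virtual disk handle IDs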

 

Start gathering statistics on World ID // Give it some time while it captures statistics
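A sketch, where <worldGroupID> is a placeholder for the ID you noted from the previous output:

  /usr/lib/vmware/bin/vscsiStats -s -w <worldGroupID>   # start collecting statistics for that VM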

Get the IO length histogram to find the most dominant IO length

You want the I/O length for the hard disk on which you will enable vFRC, in this case scsi0:1.

(-c means compressed output)
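A sketch; <worldGroupID> and <handleID> are placeholders for the IDs from the -l output (pick the handle ID that maps to scsi0:1 in this example):

  /usr/lib/vmware/bin/vscsiStats -p ioLength -c -w <worldGroupID> -i <handleID>   # I/O length histogram
  /usr/lib/vmware/bin/vscsiStats -x -w <worldGroupID>                             # stop collection when you are done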

As you can see, in this specific case 16383 (16K) is the most dominant I/O length, and this is what you should use in the Advanced options.

[Image: flash8]

Now you are using a Virtual Flash Read Cache on that VM's hard disk, which should improve performance.

Monitor your vFRC

Log in to the ESXi host running the workload/VM for which you enabled vFRC. In the example below it is a 24 GB cache with a 4K block size:
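The cache statistics can be pulled with esxcli; a sketch, where <cache_name> is a placeholder for the name reported by the list command (option names may vary slightly between ESXi builds):

  esxcli storage vflash cache list                        # lists the vFRC caches on this host
  esxcli storage vflash cache stats get -c <cache_name>   # detailed stats for one cache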

There is a lot of important information here:
The cache hit rate shows the percentage of I/O that is served from the cache. A higher number is better because it means reads are hitting the cache more frequently.
Other important items are IOPS and latency.

These stats also show information that can help you right-size your cache: if you see a high number of cache evictions (Evict -> Mean blocks per I/O operation), it could be an indication that your cache is too small or that the block size of the cache is incorrectly configured.

To calculate the number of available blocks in the cache, do the following:
SizeOfCache(in bytes) / BlockSizeOfCache(in bytes) = #ofBlocksInvFRC

For the example, a 24 GB cache with a 4K block size will have 6291456 blocks in the vFRC: 25769803776 / 4096 = 6291456

 

In the stats above we see 5095521 as the mean number of cache blocks in use and no evictions, which indicates that a 24 GB cache with a 4K block size seems to be correctly sized.

Keep monitoring your cache to gain as much performance as you can from your Flash/SSD devices.