Tech stuff about Cloud, DevOps, SysAdmin, Virtualization, SAN, Hardware, Scripting, Automation and Development


One of the most important things you should do to your systems is to ensure they keep the correct time.
In this post I will show how to check and set the time on your ESXi hosts using PowerCLI.

==> Login to vCenter:

$admin = Get-Credential -Credential EXAMPLE\john
Connect-VIServer -Server vcenter.example.com -Credential $admin

==> Check time settings:


==> Set time to correct time:

==> Remove old NTP servers (if any):


==> Change NTP to desired configuration:


==> Enable Firewall Exception

==> Start NTPd service

==> Ensure NTPd service starts automatically (via policy)

==> Verify all is set the way you expected
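The steps above can be sketched in PowerCLI as follows. This is a sketch, not the original post's listing: the NTP server name is a placeholder, and the firewall exception name ('NTP Client') may differ slightly between ESXi versions.

```powershell
# Check current time and NTP configuration on every host
Get-VMHost | Select-Object Name,
    @{N='NTPServers';E={$_ | Get-VMHostNtpServer}},
    @{N='CurrentTime';E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem).QueryDateTime()}}

Get-VMHost | ForEach-Object {
    # Remove old NTP servers (if any)
    $old = $_ | Get-VMHostNtpServer
    if ($old) { Remove-VMHostNtpServer -VMHost $_ -NtpServer $old -Confirm:$false }

    # Change NTP to the desired configuration ('pool.ntp.org' is a placeholder)
    Add-VMHostNtpServer -VMHost $_ -NtpServer 'pool.ntp.org'

    # Enable the firewall exception
    $_ | Get-VMHostFirewallException -Name 'NTP Client' |
        Set-VMHostFirewallException -Enabled:$true

    # Start ntpd and make it start automatically with the host
    $ntpd = $_ | Get-VMHostService | Where-Object { $_.Key -eq 'ntpd' }
    $ntpd | Set-VMHostService -Policy 'on'
    $ntpd | Start-VMHostService
}
```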


VMware ESXi can take advantage of Flash/local SSDs in multiple ways:

  • Host swap cache (since 5.0): ESXi will use part of an SSD datastore as swap space shared by all VMs. This means that when there is ESXi memory swapping, the host will swap to the SSD drives, which is faster than HDD but still slower than RAM.
  • Virtual SAN (VSAN) (since 5.5 with VSAN licensing): You can combine the local HDDs and local SSDs on each host and basically create a distributed storage platform. I like to think of it as a RAIN (Redundant Array of Independent Nodes).
  • Virtual Flash/vFRC (since 5.5 with Enterprise Plus): With this method the SSD is formatted with VFFS and can be configured as a write-through read cache for your VMs. It allows ESXi to locally cache virtual machine read I/O and survives VM migrations as long as the destination ESXi host also has Virtual Flash enabled. To use this feature, the VM's hardware version needs to be 10.

Check if the SSD drives were properly detected by ESXi

From vSphere Web Client

Select the ESXi host with Local SSD drives -> Manage -> Storage -> Storage Devices

See if it shows as SSD or Non-SSD, for example:



From CLI:
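For example, with esxcli (a sketch; the grep patterns match the field names in ESXi 5.x output):

```shell
# List all storage devices and whether ESXi treats them as SSD / local
esxcli storage core device list | grep -i -E 'Display Name|Is SSD|Is Local'
```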

To enable the SSD option on the SSD drive

At this point you should put your host in maintenance mode because it will need to be rebooted.

If the SSD is not properly detected, you need to use storage claim rules to force it to be type SSD. (This is also useful if you want to fake a regular drive as SSD for testing purposes.)

Add a PSA claim rule to mark the device as SSD (if it is not local, e.g. a SAN-attached LUN)

For example (in case this was a SAN attached LUN)


Add a PSA claim rule to mark the device as Local and SSD at the same time (if the SSD drive is local)

For the device in my example it would be:
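The claim-rule commands might look like the following. The SATP names and the `naa.<device-id>` placeholder are examples; find the values for your device with `esxcli storage nmp device list`.

```shell
# SAN-attached LUN: mark the device as SSD
esxcli storage nmp satp rule add --satp=VMW_SATP_DEFAULT_AA \
    --device=naa.<device-id> --option="enable_ssd"

# Local drive: mark the device as Local and SSD at the same time
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL \
    --device=naa.<device-id> --option="enable_local enable_ssd"
```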

Reboot your ESXi host for the changes to take effect.


To remove the rule (for whatever reason, including testing and going back)
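Removing the rule is symmetrical (same placeholders as above):

```shell
esxcli storage nmp satp rule remove --satp=VMW_SATP_LOCAL \
    --device=naa.<device-id> --option="enable_local enable_ssd"
```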

Once the ESXi server is back online verify that the SSD option is OK

From vSphere Web Client

Select the ESXi host with Local SSD drives -> Manage -> Storage -> Storage Devices

See if it shows as SSD or Non-SSD, for example:


From CLI:

Exit Maintenance mode.

Do the same on ALL hosts in the cluster.

Configure Virtual Flash

Now that the ESXi server recognizes the SSD drives, we can enable Virtual Flash.

You need to perform the below steps from the vSphere Web Client on all ESX hosts

ESXi host -> Manage -> Settings -> Virtual Flash -> Virtual Flash Resource Management -> Add Capacity…


You will see that the SSD device has been formatted with the VFFS filesystem. It can be used to allocate space for the virtual flash host swap cache or to configure Virtual Flash Read Cache for virtual disks.



Configure Virtual Flash Host Swap

One of the options you have is to use the Flash/SSD as Host Swap Cache. To do this:

ESXi host -> Manage -> Settings -> Virtual Flash -> Virtual Flash Host Swap Cache Configuration -> Edit…

// Enable and select the size of the cache in GB



Configure Flash Read Cache

Flash Read Cache is configured on a per-VM, per-VMDK basis. VMs need to be at virtual hardware version 10 in order to use vFRC.

To enable vFRC on a VM's hard disk:

VM -> Edit Settings -> Expand Hard Disk -> Virtual Flash Read Cache

Enter the size of the cache in GB (e.g. 20)

You can start conservative and increase the size if needed; I start with 10% of the VMDK size. Below, in the monitor vFRC section, you will find tips to right-size your cache.



If you click on Advanced, you can change the block-size of the Read Cache (the default is 8K). This allows you to optimize the cache for the specific workload the VM is running.


You may want to right-size the block-size based on the application/workload so the cache is used efficiently.

If you don't size the block-size of the cache correctly, you could hurt its efficiency:

  • If the workload has block sizes larger than the configured block-size, you will see increased cache misses.
  • If the workload has block sizes smaller than the configured block-size, you will waste precious cache.

Correctly size the block-size of your cache

To correctly size the block-size of your cache you need to determine the dominant I/O length/size of the workload and use it as the cache block size:

Login to the ESX host running the workload/VM for which you want to enable vFRC


Find world ID of each device


Start gathering statistics on World ID // Give it some time while it captures statistics

Get the IO length histogram to find the most dominant IO length

You want the I/O length for the hard disk on which you will enable vFRC, in this case scsi0:1.

(-c means compressed output)
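The vscsiStats workflow above might look like this on the ESXi shell (the world group ID is a placeholder you get from the -l output):

```shell
# Find the world group ID of each VM and its virtual disks
/usr/lib/vmware/bin/vscsiStats -l

# Start gathering statistics for the VM's world group ID
/usr/lib/vmware/bin/vscsiStats -s -w <worldGroupID>

# ...give it some time to capture statistics, then print the
# I/O length histogram in compressed form
/usr/lib/vmware/bin/vscsiStats -p ioLength -c -w <worldGroupID>

# Stop the collection when you are done
/usr/lib/vmware/bin/vscsiStats -x -w <worldGroupID>
```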

As you can see, in this specific case, 16383 (16K) is the most dominant I/O length, and this is what you should use in the Advanced options.


Now you are using a Virtual Flash Read Cache on that VM's hard disk, which should improve performance.

Monitor your vFRC

Login to the ESX host running the workload/VM for which you enabled vFRC, in the example below it is a 24GB Cache with 4K block-size:
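From the ESXi shell, something like the following should work (the cache name is an example; use the name reported by the list command):

```shell
# List the vFRC caches on the host
esxcli storage vflash cache list

# Get statistics for a specific cache
esxcli storage vflash cache stats get -c <cache-name>
```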

There is a lot of important information here:
The cache hit rate shows you the percentage of I/Os served from the cache. A high number is better because it means reads are hitting the cache more frequently.
Other important items are IOPS and latency.

These stats also show information that can help you right-size your cache: if you see a high number of cache evictions (Evict -> Mean blocks per I/O operation), it could be an indication that your cache is too small or that its block-size is incorrectly configured.

To calculate the available blocks in the cache, do the following:
SizeOfCache (in bytes) / BlockSizeOfCache (in bytes) = #ofBlocksInvFRC

For the example: a 24GB cache with 4K block-size will have 6291456 blocks in the vFRC, see:


In the stats above we see 5095521 as the mean number of cache blocks in use, and no evictions, which indicates that a 24GB cache with 4K block-size is correctly sized.

Keep monitoring your cache to gain as much performance as you can from your Flash/SSD devices.

If you are running your VMware infrastructure on NetApp storage, you can utilize NetApp's Virtual Storage Console (VSC), which integrates with vCenter to provide a strong, fully integrated solution for managing your storage from within vCenter.

With VSC you can discover, monitor health and capacity, provision, perform cloning, backups and restores, as well as optimize your ESX hosts and misaligned VMs.

The use case I will write about is the ability to take a backup of all of your production Datastores and initiate a SnapMirror transfer to DR.

Installing NetApp’s Virtual Storage Console

Download the software from NetApp’s website (need credentials) from the software section: (version as of this post)

Install on a Windows System (can be vCenter if using Windows vCenter)

There are currently a couple of bugs in version 5.0 that can be addressed by following the articles below – hopefully NetApp will fix them soon:


Follow the wizard…




Select Backup and Recovery to be able to use these features
Select Backup and Recovery to be able to use these features


You may get a warning here and this is where you need to follow the bug fixes specified earlier (adding a line to smvi.override)

Then you need to enter the information requested:

Plugin service information: hostname/IP of the server you installed VSC (in this case it was the vCenter server)

Then enter the vCenter information


Check that the registration was successful


Verify that it is installed in the vCenter Web Client



Configure the NetApp Virtual Storage Console from the vCenter Web Client

On the vCenter Web Client click on the Virtual Storage Console icon


Click on ‘Storage Systems’ and add your NetApp controllers, including your DR controllers (you will need these to successfully initiate SnapMirror after backups)


Once you have added them, you will be able to see their details and status; take a look at the summary and related objects. Also click on the ‘View Details’ link(s); they provide a wealth of information about your storage.


Go back to the main page of the Virtual Storage Console and you will see global details


With the above setup you can start provisioning storage, create backups/restores, mount snapshots and look at the details of every object from a storage perspective. Take a look at the Datacenter, Datastores and VMs.




Configure Datastore Backups followed by NetApp SnapMirror for Disaster Recovery


You need to have an initialized SnapMirror relationship

Create an empty schedule by adding the following line to /etc/snapmirror.conf
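The empty-schedule entry uses dashes in every schedule field, so transfers only happen when VSC triggers a SnapMirror update. Controller and volume names below are examples:

```
# /etc/snapmirror.conf on the destination (DR) controller
prodfiler:vol_datastore1  drfiler:vol_datastore1_mirror  -  - - - -
```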

Ensure you have added your production NetApp controllers as well as your DR controllers in the vCenter Web Client Virtual Storage Console.


In vCenter Web Client, go to your Datastores view.

(Optional but recommended) Enable Deduplication in your Datastores

// This will save storage and increase the efficiency of the replication because you will only replicate deduplicated data. To do so:

Right click on a Datastore -> NetApp VSC -> Deduplication -> Enable

Right click on a Datastore -> NetApp VSC -> Deduplication -> Start (Select to scan the entire volume)


By default the deduplication process is scheduled daily at midnight; I recommend running it at least 2 hours before the SnapMirror replication.

For example:

Deduplication: daily at 8pm

SnapMirror: daily at 10pm

To change the default schedule of the deduplication process per volume you need to do the following on the NetApp controllers CLI:
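In Data ONTAP 7-Mode this is done with `sis config`; the schedule format is day_list@hour_list, and the volume name below is an example:

```
netapp1> sis config -s sun-sat@20 /vol/vol_datastore1
netapp1> sis config /vol/vol_datastore1    // verify the schedule
```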

Schedule the Backup and SnapMirror Update

Right click on a Datastore -> NetApp VSC -> Backup -> Schedule Backup









Add other Datastores to the same backup job (please remember that for SnapMirror Update to work you need to have pre-created the SnapMirror relationship).

Right click on the other Datastores -> NetApp VSC -> Backup -> Add to Backup Job

You will see the already created backup job (10pm_backup), select it and click ok.


At this point, all the Datastores you selected will be deduplicated, backed up and replicated to the DR site.

Restoring on the Prod or DR site

Now that NetApp VSC is set up, backing up and replicating data, we can restore at will from the snapshots.

Restore a VM (entire VM or some of its virtual disks)

Right click on VM -> NetApp VSC -> Restore

Select backup from the list and choose to restore entire VM or just some disks

Restore from Datastore

Right click on Datastore -> NetApp VSC -> Restore

Select backup from the list and choose what to restore

Mount a Snapshot (it will show as another Datastore and you can retrieve files or even start VMs)

Click on a Datastore and go to Related Objects -> Backups

Select Backup, Right-Click and select Mount

You will see the datastore present and mounted on one ESX host; from there you can retrieve files, start VMs, etc.

Once you are done go back to the Datastore and unmount the Backup.


In this guide I will go through the process of booting from an external USB hard drive in VMware Fusion.
The main use case is the ability to take the hard drive of an existing physical server and boot from that physical hard drive into a VMware Fusion VM.
Another use case (my latest): I enrolled in a technical training course and the vendor shipped a bootable USB external hard drive with a Linux OS installed as a lab, with the expectation that I would boot from it using a PC. That works great, but I wanted to use my MacBook and be able to run this lab while on the road.
As soon as I tried to boot from it using my Mac I got a kernel panic, due to missing drivers, etc.
So I decided to use a VM in VMware Fusion, as follows:

1) Check the system before plugging in your USB external hard drive:

2) Plug in your USB external hard drive and look for the new disk:

3) In VMware Fusion create a VM as follows:

Create a New VMware Fusion VM:




Customize it as you wish (I removed sound and printers and modified the RAM, CPU, etc.)



Also remove the VMware fusion created VMDK as you don’t need it (Unless you actually need it)



OK, the VM creation is complete; now you have to actually use the physical hard drive, as shown below.


4) Create a RawDisk VMDK in the newly created VM that will point to the USB external hard drive

5) Add the disk to your VM configuration (.vmx file)

6) Power on your VM and voila! You should see your VM booting from the USB external hard drive
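Steps 1-5 can be sketched as follows. The disk identifier (disk2), VM bundle path and VMDK name are examples; vmware-rawdiskCreator ships inside the Fusion application bundle.

```shell
# 1-2) Check the disks before and after plugging in the USB drive;
#      the new /dev/diskN entry is your external drive (disk2 here)
diskutil list

# 4) Create a rawdisk VMDK in the VM bundle pointing at the whole USB device
cd ~/Documents/Virtual\ Machines.localized/usb-boot.vmwarevm
"/Applications/VMware Fusion.app/Contents/Library/vmware-rawdiskCreator" \
    create /dev/disk2 fullDevice usb-rawdisk ide

# 5) Then add the new disk to the VM's .vmx file:
#      ide0:0.present = "TRUE"
#      ide0:0.fileName = "usb-rawdisk.vmdk"
```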




Snapshots are a great feature, probably one of the coolest in virtualization, but they can become a problem if they are not used appropriately. Unfortunately, we sometimes let them grow to an unmanageable size, which can bring performance issues and give us headaches when we need to delete them.

In this post, I will show you how to find out what snapshots are present in your environment, along with some other useful information, like size.

To run the commands below you will need to install PowerCLI (on Windows), which lets you manage a VMware environment programmatically using PowerShell scripting.

To get PowerCLI, go to:

1) Once you have PowerCLI, open it up and a command prompt will appear:
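For example (the vCenter name is a placeholder):

```powershell
Connect-VIServer -Server vcenter.example.com
# You will be prompted for credentials unless you pass -Credential
```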

// At this point you have a session open with your vCenter

2) Query your vCenter to find out what snapshots are present:

Let me explain what is going on:
‘Get-VM’ asks for the VMs that are running on your vCenter; PowerCLI returns an object for each VM. You then ask for the snapshots of each returned VM object by using ‘Get-Snapshot’, and format that output with ‘Format-List’, asking only for ‘vm,name,sizeGB,created,powerstate’.
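The command might look like this (a sketch; ‘Created’ is the actual property name on the snapshot object):

```powershell
Get-VM | Get-Snapshot | Format-List VM,Name,SizeGB,Created,PowerState
```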

You can request any of the following:

3) The above will give you the info you want, but I prefer CSV reports that I can share with the team or management. To get a good CSV report run the following:
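Something like the following produces a CSV (the output path is an example):

```powershell
Get-VM | Get-Snapshot |
    Select-Object VM, Name, Description, SizeGB, Created, PowerState |
    Export-Csv -Path C:\reports\snapshots.csv -NoTypeInformation
```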

I recommend taking a look at VMware’s best practices around snapshots:

VSM High Availability is optional but it is strongly recommended in a production environment.
High availability is accomplished by installing and configuring a secondary VSM.

For instructions on how to install and configure a Primary Cisco 1000v VSM on your vSphere environment please follow

Then come back to this post to learn how to install and configure a secondary VSM for high availability.

1) Check the redundancy status of your primary VSM

// Check Modules

// check HA status
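These checks use standard NX-OS show commands on the VSM:

```
n1kv# show module
n1kv# show system redundancy status
```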

2) Install the secondary VSM from the OVF.
Select to Manually Configure Nexus 1000v and just like the primary installation select the right VLANs for Control, Packet and Management.

When you get to this properties page:

Do not fill in any of the fields, just click next and Finish

3) Power on the Secondary VSM
The system setup script will prompt for the following:

Admin password // Choose your password
VSM Role: secondary // VSM will reboot
Domain ID: 100 // This must be the same domain ID you gave to the primary, I used 100

Once a VSM is set to secondary it will reboot.

4) Verify VSM high availability
Login to VSM and run:

VMware recommends that you run the Primary and the Secondary on different ESX hosts.

5) Test VSM switchover
From the VSM run ‘system switchover’ to switch between the active and the standby VSMs.

That is it, now you have a highly available Cisco 1000v VSM infrastructure.

The following guide describes the necessary steps to install and configure a pair of Cisco Nexus 1000v switches to be used in a vSphere cluster.
These will connect to Cisco Nexus 5020 Upstream Switches.

In this guide the hardware used consists of:

3x HP ProLiant DL380 G6 with 2 4-port NICs.
2x Cisco Nexus 5020 Switches

vSphere 4 Update 1 Enterprise Plus (needed to use the Cisco Nexus 1000v)
vCenter installed as a virtual machine – (on VLAN 10)
Cisco Nexus 1000v 4.0.4.SV1.3b –
Primary domain id 100 (on VLAN 101)

I am assuming you have already installed and configured vCenter and the ESX cluster.

Cisco recommends that you use 3 separate VLANs for Nexus traffic, I am using the following VLANs:

100 – Control – Control connectivity between Nexus 1000V VSM and VEMs (Non Routable)
101 – Management – ssh/telnet/scp to the Cisco Nexus 1000v int mgmt0 (Routable)
102 – Packet – Internal connectivity between Nexus 1000v (Non Routable)

And I will also use VLAN 10 and 20 for VM traffic (10 for Production, 20 for Development)

1) Install vSphere (I assume you have done this step)

2) Configure Cisco Nexus 5020 Upstream Switchports

You need to configure the ports on the upstream switches in order to pass VLAN information to the ESX hosts’ uplink NICs

On the Nexus 5020s, run the following:

// These commands give a description to the port and allow trunking of VLANs.
// The allowed VLANs are listed
// spanning-tree port type edge trunk is the recommended spanning-tree type

interface Ethernet1/1/10
description “ESX1-eth0”
switchport mode trunk
switchport trunk allowed vlan 10-20,100-102
spanning-tree port type edge trunk

3) Service Console VLAN !!!

When I installed the ESX server I used the native VLAN, but after you change the switch port from switchport mode access to switchport mode trunk, the ESX server needs to be configured to tag Service Console traffic with the correct VLAN.
My Service Console IP is on VLAN 10, so console to the ESX host and enter the following:

[root@esx1]# esxcfg-vswitch -v 10 -p “Service Console” vSwitch0

4) Add Port Groups for the Control, Packet and Management VLANs.
I add these Port Groups to VMware Network Virtual Switch vSwitch0 on all the ESX hosts. Make sure to select the right VLANs for your environment.

5) Now that you have configured the Control, Packet and Management Port Groups with their respective VLANs, you can install the Cisco Nexus 1000v.
I chose to install the Virtual Appliance (OVA) file downloaded from Cisco. The installation is very simple; make sure to select Manually Configure Nexus 1000v and to map the VLANs to Control, Packet and Management. The rest is just like installing a regular virtual appliance.

6) Power on and open a console window to the Nexus1000v VM(appliance) you just installed. A setup script will start running and will ask you a few questions.

admin password
domain ID // This is used to identify the VSM and VEM. If you want to have 2 Nexus 1000v for high availability, both Nexus 1000v will use the same domain ID. I chose 100
High Availability mode // If you plan to use 2 Nexus 1000v for high availability, then for the first installation select primary, otherwise standalone
Network Information // Things like IP, netmask, gateway. Disable Telnet! Enable SSH!
The other stuff we will configure later (Not from the Setup script)

7) Register vCenter Nexus 1000v Plug-in
Once you have the Nexus 1000v basics configured, you should be able to access it. Try to SSH to it (Hopefully you enabled SSH).
Open a browser and point it to the Nexus 1000v management IP address, and you will get a webpage like the following:

  • Download the cisco_nexus_1000v_extension.xml
  • Open vSphere client and connect to the vCenter.
  • Go to Plug-ins > Manage Plug-ins
  • Right-click under Available Plug-ins and select New Plug-in, then browse to the cisco_nexus_1000v_extension.xml
  • Click Register Plug-in (disregard security warning about new SSL cert)

You do NOT need to Download and Install the Plug-in, just Register it.

Now we can start the “advanced” configuration of the Nexus 1000v

8) Configure SVS domain ID on VSM

n1kv(config)# svs-domain
n1kv(config-svs-domain)# domain id 100
n1kv(config-svs-domain)# exit

9) Configure Control and Packet VLANs

n1kv(config)# svs-domain
n1kv(config-svs-domain)# control vlan 100
n1kv(config-svs-domain)# packet vlan 102
n1kv(config-svs-domain)# svs mode L2
n1kv(config-svs-domain)# exit

10) Connect Nexus 1000v to vCenter
In this step we are defining the SVS connection which is the link between the VSM and vCenter.

n1kv(config)# svs connection vcenter
n1kv(config-svs-conn)# protocol vmware-vim
n1kv(config-svs-conn)# vmware dvs datacenter-name myDatacenter
n1kv(config-svs-conn)# remote ip address
n1kv(config-svs-conn)# connect
n1kv(config-svs-conn)# exit
n1kv(config)# exit
n1kv# copy run start

//Verify the SVS connection
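The verification is a standard show command:

```
n1kv# show svs connections
```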

12) Create the VLANs on the VSM

n1kv# conf t
n1kv(config)# vlan 100
n1kv(config-vlan)# name Control
n1kv(config-vlan)# exit
n1kv(config)# vlan 102
n1kv(config-vlan)# name Packet
n1kv(config-vlan)# exit
n1kv(config)# vlan 101
n1kv(config-vlan)# name Management
n1kv(config-vlan)# exit
n1kv(config)# vlan 10
n1kv(config-vlan)# name Production
n1kv(config-vlan)# exit
n1kv(config)# vlan 20
n1kv(config-vlan)# name Development
n1kv(config-vlan)# exit

// Verify VLANs
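For example:

```
n1kv# show vlan
```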

13) Create Uplink Port-Profile
The Cisco Nexus 1000v acts like the VMware DVS. Before you can add hosts to the Nexus 1000v you will need to create uplink port-profiles, which allow the VEMs to connect with the VSM.

n1kv(config)# port-profile system-uplink
n1kv(config-port-prof)# switchport mode trunk
n1kv(config-port-prof)# switchport trunk allowed vlan 10,20,100-102
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# system vlan 100,102
n1kv(config-port-prof)# vmware port-group dv-system-uplink
n1kv(config-port-prof)# capability uplink
n1kv(config-port-prof)# state enabled

// Verify Uplink Port-Profile
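For example:

```
n1kv# show port-profile name system-uplink
```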

14) It is now time to install the VEM on the ESX hosts.
The preferred way to do this is using VUM (VMware Update Manager). If you have VUM in the system, the installation will be very simple.
Simply go to Home->Inventory->Networking
Right-click on the Nexus switch and select Add Host

// Verify that the task is successful

// Also take a look at the VSM console

// Do the same for all the other ESX Hosts

15) Create the Port-Profile(s) (VMware Port-Groups)
Port-Profiles configure interfaces on the VEM.
From the VMware point of view a port-profile is represented as a port-group.

// The Port-Profile below will be the VLAN 10 PortGroup on vCenter

n1kv# conf t
n1kv(config)# port-profile VLAN_10
n1kv(config-port-prof)# vmware port-group
n1kv(config-port-prof)# switchport mode access
n1kv(config-port-prof)# switchport access vlan 10
n1kv(config-port-prof)# vmware max-ports 200 // By default it has only 32 ports, I want 200 available
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled
n1kv(config-port-prof)# exit

16) Select the PortGroup you want your VM to connect to

17) Verify Port Profile/Port Groups from the VSM console

At this point you are ready to use the Cisco 1000v, but if you plan to run this in a production environment, it is strongly recommended you run the VSM in High Availability mode.
Follow this post to learn how to install and configure VSM High Availability:

VMware Update Manager is a tool to automate and streamline the process of applying updates and patches, or upgrading to a new version. VUM is fully integrated within vCenter Server and offers the ability to scan and remediate ESX/ESXi hosts, virtual appliances, virtual machine templates, and online and offline virtual machines running certain versions of Windows, Linux, and some Windows applications.

In this post you will learn how to Configure VMware Update Manager.
To install VMware Update manager follow Install VMware Update Manager.

  1. VUM Configuration
  2. Create a Baseline
  3. Create a Baseline Group
  4. Attach Baseline to Host/Cluster
  5. Remediate/Patch

1. VUM Configuration
Open Update Manager (Admin View)
Go to Home -> Update Manager

Under the configuration tab, Click on Patch Download Schedule to change the schedule and add an email notification.
Also change the Patch Download Settings to download only what you need; in my case I don't need Windows/Linux VM patches or ESX 3.x patches, so I am deselecting those.

2. Create a Baseline
There are two types of baselines: Dynamic and Fixed. Fixed baselines are used when you need to apply a specific patch to a system, while dynamic baselines are used to keep the system current with the latest patches. In this guide we will create a Dynamic Baseline.

Go to the Patch Baselines tab and click Create… on the upper right side.

The following screenshots are for a Security patches only baseline:

Give it a name and description

Select Dynamic

Choose Criteria

Review and click Finish

3. Create a Baseline Group
Baseline Groups are combinations of non-conflicting baselines. You can use a Baseline Group to combine multiple dynamic patch baselines, for example the default Critical Patches baseline and the HostSecurity baseline we created in the previous step.

This will create a Baseline Group that includes Critical and Security Patches:
Go to the Patch Baselines tab and click Create… (The Create link that is next to Baseline Groups)

Give it a name and select Host or VM, in this case it is Host

No upgrades, just patches

Select the individual Baselines you want to group

Leave defaults

Review and click Finish

This is how it should look

Now you are all set to attach your Baselines to a Host or to a Cluster.

4. Attach Baseline to Host/Cluster

Go into the Hosts and Clusters View (CTRL+SHIFT+H), select the Host/Cluster you want to attach the baseline to. In this guide I will attach the baseline to the Cluster.

Click on the Cluster, go to the Update Manager tab and click Attach…

Select the Individual or Group Baselines you want to apply to the Cluster and click Attach

You will be back at the Hosts and Clusters view; click on Scan…

Once the scan has completed it will show you whether you are compliant, and then you have to remediate (patch).

5. Remediate/Patch
You can remediate the whole cluster or one host at a time; I prefer one host at a time, but it is up to you.

Right click the Cluster/Host you want to patch, and select Remediate…

Select the Baseline you want to remediate

It will list all the patches that will be applied, here you can deselect some patches in case you don’t want them

You can do it immediately or schedule it to happen at a different time

Review the summary and execute

The server will go into maintenance mode and patches will be applied; if needed, the server will be rebooted as well.

And that is it, the Host/Cluster is now compliant and patched for Critical and Security patches.

Some time ago I built a secondary VMware cluster for doing some specific testing.
From the primary VMware cluster I copied a virtual machine over SCP to the new secondary VMware cluster.

I then booted up the virtual machine on the new secondary VMware cluster and experienced some network connectivity issues.

The problem was that the MAC address of the virtual machine was the same as that of the virtual machine on the main site, and both were running on the same VLAN.

When VMware prompts you to say whether you Copied or Moved a virtual machine, make sure you answer that you copied it, so that it generates the following unique attributes:


In this case there was no prompt, so I had to make the following changes in the virtual machine's configuration file so that new identifiers are generated the next time it boots.

1) Power off Virtual Machine

2) Go to the Service Console and open the configuration file for the virtual machine in question:

[root@esx4 ~]# vi /vmfs/volumes/[datastore]/[vmname]/[vmname].vmx

Delete the following lines:
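The lines to delete are the generated identifiers. They typically look like this (the values shown here are placeholders; yours will differ):

```
uuid.location = "56 4d xx xx xx xx xx xx-xx xx xx xx xx xx xx xx"
uuid.bios = "56 4d xx xx xx xx xx xx-xx xx xx xx xx xx xx xx"
ethernet0.generatedAddress = "00:0c:29:xx:xx:xx"
ethernet0.generatedAddressOffset = "0"
```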

3) Power on Virtual Machine and new values will be generated.

When deploying VMware virtual machines on VMFS on top of a NetApp SAN, you need to make sure to align them properly, otherwise you will end up with performance issues. File system misalignment is a known issue when virtualizing. Also, when deploying LUNs from a NetApp appliance, make sure not to reformat the LUN or you will lose the alignment; just create a filesystem on top of the LUN.

NetApp provides a great technical paper about this at:

In this post I will show you how to align an empty vmdk disk/LUN using the open source utility GParted. This is for new vmdk disks/LUNs; don't do it on disks that contain data, as you will lose it. It is meant for golden templates that you want aligned, so subsequent virtual machines inherit the right alignment, or for servers that need a NetApp LUN attached.

The resulting partition works for Linux and Windows, just create a filesystem on top of it.

You can find GParted at:

1. Boot the VM from the GParted CD/Iso. Click on the terminal icon to open a terminal:

2. Check the partition starting offsets. In this case I have 3 disks; 2 are already aligned (starting at sector 64), and I will align the new disk as well.

3. Create an aligned partition on the drive using fdisk

gparted# fdisk /dev/sdc

Below is a screenshot of the answers to fdisk; the important option is to start the partition at sector 64, as indicated.
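The dialog goes roughly like this (a sketch; recent fdisk versions prompt in sectors by default, older ones need -u to work in sectors):

```
gparted# fdisk -u /dev/sdc
n            # new partition
p            # primary
1            # partition number
64           # first sector: 64 (32 KiB from the start of the disk)
<Enter>      # last sector: accept the default (rest of the disk)
w            # write the partition table and exit
```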

4. Now check again and the partition should be aligned

[root@server ~]# fdisk -lu

Disk /dev/sda: 209 MB, 209715200 bytes
64 heads, 32 sectors/track, 200 cylinders, total 409600 sectors
Units = sectors of 1 * 512 = 512 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 64 409599 204768 83 Linux

Disk /dev/sdb: 77.3 GB, 77309411328 bytes
255 heads, 63 sectors/track, 9399 cylinders, total 150994944 sectors
Units = sectors of 1 * 512 = 512 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 * 64 41943039 20971488 8e Linux LVM

Disk /dev/sdc: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 64 209715199 104857568 83 Linux