Technologist

Tech stuff about Cloud, DevOps, SysAdmin, Virtualization, SAN, Hardware, Scripting, Automation and Development

Browsing Posts in Virtualization

One of the most important things you can do for your systems is to make sure they keep the correct time.
In this post I will show how to check and configure the time settings on your ESXi hosts using PowerCLI.

==> Login to vCenter:

$admin = Get-Credential -Credential EXAMPLE\john
Connect-VIServer -Server vc.example.com -Credential $admin

==> Check time settings:

Get-VMHost | Sort Name | Select Name, `
   @{N="NTPServer";E={$_ |Get-VMHostNtpServer}}, `
   Timezone, `
   @{N="CurrentTime";E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem) | Foreach {$_.QueryDateTime().ToLocalTime()}}}, `
   @{N="ServiceRunning";E={(Get-VmHostService -VMHost $_ |Where-Object {$_.key-eq "ntpd"}).Running}}, `
   @{N="StartUpPolicy";E={(Get-VMHostService -VMHost $_ |Where-Object {$_.Key -eq "ntpd"}).Policy}}, `
   @{N="FirewallException";E={$_ | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Select-Object –ExpandProperty Enabled}} `
   | Format-Table -AutoSize

Output:

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VMHost | Sort Name | Select Name, `
>>    @{N="NTPServer";E={$_ |Get-VMHostNtpServer}}, `
>>    Timezone, `
>>    @{N="CurrentTime";E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem) | Foreach {$_.QueryDateTime().ToLocalTime()}}}, `
>>    @{N="ServiceRunning";E={(Get-VmHostService -VMHost $_ |Where-Object {$_.key-eq "ntpd"}).Running}}, `
>>    @{N="StartUpPolicy";E={(Get-VMHostService -VMHost $_ |Where-Object {$_.Key -eq "ntpd"}).Policy}}, `
>>    @{N="FirewallException";E={$_ | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Select-Object -ExpandProperty Enabled}} `
>>    | Format-Table -AutoSize
>>

Name             NTPServer                    TimeZone CurrentTime         ServiceRunning StartUpPolicy FirewallException
----             ---------                    -------- -----------         -------------- ------------- -----------------
esx1.example.com                              UTC      6/7/2015 3:25:39 PM False          off           False
esx2.example.com                              UTC      6/7/2015 3:25:40 PM False          off           False
esx3.example.com {192.168.10.1,192.168.11.1}  UTC      6/7/2015 3:25:42 PM False          off           False
esx4.example.com 192.168.11.1                 UTC      6/7/2015 3:25:43 PM False          off           False

==> Set time to correct time:

# Get time from the machine running PowerCLI
$currentTime = Get-Date

# Update the time on each ESXi host
Get-VMHost | ForEach-Object {
    $dts = Get-View $_.ExtensionData.ConfigManager.DateTimeSystem
    $dts.UpdateDateTime((Get-Date($currentTime.ToUniversalTime()) -Format u))
}

==> Remove old NTP servers (if any):

$old_ntp_server = '192.168.10.1'
Get-VMHost | Remove-VmHostNtpServer -NtpServer $old_ntp_server -Confirm

Output:

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VMHost | Sort Name | Select Name, `
>>    @{N="NTPServer";E={$_ |Get-VMHostNtpServer}}, `
>>    Timezone, `
>>    @{N="CurrentTime";E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem) | Foreach {$_.QueryDateTime().ToLocalTime()}}}, `
>>    @{N="ServiceRunning";E={(Get-VmHostService -VMHost $_ |Where-Object {$_.key-eq "ntpd"}).Running}}, `
>>    @{N="StartUpPolicy";E={(Get-VMHostService -VMHost $_ |Where-Object {$_.Key -eq "ntpd"}).Policy}}, `
>>    @{N="FirewallException";E={$_ | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Select-Object -ExpandProperty Enabled}} `
>>    | Format-Table -AutoSize
>>

Name             NTPServer TimeZone CurrentTime         ServiceRunning StartUpPolicy FirewallException
----             --------- -------- -----------         -------------- ------------- -----------------
esx1.example.com           UTC      6/7/2015 3:25:39 PM False          off           False
esx2.example.com           UTC      6/7/2015 3:25:40 PM False          off           False
esx3.example.com           UTC      6/7/2015 3:25:42 PM False          off           False
esx4.example.com           UTC      6/7/2015 3:25:43 PM False          off           False
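
If you need to clear out whatever NTP servers are currently configured rather than a single known address, a small variant using the same cmdlets (the servers are discovered dynamically; -Confirm:$false skips the prompts):

Get-VMHost | ForEach-Object {
    $esx = $_
    Get-VMHostNtpServer -VMHost $esx |
        ForEach-Object { Remove-VMHostNtpServer -VMHost $esx -NtpServer $_ -Confirm:$false }
}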

==> Change NTP to desired configuration:

$ntp_server = '192.168.10.1'
Get-VMHost | Add-VMHostNtpServer $ntp_server
Get-VMHost | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Set-VMHostFirewallException -Enabled:$true
Get-VMHost | Get-VmHostService | Where-Object {$_.key -eq "ntpd"} | Start-VMHostService
Get-VMhost | Get-VmHostService | Where-Object {$_.key -eq "ntpd"} | Set-VMHostService -policy "automatic"

Output:

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> $ntp_server = '192.168.10.1'
PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VMHost | Add-VMHostNtpServer $ntp_server
192.168.10.1
192.168.10.1
192.168.10.1
192.168.10.1

==> Enable Firewall Exception

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VMHost | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Set-VMHostFirewallException -Enabled:$true

Name                 Enabled IncomingPorts  OutgoingPorts  Protocols  ServiceRunning
----                 ------- -------------  -------------  ---------  --------------
NTP Client           True                   123            UDP        True
NTP Client           True                   123            UDP        True
NTP Client           True                   123            UDP        False
NTP Client           True                   123            UDP        False

==> Start NTPd service

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VMHost | Get-VmHostService | Where-Object {$_.key -eq "ntpd"} | Start-VMHostService

Key                  Label                          Policy     Running  Required
---                  -----                          ------     -------  --------
ntpd                 NTP Daemon                     on         True     False
ntpd                 NTP Daemon                     on         True     False
ntpd                 NTP Daemon                     off        True     False
ntpd                 NTP Daemon                     off        True     False

==> Ensure NTPd service starts automatically (via policy)

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VMhost | Get-VmHostService | Where-Object {$_.key -eq "ntpd"} | Set-VMHostService -policy "automatic"

Key                  Label                          Policy     Running  Required
---                  -----                          ------     -------  --------
ntpd                 NTP Daemon                     automatic  True     False
ntpd                 NTP Daemon                     automatic  True     False
ntpd                 NTP Daemon                     automatic  True     False
ntpd                 NTP Daemon                     automatic  True     False
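
If you prefer to run all four configuration steps in a single pass per host, a minimal consolidated sketch using the same cmdlets as above:

$ntp_server = '192.168.10.1'
foreach ($esx in Get-VMHost) {
    Add-VMHostNtpServer -VMHost $esx -NtpServer $ntp_server
    Get-VMHostFirewallException -VMHost $esx | Where-Object {$_.Name -eq "NTP client"} |
        Set-VMHostFirewallException -Enabled:$true
    $ntpd = Get-VMHostService -VMHost $esx | Where-Object {$_.Key -eq "ntpd"}
    $ntpd | Start-VMHostService
    $ntpd | Set-VMHostService -Policy "automatic"
}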

==> Verify all is set the way you expected

Get-VMHost | Sort Name | Select Name, `
   @{N="NTPServer";E={$_ |Get-VMHostNtpServer}}, `
   Timezone, `
   @{N="CurrentTime";E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem) | Foreach {$_.QueryDateTime().ToLocalTime()}}}, `
   @{N="ServiceRunning";E={(Get-VmHostService -VMHost $_ |Where-Object {$_.key-eq "ntpd"}).Running}}, `
   @{N="StartUpPolicy";E={(Get-VMHostService -VMHost $_ |Where-Object {$_.Key -eq "ntpd"}).Policy}}, `
   @{N="FirewallException";E={$_ | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Select-Object –ExpandProperty Enabled}} `
   | Format-Table -AutoSize

Output:

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VMHost | Sort Name | Select Name, `
>>    @{N="NTPServer";E={$_ |Get-VMHostNtpServer}}, `
>>    Timezone, `
>>    @{N="CurrentTime";E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem) | Foreach {$_.QueryDateTime().ToLocalTime()}}}, `
>>    @{N="ServiceRunning";E={(Get-VmHostService -VMHost $_ |Where-Object {$_.key-eq "ntpd"}).Running}}, `
>>    @{N="StartUpPolicy";E={(Get-VMHostService -VMHost $_ |Where-Object {$_.Key -eq "ntpd"}).Policy}}, `
>>    @{N="FirewallException";E={$_ | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Select-Object -ExpandProperty Enabled}} `
>>    | Format-Table -AutoSize
>>

Name             NTPServer    TimeZone CurrentTime         ServiceRunning StartUpPolicy FirewallException
----             ---------    -------- -----------         -------------- ------------- -----------------
esx1.example.com 192.168.10.1 UTC      6/7/2015 3:34:49 PM True           automatic     True
esx2.example.com 192.168.10.1 UTC      6/7/2015 3:34:51 PM True           automatic     True
esx3.example.com 192.168.10.1 UTC      6/7/2015 3:34:52 PM True           automatic     True
esx4.example.com 192.168.10.1 UTC      6/7/2015 3:34:54 PM True           automatic     True

VMware ESXi can take advantage of Flash/local SSDs in multiple ways:

  • Host swap cache (since 5.0): ESXi will use part of an SSD datastore as swap space shared by all VMs. This means that when there is ESXi memory swapping, the host will swap to the SSD drives, which is faster than HDD but still slower than RAM.
  • Virtual SAN (VSAN) (since 5.5 with VSAN licensing): You can combine the local HDDs and local SSDs on each host and basically create a distributed storage platform. I like to think of it as a RAIN (Redundant Array of Independent Nodes).
  • Virtual Flash/vFRC (since 5.5 with Enterprise Plus): With this method the SSD is formatted with VFFS and can be configured as a read and write-through cache for your VMs. It allows ESXi to locally cache virtual machine read I/O and survives VM migrations as long as the destination ESXi host has Virtual Flash enabled. To use this feature, VMs need to be at hardware version 10.

Check if the SSD drives were properly detected by ESXi

From vSphere Web Client

Select the ESXi host with Local SSD drives -> Manage -> Storage -> Storage Devices

See if it shows as SSD or Non-SSD, for example:

flash1


From CLI:

~ # esxcli storage core device list
...
naa.60030130f090000014522c86152074c9
 Display Name: Local LSI Disk (naa.60030130f090000014522c86152074c9)
 Has Settable Display Name: true
 Size: 94413
 Device Type: Direct-Access
 Multipath Plugin: NMP
 Devfs Path: /vmfs/devices/disks/naa.60030130f090000014522c86152074c9
 Vendor: LSI
 Model: MRSASRoMB-8i
 Revision: 2.12
 SCSI Level: 5
 Is Pseudo: false
 Status: on
 Is RDM Capable: false
 Is Local: true
 Is Removable: false
 Is SSD: false  <-- Not recognized as SSD
 Is Offline: false
 Is Perennially Reserved: false
 Queue Full Sample Size: 0
 Queue Full Threshold: 0
 Thin Provisioning Status: unknown
 Attached Filters:
 VAAI Status: unsupported
 Other UIDs: vml.020000000060030130f090000014522c86152074c94d5253415352
 Is Local SAS Device: false
 Is Boot USB Device: false
 No of outstanding IOs with competing worlds: 32
...

To enable the SSD option on the SSD drive

At this point you should put your host in maintenance mode because it will need to be rebooted.

If the SSD is not properly detected you need to use storage claim rules to force it to be type SSD. (This is also useful if you want to fake a regular drive to be SSD for testing purposes)

# esxcli storage nmp device list
...
naa.60030130f090000014522c86152074c9   <-- Take note of this device ID for the command below
 Device Display Name: Local LSI Disk (naa.60030130f090000014522c86152074c9)
 Storage Array Type: VMW_SATP_LOCAL
 Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
 Path Selection Policy: VMW_PSP_FIXED
 Path Selection Policy Device Config: {preferred=vmhba2:C2:T0:L0;current=vmhba2:C2:T0:L0}
 Path Selection Policy Device Custom Config:
 Working Paths: vmhba2:C2:T0:L0
 Is Local SAS Device: false
 Is Boot USB Device: false
...

Add a PSA claim rule to mark the device as SSD (if it is not local, e.g. a SAN-attached LUN)

# esxcli storage nmp satp rule add --satp=<SATP_TYPE> --device=<device ID> --option="enable_ssd"

For example (in case this was a SAN attached LUN)

# esxcli storage nmp satp rule add --satp=VMW_SATP_XXX --device=naa.60030130f090000014522c86152074c9  --option="enable_ssd"


Add a PSA claim rule to mark the device as Local and SSD at the same time (if the SSD drive is local)

# esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=<device ID> --option="enable_local enable_ssd"

For the device in my example it would be:

# esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=naa.60030130f090000014522c86152074c9 --option="enable_local enable_ssd"

Reboot your ESXi host for the changes to take effect.


To remove the rule (for whatever reason, including testing and going back)

esxcli storage nmp satp rule remove --satp VMW_SATP_LOCAL --device <device ID> --option=enable_ssd
esxcli storage nmp satp list |grep ssd
esxcli storage core claiming reclaim -d <device ID>
esxcli storage core device list --device=<device ID>

Once the ESXi server is back online verify that the SSD option is OK

From vSphere Web Client

Select the ESXi host with Local SSD drives -> Manage -> Storage -> Storage Devices

See if it shows as SSD or Non-SSD, for example:

flash2

From CLI:

~ # esxcli storage core device list
...
naa.60030130f090000014522c86152074c9
 Display Name: Local LSI Disk (naa.60030130f090000014522c86152074c9)
 Has Settable Display Name: true
 Size: 94413
 Device Type: Direct-Access
 Multipath Plugin: NMP
 Devfs Path: /vmfs/devices/disks/naa.60030130f090000014522c86152074c9
 Vendor: LSI
 Model: MRSASRoMB-8i
 Revision: 2.12
 SCSI Level: 5
 Is Pseudo: false
 Status: on
 Is RDM Capable: false
 Is Local: true
 Is Removable: false
 Is SSD: true  <-- Now it is true
 Is Offline: false
 Is Perennially Reserved: false
 Queue Full Sample Size: 0
 Queue Full Threshold: 0
 Thin Provisioning Status: unknown
 Attached Filters:
 VAAI Status: unsupported
 Other UIDs: vml.020000000060030130f090000014522c86152074c94d5253415352
 Is Local SAS Device: false
 Is Boot USB Device: false
 No of outstanding IOs with competing worlds: 32
...
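
You can also check all hosts at once from PowerCLI (a sketch; it assumes a PowerCLI version whose ScsiLun objects expose the IsLocal/IsSsd properties):

Get-VMHost | ForEach-Object {
    $esx = $_
    Get-ScsiLun -VmHost $esx -LunType disk |
        Select-Object @{N="Host";E={$esx.Name}}, CanonicalName, CapacityGB, IsLocal, IsSsd
} | Format-Table -AutoSize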

Exit Maintenance mode.

Do the same on ALL hosts in the cluster.

Configure Virtual Flash

Now that the ESXi server recognizes the SSD drives, we can enable Virtual Flash.

You need to perform the steps below from the vSphere Web Client on all ESXi hosts:

ESXi host -> Manage -> Settings -> Virtual Flash -> Virtual Flash Resource Management -> Add Capacity…

flash3

You will see that the SSD device has been formatted using the VFFS filesystem. It can now be used to allocate space for the virtual flash host swap cache or to configure Virtual Flash Read Cache for virtual disks.

flash4


Configure Virtual Flash Host Swap

One of the options you have is to use the Flash/SSD as Host Swap Cache. To do this:

ESXi host -> Manage -> Settings -> Virtual Flash -> Virtual Flash Host Swap Cache Configuration -> Edit…

// Enable and select the size of the cache in GB

flash5


Configure Flash Read Cache

Flash Read Cache is configured on a per-VM, per-VMDK basis. VMs need to be at virtual hardware version 10 in order to use vFRC.

To enable vFRC on a VM's hard disk:

VM -> Edit Settings -> Expand Hard Disk -> Virtual Flash Read Cache

Enter the size of the cache in GB (e.g. 20)

You can start conservative and increase if needed; I start with 10% of the VMDK size. Below, in the monitor vFRC section, you will find tips to rightsize your cache.
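
To get the VMDK sizes for that 10% starting point, a quick PowerCLI sketch (the VM name is illustrative):

Get-HardDisk -VM myvm |
    Select-Object Name, CapacityGB, @{N="vFRCStartGB";E={[math]::Round($_.CapacityGB * 0.1, 1)}}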

flash6


If you click on Advanced, you can change the block-size of the Read Cache (the default is 8KB). This allows you to optimize the cache for the specific workload the VM is running.

flash7

The default block size is 8KB, but you may want to rightsize this based on the application/workload to use the cache efficiently.

If you don't size the block-size of the cache correctly, you could reduce its efficiency:

  • If the workload has block sizes larger than the configured block-size then you will have increased cache misses.
  • If the workload has block sizes smaller than the configured block-size then you will be wasting precious cache.

Correctly size the block-size of your cache

To correctly size the block-size of your cache, you need to determine the dominant I/O length/size of the workload and use it as the cache block size:

Log in to the ESXi host running the workload/VM for which you want to enable vFRC.


Find the world ID of the VM and the handle IDs of its virtual disks

~ # /usr/lib/vmware/bin/vscsiStats -l
Virtual Machine worldGroupID: 44670, Virtual Machine Display Name: myvm, Virtual Machine Config File: /vmfs/volumes/523b4bff-f2f2c400-febe-0025b502a016/myvm/myvm.vmx, {
 Virtual SCSI Disk handleID: 8194 (scsi0:0)
 Virtual SCSI Disk handleID: 8195 (scsi0:1)
 }
...


Start gathering statistics on World ID // Give it some time while it captures statistics

~ # /usr/lib/vmware/bin/vscsiStats -s -w 44670
 vscsiStats: Starting Vscsi stats collection for worldGroup 44670, handleID 8194 (scsi0:0)
 Success.
 vscsiStats: Starting Vscsi stats collection for worldGroup 44670, handleID 8195 (scsi0:1)
 Success.

Get the I/O length histogram to find the most dominant I/O length

You want the I/O length for the hard disk on which you will enable vFRC, in this case scsi0:1

(-c means compressed output)

~ # /usr/lib/vmware/bin/vscsiStats -p ioLength -c -w 44670
...
Histogram: IO lengths of Write commands,virtual machine worldGroupID,44670,virtual disk handleID,8195 (scsi0:1)
 min,4096
 max,409600
 mean,21198
 count,513
 Frequency,Histogram Bucket Limit
 0,512
 0,1024
 0,2048
 0,4095
 174,4096
 0,8191
 6,8192
 1,16383
 311,16384
 4,32768
 1,49152
 0,65535
 2,65536
 1,81920
 1,131072
 1,262144
 11,524288
 0,524288
...

As you can see, in this specific case, 16384 (16KB) is the most dominant I/O length, and this is what you should use in the Advanced options.

flash8

Now you are using a Virtual Flash Read Cache on that VM's hard disk, which should improve performance.

Monitor your vFRC

Log in to the ESXi host running the workload/VM for which you enabled vFRC. In the example below it is a 24GB cache with 4KB block-size:

# List physical Flash devices
 ~ # esxcli storage vflash device list
 Name                  Size   Is Local  Is Used in vflash  Eligibility
 --------------------  -----  --------  -----------------  ---------------------------------
 naa.500a07510c06bf6c  95396  true      true               It has been configured for vflash
 naa.500a0751039c39ec  95396  true      true               It has been configured for vflash
# Show virtual disks configured for vFRC. You will find the vmdk name for the virtual disk in the cache list:
 ~ # esxcli storage vflash cache list
 vfc-101468614-myvm_2
# Get Statistics about the cache
~ # esxcli storage vflash cache stats get -c vfc-101468614-myvm_2
   Read:
         Cache hit rate (as a percentage): 60
         Total cache I/Os: 8045314
         Mean cache I/O latency (in microseconds): 3828
         Mean disk I/O latency (in microseconds): 13951
         Total I/Os: 13506424
         Mean IOPS: 249
         Max observed IOPS: 1604
         Mean number of KB per I/O: 627
         Max observed number of KB per I/O: 906
         Mean I/O latency (in microseconds): 4012
         Max observed I/O latency (in microseconds): 6444
   Evict:
         Last I/O operation time (in microseconds): 0
         Number of I/O blocks in last operation: 0
         Mean blocks per I/O operation: 0
   Total failed SSD I/Os: 113
   Total failed disk I/Os: 1
   Mean number of cache blocks in use: 5095521

There is a lot of important information here:
The cache hit rate shows the percentage of read I/Os that were served from the cache. A higher number is better because it means more reads hit the cache instead of going to disk.
Other important items are the IOPS and latency figures.

These stats also help you rightsize your cache: if you see a high number of cache evictions (Evict -> Mean blocks per I/O operation), it could be an indication that your cache is too small or that its block-size is incorrectly configured.

To calculate the number of available blocks in the cache, do the following:
SizeOfCache (in bytes) / BlockSizeOfCache (in bytes) = #ofBlocksInvFRC

For the example, a 24GB cache with 4KB block-size has 25769803776 / 4096 = 6291456 blocks in the vFRC.


In the stats above we see 5095521 as the mean number of cache blocks in use, and no evictions, which indicates that a 24GB cache with 4KB blocks is correctly sized.

Keep monitoring your cache to gain as much performance as you can from your Flash/SSD devices.

If you are running your VMware infrastructure on NetApp storage, you can utilize NetApp's Virtual Storage Console (VSC), which integrates with vCenter to provide a strong, fully integrated solution for managing your storage from within vCenter.

With VSC you can discover storage, monitor health and capacity, provision, perform cloning, backups and restores, as well as optimize your ESX hosts and misaligned VMs.

The use case I will write about is the ability to take a backup of all of your production Datastores and initiate a SnapMirror transfer to DR.

Installing NetApp’s Virtual Storage Console

Download the software from the software section of NetApp's website (credentials required): VSC_vasavp-5-0.zip (current version as of this post)

Install it on a Windows system (this can be the vCenter server if you are using Windows vCenter)

There are currently a couple of bugs in version 5.0 that can be worked around by following these articles; hopefully NetApp will address them soon:

http://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=821600

and

http://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=767444

Follow the wizard…


smvi1 smvi2


Select Backup and Recovery to be able to use these features
smvi3 smvi4 smvi5


You may get a warning here; this is where you need to apply the bug fixes mentioned earlier (adding a line to smvi.override)

Then you need to enter the information requested:

Plugin service information: hostname/IP of the server where you installed VSC (in this case it was the vCenter server)

Then enter the vCenter information

smvi6

Check that the registration was successful

smvi7

Verify that it is installed in the vCenter Web Client

smvi8


Configure the NetApp Virtual Storage Console from the vCenter Web Client

On the vCenter Web Client click on the Virtual Storage Console icon

smvi9

Click on ‘Storage Systems’ and add your NetApp controllers, including your DR controllers (you will need these to successfully initiate SnapMirror after backups)

smvi10

Once you have added them, you will be able to see their details and status; take a look at the summary and related objects. Also click on the ‘View Details’ link(s); they provide a wealth of information about your storage

smvi11

Go back to the main page of the Virtual Storage Console and you will see global details

smvi12

With the above setup you can start provisioning storage, creating backups/restores, mounting snapshots and looking at the details of every object from a storage perspective. Take a look at the Datacenter, Datastores and VMs.

smvi13

smvi14


Configure Datastore Backups followed by NetApp SnapMirror for Disaster Recovery

Pre-requisites:

You need to have an initialized SnapMirror relationship

prod-filer> vol size vm_datastore
vol size: Flexible volume 'vm_datastore' has size 500g.
dr-filer>  vol create vm_datastore_snapmirror aggr0 500g
dr-filer> vol restrict vm_datastore_snapmirror
dr-filer> snapmirror initialize -S prod-filer:vm_datastore dr-filer:vm_datastore_snapmirror

Create an empty schedule by adding the following line to /etc/snapmirror.conf (the dashes mean there is no scheduled transfer; the SnapMirror updates will be triggered by the backup job)

prod-filer:vm_datastore   dr-filer:vm_datastore_snapmirror    - - - - -

Ensure you have added your production NetApp controllers as well as your DR controllers in the vCenter Web Client Virtual Storage Console

Configuration:

In vCenter Web Client, go to your Datastores view.

(Optional but recommended) Enable Deduplication in your Datastores

// This will save storage and increase the efficiency of the replication because you will only replicate deduplicated data. To do so:

Right click on a Datastore -> NetApp VSC -> Deduplication -> Enable

Right click on a Datastore -> NetApp VSC -> Deduplication -> Start (Select to scan the entire volume)

smvi15

By default the deduplication process is scheduled daily at midnight. I recommend scheduling it at least 2 hours before the SnapMirror replication.

For example:

Deduplication: daily at 8pm

SnapMirror: daily at 10pm

To change the default schedule of the deduplication process per volume, run the following on the NetApp controller CLI:

prod-filer> sis config -s sun-sat@20 /vol/vm_datastore
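
You can verify the new schedule and the deduplication state with the standard 7-Mode commands:

prod-filer> sis config /vol/vm_datastore
prod-filer> sis status /vol/vm_datastore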

Schedule the Backup and SnapMirror Update

Right click on a Datastore -> NetApp VSC -> Backup -> Schedule Backup

smvi16

smvi17

smvi18

smvi19

smvi20


smvi21


Add other Datastores to the same backup job (remember that for the SnapMirror update to work you need to have pre-created the SnapMirror relationship).

Right click on the other Datastores -> NetApp VSC -> Backup -> Add to Backup Job

You will see the already created backup job (10pm_backup); select it and click OK.

smvi22

At this point, all the Datastores you selected will be deduplicated, backed up and replicated to the DR site.

Restoring on the Prod or DR site

Now that NetApp VSC is set up, backing up and replicating data, we can restore at will from the snapshots.

Restore a VM (entire VM or some of its virtual disks)

Right click on VM -> NetApp VSC -> Restore

Select backup from the list and choose to restore entire VM or just some disks

Restore from Datastore

Right click on Datastore -> NetApp VSC -> Restore

Select backup from the list and choose what to restore

Mount a Snapshot (it will show as another Datastore and you can retrieve files or even start VMs)

Click on a Datastore and go to Related Objects -> Backups

Select Backup, Right-Click and select Mount

You will see the datastore present and mounted on one ESX host; from there you can retrieve files, start VMs, etc.

Once you are done go back to the Datastore and unmount the Backup.


In this guide I will go through the process of booting from an external USB hard drive in VMware Fusion.
The main use case is the ability to take the hard drive of an existing physical server and boot from that physical hard drive in a VMware Fusion VM.
Another use case (my latest one): I enrolled in a technical training course and the vendor shipped a bootable USB external hard drive with a Linux OS installed as a lab, with the expectation that I would boot from it using a PC. That works great, but I wanted to use my MacBook and be able to run this lab while on the road.
As soon as I tried to boot from it using my Mac I got a kernel panic, due to missing drivers, etc.
So I decided to use a VM in VMware Fusion, as follows:

1) Check the system before plugging in your USB external hard drive:

john@mac.local:~$diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
   1:                        EFI                         209.7 MB   disk0s1
   2:          Apple_CoreStorage                         250.1 GB   disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3
/dev/disk1
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                  Apple_HFS Macintosh HD           *249.8 GB   disk1

2) Plug your USB external hard drive and look for the new disk:

john@mac.local:~$diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
   1:                        EFI                         209.7 MB   disk0s1
   2:          Apple_CoreStorage                         250.1 GB   disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3
/dev/disk1
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                  Apple_HFS Macintosh HD           *249.8 GB   disk1
/dev/disk2
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *500.1 GB   disk2
   1:                      Linux                         134.2 MB   disk2s1
   2:                  Linux_LVM                         128.8 GB   disk2s2
   3:                  Linux_LVM                         22.5 GB    disk2s3

3) In VMware Fusion create a VM as follows:

Create a New VMware Fusion VM:

vm_create_new

vm_install_method vm_install_method2 vm_os_choice vm_virtual_disk

vm_customize_settings

Customize it as you wish (I removed sound and printers, and modified the RAM, CPU, etc.)



cpu_ram disable_printsharing disable_bluetooth disable_sound


Also remove the VMDK that VMware Fusion created, as you don't need it (unless you actually do)

vm_remove_disk


OK, the VM creation is complete. Now you have to actually use the physical hard drive, as shown below.


4) Create a RawDisk VMDK in the newly created VM that will point to the USB external hard drive

john@mac.local:~$ '/Applications/VMware Fusion.app/Contents/Library/vmware-rawdiskCreator' create /dev/disk2 fullDevice ~/Documents/Virtual\ Machines.localized/openstack.vmwarevm/usb-ext-hdd ide

5) Add the disk to your VM configuration (.vmx file)

ide1:1.present = "TRUE"
ide1:1.fileName = "usb-ext-hdd.vmdk"

6) Power on your VM and voilà! You should see your VM booting from the USB external hard drive

vm_bootcamp_allow

vm_bootcamp_boot


Snapshots are a great feature, probably one of the coolest in virtualization, but they can become a problem if they are not used appropriately. Unfortunately, we sometimes let them grow to an unmanageable size, which can bring performance issues and give us headaches when we need to delete them.

In this post, I will show you how to find out what snapshots are present in your environment, along with some other useful information, like size.

To run the commands below you will need to install PowerCLI (on Windows), which is a way to manage a VMware environment programmatically using PowerShell scripting.

To get PowerCLI, go to: www.vmware.com/go/powercli

1) Once you have PowerCLI, open it up and a command prompt will appear:

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Connect-VIServer -Server vcenter.example.com -User john

Name                           Port  User
----                           ----  ----
vcenter.example.com             443   john

// At this point you have a session open with your vCenter

2) Query your vCenter to find out what snapshots are present:

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VM | Get-Snapshot | Format-List vm,name,sizeGB,created,powerstate

VM         : vm1
Name       : Before_upgrade
SizeGB     : 16.38431124389171600341796875
PowerState : PoweredOn

VM         : vm2
Name       : Before_package_install
SizeGB     : 12.368686250410974025726318359
PowerState : PoweredOn

Let me explain what is going on:
'Get-VM' asks vCenter for all the VMs and returns an object for each one; 'Get-Snapshot' then returns the snapshots of each VM object; finally, 'Format-List' formats the output, showing only the properties you requested: vm, name, sizeGB, created and powerstate.

You can request any of the following:
Description
Created
Quiesced
PowerState
VM
VMId
Parent
ParentSnapshotId
ParentSnapshot
Children
SizeMB
SizeGB
IsCurrent
IsReplaySupported
ExtensionData
Id
Name
Uid

3) The above will give you the info you want, but I prefer CSV reports that I can share with the team or management. To get a good CSV report, run the following:

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VM | Get-Snapshot | Select-Object vm,name,sizeGB,created,powerstate | Export-Csv C:\vm_snapshots.csv
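
Since old snapshots tend to be the troublemakers, a variation I find useful is reporting only snapshots older than a given age, sorted by size (a sketch; the 14-day threshold and output path are arbitrary):

$ageLimit = (Get-Date).AddDays(-14)
Get-VM | Get-Snapshot |
    Where-Object { $_.Created -lt $ageLimit } |
    Sort-Object SizeGB -Descending |
    Select-Object VM, Name, SizeGB, Created |
    Export-Csv C:\old_vm_snapshots.csv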

I recommend taking a look at VMware’s best practices around snapshots:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1025279

VSM High Availability is optional but it is strongly recommended in a production environment.
High availability is accomplished by installing and configuring a secondary VSM.

For instructions on how to install and configure a Primary Cisco 1000v VSM on your vSphere environment please follow
configure-vsphere-and-cisco-nexus-1000v-connecting-to-nexus-5k-upstream-switches

Then come back to this post to learn how to install and configure a secondary VSM for high availability.

1) Check the redundancy status of your primary VSM

n1kv# show system redundancy status
Redundancy role
---------------
      administrative:   primary
         operational:   primary

Redundancy mode
---------------
      administrative:   HA
         operational:   None

This supervisor (sup-1)
-----------------------
    Redundancy state:   Active
    Supervisor state:   Active
      Internal state:   Active with no standby                  

Other supervisor (sup-2)
------------------------
    Redundancy state:   Not present

// Check Modules

n1kv# show module
Mod  Ports  Module-Type                      Model              Status
---  -----  -------------------------------- ------------------ ------------
1    0      Virtual Supervisor Module        Nexus1000V         active *
3    248    Virtual Ethernet Module          NA                 ok
4    248    Virtual Ethernet Module          NA                 ok
5    248    Virtual Ethernet Module          NA                 ok

Mod  Sw               Hw      
---  ---------------  ------  
1    4.0(4)SV1(3b)    0.0    
3    4.0(4)SV1(3b)    1.20   
4    4.0(4)SV1(3b)    1.20   
5    4.0(4)SV1(3b)    1.20   

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
3    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
4    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
5    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    192.168.10.10      NA                                    NA
3    192.168.16.82       xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  esx1.example.com
4    192.168.16.53       xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  esx2.example.com
5    192.168.16.149      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  esx3.example.com


* this terminal session 

// check HA status

n1kv# show system redundancy ha status
VDC No    This supervisor                         Other supervisor                        
------    ---------------                         ---------------
                        
vdc 1     Active with no standby                  N/A     

2) Install the secondary VSM from the OVF.
Select Manually Configure Nexus 1000v and, just like in the primary installation, select the right VLANs for Control, Packet and Management.

When you get to this properties page:

Do not fill in any of the fields; just click Next and then Finish

3) Power on the Secondary VSM
The system setup script will prompt for the following:

Admin password // Choose your password
VSM Role: secondary // VSM will reboot
Domain ID: 100 // This must be the same domain ID you gave to the primary, I used 100

Once a VSM is set to secondary it will reboot.

4) Verify VSM high availability
Log in to the VSM and run:

n1kv# show system redundancy status
Redundancy role
---------------
      administrative:   primary
         operational:   primary

Redundancy mode
---------------
      administrative:   HA
         operational:   HA

This supervisor (sup-1)
-----------------------
    Redundancy state:   Active
    Supervisor state:   Active
      Internal state:   Active with HA standby                  

Other supervisor (sup-2)
------------------------
    Redundancy state:   Standby

    Supervisor state:   HA standby
      Internal state:   HA standby
n1kv# show module
Mod  Ports  Module-Type                      Model              Status
---  -----  -------------------------------- ------------------ ------------
1    0      Virtual Supervisor Module        Nexus1000V         active *
2    0      Virtual Supervisor Module        Nexus1000V         ha-standby
3    248    Virtual Ethernet Module          NA                 ok
4    248    Virtual Ethernet Module          NA                 ok
5    248    Virtual Ethernet Module          NA                 ok

Mod  Sw               Hw      
---  ---------------  ------  
1    4.0(4)SV1(3b)    0.0    
2    4.0(4)SV1(3b)    0.0    
3    4.0(4)SV1(3b)    1.20   
4    4.0(4)SV1(3b)    1.20   
5    4.0(4)SV1(3b)    1.20   

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
2    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
3    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
4    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
5    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    192.168.10.10      NA                                    NA
2    192.168.10.10      NA                                    NA
3    192.168.16.82       XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX  esx1.example.com
4    192.168.16.53       XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX  esx2.example.com
5    192.168.16.149      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX  esx3.example.com


* this terminal session 
n1kv# show system redundancy ha status
VDC No    This supervisor                         Other supervisor                        
------    ---------------                         ---------------
                        
vdc 1     Active with HA standby                  HA standby 

VMware recommends that you run the Primary and the Secondary on different ESX hosts.

5) Test VSM switchover
From the VSM, run 'system switchover' to switch between the active and the standby VSMs.

That is it, now you have a highly available Cisco 1000v VSM infrastructure.

The following guide describes the necessary steps to install and configure a pair of Cisco Nexus 1000v switches to be used in a vSphere cluster.
These will connect to Cisco Nexus 5020 upstream switches.

In this guide the hardware used consists of:

Hardware:
3x HP ProLiant DL380 G6 with two 4-port NICs
2x Cisco Nexus 5020 switches

Software:
vSphere 4 Update 1 Enterprise Plus (needed to use the Cisco Nexus 1000v)
vCenter installed as a virtual machine – 192.168.10.10 (on VLAN 10)
Cisco Nexus 1000v 4.0.4.SV1.3b – Primary 192.168.101.10, domain ID 100 (on VLAN 101)

I am assuming you have already installed and configured vCenter and the ESX cluster.

Cisco recommends that you use 3 separate VLANs for Nexus traffic. I am using the following VLANs:

100 – Control – Control connectivity between the Nexus 1000V VSM and VEMs (non-routable)
101 – Management – ssh/telnet/scp to the Cisco Nexus 1000v int mgmt0 (routable)
102 – Packet – Internal connectivity between Nexus 1000v components (non-routable)

And I will also use VLAN 10 and 20 for VM traffic (10 for Production, 20 for Development)

1) Install vSphere (I assume you have done this step)

2) Configure Cisco Nexus 5020 Upstream Switchports

You need to configure the ports on the upstream switches in order to pass VLAN information to the ESX hosts’ uplink NICs

On the Nexus5020s, run the following:

// These commands give a description to the port and allow trunking of VLANs.
// The allowed VLANs are listed
// spanning-tree port type edge trunk is the recommended spanning-tree type

interface Ethernet1/1/10
description "ESX1-eth0"
switchport mode trunk
switchport trunk allowed vlan 10-20,100-102
spanning-tree port type edge trunk

3) Service Console VLAN !!!

When I installed the ESX server I used the native VLAN, but after you change the switch port from switchport mode access to switchport mode trunk, the ESX server needs to be configured to tag Service Console traffic with the right VLAN.
My Service Console IP is 192.168.10.11 on VLAN 10, so you will need to console into the ESX host and enter the following:

[root@esx1]# esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
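
You can verify the port group VLAN assignments with:

[root@esx1]# esxcfg-vswitch -l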

4) Add Port Groups for the Control, Packet and Management VLANs.
I add these Port Groups to the VMware virtual switch vSwitch0 on all the ESX hosts. Make sure to select the right VLANs for your environment.

5) Now that you have configured the Control, Packet and Management Port Groups with their respective VLANs, you can install the Cisco Nexus 1000v.
I chose to install the virtual appliance (OVA) file downloaded from Cisco. The installation is very simple; make sure to select Manually Configure Nexus 1000v and to map the VLANs to Control, Packet and Management. The rest is just like installing a regular virtual appliance.

6) Power on and open a console window to the Nexus 1000v VM (appliance) you just installed. A setup script will start running and will ask you a few questions.

admin password
domain ID // This is used to identify the VSM and VEM. If you want to have 2 Nexus 1000v VSMs for high availability, both will use the same domain ID. I chose 100
High Availability mode // If you plan to use 2 Nexus 1000v VSMs for high availability, select primary for the first installation, otherwise standalone
Network information // IP, netmask, gateway. Disable Telnet! Enable SSH!
The rest we will configure later (not from the setup script)

7) Register the vCenter Nexus 1000v plug-in
Once you have the Nexus 1000v basics configured, you should be able to access it. Try to SSH to it (hopefully you enabled SSH).
Open a browser and point it to the Nexus 1000v management IP address (in this case 192.168.101.10) and you will get a webpage like the following:

  • Download the cisco_nexus_1000v_extension.xml
  • Open vSphere client and connect to the vCenter.
  • Go to Plug-ins > Manage Plug-ins
  • Right-click under Available Plug-ins and select New Plug-in, then browse to the cisco_nexus_1000v_extension.xml
  • Click Register Plug-in (disregard security warning about new SSL cert)

You do NOT need to Download and Install the Plug-in, just Register it.

Now we can start the “advanced” configuration of the Nexus 1000v

8) Configure the SVS domain ID on the VSM

n1kv(config)# svs-domain
n1kv(config-svs-domain)# domain id 100
n1kv(config-svs-domain)# exit

9) Configure Control and Packet VLANs

n1kv(config)# svs-domain
n1kv(config-svs-domain)# control vlan 100
n1kv(config-svs-domain)# packet vlan 102
n1kv(config-svs-domain)# svs mode L2
n1kv(config-svs-domain)# exit

10) Connect Nexus 1000v to vCenter
In this step we are defining the SVS connection which is the link between the VSM and vCenter.

n1kv(config)# svs connection vcenter
n1kv(config-svs-conn)# protocol vmware-vim
n1kv(config-svs-conn)# vmware dvs datacenter-name myDatacenter
n1kv(config-svs-conn)# remote ip address 192.168.10.10
n1kv(config-svs-conn)# connect
n1kv(config-svs-conn)# exit
n1kv(config)# exit
n1kv# copy run start

//Verify the SVS connection

n1kv# show svs connections vcenter

connection vcenter:
    ip address: 192.168.10.10
    remote port: 80
    protocol: vmware-vim https
    certificate: default
    datacenter name: myDatacenter
    DVS uuid: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    config status: Enabled
    operational status: Connected
    sync status: Complete
    version: VMware vCenter Server 4.0.0 build-258672

11) Create the VLANs on the VSM

n1kv# conf t
n1kv(config)# vlan 100
n1kv(config-vlan)# name Control
n1kv(config-vlan)# exit
n1kv(config)# vlan 102
n1kv(config-vlan)# name Packet
n1kv(config-vlan)# exit
n1kv(config)# vlan 101
n1kv(config-vlan)# name Management
n1kv(config-vlan)# exit
n1kv(config)# vlan 10
n1kv(config-vlan)# name Production
n1kv(config-vlan)# exit
n1kv(config)# vlan 20
n1kv(config-vlan)# name Development
n1kv(config-vlan)# exit

// Verify VLANs

n1kv(config)# show vlan
VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active    
10   Production                       active    
20   Development                      active    
100  Control                          active 
101  Management                       active   
102  Packet                           active    


VLAN Type
---- -----
1    enet  
10   enet  
20   enet  
100  enet  
101  enet  
102  enet  

12) Create the Uplink Port-Profile
The Cisco Nexus 1000v acts like a VMware DVS. Before you can add hosts to the Nexus 1000v, you need to create uplink port-profiles, which allow the VEMs to connect with the VSM.

n1kv(config)# port-profile system-uplink
n1kv(config-port-prof)# switchport mode trunk
n1kv(config-port-prof)# switchport trunk allowed vlan 10,20,100-102
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# system vlan 100,102
n1kv(config-port-prof)# vmware port-group dv-system-uplink
n1kv(config-port-prof)# capability uplink
n1kv(config-port-prof)# state enabled

// Verify Uplink Port-Profile

n1kv(config-port-prof)# show port-profile name system-uplink
port-profile system-uplink
  description: 
  type: ethernet
  status: enabled
  capability l3control: no
  pinning control-vlan: -
  pinning packet-vlan: -
  system vlans: 100,102
  port-group: dv-system-uplink
  max ports: -
  inherit: 
  config attributes:
    switchport mode trunk
    switchport trunk allowed vlan 10-20,100-102
    no shutdown
  evaluated config attributes:
    switchport mode trunk
    switchport trunk allowed vlan 10-20,100-102
    no shutdown
  assigned interfaces:

13) It is now time to install the VEM on the ESX hosts.
The preferred way to do this is using VUM (VMware Update Manager). If you have VUM in the system, the installation is very simple.
Simply go to Home -> Inventory -> Networking.
Right-click on the Nexus switch and add the host.

// Verify that the task is successful

// Also take a look at the VSM console

n1kv# 2011 Jan 14 14:43:03 n1kv %PLATFORM-2-MOD_PWRUP: Module 3 powered up (Serial number )

n1kv# show module
Mod  Ports  Module-Type                      Model              Status
---  -----  -------------------------------- ------------------ ------------
1    0      Virtual Supervisor Module        Nexus1000V         active *
3    248    Virtual Ethernet Module          NA                 ok

Mod  Sw               Hw      
---  ---------------  ------  
1    4.0(4)SV1(3b)    0.0    
3    4.0(4)SV1(3b)    1.20   

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
3    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    192.168.101.10   NA                                    NA
3    192.168.11.82    XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX  esx1


* this terminal session 

// Do the same for all the other ESX Hosts

14) Create the Port-Profile(s) (VMware Port-Groups)
Port-profiles configure interfaces on the VEM.
From the VMware point of view, a port-profile is represented as a port-group.

// The Port-Profile below will be the VLAN 10 PortGroup on vCenter

n1kv# conf t
n1kv(config)# port-profile VLAN_10
n1kv(config-port-prof)# vmware port-group
n1kv(config-port-prof)# switchport mode access
n1kv(config-port-prof)# switchport access vlan 10
n1kv(config-port-prof)# vmware max-ports 200 // By default it has only 32 ports, I want 200 available
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled
n1kv(config-port-prof)# exit
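
The VLAN_20 port-profile, which appears in the verification output further below, is created the same way:

n1kv(config)# port-profile VLAN_20
n1kv(config-port-prof)# vmware port-group
n1kv(config-port-prof)# switchport mode access
n1kv(config-port-prof)# switchport access vlan 20
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled
n1kv(config-port-prof)# exit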

15) Select the PortGroup you want your VM to connect to

16) Verify Port Profiles/Port Groups from the VSM console

n1kv# show port-profile usage 

-------------------------------------------------------------------------------
Port Profile               Port        Adapter        Owner
-------------------------------------------------------------------------------
VLAN_10                    Veth1       Net Adapter 1  jeos_10                  
VLAN_20                    Veth2       Net Adapter 1  jeos_20                  
system-uplink              Eth3/5      vmnic4         esx1.example.com        
                           Eth3/6      vmnic5         esx1.example.com        
                           Eth3/9      vmnic8         esx1.example.com        
                           Eth3/10     vmnic9         esx1.example.com        
                           Eth4/5      vmnic4         esx2.example.com        
                           Eth4/6      vmnic5         esx2.example.com        
                           Eth4/9      vmnic8         esx2.example.com        
                           Eth4/10     vmnic9         esx2.example.com 

At this point you are ready to use the Cisco 1000v, but if you plan to run this in a production environment, it is strongly recommended you run the VSM in High Availability mode.
Follow this post to learn how to install and configure VSM High Availability:
cisco-nexus-1000v-vsm-high-availability

VMware Update Manager is a tool to automate and streamline the process of applying updates, patches or upgrades to a new version. VUM is fully integrated within vCenter Server and offers the ability to scan and remediate ESX/ESXi hosts, virtual appliances, virtual machine templates, and online and offline virtual machines running certain versions of Windows, Linux, and some Windows applications.

In this post you will learn how to Configure VMware Update Manager.
To install VMware Update Manager, follow Install VMware Update Manager.

  1. VUM Configuration
  2. Create a Baseline
  3. Create a Baseline Group
  4. Attach Baseline to Host/Cluster
  5. Remediate/Patch

1. VUM Configuration
Open Update Manager (Admin View)
Go to Home -> Update Manager

Under the Configuration tab, click on Patch Download Schedule to change the schedule and add an email notification.
Also change the Patch Download Settings to download only what you need; in my case I don't need Windows/Linux VM patches or ESX 3.x patches, so I am deselecting those.

2. Create a Baseline
There are two types of baselines: Dynamic and Fixed. Fixed baselines are used when you need to apply a specific patch to a system, while dynamic baselines are used to keep the system current with the latest patches. In this guide we will create a Dynamic Baseline.

Go to the Patch Baselines tab and click Create… on the upper right side.

The following screenshots are for a Security patches only baseline:

Give it a name and description

Select Dynamic

Choose Criteria

Review and click Finish

3. Create a Baseline Group
Baseline Groups are combinations of non-conflicting baselines. You can use a Baseline Group to combine multiple dynamic patch baselines, for example the default Critical Patches baseline and the HostSecurity baseline we created in the previous step

This will create a Baseline Group that includes Critical and Security Patches:
Go to the Patch Baselines tab and click Create… (The Create link that is next to Baseline Groups)

Give it a name and select Host or VM, in this case it is Host

No upgrades, just patches

Select the individual Baselines you want to group

Leave defaults

Review and click Finish

This is how it should look

Now you are all set to attach your Baselines to a Host or to a Cluster.

4. Attach Baseline to Host/Cluster

Go into the Hosts and Clusters view (CTRL+SHIFT+H) and select the Host/Cluster you want to attach the baseline to. In this guide I will attach the baseline to the cluster.

Click on the Cluster, go to the Update Manager tab and click Attach…

Select the Individual or Group Baselines you want to apply to the Cluster and click Attach

You will be back at the Hosts and Clusters view; click on Scan…

Once the scan has completed, it will show you whether you are compliant or not, and then you have to remediate (patch).
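
If you have the vSphere Update Manager PowerCLI plug-in installed (a separate download), the attach and scan steps can also be scripted. A minimal sketch, assuming the VUM cmdlets are available; the cluster and baseline names are illustrative:

$cluster  = Get-Cluster "Cluster01"
$baseline = Get-Baseline -Name "HostSecurity"
Attach-Baseline -Entity $cluster -Baseline $baseline
Scan-Inventory -Entity $cluster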

5. Remediate/Patch
You can remediate the whole cluster or one host at a time; I prefer one host at a time, but it is up to you.

Right click the Cluster/Host you want to patch, and select Remediate…

Select the Baseline you want to remediate

It will list all the patches that will be applied; here you can deselect some patches in case you don't want them

You can do it immediately or schedule it to happen at a different time

Review the summary and execute

The server will go into maintenance mode and patches will be applied; if needed, the server will be rebooted as well.

And that is it, the Host/Cluster is now compliant and patched for Critical and Security patches.

Some time ago I built a secondary VMware cluster for some specific testing.
From the primary VMware cluster I copied a virtual machine over SCP to the new secondary VMware cluster.

I then booted up the virtual machine on the new secondary cluster and experienced some network connectivity issues.

The problem was that the virtual machine had the same MAC address as the original virtual machine on the main site, and they were running on the same VLAN.

When VMware prompts you to answer whether you Copied or Moved a virtual machine, make sure you answer that you copied it, so that it generates the following unique attributes:

uuid.location
uuid.bios
ethernet0.generatedAddress

In this case there was no prompt, so I had to make the following changes in the virtual machine's configuration file so that new identifiers are generated the next time it boots.

1) Power off Virtual Machine

2) Go to the Service Console and open the configuration file for the virtual machine in question:

[root@esx4 ~]# vi /vmfs/volumes/[datastore]/[vmname]/[vmname].vmx

Delete the following lines:
uuid.location
uuid.bios
ethernet0.generatedAddress
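
If you have several VMs to fix, the same edit can be scripted from the Service Console (a sketch; note the last pattern also removes ethernet0.generatedAddressOffset, which is harmless because it is regenerated as well):

[root@esx4 ~]# sed -i '/^uuid.location/d;/^uuid.bios/d;/^ethernet0.generatedAddress/d' /vmfs/volumes/[datastore]/[vmname]/[vmname].vmx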

3) Power on Virtual Machine and new values will be generated.

When deploying VMware virtual machines on top of VMFS on a NetApp SAN, you need to make sure the partitions are properly aligned, otherwise you will end up with performance issues. File system misalignment is a known issue when virtualizing. Also, when deploying LUNs from a NetApp appliance, make sure not to repartition the LUN or you will lose the alignment; just create a filesystem on top of it.

NetApp provides a great technical paper about this at: http://media.netapp.com/documents/tr-3747.pdf

In this post I will show you how to align an empty vmdk disk/LUN using the open source utility GParted. This is for new vmdk disks/LUNs; don't do it on disks that contain data, as you will lose it. This is for golden templates that you want aligned, so subsequent virtual machines inherit the right alignment, or for servers that need a NetApp LUN attached.

The resulting partition works for Linux and Windows; just create a filesystem on top of it.

You can find GParted at: http://sourceforge.net/projects/gparted/files/

1. Boot the VM from the GParted CD/ISO. Click on the terminal icon to open a terminal:

2. Check the partition starting offsets. In this case I have 3 disks; 2 are already aligned (starting at sector 64) and I will align the new disk as well.

3. Create an aligned partition on the drive using fdisk

gparted# fdisk /dev/sdc

Below is a screenshot of the answers to fdisk; the important option is to start the partition at sector 64, as indicated.
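
In case the screenshot does not render, the fdisk dialog goes roughly like this (a sketch; prompts vary between fdisk versions, and -u makes fdisk work in sectors):

gparted# fdisk -u /dev/sdc
Command (m for help): n                 <-- new partition
Command action: p                       <-- primary
Partition number (1-4): 1
First sector: 64                        <-- start at sector 64 for alignment
Last sector: (press Enter for default)  <-- use the rest of the disk
Command (m for help): w                 <-- write the partition table and exit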

4. Now check again and the partition should be aligned

[root@server ~]# fdisk -lu

Disk /dev/sda: 209 MB, 209715200 bytes
64 heads, 32 sectors/track, 200 cylinders, total 409600 sectors
Units = sectors of 1 * 512 = 512 bytes

Device Boot      Start        End     Blocks  Id  System
/dev/sda1   *       64     409599     204768  83  Linux

Disk /dev/sdb: 77.3 GB, 77309411328 bytes
255 heads, 63 sectors/track, 9399 cylinders, total 150994944 sectors
Units = sectors of 1 * 512 = 512 bytes

Device Boot      Start        End     Blocks  Id  System
/dev/sdb1   *       64   41943039   20971488  8e  Linux LVM

Disk /dev/sdc: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes

Device Boot      Start        End     Blocks  Id  System
/dev/sdc1           64  209715199  104857568  83  Linux