
One of the most important things you can do for your systems is to ensure they keep the correct time.
In this post I will show how to check and set the time on your ESXi hosts using PowerCLI.

==> Login to vCenter:

$admin = Get-Credential -Credential EXAMPLE\john
Connect-VIServer -Server vc.example.com -Credential $admin

==> Check time settings:
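A sketch of one way to check this with PowerCLI (QueryDateTime() is exposed by each host's DateTimeSystem view):

Get-VMHost | Select-Object Name,
    @{N='CurrentTime';E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem).QueryDateTime()}},
    @{N='NtpServers';E={($_ | Get-VMHostNtpServer) -join ','}}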

Output:

==> Set time to correct time:
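One way is to push your workstation's current UTC time to each host; a sketch:

Get-VMHost | ForEach-Object {
    $dts = Get-View $_.ExtensionData.ConfigManager.DateTimeSystem
    $dts.UpdateDateTime([DateTime]::UtcNow)
}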

==> Remove old NTP servers (if any):
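Something along these lines clears whatever NTP servers are currently configured:

Get-VMHost | ForEach-Object {
    $old = $_ | Get-VMHostNtpServer
    if ($old) { Remove-VMHostNtpServer -VMHost $_ -NtpServer $old -Confirm:$false }
}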

Output:

==> Change NTP to desired configuration:
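For example, pointing all hosts at the same servers (replace the pool servers with your own):

Get-VMHost | Add-VMHostNtpServer -NtpServer '0.pool.ntp.org','1.pool.ntp.org'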

Output:

==> Enable Firewall Exception
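The NTP client firewall rule can be enabled like so:

Get-VMHost | Get-VMHostFirewallException -Name 'NTP client' | Set-VMHostFirewallException -Enabled:$true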

==> Start NTPd service
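For example:

Get-VMHost | Get-VMHostService | Where-Object {$_.Key -eq 'ntpd'} | Start-VMHostService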

==> Ensure NTPd service starts automatically (via policy)
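Setting the startup policy to On makes the service start and stop with the host:

Get-VMHost | Get-VMHostService | Where-Object {$_.Key -eq 'ntpd'} | Set-VMHostService -Policy On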

==> Verify all is set the way you expected
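A combined check along the lines of the first step:

Get-VMHost | Select-Object Name,
    @{N='CurrentTime';E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem).QueryDateTime()}},
    @{N='NtpServers';E={($_ | Get-VMHostNtpServer) -join ','}},
    @{N='NtpdRunning';E={($_ | Get-VMHostService | Where-Object {$_.Key -eq 'ntpd'}).Running}}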

Output:

In a previous post I showed how to create a vagrant box from scratch and add it to Vagrant: Creating a custom Vagrant Box

Traditionally you would place the Vagrant box on a web server and then reference it in a Vagrantfile as follows:
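A minimal sketch of the traditional, unversioned approach (box name and URL are placeholders):

Vagrant.configure("2") do |config|
  config.vm.box     = "rhel6-vanilla"
  config.vm.box_url = "http://webserver.example.com/vagrant/boxes/vagrant_rhel6_vanilla.box"
end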

In this post I will show you how to add versioning to your Vagrant boxes, so that whenever the base box is changed, updated, or patched, your end users/developers are automatically notified and can get the latest box with a simple ‘vagrant box update’ command.

1) Create the updated Vagrant Box
(e.g. vagrant_rhel6_vanilla_010.box for version 0.1.0) – you can use the procedure: Creating a custom Vagrant Box

2) Get a checksum of the box – You will need it later for the JSON Box catalog
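For example with sha1sum (or, below, with openssl):

sha1sum vagrant_rhel6_vanilla_010.box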

OR
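openssl sha1 vagrant_rhel6_vanilla_010.box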

3) Upload the Box to the Vagrant Box web server repository:
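For example with scp (server and path are placeholders matching the layout recommended below):

scp vagrant_rhel6_vanilla_010.box user@webserver.example.com:/var/www/html/vagrant/boxes/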

I recommend using the following web directory structure:
../vagrant/boxes serves the actual boxes/VMs
../vagrant serves the Box metadata JSON file (I explain what this is below)

Also, ensure the web server returns application/json as the Content-Type.
For Apache, check that the following configuration is present (add it if missing):
AddType application/json .json

To verify if your webserver is returning the correct Content-Type:
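A quick check with curl (catalog URL is a placeholder):

curl -sI http://webserver.example.com/vagrant/rhel6-vanilla.json | grep -i Content-Type
# Content-Type: application/json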

4) Create a Box catalog to be served by the web server:

The box catalog is a JSON metadata file that allows you to serve multiple versions of a box and enables update notifications. A minimal example follows the field descriptions below.
Fields:
name: Name of the Vagrant box; this is what you will put under config.vm.box in the Vagrantfile
versions: An array/list where you put the different versions you are hosting/serving
url: The actual download location of the .box file for that version
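A sketch of such a catalog (names, URLs, and checksum are placeholders; the provider here assumes VMware):

{
  "name": "rhel6-vanilla",
  "description": "RHEL6 vanilla base box",
  "versions": [
    {
      "version": "0.1.0",
      "providers": [
        {
          "name": "vmware_desktop",
          "url": "http://webserver.example.com/vagrant/boxes/vagrant_rhel6_vanilla_010.box",
          "checksum_type": "sha1",
          "checksum": "<sha1 from step 2>"
        }
      ]
    }
  ]
}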

5) Modify the Vagrantfile to use the json box catalog file, instead of the actual box name
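A sketch, with config.vm.box_url now pointing at the catalog rather than the box:

Vagrant.configure("2") do |config|
  config.vm.box     = "rhel6-vanilla"
  config.vm.box_url = "http://webserver.example.com/vagrant/rhel6-vanilla.json"
end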

The first time you move from a traditional box to a versioned box, you need to delete the traditional box that is cached on your system.
This is only needed if you have a traditional box cached.
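To drop the cached traditional box (using the box name as it appears in vagrant box list):

vagrant box remove rhel6-vanilla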

6) Vagrant up and see Vagrant download the versioned Box

7) Verify cached Vagrant Box is the correct version
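vagrant box list shows the cached boxes and their versions; you would expect something like:

vagrant box list
# rhel6-vanilla (vmware_desktop, 0.1.0)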

At this point you have a Vagrant environment that supports versions and update status notifications, let’s create a new version of the Box and see it in action.

8) The Vagrant administrator creates an updated box
(e.g. vagrant_rhel6_vanilla_011.box for version 0.1.1) – you can use the procedure: Creating a custom Vagrant Box

9) The Vagrant administrator places the Vagrant Box in the web server

10) The Vagrant administrator adds the new Vagrant Box to the Box manifest Catalog
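In the catalog this means appending a new entry to the versions array, e.g.:

{
  "version": "0.1.1",
  "providers": [
    {
      "name": "vmware_desktop",
      "url": "http://webserver.example.com/vagrant/boxes/vagrant_rhel6_vanilla_011.box",
      "checksum_type": "sha1",
      "checksum": "<sha1 of the new box>"
    }
  ]
}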

That’s it from the Vagrant administrator’s perspective!

11) End user/developer experience: User gets notified on the next Vagrant up and can easily update:

Every time there is an update the end user gets notified and can upgrade by simply running:
vagrant box update

To verify, check cached Vagrant Boxes:
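Again with vagrant box list; after an update both versions are cached, something like:

vagrant box list
# rhel6-vanilla (vmware_desktop, 0.1.0)
# rhel6-vanilla (vmware_desktop, 0.1.1)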

To delete older cached Vagrant Box no longer needed:
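The older version can be removed by passing --box-version:

vagrant box remove rhel6-vanilla --box-version 0.1.0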

This improves the user/developer experience because they just need to run ‘vagrant box update’ any time there is an updated box from the Vagrant administrator.

Sometimes you need to know how much data can be transferred per second on your network (LAN or WAN), and sometimes you need to find out how long it would take for X amount of data to be fully transferred.

For example you want to know how long it would take to seed data from a production site to a DR site or to the cloud.

The below aims to show how to find the theoretical numbers. I say theoretical because the result is not exact; it changes based on network conditions, jitter, protocol overhead, etc., but it can give you a good indication of what you are dealing with.

Find how much data you can theoretically transfer per second:

Example:
If you have a 1Gbps network link.

How much data can you theoretically transfer per second?

# Network metric conversion uses decimal math

1 Gbps = 1000 Mbps = 1,000,000 Kbps = 1,000,000,000 bps

8 bits = 1 Byte, so divide the bit rate by 8 to get the byte rate.

So it will be:
1,000 Mbps / 8

= 125 MB can be transferred per second over a 1Gbps network link.

If you want to measure how long it will theoretically take to transfer X amount of data, you can do the following:

Example:

20TB need to be transferred on a 150Mbps link.

How many days will it take to transfer that amount of data over the network?

# Data conversion uses base 2

20 TB = 20 x (2^10) GB = 20 x (2^20) MB = 20 x (2^30) KB = 20 x (2^40) Bytes

So it will be:
( (20 x (2^20)) x 8 ) / 150 = 1,118,481.0667 seconds

1,118,481.0667 / 3600 = 310.69 hours

310.69 / 24

= 12.95 days to transfer 20TB over a 150Mbps network link
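As a sanity check, the whole computation fits in one bc(1) invocation:

echo '20 * 2^20 * 8 / 150 / 3600 / 24' | bc -l
# 12.945...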

VMware ESXi can take advantage of Flash/local SSDs in multiple ways:

  • Host swap cache (since 5.0): ESXi will use part of an SSD datastore as swap space shared by all VMs. This means that when there is ESXi memory swapping, the host will swap to the SSD drives, which is faster than HDD but still slower than RAM.
  • Virtual SAN (VSAN) (since 5.5 with VSAN licensing): You can combine the local HDDs and local SSDs on each host and basically create a distributed storage platform. I like to think of it as a RAIN (Redundant Array of Independent Nodes).
  • Virtual Flash/vFRC (since 5.5 with Enterprise Plus): With this method the SSD is formatted with VFFS and can be configured as a read and write-through cache for your VMs. It allows ESXi to locally cache virtual machine read I/O and survives VM migrations as long as the destination ESXi host has Virtual Flash enabled. To use this feature, VMs need to be at hardware version 10.

Check if the SSD drives were properly detected by ESXi

From vSphere Web Client

Select the ESXi host with Local SSD drives -> Manage -> Storage -> Storage Devices

See if it shows as SSD or Non-SSD, for example:


From CLI:
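From esxcli (a sketch):

esxcli storage core device list
# for each device, check:   Is SSD: true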

To enable the SSD option on the SSD drive

At this point you should put your host in maintenance mode because it will need to be rebooted.

If the SSD is not properly detected you need to use storage claim rules to force it to be type SSD. (This is also useful if you want to fake a regular drive to be SSD for testing purposes)

Add a PSA claim rule to mark the device as SSD (if it is not local (e.g. SAN))

For example (in case this was a SAN attached LUN)
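Something along these lines (the SATP, here VMW_SATP_ALUA, and the naa device ID must match your LUN):

esxcli storage nmp satp rule add --satp VMW_SATP_ALUA --device naa.xxxxxxxxxxxxxxxx --option "enable_ssd"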

 

Add a PSA claim rule to mark the device as Local and SSD at the same time (if the SSD drive is local)

For the device in my example it would be:
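A sketch (device ID is a placeholder):

esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.xxxxxxxxxxxxxxxx --option "enable_local enable_ssd"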

Reboot your ESXi host for the changes to take effect.

 

To remove the rule (for whatever reason, including testing and going back)
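The rule is removed with the matching remove subcommand:

esxcli storage nmp satp rule remove --satp VMW_SATP_LOCAL --device naa.xxxxxxxxxxxxxxxx --option "enable_local enable_ssd"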

Once the ESXi server is back online verify that the SSD option is OK

From vSphere Web Client

Select the ESXi host with Local SSD drives -> Manage -> Storage -> Storage Devices

See if it shows as SSD or Non-SSD, for example:


From CLI:
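Again via esxcli, now scoped to the device (placeholder ID):

esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx
# Is SSD: true
# Is Local: true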

Exit Maintenance mode.

Do the same on ALL hosts in the cluster.

Configure Virtual Flash

Now that the ESXi server recognizes the SSD drives, we can enable Virtual Flash.

You need to perform the below steps from the vSphere Web Client on all ESX hosts

ESXi host -> Manage -> Settings -> Virtual Flash -> Virtual Flash Resource Management -> Add Capacity…


You will see that the SSD device has been formatted with the VFFS filesystem; it can be used to allocate space for the virtual flash host swap cache or to configure Virtual Flash Read Cache for virtual disks.


Configure Virtual Flash Host Swap

One of the options you have is to use the Flash/SSD as Host Swap Cache, to do this:

ESXi host -> Manage -> Settings -> Virtual Flash -> Virtual Flash Host Swap Cache Configuration -> Edit…

// Enable and select the size of the cache in GB


Configure Flash Read Cache

Flash Read Cache is configured on a per-VM, per-VMDK basis. VMs need to be at virtual hardware version 10 in order to use vFRC.

To enable vFRC on a VM’s harddrive:

VM -> Edit Settings -> Expand Hard Disk -> Virtual Flash Read Cache

Enter the size of the cache in GB (e.g. 20)

You can start conservative and increase if needed; I start with 10% of the VMDK size. Below, in the Monitor your vFRC section, you will find tips to right-size your cache.


If you click on Advanced, you can change the block size of the Read Cache (the default is 8k); this allows you to optimize the cache for the specific workload the VM is running.


The default block size is 8k, but you may want to right-size this based on the application/workload to be able to use the cache efficiently.

If you don't right-size the block-size of the cache you could be hurting its efficiency:

  • If the workload has block sizes larger than the configured block-size then you will have increased cache misses.
  • If the workload has block sizes smaller than the configured block-size then you will be wasting precious cache.

Correctly size the block-size of your cache

To correctly size the block-size of your cache you need to determine the dominant I/O length/size of the workload, which becomes the cache block size:

Login to the ESX host running the workload/VM for which you want to enable vFRC

 

Find world ID of each device
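vscsiStats ships with ESXi; -l lists the world group IDs per VM along with their virtual disks:

/usr/lib/vmware/bin/vscsiStats -l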

 

Start gathering statistics on World ID // Give it some time while it captures statistics
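Collection is started per world group (placeholder ID):

/usr/lib/vmware/bin/vscsiStats -s -w <worldGroupID>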

Get the IO length histogram to find the most dominant IO length

You want the IO length for the harddisk you will enable vFRC, in this case scsi0:1

(-c means compressed output)
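A sketch:

/usr/lib/vmware/bin/vscsiStats -p ioLength -c -w <worldGroupID>
# when done, stop collection:
/usr/lib/vmware/bin/vscsiStats -x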

As you can see, in this specific case, 16383 (16k) is the most dominant IO length, and this is what you should use in the Advanced options.


Now you are using a Virtual Flash Read Cache on that VM’s harddisk, which should improve the performance.

Monitor your vFRC

Login to the ESX host running the workload/VM for which you enabled vFRC, in the example below it is a 24GB Cache with 4K block-size:
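On 5.5 the vFRC stats are exposed through esxcli; a sketch (option names can vary by build, so check esxcli storage vflash cache stats get --help):

esxcli storage vflash cache list
esxcli storage vflash cache stats get -c <cache-name>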

There is a lot of important information here:
The cache hit rate shows the percentage of I/Os served from the cache. A high number is better because it means reads hit the cache more frequently.
Other important items are IOPS and latency.

These stats also show information that can help you right-size your cache: a high number of cache evictions (Evict -> Mean blocks per I/O operation) can indicate that your cache is too small or that the block-size of the cache is incorrectly configured.

To calculate the number of blocks available in the cache, do the following:
SizeOfCache (in bytes) / BlockSizeOfCache (in bytes) = #ofBlocksInvFRC

For the example, a 24GB cache with 4k block-size will have 6,291,456 blocks in the vFRC:

25,769,803,776 / 4,096 = 6,291,456

 

In the stats above we see 5,095,521 as the mean number of cache blocks in use, and no evictions, which indicates that a 24GB cache with 4k blocks is correctly sized.

Keep monitoring your cache to gain as much performance as you can from your Flash/SSD devices.

Recently while working on a project I found that we were keeping .svn metadata in our Git repo.

It was not desirable to keep that in the Git repo, but we needed to keep those .svn objects in the local filesystem.

The below steps allowed me to do that:

1) Find all the .svn objects
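For example:

find . -type d -name .svn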

2) Remove them from Git
The (--cached) flag keeps them on disk and (-r) does it recursively
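Combining with the find from step 1:

find . -type d -name .svn -print0 | xargs -0 git rm -r --cached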

3) Add a .gitignore  file with contents below to prevent tracking of .svn objects
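For example:

.svn/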

4) Commit changes
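For example:

git add .gitignore
git commit -m "Stop tracking .svn metadata"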

5) Verify
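A quick check that nothing under .svn is tracked any more, while the files remain on disk:

git ls-files | grep '\.svn' | wc -l
# expect 0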

If you are running your VMware infrastructure on NetApp storage, you can utilize NetApp’s Virtual Storage Console (VSC), which integrates with vCenter to provide a strong, fully integrated solution for managing your storage from within vCenter.

With VSC you can discover, monitor health and capacity, provision, perform cloning, backups and restores, as well as optimize your ESX hosts and fix misaligned VMs.

The use case I will write about is the ability to take a backup of all of your production Datastores and initiate a SnapMirror transfer to DR.

Installing NetApp’s Virtual Storage Console

Download the software from NetApp’s website (need credentials) from the software section: VSC_vasavp-5-0.zip (version as of this post)

Install on a Windows System (can be vCenter if using Windows vCenter)

There are currently a couple of bugs in version 5.0 that can be addressed by following these articles; hopefully they will be fixed soon by NetApp:

http://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=821600

and

http://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=767444

Follow the wizard…

 


Select Backup and Recovery to be able to use these features

You may get a warning here; this is where you need to follow the bug fixes specified earlier (adding a line to smvi.override)

Then you need to enter the information requested:

Plugin service information: hostname/IP of the server you installed VSC (in this case it was the vCenter server)

Then enter the vCenter information


Check that the registration was successful


Verify that it is installed in the vCenter Web Client


Configure the NetApp Virtual Storage Console from the vCenter Web Client

On the vCenter Web Client click on the Virtual Storage Console icon


Click on ‘Storage Systems’ and add your NetApp controllers, including your DR controllers (you will need this to successfully initiate SnapMirror after backups)


Once you have added them, you will be able to see their details and status; take a look at the summary and related objects. Also click on the ‘View Details’ link(s); they provide a wealth of information about your storage


Go back to the main page of the Virtual Storage Console and you will see global details


With the above setup you can start provisioning storage, creating backups/restores, mounting snapshots and looking at the details of every object from a storage perspective. Take a look at the Datacenter, Datastores, VMs.


Configure Datastore Backups followed by NetApp SnapMirror for Disaster Recovery

Pre-requisites:

You need to have an initialized SnapMirror relationship

Create an empty schedule by adding the following line to /etc/snapmirror.conf
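The line follows snapmirror.conf syntax with dashes in the schedule fields so it never fires on its own and VSC can trigger the update (controller and volume names are hypothetical):

# /etc/snapmirror.conf on the DR controller
prodfiler:ds_vol1  drfiler:ds_vol1_mirror  -  - - - -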

Ensure you have added your production NetApp controllers as well as your DR controllers in the vCenter Web Client Virtual Storage Console

Configuration:

In vCenter Web Client, go to your Datastores view.

(Optional but recommended) Enable Deduplication in your Datastores

// This will save storage and increase the efficiency of the replication because you will only replicate deduplicated data. To do so:

Right click on a Datastore -> NetApp VSC -> Deduplication -> Enable

Right click on a Datastore -> NetApp VSC -> Deduplication -> Start (Select to scan the entire volume)


By default the deduplication process is scheduled daily at midnight; I recommend having it run at least 2 hours before the SnapMirror replication.

For example:

Deduplication: daily at 8pm

SnapMirror: daily at 10pm

To change the default schedule of the deduplication process per volume you need to do the following on the NetApp controllers CLI:
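On a 7-Mode controller this is done with sis config; a sketch (volume name is hypothetical; sun-sat@20 means every day at 20:00):

sis config -s sun-sat@20 /vol/ds_vol1
sis config /vol/ds_vol1          # verify the new schedule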

Schedule the Backup and SnapMirror Update

Right click on a Datastore -> NetApp VSC -> Backup -> Schedule Backup


Add other Datastores to the same backup job (please remember that for SnapMirror Update to work you need to have pre-created the SnapMirror relationship).

Right click on the other Datastores -> NetApp VSC -> Backup -> Add to Backup Job

You will see the already created backup job (10pm_backup), select it and click ok.


At this point, all the Datastores you selected will be deduplicated, backed up and replicated to the DR site.

Restoring on the Prod or DR site

Now that NetApp VSC is set up, backing up and replicating data, we can restore at will from the snapshots.

Restore a VM (entire VM or some of its virtual disks)

Right click on VM -> NetApp VSC -> Restore

Select backup from the list and choose to restore entire VM or just some disks

Restore from Datastore

Right click on Datastore -> NetApp VSC -> Restore

Select backup from the list and choose what to restore

Mount a Snapshot (it will show as another Datastore and you can retrieve files or even start VMs)

Click on a Datastore and go to Related Objects -> Backups

Select Backup, Right-Click and select Mount

You will see the datastore present and mounted to one ESX host, from there you can retrieve files, start VMs, etc.

Once you are done go back to the Datastore and unmount the Backup.

 

You can use the plethora of Vagrant boxes available online, but there are times when you want/need to create your own Vagrant box, based on specific requirements, with specific packages, and most importantly with a specific SSH key pair.

In this guide I will walk through the process of creating a Vagrant BOX from scratch. I will be using a Mac with VMware Fusion and my Vagrant Box will be based on RHEL/CentOS.

1) Create a kickstart configuration file that you will use to automate the installation.
You can do it manually, but I prefer/recommend using a kickstart file because you are software-defining the resulting machine and you can source-control it. I placed the kickstart on a web server, in this case on my own Mac, so that it is reachable by the VM during installation.

Pre-requisites:
root password:
You can use the password ‘vagrant‘ as is on the kickstart, or you can provide the SHA512 hash of the root password you want. You can do this in several ways from a Linux terminal
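One of several ways, using Python's crypt module (replace the salt with a random string of your own):

python -c 'import crypt; print(crypt.crypt("vagrant", "$6$randomsalt$"))'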

SSH key pair:
You need to create a key pair that you will use for Vagrant. The way it works is that the generated public key will be added to the authorized keys allowed to ssh as the user ‘vagrant’ in the custom Vagrant box, while the vagrant software on the host (Mac) will use the private key to connect from the host (Mac) to the Vagrant box (VM)
You can create the ssh key pair as follows:
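For example (the key path and comment are up to you):

ssh-keygen -t rsa -b 2048 -N "" -C "vagrant" -f ~/.ssh/vagrant_id_rsa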

Once you have the password and SSH keys, you can use the below kickstart (replacing the password and SSH public key accordingly)
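A minimal sketch of such a kickstart for RHEL/CentOS 6 (the password hash, public key, and partitioning choices are placeholders to adjust):

install
cdrom
text
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp
# paste the SHA512 hash generated above
rootpw --iscrypted $6$randomsalt$...
authconfig --enableshadow --passalgo=sha512
timezone --utc UTC
bootloader --location=mbr
zerombr
clearpart --all --initlabel
autopart
poweroff

%packages --nobase
@core
%end

%post
/usr/sbin/useradd vagrant
mkdir -m 0700 /home/vagrant/.ssh
# paste the public key generated above
cat > /home/vagrant/.ssh/authorized_keys <<'EOF'
ssh-rsa AAAA... vagrant
EOF
chmod 0600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/.ssh
echo 'vagrant ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/vagrant
chmod 0440 /etc/sudoers.d/vagrant
%end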

2) Creating the custom VM

Add new VM

Continue without Disc


Create a custom virtual machine

Choose RHEL6_64

Click on Customize settings

 

Select 1CPU and 1GB of RAM (The VM should be as minimal as possible, you can increase CPU and RAM from Vagrant)

 

Disable unnecessary stuff (print sharing, bluetooth, sound card)

 

3) Start installation

Boot VM and add the RHEL/CentOS ISO disk (Choose Disc or Disc Image…)

Point to the kickstart file served by the web server
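At the installer boot prompt, the kickstart location is passed with a ks= parameter (IP and path are placeholders for your web server):

linux ks=http://192.168.1.10/ks.cfg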

Installation will proceed and the VM will be powered off (based on the kickstart ‘poweroff’ option)

4) Once the VM is off, disable the CD drive

5) Quit VMware Fusion (this is to clear the lock files so that you have a pristine VM)

 

6) Defragment and shrink the VMDK
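VMware Fusion bundles vmware-vdiskmanager, which can do both (-d defragments, -k shrinks; the VM folder and vmdk name are placeholders):

cd ~/Documents/Virtual\ Machines/rhel6-vanilla.vmwarevm
/Applications/VMware\ Fusion.app/Contents/Library/vmware-vdiskmanager -d rhel6-vanilla.vmdk
/Applications/VMware\ Fusion.app/Contents/Library/vmware-vdiskmanager -k rhel6-vanilla.vmdk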

 

7) In the folder where the virtual machine was created, create the following metadata.json file:
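For a VMware Fusion box it just declares the provider:

{
  "provider": "vmware_desktop"
}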

8) Create the Vagrant box container
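A box is simply a gzipped tarball of the VM files plus metadata.json; a sketch, run from inside the .vmwarevm directory (the box is written one level up so tar does not include its own output):

tar czvf ../vagrant_rhel6_vanilla.box ./*.vmdk ./*.nvram ./*.vmsd ./*.vmx ./*.vmxf metadata.json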

9) Add the Box to Vagrant
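For example:

vagrant box add rhel6-vanilla vagrant_rhel6_vanilla.box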

Note: You can also place this Vagrant box on a web server so that it can be shared with others; in my case I uploaded the box to a web server:

Update: You can create a Vagrant Box Catalog that will allow versioning of the Boxes and allows the end-users/developers to get automatic update notifications when you have an updated Vagrant box: Versioning and Update Notifications for your Vagrant boxes

In this guide I will go through the process of booting from an external USB hard drive in VMware Fusion.
The main use case is the ability to take the hard drive of an existing physical server and boot from that physical hard drive into a VMware Fusion VM.
Another use case (my latest): I enrolled in a technical training course and the vendor shipped a bootable USB external hard drive with a Linux OS installed as a lab, expecting me to boot from it using a PC. That works great, but I wanted to use my MacBook and be able to run this lab while on the road.
As soon as I tried to boot from it using my Mac I got a kernel panic, due to missing drivers, etc.
So I decided to use a VM in VMware Fusion, as follows:

1) Check the system before plugging in your USB external hard drive:
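On the Mac, diskutil shows the disks currently attached:

diskutil list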

2) Plug in your USB external hard drive and look for the new disk:
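Run it again and compare:

diskutil list
# the new disk appears as an extra device, e.g. /dev/disk2 (yours may differ)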

3) In VMware Fusion create a VM as follows:

Create a New VMware Fusion VM:


Customize it as you wish (I removed sound and printers, and modified the RAM and CPU, etc.)




 

Also remove the VMDK that VMware Fusion created, as you don’t need it (unless you actually need it)


 

OK, the VM creation is complete; now you have to actually use the physical hard drive, as shown below

 

4) Create a RawDisk VMDK in the newly created VM that will point to the USB external hard drive
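VMware Fusion ships a vmware-rawdiskCreator utility for this; a sketch (the /dev/disk2 device from step 2 and the output path are placeholders):

/Applications/VMware\ Fusion.app/Contents/Library/vmware-rawdiskCreator create /dev/disk2 fullDevice ~/Documents/Virtual\ Machines/usbboot.vmwarevm/usbdisk lsilogic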

5) Add the disk to your VM configuration (.vmx file)
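Then reference the generated VMDK from the .vmx; the bus and slot here are examples:

scsi0:1.present = "TRUE"
scsi0:1.fileName = "usbdisk.vmdk"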

6) Power on your VM and voilà! You should see your VM booting from the USB external hard drive


 

I needed to keep some content on my laptop synchronized to a NAS. Rsync is the tool of choice, but the simple rsync command I was using was not working:
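The original command was something like this (paths and hostname are hypothetical):

rsync -av ~/data/ admin@nas:/volume1/backup/data/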

In the above case, the NAS’s SSH daemon was not listening on the default SSH port 22; it was listening on 10022.
So I modified my rsync command accordingly and tried it again:
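The -e option lets you pass the non-standard port to the underlying ssh:

rsync -av -e "ssh -p 10022" ~/data/ admin@nas:/volume1/backup/data/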

This time the rsync program was not found on the NAS, but I knew I had installed the rsync module on the NAS. What happened is that the rsync binary was not on the PATH, so it could not be found; fortunately, rsync allows you to specify the location of the remote rsync binary.
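--rsync-path points at the remote binary (the location here is a guess typical of NAS add-on packages):

rsync -av -e "ssh -p 10022" --rsync-path=/opt/bin/rsync ~/data/ admin@nas:/volume1/backup/data/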

Now I can successfully rsync to the NAS, even though the SSH port and the Rsync binary path are different than the defaults.

In this post I will build a very simple RPM that contains a very useful program/shell script.
With this information you can build more complex RPMs later on.

Set up your build environment

In this case I am using a RHEL 6.5 64bit system

Install the tools:
rpm-build: is what you need to build RPMs
rpmdevtools: is not required but it is very helpful because it helps you create the directory tree and base SPEC file
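Install both via yum (as root):

yum install -y rpm-build rpmdevtools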

Create a non-privileged user to build the RPMs
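For example (the username is arbitrary):

useradd rpmbuilder
su - rpmbuilder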

Create the directory tree using rpmdev-setuptree
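Run as the build user:

rpmdev-setuptree
# creates ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS} plus ~/.rpmmacros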

Package Application

Work on packaging your application/program (e.g. very_useful_script.sh)
The folder and archive naming are important for later when they get unarchived: the rpm tools will by default expect name-version (e.g. name=very-useful-script, version=1.0, which is why the folder/archive is named very-useful-script-1.0/).
That is the default and can be easily changed in the SPEC file.
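For example:

mkdir very-useful-script-1.0
cp very_useful_script.sh very-useful-script-1.0/
tar czvf very-useful-script-1.0.tar.gz very-useful-script-1.0/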

Move your packaged application to the SOURCES directory under rpmbuild/
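mv very-useful-script-1.0.tar.gz ~/rpmbuild/SOURCES/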

Now it is time to create the SPEC

Create a skeleton spec file
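rpmdevtools provides a generator; it writes very-useful-script.spec to the current directory:

rpmdev-newspec very-useful-script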

Move it to your directory tree
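mv very-useful-script.spec ~/rpmbuild/SPECS/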

This is how your directory tree should look:
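Roughly, assuming the steps above:

rpmbuild/
├── BUILD
├── RPMS
├── SOURCES
│   └── very-useful-script-1.0.tar.gz
├── SPECS
│   └── very-useful-script.spec
└── SRPMS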

SPEC file:
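A spec along the lines dissected below (Summary, Group, and License values are illustrative placeholders):

Name:           very-useful-script
Version:        1.0
Release:        1%{?dist}
Summary:        A very useful shell script
Group:          Applications/System
License:        GPLv2
Source0:        very-useful-script-1.0.tar.gz
BuildRoot:      %{_tmppath}/%{name}-%{version}-%{release}-root
BuildArch:      noarch

%description
Installs very_useful_script.sh into /usr/local/bin.

%prep
%setup -q

%build
# nothing to compile for a shell script

%install
rm -rf %{buildroot}
mkdir -p %{buildroot}/usr/local/bin
install -m 755 very_useful_script.sh %{buildroot}/usr/local/bin/very_useful_script.sh

%clean
rm -rf %{buildroot}

%files
%defattr(-,root,root,-)
/usr/local/bin/very_useful_script.sh

%changelog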

Dissecting the SPEC file
The below is header information and just descriptive data

The below is where we prepare our sources to be packaged into RPM
%prep is a section where we can execute commands or use macros.
%setup is a macro that unarchives the original sources.
Earlier I discussed the importance of naming the folder and archive name-version; this is because the %setup macro expects that by default, but you can override the default by specifying the folder name (e.g. %setup -q -n very-useful-script-1.0-john-x86)

OR

The below removes previous remains of the files in the buildroot
Then creates a folder /usr/local/bin/ in the buildroot
Then puts our very_useful_script.sh in /usr/local/bin with mode 755

The below just cleans the buildroot

The below specifies all the files that will be installed by the RPM
You need to list them all, or use wildcards

Build the RPM using the SPEC file
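For example (-ba builds both the binary and source RPMs):

rpmbuild -ba ~/rpmbuild/SPECS/very-useful-script.spec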

After the RPM has been successfully built, you can find it under:
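With the noarch spec above and an el6 dist tag, the path would look something like:

~/rpmbuild/RPMS/noarch/very-useful-script-1.0-1.el6.noarch.rpm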

Install it (need to be root)
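For example:

rpm -ivh ~/rpmbuild/RPMS/noarch/very-useful-script-1.0-1.el6.noarch.rpm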

Hopefully this guide will help you when building RPMs.