
In a previous post I wrote about how to Create a custom Vagrant Box from scratch.

In this post I will walk through using Packer to automate the creation of a CentOS 7 image that can be used with Vagrant or even vSphere.

Packer is a cool HashiCorp tool that helps you automate the creation of machine images.

You can find the project/Git repo at https://github.com/parcejohn/packer-centos7

From their website:
“Packer is easy to use and automates the creation of any type of machine image. It embraces modern configuration management by encouraging you to use automated scripts to install and configure the software within your Packer-made images. Packer brings machine images into the modern age, unlocking untapped potential and opening new opportunities.”

I use Packer to continue the journey toward Infrastructure as Code, where even my golden images/templates are automated and source-controlled (in Git).

Requirements:

* Packer // I installed it on my Mac using $ brew install packer
* Vagrant // I installed it on my Mac using $ brew install vagrant
* VMware Fusion // I installed it using $ brew cask install vmware-fusion
* CentOS 7 ISO file

Packer uses a JSON template file to orchestrate the image creation. There are several stages in the process; I will concentrate on the three below, but you should familiarize yourself with all of them (https://www.packer.io/docs/basics/terminology.html)

Builders: Packer components that create a machine image for a single platform; in this case I will be using the VMware builder.
Provisioners: Packer components that install and configure software within a running machine prior to that machine being turned into a static image. Example provisioners include shell scripts, Chef, Puppet, etc. I will be using the shell provisioner.
Post-Processors: Packer components that take the result of a builder or another post-processor and process it to create a new artifact. Examples of post-processors are compress to compress artifacts, upload to upload artifacts, etc. I will be creating a Vagrant box as the artifact, and will also demonstrate how to upload the image to vSphere.

It’s time to walk through the process of creating a CentOS 7 image using Packer.

1) Create directory structure
The http folder will host a kickstart file; Packer will serve this kickstart file with its built-in web server.
The scripts folder will host the provisioning scripts that define the machine.

centos/
├── http
└── scripts

2) Populate with the following configuration files and scripts (the contents of these files are shown in the next steps)

centos/
├── centos-7.1-x64-vmware.json
├── http
│   └── ks.cfg
├── scripts
│   ├── base.sh
│   ├── cleanup.sh
│   ├── hgfs.sh
│   ├── vmware.sh
│   └── zerodisk.sh
├── template.json
└── vagrant_rsa_key

3) Create an SSH key pair; you will need it to log in to the server to complete the install and configuration.

The public key will be injected into the 'vagrant' user's authorized_keys as part of the kickstart; the Packer JSON template then points at the matching private key so Packer can log in to the machine.
The private key stays on your system and allows Packer and Vagrant to log in to the created machine. The command below writes both halves to the current directory: vagrant_rsa_key (private) and vagrant_rsa_key.pub (public).

$ ssh-keygen -t rsa -b 4096 -C "vagrant" -N '' -q -f ./vagrant_rsa_key

4) Create a Packer JSON Template file (template.json)

{
  "variables": {
    "vm_name": "centos-7.1-vmware",
    "iso_url": "{{env `ISO_URL`}}",
    "iso_sha256": "f90e4d28fa377669b2db16cbcb451fcb9a89d2460e3645993e30e137ac37d284"
  },
  "builders": [
    {
      "headless": true,
      "type": "vmware-iso",
      "boot_command": [
        " text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg"
      ],
      "boot_wait": "10s",
      "disk_size": 8192,
      "guest_os_type": "centos-64",
      "http_directory": "http",
      "iso_url": "{{user `iso_url`}}",
      "iso_checksum_type": "sha256",
      "iso_checksum": "{{user `iso_sha256`}}",
      "ssh_username": "vagrant",
      "ssh_private_key_file": "vagrant_rsa",
      "ssh_port": 22,
      "ssh_wait_timeout": "10000s",
      "shutdown_command": "echo '/sbin/halt -h -p' > /tmp/shutdown.sh; echo 'vagrant'|sudo -S sh '/tmp/shutdown.sh'",
      "tools_upload_flavor": "linux",
      "tools_upload_path": "/tmp/vmware_tools_{{.Flavor}}.iso",
      "vmx_data": {
        "memsize": "1024",
        "numvcpus": "1",
        "cpuid.coresPerSocket": "1"
      }
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "echo 'vagrant'|sudo -S sh '{{.Path}}'",
      "override": {
        "vmware-iso": {
          "scripts": [
            "scripts/base.sh",
            "scripts/vmware.sh",
            "scripts/hgfs.sh",
            "scripts/cleanup.sh",
            "scripts/zerodisk.sh"
          ]
        }
      }
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "override": {
        "vmware": {
          "output": "centos-7.1-x64-vmware.box"
        }
      }
    }
  ]
}
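
Before kicking off a build, it is worth validating the template; packer validate catches JSON syntax and variable problems early. For example, using the same -var override as the build command later in this post:

$ packer validate -var 'iso_url=/Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso' template.json
Template validated successfully.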

Packer Components/Sections:

Variables:
This just centralizes values that are used by other components (e.g. builders, provisioners, etc.).
A variable can also take its value from the command line or from an environment variable:
User-provided values:
"iso_url": "{{user `iso_url`}}",

Environment variable values:
"iso_url": "{{env `ISO_URL`}}",
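
Either mechanism supplies the value at build time. For example, with the template above reading {{env `ISO_URL`}}:

$ export ISO_URL=/Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso
$ packer build template.json

// Or override the value on the command line
$ packer build -var 'iso_url=/Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso' template.json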

Builders:
Here is an array of builders and the needed parameters for each builder, in this case the only builder is of type vmware-iso.
Parameters and some comments about them.

      "headless": true, // Means not to open the VMware Fusion Console
      "type": "vmware-iso", // VMware type builder
      "boot_command": [
        " text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg"  // Send these key combination to the VMWare Fusion Console
      ],
      "boot_wait": "10s",
      "disk_size": 8192,
      "guest_os_type": "centos-64",
      "http_directory": "http",  // Directory where the kickstart file is placed
      "iso_url": "{{user `iso_url`}}", // Location of ISO image, defined at runtime, more on that later
      "iso_checksum_type": "sha256", 
      "iso_checksum": "{{user `iso_sha256`}}",
      "ssh_username": "vagrant",  // Packer will log in to resulting machine for provisioning, this user must exist (user created from the Kickstart) 
      "ssh_private_key_file": "vagrant_rsa", // Packer will log in using ssh key created earlier
      "ssh_port": 22,
      "ssh_wait_timeout": "10000s",
      "shutdown_command": "echo '/sbin/halt -h -p' > /tmp/shutdown.sh; echo 'vagrant'|sudo -S sh '/tmp/shutdown.sh'",
      "tools_upload_flavor": "linux",
      "tools_upload_path": "/tmp/vmware_tools_{{.Flavor}}.iso", 
      "vmx_data": {
        "memsize": "1024",
        "numvcpus": "1",
        "cpuid.coresPerSocket": "1"

Provisioners:
This section uses the shell provisioner and runs all of those scripts listed, which are located in the scripts folder.

      "type": "shell",
      "execute_command": "echo 'vagrant'|sudo -S sh '{{.Path}}'",
      "override": {
        "vmware-iso": {
          "scripts": [
            "scripts/base.sh",
            "scripts/vmware.sh",
            "scripts/hgfs.sh",
            "scripts/cleanup.sh",
            "scripts/zerodisk.sh"

Post-processor:
This section states that I want a Vagrant box as the artifact and gives the output box a name.

      "type": "vagrant",
      "override": {
        "vmware": {
          "output": "centos-7.1-x64-vmware.box"

5) Kickstart file (http/ks.cfg, matching the http_directory setting in template.json)
This is a minimal install of CentOS 7.
Note: The key given to the vagrant user (XXXX below) is the public key we created in a previous step.

install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot yes --device eth0 --bootproto dhcp --noipv6
rootpw  --plaintext vagrant
firewall --enabled --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --disabled
timezone --utc America/New_York
bootloader --location=mbr --driveorder=sda --append="crashkernel=auto rhgb quiet"

text
skipx
zerombr

clearpart --all --initlabel
autopart

firstboot --disabled
reboot

%packages --nobase --ignoremissing 
@core
bzip2
kernel-devel
kernel-headers
-ipw2100-firmware
-ipw2200-firmware
-ivtv-firmware
%end

%post
# Install SUDO
/usr/bin/yum -y install sudo

# Create vagrant user
/usr/sbin/useradd vagrant
/bin/mkdir /home/vagrant/.ssh
/bin/chmod 700 /home/vagrant/.ssh
cat > /home/vagrant/.ssh/authorized_keys <<'VAGRANT_RSA'
ssh-rsa XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX vagrant
VAGRANT_RSA

/bin/chmod 600 /home/vagrant/.ssh/authorized_keys
/bin/chown -R vagrant /home/vagrant/.ssh

# Add vagrant user to SUDO
echo "vagrant        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers.d/vagrant
echo "Defaults:vagrant !requiretty"                 >> /etc/sudoers.d/vagrant
chmod 0440 /etc/sudoers.d/vagrant
%end
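
Rather than pasting the public key into ks.cfg by hand, you can splice in the key generated earlier. A small convenience sketch of my own (not part of the walkthrough; note the BSD sed -i '' syntax since we are on a Mac):

# Replace the placeholder ssh-rsa line in the kickstart with the real public key
$ sed -i '' "s|^ssh-rsa .*|$(cat vagrant_rsa_key.pub)|" http/ks.cfg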

6) Provisioning Scripts
Packer runs these scripts (listed in the JSON template) on the created machine before turning it into a template/image.

base.sh // Install basic packages

#!/usr/bin/env bash
set -x

sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
yum -y install gcc make gcc-c++ kernel-devel-`uname -r` perl

vmware.sh // Install VMware Tools; I separated this from base.sh as I may want to call different scripts for a VirtualBox/etc. builder.

#!/usr/bin/env bash
set -x

yum install -y fuse-libs open-vm-tools

hgfs.sh // Install the tarball VMware Tools for the HGFS module, which Vagrant needs to work properly with shared folders

#!/usr/bin/env bash
set -x

VMWARE_ISO=/tmp/vmware_tools_linux.iso
VMWARE_MNTDIR=$(mktemp --tmpdir=/tmp -q -d -t vmware_mnt_XXXXXX)
VMWARE_TMPDIR=$(mktemp --tmpdir=/tmp -q -d -t vmware_XXXXXX)

# Extract tools
mount -o loop $VMWARE_ISO $VMWARE_MNTDIR
tar zxf $VMWARE_MNTDIR/VMwareTools*.tar.gz -C $VMWARE_TMPDIR
umount $VMWARE_MNTDIR

# Install tools
$VMWARE_TMPDIR/vmware-tools-distrib/vmware-install.pl -d

# Clean up
rm -f $VMWARE_ISO
rm -rf $VMWARE_MNTDIR
rm -rf $VMWARE_TMPDIR

cleanup.sh // Clean up before converting to image

#!/usr/bin/env bash
set -x

yum -y erase gtk2 libX11 hicolor-icon-theme avahi freetype bitstream-vera-fonts
rpm --rebuilddb
yum -y clean all

zerodisk.sh // This is so that the resulting box is as small as possible

#!/usr/bin/env bash
set -x
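# Write zeros into a temp file until the disk is full, then delete it; zeroed
# free space lets the later "Compacting the disk image" step shrink the VMDK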

dd if=/dev/zero of=/EMPTY bs=1M
rm -f /EMPTY

7) Run Packer against the template.json file created earlier and watch it build the machine image

$ packer build -var 'iso_url=/Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso' -only=vmware-iso template.json
vmware-iso output will be in this color.

==> vmware-iso: Downloading or copying ISO
vmware-iso: Downloading or copying: file:///Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso
==> vmware-iso: Creating virtual machine disk
==> vmware-iso: Building and writing VMX file
==> vmware-iso: Starting HTTP server on port 8101
==> vmware-iso: Starting virtual machine...
vmware-iso: The VM will be run headless, without a GUI. If you want to
vmware-iso: view the screen of the VM, connect via VNC without a password to
vmware-iso: 127.0.0.1:5989
==> vmware-iso: Waiting 10s for boot...
==> vmware-iso: Connecting to VM via VNC
==> vmware-iso: Typing the boot command over VNC...
==> vmware-iso: Waiting for SSH to become available...
==> vmware-iso: Connected to SSH!
==> vmware-iso: Uploading the 'linux' VMware Tools
==> vmware-iso: Provisioning with shell script: scripts/base.sh
vmware-iso: + sed -i 's/^.*requiretty/#Defaults requiretty/' /etc/sudoers



==> vmware-iso: Provisioning with shell script: scripts/zerodisk.sh
vmware-iso: + dd if=/dev/zero of=/EMPTY bs=1M
vmware-iso: dd: error writing ‘/EMPTY’: No space left on device
vmware-iso: 5488+0 records in
vmware-iso: 5487+0 records out
vmware-iso: 5754265600 bytes (5.8 GB) copied, 5.27091 s, 1.1 GB/s
vmware-iso: + rm -f /EMPTY
==> vmware-iso: Gracefully halting virtual machine...
vmware-iso: Waiting for VMware to clean up after itself...
==> vmware-iso: Deleting unnecessary VMware files...
vmware-iso: Deleting: output-vmware-iso/564de113-2fc5-2010-8e5d-63fec92f12f7.vmem
vmware-iso: Deleting: output-vmware-iso/packer-vmware-iso.plist
vmware-iso: Deleting: output-vmware-iso/vmware.log
==> vmware-iso: Cleaning VMX prior to finishing up...
vmware-iso: Unmounting floppy from VMX...
vmware-iso: Detaching ISO from CD-ROM device...
vmware-iso: Disabling VNC server...
==> vmware-iso: Compacting the disk image
==> vmware-iso: Running post-processor: vagrant
==> vmware-iso (vagrant): Creating Vagrant box for 'vmware' provider
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s001.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s002.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s003.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.nvram
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmsd
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmx
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmxf
vmware-iso (vagrant): Compressing: Vagrantfile
vmware-iso (vagrant): Compressing: disk-s001.vmdk
vmware-iso (vagrant): Compressing: disk-s002.vmdk
vmware-iso (vagrant): Compressing: disk-s003.vmdk
vmware-iso (vagrant): Compressing: disk.vmdk
vmware-iso (vagrant): Compressing: metadata.json
vmware-iso (vagrant): Compressing: packer-vmware-iso.nvram
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmsd
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmx
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmxf
Build 'vmware-iso' finished.

==> Builds finished. The artifacts of successful builds are:
--> vmware-iso: 'vmware' provider box: centos-7.1-x64-vmware.box

8) Add box to Vagrant

$ vagrant box add --name centos-7.1-010-x64-vmware.box centos-7.1-x64-vmware.box
==> box: Adding box 'centos-7.1-010-x64-vmware.box' (v0) for provider:
box: Downloading: file:///Users/john/packer/centos/centos-7.1-x64-vmware.box
==> box: Successfully added box 'centos-7.1-010-x64-vmware.box' (v0) for 'vmware_desktop'!
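
To take the new box for a spin, you can bring up a VM from it. A quick sanity check (this assumes the Vagrant VMware Fusion provider plugin is installed; vagrant init writes a minimal Vagrantfile pointing at the named box):

$ mkdir centos7-test && cd centos7-test
$ vagrant init centos-7.1-010-x64-vmware.box
$ vagrant up --provider vmware_fusion
$ vagrant ssh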

The above is all you need to treat your infrastructure as code: versioned templates that automatically build your images for use with Vagrant.
The next section shows how to use Packer to push the same image to your vSphere environment.

9) Sending Image to vSphere
To send the image created by Packer to vSphere, you need to add another post-processor entry to the JSON array:

    {
      "type": "vsphere",
      "host": "{{user `vm_host`}}",
      "username": "{{user `vm_user`}}",
      "password": "{{user `vm_pass`}}",
      "datacenter": "{{user `vm_dc`}}",
      "cluster": "{{user `vm_cluster`}}",
      "resource_pool": " ",
      "datastore": "{{user `vm_datastore`}}",
      "vm_folder": "{{user `vm_folder`}}",
      "vm_name": "{{user `vm_name`}}", 
      "vm_network": "{{user `vm_network`}}",
      "insecure" : "true"
    }
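
One prerequisite worth calling out (it is not in the template itself): the vsphere post-processor hands the upload off to VMware's ovftool, so ovftool must be installed and on your PATH. A quick check:

$ ovftool --version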

As you can see, I am using variables to make this portable, which means the new variables have to be declared in the variables section as well. Below is the new template that does both the Vagrant and vSphere image provisioning. Note that within the variables section Packer only allows the env template function, so the vSphere-related variables default to empty strings and are supplied with -var at build time.

{
  "variables": {
    "vm_name": "centos-7.1-vmware",
    "iso_url": "{{env `ISO_URL`}}",
    "iso_sha256": "f90e4d28fa377669b2db16cbcb451fcb9a89d2460e3645993e30e137ac37d284",
    "vm_host": "{{ user `vm_host` }}",
    "vm_user": "{{ user `vm_user` }}",
    "vm_pass": "{{ env `vm_pass` }}",
    "vm_dc":   "{{ user `vm_dc` }}",
    "vm_cluster": "{{user `vm_cluster`}}",
    "vm_datastore": "{{user `vm_datastore`}}",
    "vm_folder": "{{user `vm_folder`}}",
    "vm_name": "{{user `vm_name`}}", 
    "vm_network": "{{user `vm_network`}}"
  },
  "builders": [
    {
      "headless": true,
      "type": "vmware-iso",
      "boot_command": [
        " text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg"
      ],
      "boot_wait": "10s",
      "disk_size": 8192,
      "guest_os_type": "centos-64",
      "http_directory": "http",
      "iso_url": "{{user `iso_url`}}",
      "iso_checksum_type": "sha256",
      "iso_checksum": "{{user `iso_sha256`}}",
      "ssh_username": "vagrant",
      "ssh_private_key_file": "vagrant_rsa",
      "ssh_port": 22,
      "ssh_wait_timeout": "10000s",
      "shutdown_command": "echo '/sbin/halt -h -p' > /tmp/shutdown.sh; echo 'vagrant'|sudo -S sh '/tmp/shutdown.sh'",
      "tools_upload_flavor": "linux",
      "tools_upload_path": "/tmp/vmware_tools_{{.Flavor}}.iso",
      "vmx_data": {
        "memsize": "1024",
        "numvcpus": "1",
        "cpuid.coresPerSocket": "1"
      }
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "echo 'vagrant'|sudo -S sh '{{.Path}}'",
      "override": {
        "vmware-iso": {
          "scripts": [
            "scripts/base.sh",
            "scripts/vmware.sh",
            "scripts/hgfs.sh",
            "scripts/cleanup.sh",
            "scripts/zerodisk.sh"
          ]
        }
      }
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "override": {
        "vmware": {
          "output": "centos-7.1-x64-vmware.box"
        }
      }
    },
    {
      "type": "vsphere",
      "host": "{{user `vm_host`}}",
      "username": "{{user `vm_user`}}",
      "password": "{{user `vm_pass`}}",
      "datacenter": "{{user `vm_dc`}}",
      "cluster": "{{user `vm_cluster`}}",
      "resource_pool": " ",
      "datastore": "{{user `vm_datastore`}}",
      "vm_folder": "{{user `vm_folder`}}",
      "vm_name": "{{user `vm_name`}}", 
      "vm_network": "{{user `vm_network`}}",
      "insecure" : "true"
    }
  ]
}

10) Re-run Packer against the template.json file and watch it build the machine image and push it to vSphere

$ packer build \
> -var 'iso_url=/Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso' \
> -var 'vm_host=vc.example.com' \
> -var 'vm_user=john@example.com' \
> -var 'vm_pass=XXXXXXX' \
> -var 'vm_dc=vDC' \
> -var 'vm_cluster=Folder/Cluster' \
> -var 'vm_datastore=store' \
> -var 'vm_folder=Images' \
> -var 'vm_name=centos71' \
> -var 'vm_network=dvs-net1' \
> -only=vmware-iso template.json

vmware-iso output will be in this color.

==> vmware-iso: Downloading or copying ISO
vmware-iso: Downloading or copying: file:///Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso
==> vmware-iso: Creating virtual machine disk


==> vmware-iso: Running post-processor: vagrant
==> vmware-iso (vagrant): Creating Vagrant box for 'vmware' provider
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s001.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s002.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s003.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.nvram
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmsd
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmx
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmxf
vmware-iso (vagrant): Compressing: Vagrantfile
vmware-iso (vagrant): Compressing: disk-s001.vmdk
vmware-iso (vagrant): Compressing: disk-s002.vmdk
vmware-iso (vagrant): Compressing: disk-s003.vmdk
vmware-iso (vagrant): Compressing: disk.vmdk
vmware-iso (vagrant): Compressing: metadata.json
vmware-iso (vagrant): Compressing: packer-vmware-iso.nvram
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmsd
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmx
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmxf
==> vmware-iso: Running post-processor: vsphere
vmware-iso (vsphere): Uploading output-vmware-iso/packer-vmware-iso.vmx to vSphere
vmware-iso (vsphere): Opening VMX source: output-vmware-iso/packer-vmware-iso.vmx
vmware-iso (vsphere): Opening VI target: vi://john%40example.com@vc.example.com:443/vDC/host/Cluster/Resources/
vmware-iso (vsphere): Deploying to VI: vi://john%40example.com@vc.example.com:443/vDC/host/Cluster/Resources/
Transfer Completed
vmware-iso (vsphere): Completed successfully
vmware-iso (vsphere):
Build 'vmware-iso' finished.

==> Builds finished. The artifacts of successful builds are:
--> vmware-iso: 'vmware' provider box: centos-7.1-x64-vmware.box
--> vmware-iso:

Now you should have the same image running on both Vagrant and vSphere.

One of the most important things you should do for your systems is to ensure they have the correct time.
In this post I will show how to check and set the correct time on your ESX hosts using PowerCLI.

==> Login to vCenter:

$admin = Get-Credential -Credential EXAMPLE\john
Connect-VIServer -Server vc.example.com -Credential $admin

==> Check time settings:

Get-VMHost | Sort Name | Select Name, `
   @{N="NTPServer";E={$_ | Get-VMHostNtpServer}}, `
   Timezone, `
   @{N="CurrentTime";E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem) | Foreach {$_.QueryDateTime().ToLocalTime()}}}, `
   @{N="ServiceRunning";E={(Get-VMHostService -VMHost $_ | Where-Object {$_.Key -eq "ntpd"}).Running}}, `
   @{N="StartUpPolicy";E={(Get-VMHostService -VMHost $_ | Where-Object {$_.Key -eq "ntpd"}).Policy}}, `
   @{N="FirewallException";E={$_ | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Select-Object -ExpandProperty Enabled}} `
   | Format-Table -AutoSize

Output:

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VMHost | Sort Name | Select Name, `
>>    @{N="NTPServer";E={$_ |Get-VMHostNtpServer}}, `
>>    Timezone, `
>>    @{N="CurrentTime";E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem) | Foreach {$_.QueryDateTime().ToLocalTime()}}}, `
>>    @{N="ServiceRunning";E={(Get-VmHostService -VMHost $_ |Where-Object {$_.key-eq "ntpd"}).Running}}, `
>>    @{N="StartUpPolicy";E={(Get-VMHostService -VMHost $_ |Where-Object {$_.Key -eq "ntpd"}).Policy}}, `
>>    @{N="FirewallException";E={$_ | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Select-Object -ExpandProperty Enabled}} `
>>    | Format-Table -AutoSize
>>

Name             NTPServer                   TimeZone CurrentTime         ServiceRunning StartUpPolicy FirewallException
----             ---------                   -------- -----------         -------------- ------------- -----------------
esx1.example.com                             UTC      6/7/2015 3:25:39 PM False          off           False
esx2.example.com                             UTC      6/7/2015 3:25:40 PM False          off           False
esx3.example.com {192.168.10.1,192.168.11.1} UTC      6/7/2015 3:25:42 PM False          off           False
esx4.example.com 192.168.11.1                UTC      6/7/2015 3:25:43 PM False          off           False

==> Set time to correct time:

# Get the time from the machine running PowerCLI
$currentTime = Get-Date

# Update the time on each ESX host
$hosts_time = Get-VMHost | %{ Get-View $_.ExtensionData.ConfigManager.DateTimeSystem }
$hosts_time | Foreach { $_.UpdateDateTime((Get-Date ($currentTime.ToUniversalTime()) -Format u)) }

==> Remove old NTP servers (if any):

$old_ntp_server = '192.168.10.1'
Get-VMHost | Remove-VmHostNtpServer -NtpServer $old_ntp_server -Confirm

Output:

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VMHost | Sort Name | Select Name, `
>>    @{N="NTPServer";E={$_ |Get-VMHostNtpServer}}, `
>>    Timezone, `
>>    @{N="CurrentTime";E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem) | Foreach {$_.QueryDateTime().ToLocalTime()}}}, `
>>    @{N="ServiceRunning";E={(Get-VmHostService -VMHost $_ |Where-Object {$_.key-eq "ntpd"}).Running}}, `
>>    @{N="StartUpPolicy";E={(Get-VMHostService -VMHost $_ |Where-Object {$_.Key -eq "ntpd"}).Policy}}, `
>>    @{N="FirewallException";E={$_ | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Select-Object -ExpandProperty Enabled}} `
>>    | Format-Table -AutoSize
>>

Name             NTPServer TimeZone CurrentTime         ServiceRunning StartUpPolicy FirewallException
----             --------- -------- -----------         -------------- ------------- -----------------
esx1.example.com           UTC      6/7/2015 3:25:39 PM False          off           False
esx2.example.com           UTC      6/7/2015 3:25:40 PM False          off           False
esx3.example.com           UTC      6/7/2015 3:25:42 PM False          off           False
esx4.example.com           UTC      6/7/2015 3:25:43 PM False          off           False
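
Note: with -Confirm, Remove-VMHostNtpServer prompts once per host. If you are certain about the removal, the standard -Confirm:$false switch suppresses the prompts (my addition, not part of the run above):

# Suppress the per-host confirmation prompt
Get-VMHost | Remove-VMHostNtpServer -NtpServer $old_ntp_server -Confirm:$false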

==> Change NTP to desired configuration:

$ntp_server = '192.168.10.1'
Get-VMHost | Add-VMHostNtpServer $ntp_server
Get-VMHost | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Set-VMHostFirewallException -Enabled:$true
Get-VMHost | Get-VmHostService | Where-Object {$_.key -eq "ntpd"} | Start-VMHostService
Get-VMhost | Get-VmHostService | Where-Object {$_.key -eq "ntpd"} | Set-VMHostService -policy "automatic"

Output:

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> $ntp_server = '192.168.10.1'
PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VMHost | Add-VMHostNtpServer $ntp_server
192.168.10.1
192.168.10.1
192.168.10.1
192.168.10.1

==> Enable Firewall Exception

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VMHost | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Set-VMHostFirewallException -Enabled:$true

Name                 Enabled IncomingPorts  OutgoingPorts  Protocols  ServiceRunning
----                 ------- -------------  -------------  ---------  --------------
NTP Client           True                   123            UDP        True
NTP Client           True                   123            UDP        True
NTP Client           True                   123            UDP        False
NTP Client           True                   123            UDP        False

==> Start NTPd service

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VMHost | Get-VmHostService | Where-Object {$_.key -eq "ntpd"} | Start-VMHostService

Key                  Label                          Policy     Running  Required
---                  -----                          ------     -------  --------
ntpd                 NTP Daemon                     on         True     False
ntpd                 NTP Daemon                     on         True     False
ntpd                 NTP Daemon                     off        True     False
ntpd                 NTP Daemon                     off        True     False

==> Ensure NTPd service starts automatically (via policy)

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VMhost | Get-VmHostService | Where-Object {$_.key -eq "ntpd"} | Set-VMHostService -policy "automatic"

Key                  Label                          Policy     Running  Required
---                  -----                          ------     -------  --------
ntpd                 NTP Daemon                     automatic  True     False
ntpd                 NTP Daemon                     automatic  True     False
ntpd                 NTP Daemon                     automatic  True     False
ntpd                 NTP Daemon                     automatic  True     False

==> Verify all is set the way you expected

Get-VMHost | Sort Name | Select Name, `
   @{N="NTPServer";E={$_ | Get-VMHostNtpServer}}, `
   Timezone, `
   @{N="CurrentTime";E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem) | Foreach {$_.QueryDateTime().ToLocalTime()}}}, `
   @{N="ServiceRunning";E={(Get-VMHostService -VMHost $_ | Where-Object {$_.Key -eq "ntpd"}).Running}}, `
   @{N="StartUpPolicy";E={(Get-VMHostService -VMHost $_ | Where-Object {$_.Key -eq "ntpd"}).Policy}}, `
   @{N="FirewallException";E={$_ | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Select-Object -ExpandProperty Enabled}} `
   | Format-Table -AutoSize

Output:

PowerCLI C:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-VMHost | Sort Name | Select Name, `
>>    @{N="NTPServer";E={$_ |Get-VMHostNtpServer}}, `
>>    Timezone, `
>>    @{N="CurrentTime";E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem) | Foreach {$_.QueryDateTime().ToLocalTime()}}}, `
>>    @{N="ServiceRunning";E={(Get-VmHostService -VMHost $_ |Where-Object {$_.key-eq "ntpd"}).Running}}, `
>>    @{N="StartUpPolicy";E={(Get-VMHostService -VMHost $_ |Where-Object {$_.Key -eq "ntpd"}).Policy}}, `
>>    @{N="FirewallException";E={$_ | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Select-Object -ExpandProperty Enabled}} `
>>    | Format-Table -AutoSize
>>

Name             NTPServer    TimeZone CurrentTime         ServiceRunning StartUpPolicy FirewallException
----             ---------    -------- -----------         -------------- ------------- -----------------
esx1.example.com 192.168.10.1 UTC      6/7/2015 3:34:49 PM True           automatic     True
esx2.example.com 192.168.10.1 UTC      6/7/2015 3:34:51 PM True           automatic     True
esx3.example.com 192.168.10.1 UTC      6/7/2015 3:34:52 PM True           automatic     True
esx4.example.com 192.168.10.1 UTC      6/7/2015 3:34:54 PM True           automatic     True

The following guide describes the necessary steps to install and configure a pair of Cisco Nexus 1000v switches for use in a vSphere cluster.
These will connect to Cisco Nexus 5020 upstream switches.

In this guide the hardware used consists of:

Hardware:
3x HP ProLiant DL380 G6 with two 4-port NICs
2x Cisco Nexus 5020 switches

Software:
vSphere 4 Update 1 Enterprise Plus (needed to use the Cisco Nexus 1000v)
vCenter installed as a virtual machine – 192.168.10.10 (on VLAN 10)
Cisco Nexus 1000v 4.0.4.SV1.3b – Primary VSM 192.168.101.10, domain id 100 (on VLAN 101)

I am assuming you have already installed and configured vCenter and the ESX cluster.

Cisco recommends that you use 3 separate VLANs for Nexus traffic; I am using the following VLANs:

100 – Control – Control connectivity between the Nexus 1000v VSM and VEMs (non-routable)
101 – Management – ssh/telnet/scp to the Cisco Nexus 1000v int mgmt0 (routable)
102 – Packet – Internal connectivity of the Nexus 1000v (non-routable)

I will also use VLANs 10 and 20 for VM traffic (10 for Production, 20 for Development).

1) Install vSphere (I assume you have done this step)

2) Configure Cisco Nexus 5020 Upstream Switchports

You need to configure the ports on the upstream switches in order to pass VLAN information to the ESX hosts’ uplink NICs

On the Nexus5020s, run the following:

// These commands give a description to the port and allow trunking of VLANs.
// The allowed VLANs are listed
// spanning-tree port type edge trunk is the recommended spanning-tree type

interface Ethernet1/1/10
description "ESX1-eth0"
switchport mode trunk
switchport trunk allowed vlan 10-20,100-102
spanning-tree port type edge trunk

3) Service Console VLAN !!!

When I installed the ESX server I used the native VLAN, but after you change the switch port from switchport mode access to switchport mode trunk, the ESX server needs to be configured to tag Service Console traffic with a specific VLAN.
My Service Console IP is 192.168.10.11 on VLAN 10, so console into the ESX host and enter the following:

[root@esx1]# esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
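
You can verify the change by listing the vSwitch configuration; the Service Console port group should now show VLAN 10:

[root@esx1]# esxcfg-vswitch -l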

4) Add Port Groups for the Control, Packet and Management VLANs.
I add these Port Groups to the VMware virtual switch vSwitch0 on all the ESX hosts. Make sure to select the right VLANs for your environment.

5) Now that you have configured the Control, Packet and Management Port Groups with their respective VLANs, you can install the Cisco Nexus 1000v.
I chose to install the virtual appliance (OVA) file downloaded from Cisco. The installation is very simple; make sure you select the option to manually configure the Nexus 1000v and map the VLANs to Control, Packet and Management. The rest is just like installing a regular virtual appliance.

6) Power on and open a console window to the Nexus 1000v VM (appliance) you just installed. A setup script will run and ask you a few questions:

admin password
domain ID // Identifies a VSM/VEM pair. If you want two Nexus 1000v VSMs for high availability, both use the same domain ID. I chose 100
High Availability mode // If you plan to use two Nexus 1000v VSMs for high availability, select primary for the first installation; otherwise standalone
Network information // IP, netmask, gateway. Disable Telnet and enable SSH!
The other settings we will configure later (not from the setup script)

7) Register the vCenter Nexus 1000v plug-in
Once you have the Nexus 1000v basics configured, you should be able to access it; try to SSH to it (hopefully you enabled SSH).
Open a browser and point it to the Nexus 1000v management IP address (in this case 192.168.101.10) and you will get a page like the following:

  • Download the cisco_nexus_1000v_extension.xml
  • Open vSphere client and connect to the vCenter.
  • Go to Plug-ins > Manage Plug-ins
  • Right-click under Available Plug-ins, select New Plug-in, and browse to the cisco_nexus_1000v_extension.xml
  • Click Register Plug-in (disregard security warning about new SSL cert)

You do NOT need to Download and Install the Plug-in, just Register it.

Now we can start the "advanced" configuration of the Nexus 1000v

8) Configure the SVS domain ID on the VSM

n1kv(config)# svs-domain
n1kv(config-svs-domain)# domain id 100
n1kv(config-svs-domain)# exit

9) Configure Control and Packet VLANs

n1kv(config)# svs-domain
n1kv(config-svs-domain)# control vlan 100
n1kv(config-svs-domain)# packet vlan 102
n1kv(config-svs-domain)# svs mode L2
n1kv(config-svs-domain)# exit

10) Connect Nexus 1000v to vCenter
In this step we are defining the SVS connection which is the link between the VSM and vCenter.

n1kv(config)# svs connection vcenter
n1kv(config-svs-conn)# protocol vmware-vim
n1kv(config-svs-conn)# vmware dvs datacenter-name myDatacenter
n1kv(config-svs-conn)# remote ip address 192.168.10.10
n1kv(config-svs-conn)# connect
n1kv(config-svs-conn)# exit
n1kv(config)# exit
n1kv# copy run start

//Verify the SVS connection

n1kv# show svs connections vcenter

connection vcenter:
    ip address: 192.168.10.10
    remote port: 80
    protocol: vmware-vim https
    certificate: default
    datacenter name: myDatacenter
    DVS uuid: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    config status: Enabled
    operational status: Connected
    sync status: Complete
    version: VMware vCenter Server 4.0.0 build-258672

11) Create the VLANs on the VSM

n1kv# conf t
n1kv(config)# vlan 100
n1kv(config-vlan)# name Control
n1kv(config-vlan)# exit
n1kv(config)# vlan 102
n1kv(config-vlan)# name Packet
n1kv(config-vlan)# exit
n1kv(config)# vlan 101
n1kv(config-vlan)# name Management
n1kv(config-vlan)# exit
n1kv(config)# vlan 10
n1kv(config-vlan)# name Production
n1kv(config-vlan)# exit
n1kv(config)# vlan 20
n1kv(config-vlan)# name Development
n1kv(config-vlan)# exit

// Verify VLANs

n1kv(config)# show vlan
VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active    
10   Production                       active    
20   Development                      active    
100  Control                          active 
101  Management                       active   
102  Packet                           active    


VLAN Type
---- -----
1    enet  
10   enet  
20   enet  
100  enet  
101  enet  
102  enet  

12) Create the Uplink Port-Profile
The Cisco Nexus 1000v acts like a VMware DVS. Before you can add hosts to the Nexus 1000v you need to create uplink port-profiles, which allow the VEMs to communicate with the VSM.

n1kv(config)# port-profile system-uplink
n1kv(config-port-prof)# switchport mode trunk
n1kv(config-port-prof)# switchport trunk allowed vlan 10-20,100-102
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# system vlan 100,102
n1kv(config-port-prof)# vmware port-group dv-system-uplink
n1kv(config-port-prof)# capability uplink
n1kv(config-port-prof)# state enabled

// Verify Uplink Port-Profile

n1kv(config-port-prof)# show port-profile name system-uplink
port-profile system-uplink
  description: 
  type: ethernet
  status: enabled
  capability l3control: no
  pinning control-vlan: -
  pinning packet-vlan: -
  system vlans: 100,102
  port-group: dv-system-uplink
  max ports: -
  inherit: 
  config attributes:
    switchport mode trunk
    switchport trunk allowed vlan 10-20,100-102
    no shutdown
  evaluated config attributes:
    switchport mode trunk
    switchport trunk allowed vlan 10-20,100-102
    no shutdown
  assigned interfaces:

13) It is now time to install the VEM on the ESX hosts.
The preferred way to do this is with VUM (VMware Update Manager). If you have VUM in the system, the installation is very simple:
Go to Home -> Inventory -> Networking
Right-click on the Nexus switch and select Add Host

// Verify that the task is successful

// Also take a look at the VSM console

n1kv# 2011 Jan 14 14:43:03 n1kv %PLATFORM-2-MOD_PWRUP: Module 3 powered up (Serial number )

n1kv# show module
Mod  Ports  Module-Type                      Model              Status
---  -----  -------------------------------- ------------------ ------------
1    0      Virtual Supervisor Module        Nexus1000V         active *
3    248    Virtual Ethernet Module          NA                 ok

Mod  Sw               Hw      
---  ---------------  ------  
1    4.0(4)SV1(3b)    0.0    
3    4.0(4)SV1(3b)    1.20   

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
3    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    192.168.101.10   NA                                    NA
3    192.168.11.82    XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX  esx1


* this terminal session 

// Do the same for all the other ESX Hosts

14) Create the Port-Profile(s) (VMware Port-Groups)
Port-profiles configure interfaces on the VEM.
From the VMware point of view, a port-profile is represented as a port-group.

// The Port-Profile below will be the VLAN 10 PortGroup on vCenter

n1kv# conf t
n1kv(config)# port-profile VLAN_10
n1kv(config-port-prof)# vmware port-group
n1kv(config-port-prof)# switchport mode access
n1kv(config-port-prof)# switchport access vlan 10
n1kv(config-port-prof)# vmware max-ports 200 // By default a port-profile gets only 32 ports; I want 200 available
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled
n1kv(config-port-prof)# exit
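
The VLAN_20 (Development) port-profile, which appears in the usage output below, is created the same way; sketched here for completeness:

n1kv(config)# port-profile VLAN_20
n1kv(config-port-prof)# vmware port-group
n1kv(config-port-prof)# switchport mode access
n1kv(config-port-prof)# switchport access vlan 20
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled
n1kv(config-port-prof)# exit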

15) Select the port group you want your VM to connect to

16) Verify Port-Profiles/Port-Groups from the VSM console

n1kv# show port-profile usage 

-------------------------------------------------------------------------------
Port Profile               Port        Adapter        Owner
-------------------------------------------------------------------------------
VLAN_10                    Veth1       Net Adapter 1  jeos_10                  
VLAN_20                    Veth2       Net Adapter 1  jeos_20                  
system-uplink              Eth3/5      vmnic4         esx1.example.com        
                           Eth3/6      vmnic5         esx1.example.com        
                           Eth3/9      vmnic8         esx1.example.com        
                           Eth3/10     vmnic9         esx1.example.com        
                           Eth4/5      vmnic4         esx2.example.com        
                           Eth4/6      vmnic5         esx2.example.com        
                           Eth4/9      vmnic8         esx2.example.com        
                           Eth4/10     vmnic9         esx2.example.com 

At this point you are ready to use the Cisco Nexus 1000v, but if you plan to run it in a production environment, it is strongly recommended that you run the VSM in High Availability mode.
Follow this post to learn how to install and configure VSM High Availability:
cisco-nexus-1000v-vsm-high-availability