
In a previous post I wrote about how to Create a custom Vagrant Box from scratch.

In this post I will walk through using Packer to automate the creation of a CentOS 7 image that can be used with Vagrant or even vSphere.

Packer is a cool HashiCorp tool that helps you automate the creation of machine images.

You can find the project/Git repo at https://github.com/parcejohn/packer-centos7

From their website:
“Packer is easy to use and automates the creation of any type of machine image. It embraces modern configuration management by encouraging you to use automated scripts to install and configure the software within your Packer-made images. Packer brings machine images into the modern age, unlocking untapped potential and opening new opportunities.”

I use Packer to continue the journey to Infrastructure as Code, where even my golden images/templates are automated and source-controlled (in Git).

Requirements:

* Packer // I installed on Mac using $ brew install packer
* Vagrant // I installed on Mac using $ brew install vagrant
* VMware Fusion // I installed using $ brew cask install vmware-fusion
* CentOS 7 ISO file

Packer uses a JSON template file to orchestrate the image creation, and the process has several stages. I will concentrate on the three below, but you should familiarize yourself with all of them (https://www.packer.io/docs/basics/terminology.html).

Builders: Packer component that creates a machine image for a single platform; in this case I will be using the VMware builder.
Provisioners: Packer component that installs and configures software within a running machine before that machine is turned into a static image. Example provisioners include shell scripts, Chef, Puppet, etc. I will be using the shell provisioner.
Post-Processors: Packer component that takes the result of a builder or another post-processor and processes it to create a new artifact. Examples of post-processors are compress to compress artifacts, upload to upload artifacts, etc. I will be creating a Vagrant box as the artifact, and I will also demonstrate how to upload the image to vSphere.

It’s time to walk through the process of creating a CentOS 7 image using Packer.

1) Create directory structure
The http folder will host a kickstart file; Packer will serve it with its built-in web server
The scripts folder will host the provisioning scripts that define the machine

centos/
├── http
└── scripts
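
A one-liner creates the layout:

$ mkdir -p centos/{http,scripts}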

2) Populate it with the following configuration files and scripts (their contents are shown in the next steps)

centos/
├── centos-7.1-x64-vmware.json
├── http
│   └── ks.cfg
├── scripts
│   ├── base.sh
│   ├── cleanup.sh
│   ├── hgfs.sh
│   ├── vmware.sh
│   └── zerodisk.sh
├── template.json
└── vagrant_rsa_key

3) Create an SSH key pair; you will need it to log in to the machine to complete the install and configuration

The public key will be injected into the 'vagrant' user as part of the kickstart; it is what lets Packer log in to the machine.
The private key will stay on your system and is referenced in the Packer JSON template so Packer (and later Vagrant) can log in to the created machine.
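
For example (the file name matches the vagrant_rsa_key entry in the listing above; an empty passphrase keeps the build non-interactive):

$ ssh-keygen -t rsa -b 2048 -f vagrant_rsa_key -N ''

This produces vagrant_rsa_key (private key) and vagrant_rsa_key.pub (public key).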

4) Create a Packer JSON Template file (template.json)

Packer Components/Sections:

Variables:
This is just to centralize values that will be used by other components (e.g. builders, provisioners, etc.).
A variable can also take its value from an environment variable or from the command line:

User-provided values:
"iso_url": "{{user `iso_url`}}",

Environment variable values:
"iso_url": "{{env `ISO_URL`}}",

Builders:
Here is an array of builders with the parameters each one needs; in this case the only builder is of type vmware-iso. The parameters I use appear in the template sketch below.

Provisioners:
This section uses the shell provisioner and runs the listed scripts, which live in the scripts folder.

Post-processor:
This section states that I want a Vagrant box as the artifact and gives it its output name.
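
The full template is in the Git repo linked above; the sketch below shows the overall shape of template.json. Treat the specific values (disk size, boot command, shutdown command) as illustrative assumptions rather than the repo's exact contents; the boot_wait, headless setting, and 'linux' tools flavor are consistent with the build output in step 7:

{
  "variables": {
    "iso_url": ""
  },
  "builders": [
    {
      "type": "vmware-iso",
      "iso_url": "{{user `iso_url`}}",
      "iso_checksum_type": "none",
      "guest_os_type": "centos-64",
      "disk_size": 40000,
      "http_directory": "http",
      "boot_wait": "10s",
      "boot_command": [
        "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter><wait>"
      ],
      "ssh_username": "vagrant",
      "ssh_private_key_file": "vagrant_rsa_key",
      "ssh_wait_timeout": "30m",
      "tools_upload_flavor": "linux",
      "shutdown_command": "sudo /sbin/halt -h -p",
      "headless": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "{{ .Vars }} sudo -E bash '{{ .Path }}'",
      "scripts": [
        "scripts/base.sh",
        "scripts/vmware.sh",
        "scripts/hgfs.sh",
        "scripts/cleanup.sh",
        "scripts/zerodisk.sh"
      ]
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "keep_input_artifact": true,
      "output": "centos-7.1-x64-vmware.box"
    }
  ]
}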

5) Kickstart file (should be http/ks.cfg based on the template.json file)
This is a minimal install of CentOS 7
Note: The public key (XXXX) is given to the vagrant user; that is the key pair we created in step 3
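
A minimal CentOS 7 kickstart along these lines works (a sketch; the version in the Git repo is authoritative). Replace XXXX with the public key from step 3:

install
cdrom
text
skipx
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp
rootpw --plaintext vagrant
firewall --disabled
selinux --disabled
auth --enableshadow --passalgo=sha512
timezone UTC
bootloader --location=mbr
zerombr
clearpart --all --initlabel
autopart
firstboot --disabled
reboot
user --name=vagrant --plaintext --password=vagrant --groups=wheel

%packages
@core
%end

%post
# Inject the public key so Packer/Vagrant can log in as 'vagrant'
mkdir -m 0700 -p /home/vagrant/.ssh
echo "ssh-rsa XXXX vagrant" > /home/vagrant/.ssh/authorized_keys
chmod 0600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/.ssh
# Passwordless sudo for the vagrant user
echo "vagrant ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/vagrant
chmod 0440 /etc/sudoers.d/vagrant
%end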

6) Provisioning Scripts
Packer calls these scripts (from the JSON template) on the created machine before making it a template/image.

base.sh // Install basic packages

vmware.sh // Install VMware Tools; separated from base.sh because a VirtualBox or other builder would call different scripts

hgfs.sh // Needed so Vagrant shared folders work properly

cleanup.sh // Clean up before converting the machine to an image

zerodisk.sh // Zero out free space so the resulting box is as small as possible
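
The exact scripts are in the Git repo; as a taste, here are sketches of base.sh and zerodisk.sh consistent with the build output shown in step 7 (the package list is an illustrative assumption):

base.sh:
#!/bin/bash -eux
# Allow sudo without a TTY so Packer's SSH sessions can run scripts via sudo
sed -i 's/^.*requiretty/#Defaults requiretty/' /etc/sudoers
# Basic packages
yum -y install wget curl vim nfs-utils

zerodisk.sh:
#!/bin/bash -eux
# Fill free space with zeros so the compacted/compressed disk is minimal;
# dd is expected to end with "No space left on device"
dd if=/dev/zero of=/EMPTY bs=1M || true
rm -f /EMPTY
sync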

7) Run Packer against the template.json file created earlier and watch it build the machine image

$ packer build -var 'iso_url=/Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso' -only=vmware-iso template.json
vmware-iso output will be in this color.

==> vmware-iso: Downloading or copying ISO
vmware-iso: Downloading or copying: file:///Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso
==> vmware-iso: Creating virtual machine disk
==> vmware-iso: Building and writing VMX file
==> vmware-iso: Starting HTTP server on port 8101
==> vmware-iso: Starting virtual machine…
vmware-iso: The VM will be run headless, without a GUI. If you want to
vmware-iso: view the screen of the VM, connect via VNC without a password to
vmware-iso: 127.0.0.1:5989
==> vmware-iso: Waiting 10s for boot…
==> vmware-iso: Connecting to VM via VNC
==> vmware-iso: Typing the boot command over VNC…
==> vmware-iso: Waiting for SSH to become available…
==> vmware-iso: Connected to SSH!
==> vmware-iso: Uploading the 'linux' VMware Tools
==> vmware-iso: Provisioning with shell script: scripts/base.sh
vmware-iso: + sed -i 's/^.*requiretty/#Defaults requiretty/' /etc/sudoers



==> vmware-iso: Provisioning with shell script: scripts/zerodisk.sh
vmware-iso: + dd if=/dev/zero of=/EMPTY bs=1M
vmware-iso: dd: error writing ‘/EMPTY’: No space left on device
vmware-iso: 5488+0 records in
vmware-iso: 5487+0 records out
vmware-iso: 5754265600 bytes (5.8 GB) copied, 5.27091 s, 1.1 GB/s
vmware-iso: + rm -f /EMPTY
==> vmware-iso: Gracefully halting virtual machine…
vmware-iso: Waiting for VMware to clean up after itself…
==> vmware-iso: Deleting unnecessary VMware files…
vmware-iso: Deleting: output-vmware-iso/564de113-2fc5-2010-8e5d-63fec92f12f7.vmem
vmware-iso: Deleting: output-vmware-iso/packer-vmware-iso.plist
vmware-iso: Deleting: output-vmware-iso/vmware.log
==> vmware-iso: Cleaning VMX prior to finishing up…
vmware-iso: Unmounting floppy from VMX…
vmware-iso: Detaching ISO from CD-ROM device…
vmware-iso: Disabling VNC server…
==> vmware-iso: Compacting the disk image
==> vmware-iso: Running post-processor: vagrant
==> vmware-iso (vagrant): Creating Vagrant box for ‘vmware’ provider
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s001.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s002.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s003.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.nvram
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmsd
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmx
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmxf
vmware-iso (vagrant): Compressing: Vagrantfile
vmware-iso (vagrant): Compressing: disk-s001.vmdk
vmware-iso (vagrant): Compressing: disk-s002.vmdk
vmware-iso (vagrant): Compressing: disk-s003.vmdk
vmware-iso (vagrant): Compressing: disk.vmdk
vmware-iso (vagrant): Compressing: metadata.json
vmware-iso (vagrant): Compressing: packer-vmware-iso.nvram
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmsd
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmx
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmxf
Build ‘vmware-iso’ finished.

==> Builds finished. The artifacts of successful builds are:
--> vmware-iso: 'vmware' provider box: centos-7.1-x64-vmware.box

8) Add box to Vagrant

$ vagrant box add --name centos-7.1-010-x64-vmware.box centos-7.1-x64-vmware.box
==> box: Adding box 'centos-7.1-010-x64-vmware.box' (v0) for provider:
box: Downloading: file:///Users/john/packer/centos/centos-7.1-x64-vmware.box
==> box: Successfully added box 'centos-7.1-010-x64-vmware.box' (v0) for 'vmware_desktop'!
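
To spin up a VM from the new box, something like the following should work (the provider flag is vmware_fusion or vmware_desktop depending on your Vagrant VMware plugin version):

$ vagrant init centos-7.1-010-x64-vmware.box
$ vagrant up --provider vmware_fusion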

The above is all you need to treat your infrastructure as code: the image definition is versioned, and builds automatically produce boxes you can use with Vagrant.
The next steps show how to use Packer to push the same image to your vSphere environment.

9) Sending Image to vSphere
To send the image created by Packer to vSphere, you need to add another post-processor entry to the post-processors JSON array:

As you can see in the sketch below, I use variables to make this portable, which means the new variables must be added to the variables section as well. With both entries in place, the template does both Vagrant and vSphere image provisioning.
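
A sketch of the vsphere post-processor entry (all values flow in from the new variables; I also keep keep_input_artifact set to true on the vagrant post-processor so the built VMX is still around for the vSphere upload, which matches the build output below):

{
  "type": "vsphere",
  "host": "{{user `vm_host`}}",
  "username": "{{user `vm_user`}}",
  "password": "{{user `vm_pass`}}",
  "datacenter": "{{user `vm_dc`}}",
  "cluster": "{{user `vm_cluster`}}",
  "datastore": "{{user `vm_datastore`}}",
  "vm_folder": "{{user `vm_folder`}}",
  "vm_name": "{{user `vm_name`}}",
  "vm_network": "{{user `vm_network`}}",
  "insecure": true,
  "overwrite": true
}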

10) Re-run Packer against the template.json file and watch it build the machine image and upload it to vSphere

$ packer build \
> -var 'iso_url=/Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso' \
> -var 'vm_host=vc.example.com' \
> -var 'vm_user=john@example.com' \
> -var 'vm_pass=XXXXXXX' \
> -var 'vm_dc=vDC' \
> -var 'vm_cluster=Folder/Cluster' \
> -var 'vm_datastore=store' \
> -var 'vm_folder=Images' \
> -var 'vm_name=centos71' \
> -var 'vm_network=dvs-net1' \
> -only=vmware-iso template.json

vmware-iso output will be in this color.

==> vmware-iso: Downloading or copying ISO
vmware-iso: Downloading or copying: file:///Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso
==> vmware-iso: Creating virtual machine disk


==> vmware-iso: Running post-processor: vagrant
==> vmware-iso (vagrant): Creating Vagrant box for ‘vmware’ provider
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s001.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s002.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s003.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.nvram
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmsd
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmx
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmxf
vmware-iso (vagrant): Compressing: Vagrantfile
vmware-iso (vagrant): Compressing: disk-s001.vmdk
vmware-iso (vagrant): Compressing: disk-s002.vmdk
vmware-iso (vagrant): Compressing: disk-s003.vmdk
vmware-iso (vagrant): Compressing: disk.vmdk
vmware-iso (vagrant): Compressing: metadata.json
vmware-iso (vagrant): Compressing: packer-vmware-iso.nvram
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmsd
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmx
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmxf
==> vmware-iso: Running post-processor: vsphere
vmware-iso (vsphere): Uploading output-vmware-iso/packer-vmware-iso.vmx to vSphere
vmware-iso (vsphere): Opening VMX source: output-vmware-iso/packer-vmware-iso.vmx
vmware-iso (vsphere): Opening VI target: vi://john%40example.com@vc.example.com:443/vDC/host/Cluster/Resources/
vmware-iso (vsphere): Deploying to VI: vi://john%40example.com@vc.example.com:443/vDC/host/Cluster/Resources/
Transfer Completed
vmware-iso (vsphere): Completed successfully
vmware-iso (vsphere):
Build ‘vmware-iso’ finished.

==> Builds finished. The artifacts of successful builds are:
--> vmware-iso: 'vmware' provider box: centos-7.1-x64-vmware.box
--> vmware-iso:

Now you should have the same image running on both Vagrant and vSphere.

One of the most important things you can do for your systems is to ensure they keep the correct time.
In this post I will show how to check and set time and NTP settings on your ESXi hosts using PowerCLI.

==> Login to vCenter:

$admin = Get-Credential -Credential EXAMPLE\john
Connect-VIServer -Server vc.example.com -Credential $admin

==> Check time settings:
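
A sketch using the standard PowerCLI cmdlets (the column names are mine):

Get-VMHost | Sort-Object Name | Select-Object Name,
    @{N='CurrentTime';E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem).QueryDateTime()}},
    @{N='NTPServers';E={Get-VMHostNtpServer -VMHost $_}},
    @{N='NTPDRunning';E={(Get-VMHostService -VMHost $_ | Where-Object {$_.Key -eq 'ntpd'}).Running}}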


==> Set time to correct time:
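
One approach is to push your workstation's current UTC time to each host through the DateTimeSystem view (a sketch; it assumes your workstation clock is correct):

Get-VMHost | ForEach-Object {
    (Get-View $_.ExtensionData.ConfigManager.DateTimeSystem).UpdateDateTime([DateTime]::UtcNow)
}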

==> Remove old NTP servers (if any):
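
Something along these lines removes whatever NTP servers are currently configured (a sketch):

Get-VMHost | ForEach-Object {
    $old = Get-VMHostNtpServer -VMHost $_
    if ($old) { Remove-VMHostNtpServer -VMHost $_ -NtpServer $old -Confirm:$false }
}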


==> Change NTP to desired configuration:
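
For example (the pool.ntp.org names are placeholders for your own NTP servers):

Get-VMHost | ForEach-Object {
    Add-VMHostNtpServer -VMHost $_ -NtpServer '0.pool.ntp.org','1.pool.ntp.org'
}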


==> Enable Firewall Exception
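
A sketch using the built-in 'NTP client' firewall exception:

Get-VMHost | ForEach-Object {
    Get-VMHostFirewallException -VMHost $_ -Name 'NTP client' | Set-VMHostFirewallException -Enabled:$true
}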

==> Start NTPd service
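
For example:

Get-VMHost | ForEach-Object {
    Get-VMHostService -VMHost $_ | Where-Object {$_.Key -eq 'ntpd'} | Start-VMHostService
}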

==> Ensure NTPd service starts automatically (via policy)
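
Setting the service policy to 'on' makes ntpd start and stop with the host (a sketch):

Get-VMHost | ForEach-Object {
    Get-VMHostService -VMHost $_ | Where-Object {$_.Key -eq 'ntpd'} | Set-VMHostService -Policy 'on'
}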

==> Verify all is set the way you expected
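
Re-run the check from above, e.g.:

Get-VMHost | Select-Object Name,
    @{N='NTPServers';E={Get-VMHostNtpServer -VMHost $_}},
    @{N='NTPDRunning';E={(Get-VMHostService -VMHost $_ | Where-Object {$_.Key -eq 'ntpd'}).Running}},
    @{N='NTPDPolicy';E={(Get-VMHostService -VMHost $_ | Where-Object {$_.Key -eq 'ntpd'}).Policy}}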


The following guide describes the necessary steps to install and configure a pair of Cisco Nexus 1000v switches for use in a vSphere cluster.
They will connect to Cisco Nexus 5020 upstream switches.

In this guide the hardware used consists of:

Hardware:
3x HP ProLiant DL380 G6, each with two 4-port NICs.
2x Cisco Nexus 5020 switches

Software:
vSphere 4 Update 1 Enterprise Plus (Enterprise Plus is needed to use the Cisco Nexus 1000v)
vCenter installed as a virtual machine – 192.168.10.10 (on VLAN 10)
Cisco Nexus 1000v 4.0.4.SV1.3b –
Primary VSM 192.168.101.10, domain ID 100 (on VLAN 101)

I am assuming you have already installed and configured vCenter and the ESX cluster.

Cisco recommends using three separate VLANs for Nexus traffic; I am using the following VLANs:

100 – Control – Control connectivity between Nexus 1000V VSM and VEMs (Non Routable)
101 – Management – ssh/telnet/scp to the Cisco Nexus 1000v int mgmt0 (Routable)
102 – Packet – Internal connectivity between Nexus 1000v (Non Routable)

I will also use VLANs 10 and 20 for VM traffic (10 for Production, 20 for Development).

1) Install vSphere (I assume you have done this step)

2) Configure Cisco Nexus 5020 Upstream Switchports

You need to configure the ports on the upstream switches in order to pass VLAN information to the ESX hosts’ uplink NICs

On the Nexus5020s, run the following:

// These commands give a description to the port and allow trunking of VLANs.
// The allowed VLANs are listed
// spanning-tree port type edge trunk is the recommended spanning-tree type

interface Ethernet1/1/10
description "ESX1-eth0"
switchport mode trunk
switchport trunk allowed vlan 10-20,100-102
spanning-tree port type edge trunk

3) Service Console VLAN !!!

When I installed the ESX server, I used the native VLAN, but after you change the switch port from switchport mode access to switchport mode trunk, the ESX server needs to be configured to send specific VLAN traffic to the Service Console.
My Service Console IP is 192.168.10.11 on VLAN 10, so you will need to console to the ESX host and enter the following:

[root@esx1]# esxcfg-vswitch -v 10 -p "Service Console" vSwitch0

4) Add Port Groups for the Control, Packet, and Management VLANs.
I add these Port Groups to the VMware virtual switch vSwitch0 on all the ESX hosts. Make sure to select the right VLANs for your environment.

5) Now that you have configured the Control, Packet, and Management Port Groups with their respective VLANs, you can install the Cisco Nexus 1000v.
I chose to install the virtual appliance (OVA) file downloaded from Cisco. The installation is very simple; make sure to select 'Manually Configure Nexus 1000v' and to map the VLANs to Control, Packet, and Management. The rest is just like installing a regular virtual appliance.

6) Power on and open a console window to the Nexus 1000v VM (appliance) you just installed. A setup script will run and ask you a few questions:

admin password
domain ID // Identifies the VSM/VEM pair. If you want two Nexus 1000v VSMs for high availability, both use the same domain ID. I chose 100
High availability mode // If you plan to use two Nexus 1000v VSMs for high availability, select primary for the first installation; otherwise standalone
Network information // IP, netmask, gateway. Disable Telnet! Enable SSH!
The other settings we will configure later (not from the setup script)

7) Register vCenter Nexus 1000v Plug-in
Once you have the Nexus 1000v basics configured, you should be able to access it. Try to SSH to it (hopefully you enabled SSH).
Open a browser, point it to the Nexus 1000v management IP address (in this case 192.168.101.10), and you will get a page with links to the vCenter plug-in:

  • Download the cisco_nexus_1000v_extension.xml
  • Open vSphere client and connect to the vCenter.
  • Go to Plug-ins > Manage Plug-ins
  • Right-click under Available Plug-ins, select New Plug-in, and browse to the cisco_nexus_1000v_extension.xml
  • Click Register Plug-in (disregard security warning about new SSL cert)

You do NOT need to Download and Install the Plug-in, just Register it.

Now we can start the “advanced” configuration of the Nexus 1000v

8) Configure the SVS domain ID on the VSM

n1kv(config)# svs-domain
n1kv(config-svs-domain)# domain id 100
n1kv(config-svs-domain)# exit

9) Configure Control and Packet VLANs

n1kv(config)# svs-domain
n1kv(config-svs-domain)# control vlan 100
n1kv(config-svs-domain)# packet vlan 102
n1kv(config-svs-domain)# svs mode L2
n1kv(config-svs-domain)# exit

10) Connect Nexus 1000v to vCenter
In this step we define the SVS connection, which is the link between the VSM and vCenter.

n1kv(config)# svs connection vcenter
n1kv(config-svs-conn)# protocol vmware-vim
n1kv(config-svs-conn)# vmware dvs datacenter-name myDatacenter
n1kv(config-svs-conn)# remote ip address 192.168.10.10
n1kv(config-svs-conn)# connect
n1kv(config-svs-conn)# exit
n1kv(config)# exit
n1kv# copy run start

11) Verify the SVS connection
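
// One way to check (output omitted); the vcenter connection should show as connected:

n1kv# show svs connections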

12) Create the VLANs on the VSM

n1kv# conf t
n1kv(config)# vlan 100
n1kv(config-vlan)# name Control
n1kv(config-vlan)# exit
n1kv(config)# vlan 102
n1kv(config-vlan)# name Packet
n1kv(config-vlan)# exit
n1kv(config)# vlan 101
n1kv(config-vlan)# name Management
n1kv(config-vlan)# exit
n1kv(config)# vlan 10
n1kv(config-vlan)# name Production
n1kv(config-vlan)# exit
n1kv(config)# vlan 20
n1kv(config-vlan)# name Development
n1kv(config-vlan)# exit

// Verify VLANs
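
// e.g. (output omitted):

n1kv# show vlan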

13) Create Uplink Port-Profile
The Cisco Nexus 1000v acts like a VMware DVS. Before you can add hosts to the Nexus 1000v, you need to create an uplink port-profile, which allows the VEMs to communicate with the VSM.

n1kv(config)# port-profile system-uplink
n1kv(config-port-prof)# switchport mode trunk
n1kv(config-port-prof)# switchport trunk allowed vlan 10,20,100-102
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# system vlan 100,102
n1kv(config-port-prof)# vmware port-group dv-system-uplink
n1kv(config-port-prof)# capability uplink
n1kv(config-port-prof)# state enabled

// Verify Uplink Port-Profile
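
// e.g. (output omitted):

n1kv# show port-profile name system-uplink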

14) It is now time to install the VEM on the ESX hosts.
The preferred way to do this is with VUM (VMware Update Manager); if you have VUM in the system, the installation is very simple.
Simply go to Home -> Inventory -> Networking.
Right-click on the Nexus switch and select Add Host.

// Verify that the task is successful

// Also take a look at the VSM console
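
// From the VSM, 'show module' should now list a VEM entry for the host you just added (output omitted):

n1kv# show module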

// Do the same for all the other ESX Hosts

15) Create the Port-Profile(s) (VMware Port Groups)
Port-profiles configure interfaces on the VEM.
From the VMware point of view, a port-profile is represented as a port group.

// The Port-Profile below will be the VLAN 10 PortGroup on vCenter

n1kv# conf t
n1kv(config)# port-profile VLAN_10
n1kv(config-port-prof)# vmware port-group
n1kv(config-port-prof)# switchport mode access
n1kv(config-port-prof)# switchport access vlan 10
n1kv(config-port-prof)# vmware max-ports 200 // By default it has only 32 ports, I want 200 available
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled
n1kv(config-port-prof)# exit

16) Select the port group you want your VM to connect to

17) Verify Port Profile/Port Groups from the VSM console
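
// e.g., from the VSM (output omitted):

n1kv# show port-profile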

At this point you are ready to use the Cisco Nexus 1000v, but if you plan to run it in a production environment, it is strongly recommended that you run the VSM in high availability mode.
Follow this post to learn how to install and configure VSM High Availability:
cisco-nexus-1000v-vsm-high-availability