Technologist

Tech stuff about Cloud, DevOps, SysAdmin, Virtualization, SAN, Hardware, Scripting, Automation and Development

Sometimes you need to know how much data can be transferred per second on your network (LAN or WAN), and sometimes you need to find out how long it would take for X amount of data to be fully transferred.

For example, you may want to know how long it would take to seed data from a production site to a DR site or to the cloud.

This post shows how to find the theoretical numbers. I say theoretical because the real figure varies with network conditions, jitter, and so on, but the calculations below give a good indication of what you are dealing with.

Finding how much data you can theoretically transfer per second:

Example:
If you have a 1Gbps network link.

How much data can you theoretically transfer per second?

# Network metric conversion uses decimal math

1 Gbps = 1000 Mbps = 1,000,000 Kbps = 1,000,000,000 bps

1 second = 1,000 milliseconds

Formula:
Bandwidth(Mbps) / 8 = MB that can be transferred per second

So it will be:
1,000 Mbps / 8

= 125 MB can be transferred per second over a 1Gbps network link.
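
You can sanity-check this from any Linux shell with bc (a minimal sketch; bc ships with most distributions):

# Mbps to MB/s: divide by 8
$ echo "scale=2; 1000 / 8" | bc
125.00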

If you want to estimate how long it would theoretically take to transfer X amount of data, you can do the following:

Example:

20 TB needs to be transferred over a 150 Mbps link.

How many days will it take to transfer that amount of data over the network?

# Data conversion uses base 2

20 TB = 20 x (2^10) GB = 20 x (2^20) MB = 20 x (2^30) KB = 20 x (2^40) Bytes

Formula:
( DataSize(MB) x 8 ) / ( Bandwidth(Mbps) ) = # seconds it will take

So it will be:
( (20 x (2^20)) x 8 ) / 150 = 1,118,481.0667 seconds

1,118,481.0667 / 3600 ≈ 310.7 hours

310.7 / 24

≈ 12.95 days to transfer 20TB over a 150Mbps network link
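
You can also script the whole calculation instead of doing it by hand. A minimal sketch with bc, following the same conventions as above (binary math for the data size, decimal for the bandwidth):

# 20 TB in MB (20 x 2^20), times 8 bits, divided by 150 Mbps
$ echo "scale=4; (20 * 2^20 * 8) / 150" | bc
1118481.0666
# seconds to days (86,400 seconds in a day)
$ echo "scale=2; 1118481.0666 / 86400" | bc
12.94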

VSM High Availability is optional but it is strongly recommended in a production environment.
High availability is accomplished by installing and configuring a secondary VSM.

For instructions on how to install and configure a primary Cisco 1000v VSM in your vSphere environment, follow
configure-vsphere-and-cisco-nexus-1000v-connecting-to-nexus-5k-upstream-switches

Then come back to this post to learn how to install and configure a secondary VSM for high availability.

1) Check the redundancy status of your primary VSM

n1kv# show system redundancy status
Redundancy role
---------------
      administrative:   primary
         operational:   primary

Redundancy mode
---------------
      administrative:   HA
         operational:   None

This supervisor (sup-1)
-----------------------
    Redundancy state:   Active
    Supervisor state:   Active
      Internal state:   Active with no standby                  

Other supervisor (sup-2)
------------------------
    Redundancy state:   Not present

// Check Modules

n1kv# show module
Mod  Ports  Module-Type                      Model              Status
---  -----  -------------------------------- ------------------ ------------
1    0      Virtual Supervisor Module        Nexus1000V         active *
3    248    Virtual Ethernet Module          NA                 ok
4    248    Virtual Ethernet Module          NA                 ok
5    248    Virtual Ethernet Module          NA                 ok

Mod  Sw               Hw      
---  ---------------  ------  
1    4.0(4)SV1(3b)    0.0    
3    4.0(4)SV1(3b)    1.20   
4    4.0(4)SV1(3b)    1.20   
5    4.0(4)SV1(3b)    1.20   

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
3    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
4    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
5    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    192.168.10.10    NA                                    NA
3    192.168.16.82    xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  esx1.example.com
4    192.168.16.53    xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  esx2.example.com
5    192.168.16.149   xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  esx3.example.com


* this terminal session 

// check HA status

n1kv# show system redundancy ha status
VDC No    This supervisor                         Other supervisor                        
------    ---------------                         ---------------
                        
vdc 1     Active with no standby                  N/A     

2) Install the secondary VSM from the OVF.
Select Manually Configure Nexus 1000v and, just like the primary installation, select the right VLANs for Control, Packet and Management.

When you get to the properties page:

Do not fill in any of the fields; just click Next and then Finish.

3) Power on the Secondary VSM
The system setup script will prompt for the following:

Admin password // Choose your password
VSM Role: secondary
Domain ID: 100 // This must be the same domain ID you gave the primary; I used 100

Once a VSM is set to secondary it will reboot.

4) Verify VSM high availability
Log in to the VSM and run:

n1kv# show system redundancy status
Redundancy role
---------------
      administrative:   primary
         operational:   primary

Redundancy mode
---------------
      administrative:   HA
         operational:   HA

This supervisor (sup-1)
-----------------------
    Redundancy state:   Active
    Supervisor state:   Active
      Internal state:   Active with HA standby                  

Other supervisor (sup-2)
------------------------
    Redundancy state:   Standby
    Supervisor state:   HA standby
      Internal state:   HA standby

n1kv# show module
Mod  Ports  Module-Type                      Model              Status
---  -----  -------------------------------- ------------------ ------------
1    0      Virtual Supervisor Module        Nexus1000V         active *
2    0      Virtual Supervisor Module        Nexus1000V         ha-standby
3    248    Virtual Ethernet Module          NA                 ok
4    248    Virtual Ethernet Module          NA                 ok
5    248    Virtual Ethernet Module          NA                 ok

Mod  Sw               Hw      
---  ---------------  ------  
1    4.0(4)SV1(3b)    0.0    
2    4.0(4)SV1(3b)    0.0    
3    4.0(4)SV1(3b)    1.20   
4    4.0(4)SV1(3b)    1.20   
5    4.0(4)SV1(3b)    1.20   

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
2    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
3    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
4    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
5    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    192.168.10.10    NA                                    NA
2    192.168.10.10    NA                                    NA
3    192.168.16.82    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX  esx1.example.com
4    192.168.16.53    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX  esx2.example.com
5    192.168.16.149   XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX  esx3.example.com


* this terminal session 
n1kv# show system redundancy ha status
VDC No    This supervisor                         Other supervisor                        
------    ---------------                         ---------------
                        
vdc 1     Active with HA standby                  HA standby 

VMware recommends that you run the primary and the secondary VSMs on different ESX hosts.

5) Test VSM switchover
From the VSM, run the following command to switch roles between the active and the standby VSM:
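
n1kv# system switchover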

That is it, now you have a highly available Cisco 1000v VSM infrastructure.

The following guide describes the necessary steps to install and configure a pair of Cisco Nexus 1000v switches for use in a vSphere cluster.
These will connect to Cisco Nexus 5020 upstream switches.

In this guide the hardware used consists of:

Hardware:
3x HP ProLiant DL380 G6 with two 4-port NICs
2x Cisco Nexus 5020 switches

Software:
vSphere 4 Update 1 Enterprise Plus (needed to use the Cisco Nexus 1000v)
vCenter installed as a virtual machine – 192.168.10.10 (on VLAN 10)
Cisco Nexus 1000v 4.0.4.SV1.3b –
Primary 192.168.101.10, domain id 100 (on VLAN 101)

I am assuming you have already installed and configured vCenter and the ESX cluster.

Cisco recommends that you use 3 separate VLANs for Nexus traffic. I am using the following VLANs:

100 – Control – Control connectivity between Nexus 1000V VSM and VEMs (Non Routable)
101 – Management – ssh/telnet/scp to the Cisco Nexus 1000v interface mgmt0 (Routable)
102 – Packet – Internal packet connectivity between the Nexus 1000v VSM and VEMs (Non Routable)

And I will also use VLAN 10 and 20 for VM traffic (10 for Production, 20 for Development)

1) Install vSphere (I assume you have done this step)

2) Configure Cisco Nexus 5020 Upstream Switchports

You need to configure the ports on the upstream switches to pass VLAN information to the ESX hosts' uplink NICs.

On the Nexus 5020s, run the following for each ESX uplink port:

// These commands give a description to the port and allow trunking of VLANs.
// The allowed VLANs are listed
// spanning-tree port type edge trunk is the recommended spanning-tree type

interface Ethernet1/1/10
description "ESX1-eth0"
switchport mode trunk
switchport trunk allowed vlan 10-20,100-102
spanning-tree port type edge trunk

3) Service Console VLAN !!!

When I installed the ESX server I used the native VLAN, but after changing the switch port from switchport mode access to switchport mode trunk, the ESX server must be told which VLAN to use for Service Console traffic.
My Service Console IP is 192.168.10.11 on VLAN 10, so console into the ESX host and enter the following:

[root@esx1]# esxcfg-vswitch -v 10 -p "Service Console" vSwitch0

4) Add Port Groups for the Control, Packet and Management VLANs.
I added these Port Groups to the standard virtual switch vSwitch0 on all the ESX hosts. Make sure to select the right VLANs for your environment.

5) Now that you have configured the Control, Packet and Management Port Groups with their respective VLANs, you can install the Cisco Nexus 1000v.
I chose to install the virtual appliance (OVA) file downloaded from Cisco. The installation is very simple: make sure to select Manually Configure Nexus 1000v and to map the VLANs to Control, Packet and Management. The rest is just like installing a regular virtual appliance.

6) Power on and open a console window to the Nexus 1000v VM (appliance) you just installed. A setup script will run and ask you a few questions:

admin password
domain ID // This is used to identify the VSM and VEM. If you want to have 2 Nexus 1000v VSMs for high availability, both will use the same domain ID. I chose 100
High Availability mode // If you plan to use 2 VSMs for high availability, select primary for the first installation; otherwise select standalone
Network information // IP, netmask, gateway, and so on. Disable Telnet! Enable SSH!
We will configure the rest later (not from the setup script)

7) Register the vCenter Nexus 1000v Plug-in
Once you have the Nexus 1000v basics configured, you should be able to access it; try to SSH to it (hopefully you enabled SSH).
Open a browser and point it to the Nexus 1000v management IP address (in this case 192.168.101.10) and you will get a page like the following:

  • Download the cisco_nexus_1000v_extension.xml
  • Open vSphere client and connect to the vCenter.
  • Go to Plug-ins > Manage Plug-ins
  • Right-click under Available Plug-ins, select New Plug-in, and browse to the cisco_nexus_1000v_extension.xml
  • Click Register Plug-in (disregard security warning about new SSL cert)

You do NOT need to Download and Install the Plug-in, just Register it.

Now we can start the “advanced” configuration of the Nexus 1000v

8) Configure SVS domain ID on VSM

n1kv(config)# svs-domain
n1kv(config-svs-domain)# domain id 100
n1kv(config-svs-domain)# exit

9) Configure Control and Packet VLANs

n1kv(config)# svs-domain
n1kv(config-svs-domain)# control vlan 100
n1kv(config-svs-domain)# packet vlan 102
n1kv(config-svs-domain)# svs mode L2
n1kv(config-svs-domain)# exit

10) Connect Nexus 1000v to vCenter
In this step we are defining the SVS connection which is the link between the VSM and vCenter.

n1kv(config)# svs connection vcenter
n1kv(config-svs-conn)# protocol vmware-vim
n1kv(config-svs-conn)# vmware dvs datacenter-name myDatacenter
n1kv(config-svs-conn)# remote ip address 192.168.10.10
n1kv(config-svs-conn)# connect
n1kv(config-svs-conn)# exit
n1kv(config)# exit
n1kv# copy run start

// Verify the SVS connection

n1kv# show svs connections vcenter

connection vcenter:
    ip address: 192.168.10.10
    remote port: 80
    protocol: vmware-vim https
    certificate: default
    datacenter name: myDatacenter
    DVS uuid: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    config status: Enabled
    operational status: Connected
    sync status: Complete
    version: VMware vCenter Server 4.0.0 build-258672

11) Create the VLANs on the VSM

n1kv# conf t
n1kv(config)# vlan 100
n1kv(config-vlan)# name Control
n1kv(config-vlan)# exit
n1kv(config)# vlan 102
n1kv(config-vlan)# name Packet
n1kv(config-vlan)# exit
n1kv(config)# vlan 101
n1kv(config-vlan)# name Management
n1kv(config-vlan)# exit
n1kv(config)# vlan 10
n1kv(config-vlan)# name Production
n1kv(config-vlan)# exit
n1kv(config)# vlan 20
n1kv(config-vlan)# name Development
n1kv(config-vlan)# exit

// Verify VLANs

n1kv(config)# show vlan
VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active    
10   Production                       active    
20   Development                      active    
100  Control                          active 
101  Management                       active   
102  Packet                           active    


VLAN Type
---- -----
1    enet  
10   enet  
20   enet  
100  enet  
101  enet  
102  enet  

12) Create Uplink Port-Profile
The Cisco Nexus 1000v acts like a VMware DVS. Before you can add hosts to the Nexus 1000v, you need to create an uplink port-profile, which allows the VEMs to communicate with the VSM.

n1kv(config)# port-profile system-uplink
n1kv(config-port-prof)# switchport mode trunk
n1kv(config-port-prof)# switchport trunk allowed vlan 10,20,100-102
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# system vlan 100,102
n1kv(config-port-prof)# vmware port-group dv-system-uplink
n1kv(config-port-prof)# capability uplink
n1kv(config-port-prof)# state enabled

// Verify Uplink Port-Profile

n1kv(config-port-prof)# show port-profile name system-uplink
port-profile system-uplink
  description: 
  type: ethernet
  status: enabled
  capability l3control: no
  pinning control-vlan: -
  pinning packet-vlan: -
  system vlans: 100,102
  port-group: dv-system-uplink
  max ports: -
  inherit: 
  config attributes:
    switchport mode trunk
    switchport trunk allowed vlan 10-20,100-102
    no shutdown
  evaluated config attributes:
    switchport mode trunk
    switchport trunk allowed vlan 10-20,100-102
    no shutdown
  assigned interfaces:

13) It is now time to install the VEM on the ESX hosts.
The preferred way to do this is with VUM (VMware Update Manager). If you have VUM in the system, the installation is very simple.
Simply go to Home -> Inventory -> Networking.
Right-click on the Nexus switch and select Add Host.

// Verify that the task is successful

// Also take a look at the VSM console

n1kv# 2011 Jan 14 14:43:03 n1kv %PLATFORM-2-MOD_PWRUP: Module 3 powered up (Serial number )

n1kv# show module
Mod  Ports  Module-Type                      Model              Status
---  -----  -------------------------------- ------------------ ------------
1    0      Virtual Supervisor Module        Nexus1000V         active *
3    248    Virtual Ethernet Module          NA                 ok

Mod  Sw               Hw      
---  ---------------  ------  
1    4.0(4)SV1(3b)    0.0    
3    4.0(4)SV1(3b)    1.20   

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         
3    xx-xx-xx-xx-xx-xx to xx-xx-xx-xx-xx-xx  NA         

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    192.168.101.10   NA                                    NA
3    192.168.11.82    XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX  esx1


* this terminal session 

// Do the same for all the other ESX Hosts

14) Create the Port-Profile(s) (VMware Port-Groups)
Port-profiles configure interfaces on the VEM.
From the VMware point of view, a port-profile is represented as a port-group.

// The Port-Profile below will be the VLAN 10 PortGroup on vCenter

n1kv# conf t
n1kv(config)# port-profile VLAN_10
n1kv(config-port-prof)# vmware port-group
n1kv(config-port-prof)# switchport mode access
n1kv(config-port-prof)# switchport access vlan 10
n1kv(config-port-prof)# vmware max-ports 200 // By default it has only 32 ports, I want 200 available
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# state enabled
n1kv(config-port-prof)# exit

15) Select the Port Group you want your VM to connect to

16) Verify Port Profiles/Port Groups from the VSM console

n1kv# show port-profile usage 

-------------------------------------------------------------------------------
Port Profile               Port        Adapter        Owner
-------------------------------------------------------------------------------
VLAN_10                    Veth1       Net Adapter 1  jeos_10                  
VLAN_20                    Veth2       Net Adapter 1  jeos_20                  
system-uplink              Eth3/5      vmnic4         esx1.example.com        
                           Eth3/6      vmnic5         esx1.example.com        
                           Eth3/9      vmnic8         esx1.example.com        
                           Eth3/10     vmnic9         esx1.example.com        
                           Eth4/5      vmnic4         esx2.example.com        
                           Eth4/6      vmnic5         esx2.example.com        
                           Eth4/9      vmnic8         esx2.example.com        
                           Eth4/10     vmnic9         esx2.example.com 

At this point you are ready to use the Cisco 1000v, but if you plan to run it in a production environment, it is strongly recommended that you run the VSM in High Availability mode.
Follow this post to learn how to install and configure VSM High Availability:
cisco-nexus-1000v-vsm-high-availability

NetApp appliances support link aggregation of their network interfaces; NetApp calls the link aggregate a VIF (Virtual Interface). A VIF provides fault tolerance, load balancing and higher throughput.

NetApp supports the following Link Aggregation modes:

From the NetApp documentation:
Single-mode vif
In a single-mode vif, only one of the interfaces in the vif is active. The other interfaces are on standby, ready to take over if the active interface fails.
Static multimode vif
The static multimode vif implementation in Data ONTAP is in compliance with IEEE 802.3ad (static). Any switch that supports aggregates, but does not have control packet exchange for configuring an aggregate, can be used with static multimode vifs.
Dynamic multimode vif
Dynamic multimode vifs can detect not only the loss of link status (as do static multimode vifs), but also a loss of data flow. This feature makes dynamic multimode vifs compatible with high-availability environments. The dynamic multimode vif implementation in Data ONTAP is in compliance with IEEE 802.3ad (dynamic), also known as Link Aggregation Control Protocol (LACP).
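
For reference, the basic Data ONTAP 7-mode syntax for creating each vif type looks like this (a sketch; the vif and interface names are just examples, and -b selects the load-balancing method for the multimode types):

filer> vif create single vif0 e0a e0b
filer> vif create multi vif0 -b ip e0a e0b
filer> vif create lacp vif0 -b ip e0a e0b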

In this guide I will set up a Dynamic multimode vif between the NetApp system and the Cisco switches using LACP.

I am working with the following hardware:

  • 2x NetApp FAS3040c in an active-active cluster
    With Dual 10G Ethernet Controller T320E-SFP+
  • 2x Cisco WS-C6509 configured as one Virtual Switch (using VSS)
    With Ten Gigabit Ethernet interfaces

Cisco Configuration:

Port-Channel(s) configuration:
// I am using Port-Channel 8 and 9 for this configuration
// And I need my filers to be in VLAN 10

!
interface Port-channel8
description LACP multimode VIF for filer1-10G
switchport
switchport access vlan 10
switchport mode access
!
interface Port-channel9
description LACP multimode VIF for filer2-10G
switchport
switchport access vlan 10
switchport mode access
!

Interface Configuration:
// Since I am using VSS, my 2 Cisco 6509 look like 1 Virtual Switch
// For example: interface TenGigabitEthernet 2/10/4 means:
// interface 4, on blade 10, on the second 6509

!
interface TenGigabitEthernet1/10/1
description "filer1_e1a_net 10G"
switchport access vlan 10
switchport mode access
channel-group 8 mode active
spanning-tree portfast
!
!
interface TenGigabitEthernet2/10/1
description "filer1_e1b_net 10G"
switchport access vlan 10
switchport mode access
channel-group 8 mode active
spanning-tree portfast
!
!
interface TenGigabitEthernet1/10/2
description "filer2_e1a_net 10G"
switchport access vlan 10
switchport mode access
channel-group 9 mode active
spanning-tree portfast
!
!
interface TenGigabitEthernet2/10/2
description "filer2_e1b_net 10G"
switchport access vlan 10
switchport mode access
channel-group 9 mode active
spanning-tree portfast
!

Check the Cisco configuration

6509-1#sh etherchannel sum
...
Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
...
8    Po8(SU)       LACP      Te1/10/1(P)     Te2/10/1(P)     
9    Po9(SU)       LACP      Te1/10/2(P)    Te2/10/2(P)    
...

NetApp Configuration:

filer1>vif create lacp net10G -b ip e1a e1b
filer1>ifconfig net10G 10.0.0.100 netmask 255.255.255.0
filer1>ifconfig net10G up

filer2>vif create lacp net10G -b ip e1a e1b
filer2>ifconfig net10G 10.0.0.200 netmask 255.255.255.0
filer2>ifconfig net10G up

Don't forget to make the change persistent:

Filer1: /etc/rc
hostname FILER1
vif create lacp net10G -b ip e1b e1a
ifconfig net10G `hostname`-net10G mediatype auto netmask 255.255.255.0 partner net10G
route add default 10.0.0.1 1
routed on
options dns.domainname example.com
options dns.enable on
options nis.enable off
savecore

Filer2: /etc/rc
hostname FILER2
vif create lacp net10G -b ip e1b e1a
ifconfig net10G `hostname`-net10G mediatype auto netmask 255.255.255.0 partner net10G
route add default 10.0.0.1 1
routed on
options dns.domainname example.com
options dns.enable on
options nis.enable off
savecore

Check the NetApp configuration

FILER1> vif status net10G
default: transmit 'IP Load balancing', VIF Type 'multi_mode', fail 'log'
net10G: 2 links, transmit 'IP Load balancing', VIF Type 'lacp' fail 'default'
         VIF Status     Up      Addr_set 
        up:
        e1a: state up, since 05Nov2010 12:37:59 (00:06:23)
                mediatype: auto-10g_sr-fd-up
                flags: enabled
                active aggr, aggr port: e1b
                input packets 1338, input bytes 167892
                input lacp packets 101, output lacp packets 113
                output packets 203, output bytes 20256
                up indications 13, broken indications 6
                drops (if) 0, drops (link) 0
                indication: up at 05Nov2010 12:37:59
                        consecutive 0, transitions 22
        e1b: state up, since 05Nov2010 12:34:56 (00:09:26)
                mediatype: auto-10g_sr-fd-up
                flags: enabled
                active aggr, aggr port: e1b
                input packets 3697, input bytes 471398
                input lacp packets 89, output lacp packets 98
                output packets 153, output bytes 14462
                up indications 10, broken indications 4
                drops (if) 0, drops (link) 0
                indication: up at 05Nov2010 12:34:56
                        consecutive 0, transitions 17


Some time ago I built a secondary VMware cluster for some specific testing.
From the primary VMware cluster I copied a virtual machine over SCP to the new secondary VMware cluster.

I then booted up the virtual machine on the new secondary cluster and experienced network connectivity issues.

The problem was that the virtual machine had the same MAC address as the original virtual machine on the main site, and both were running on the same VLAN.

When VMware prompts you to say whether you Moved or Copied a virtual machine, make sure you answer that you copied it, so that it generates new values for the following unique attributes:

uuid.location
uuid.bios
ethernet0.generatedAddress

In this case there was no prompt, so I had to make the following changes to the virtual machine's configuration file so that new identifiers are generated on the next boot.

1) Power off Virtual Machine

2) Go to the Service Console and open the configuration file for the virtual machine in question:

[root@esx4 ~]# vi /vmfs/volumes/[datastore]/[vmname]/[vmname].vmx

Delete the following lines:
uuid.location
uuid.bios
ethernet0.generatedAddress

3) Power on Virtual Machine and new values will be generated.
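
If you have several copied VMs to fix, you can also remove the three lines non-interactively. A minimal sketch using sed (assuming GNU sed is available on the Service Console; -i.bak keeps a backup of the original file, and the bracketed path placeholders are the same as above):

[root@esx4 ~]# sed -i.bak '/^uuid.location/d; /^uuid.bios/d; /^ethernet0.generatedAddress/d' /vmfs/volumes/[datastore]/[vmname]/[vmname].vmx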

This guide helps you measure the network throughput and bandwidth between two hosts on the same network, on different networks, and across different data centers.

This specifically helped me when I needed to know how much throughput the company network had between the headquarters data center and the Disaster Recovery data center, which were in different states, and I wanted to calculate how long it would take to replicate our SAN between the sites: about 60TB of data.

The tool I used for this is called iperf (http://sourceforge.net/projects/iperf/). It includes both the server and the client; I am running it on a RHEL 5.3 system.

You can find a RHEL/Centos binary for this tool at http://dag.wieers.com/rpm/packages/iperf/

A Java-based graphical iperf tool can be found at http://code.google.com/p/xjperf/downloads/list; it can be run on a Windows system with the Java Runtime Environment.

Now let’s get to the steps on how to measure network throughput.

1) Set up the iperf server

I am using RHEL 5.3 for the server.

Install iperf:

[root@remote]# rpm -Uvh http://dag.wieers.com/rpm/packages/iperf/iperf-2.0.2-2.el5.rf.x86_64.rpm

Run iperf as server:

[root@remote ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------

Now that the iperf server is running, install the client at the other office.

2) Set up the iperf client on RHEL

Install iperf on RHEL:

[root@local]# rpm -Uvh http://dag.wieers.com/rpm/packages/iperf/iperf-2.0.2-2.el5.rf.x86_64.rpm

Run iperf as client on RHEL:

iperf has many great options; you can see all of them by running # iperf -h

I usually use the following options to determine throughput:

[root@local ~]# iperf -c 10.3.3.3 -fk // 10.3.3.3 is the remote server; -fk reports results in Kbits/sec
------------------------------------------------------------
Client connecting to 10.3.3.3, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.2.2.2 port 38124 connected with 10.3.3.3 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.3 sec 37680 KBytes 30029 Kbits/sec

Server screen Output:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 10.3.3.3 port 5001 connected with 10.2.2.2 port 38124
[ 4] 0.0-10.9 sec 36.8 MBytes 28.3 Mbits/sec
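
On slow or high-latency WAN links, a single short TCP test can understate the available bandwidth, so it can also help to run a longer test with parallel streams. A sketch (-t sets the test duration in seconds, -P the number of parallel client streams):

[root@local ~]# iperf -c 10.3.3.3 -fk -t 60 -P 4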

Using the -r option to “Do a bidirectional test individually”

[root@local ~]# iperf -c 10.3.3.3 -fk -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.3.3.3, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 5] local 10.2.2.2 port 41973 connected with 10.3.3.3 port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.1 sec 27744 KBytes 22580 Kbits/sec
[ 4] local 10.2.2.2 port 5001 connected with 10.3.3.3 port 55521
[ 4] 0.0-10.1 sec 48160 KBytes 39093 Kbits/sec

Server screen Output:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 10.3.3.3 port 5001 connected with 10.2.2.2 port 41973
[ 4] 0.0-10.7 sec 27.1 MBytes 21.3 Mbits/sec
------------------------------------------------------------
Client connecting to 10.2.2.2, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.3.3.3 port 55521 connected with 10.2.2.2 port 5001
[ 4] 0.0-10.0 sec 47.0 MBytes 39.3 Mbits/sec

Using the -d option to “Do a bidirectional test simultaneously”

[root@local ~]# iperf -c 10.3.3.3 -fk -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.3.3.3, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 5] local 10.2.2.2 port 41974 connected with 10.3.3.3 port 5001
[ 4] local 10.2.2.2 port 5001 connected with 10.3.3.3 port 40886
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.1 sec 37872 KBytes 30648 Kbits/sec
[ 4] 0.0-10.4 sec 12856 KBytes 10132 Kbits/sec

Server screen Output:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 10.3.3.3 port 5001 connected with 10.2.2.2 port 41974
------------------------------------------------------------
Client connecting to 10.2.2.2, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 6] local 10.3.3.3 port 40886 connected with 10.2.2.2 port 5001
[ 6] 0.0-10.4 sec 12.6 MBytes 10.2 Mbits/sec
[ 4] 0.0-10.7 sec 37.0 MBytes 28.9 Mbits/sec

3) Set up the iperf client on Windows

Download the JPerf client http://xjperf.googlecode.com/files/jperf-2.0.2.zip

Unzip and run jperf.bat; you will see a graphical interface. Just enter the IP of the server and you are good to go.

You can adjust the client's options (Dual = -d, Trade = -r in the command-line client):

Server screen Output:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 10.3.3.3 port 5001 connected with 10.4.4.4 port 2606
[ 4] 0.0-10.0 sec 2.52 MBytes 2.12 Mbits/sec