Technologist

Tech stuff about Cloud, DevOps, SysAdmin, Virtualization, SAN, Hardware, Scripting, Automation and Development

This post will help you configure multipathing on RHEL 5.3 for LUNs carved from a NetApp SAN. For this guide I am using an HP C-Class blade system with QLogic HBA cards.

1) Make sure you have the packages needed by RHEL, otherwise install them.

rpm -q device-mapper
rpm -q device-mapper-multipath
yum install device-mapper
yum install device-mapper-multipath
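
If you prefer to script the check, a minimal sketch (assuming your yum repositories are configured):

// Install each package only if the rpm query shows it missing
rpm -q device-mapper > /dev/null || yum -y install device-mapper
rpm -q device-mapper-multipath > /dev/null || yum -y install device-mapper-multipath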

2) Install the QLogic drivers if needed, or use the RHEL built-in drivers. In my case I am using HP C-Class blades with QLogic HBA cards. HP's driver package, called hp_sansurfer, can be found on the HP site. I am using the RHEL built-in drivers, but you can install the HP/QLogic drivers as follows:

rpm -Uvh hp_sansurfer-5.0.1b45-1.x86_64.rpm

3) If you have QLogic HBAs, install the SanSurfer CLI. This is a very useful program for working with QLogic HBA cards; it can be downloaded from the QLogic website. Install it as follows:

rpm -Uvh scli-1.7.3-14.i386.rpm

4) Install the NetApp Host Utilities Kit. The package is a tar.gz file; you can find it on the NetApp NOW site at http://now.netapp.com.

Extract it and run the install shell script:

netapp_linux_host_utilities_5_0.tar.gz
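
A minimal sketch of the unpack-and-install steps (the extracted directory name and the installer script name are assumptions and may differ between kit versions):

// Unpack the Host Utilities Kit and run its installer
tar -xzf netapp_linux_host_utilities_5_0.tar.gz
cd netapp_linux_host_utilities_5_0
./install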

5) Once everything is installed on the host, create the LUN on the NetApp and zone it across the Brocade (SAN fabric) so the host can reach it.

To find your WWPNs, use scli as follows:
# scli -i all
// Use the WWPN numbers for the igroup and Brocade aliases
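
For reference, the NetApp side of step 5 looks roughly like this in Data ONTAP 7-mode syntax (the igroup name and the WWPNs are illustrative placeholders; take the real WWPNs from the scli output above):

NETAPPFILER> igroup create -f -t linux server_igroup 21:00:00:e0:8b:xx:xx:xx
NETAPPFILER> lun create -s 100g -t linux /vol/servervol/serverlun
NETAPPFILER> lun map /vol/servervol/serverlun server_igroup 0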

6) Once the LUN has been zoned and mapped correctly, verify that your RHEL host can see it.

// Rescan the HBAs for new SAN LUNs

# modprobe -r qla2xxx
# modprobe qla2xxx
// Check the kernel can see it
# cat /proc/scsi/scsi
# fdisk -lu
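
As an alternative to unloading the qla2xxx module (which briefly drops every path, so avoid it on SAN-booted hosts), you can trigger a sysfs rescan per HBA; the hostN numbers vary per system and can be listed with ls /sys/class/scsi_host:

// Rescan each HBA without reloading the driver
echo "- - -" > /sys/class/scsi_host/host4/scan
echo "- - -" > /sys/class/scsi_host/host5/scan
echo "- - -" > /sys/class/scsi_host/host6/scan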

7) Utilize NetApp tools to see LUN connectivity

// Check your host and utilities see the LUNs
[root@server ~]# sanlun lun show
controller:      lun-pathname              device filename  adapter  protocol  lun size             lun state
NETAPPFILER:  /vol/servervol/serverlun     /dev/sdf         host6    FCP       100g (107374182400)  GOOD
NETAPPFILER:  /vol/servervol/serverlun     /dev/sda         host4    FCP       100g (107374182400)  GOOD
NETAPPFILER:  /vol/servervol/serverlun     /dev/sde         host6    FCP       100g (107374182400)  GOOD
NETAPPFILER:  /vol/servervol/serverlun     /dev/sdc         host5    FCP       100g (107374182400)  GOOD
NETAPPFILER:  /vol/servervol/serverlun     /dev/sdd         host5    FCP       100g (107374182400)  GOOD
NETAPPFILER:  /vol/servervol/serverlun     /dev/sdb         host4    FCP       100g (107374182400)  GOOD

8) Utilize NetApp tools to check multipathing (not configured yet)

[root@server ~]# sanlun lun show -p
NETAPPFILER:/vol/servervol/serverlun (LUN 0)                Lun state: GOOD
Lun Size:    100g (107374182400) Controller_CF_State: Cluster Enabled
Protocol: FCP           Controller Partner: NETAPPFILER2
Multipath-provider: NONE
--------- ---------- ------- ------------ --------------------------------------------- ---------------
   sanlun Controller                                                            Primary         Partner
   path         Path   /dev/         Host                                    Controller      Controller
   state        type    node          HBA                                          port            port
--------- ---------- ------- ------------ --------------------------------------------- ---------------
     GOOD  primary       sdf        host6                                            0c              --
     GOOD  secondary     sda        host4                                            --              0c
     GOOD  secondary     sde        host6                                            --              0c
     GOOD  secondary     sdc        host5                                            --              0d
     GOOD  primary       sdd        host5                                            0d              --
     GOOD  primary       sdb        host4                                            0c              --

Time to configure multipathing

9) Start the multipath daemon

# service multipathd start

10) Find your WWID; you will need it in the configuration if you want to alias the device.

Comment out the blacklist section in the default /etc/multipath.conf, otherwise you will NOT see anything:

#blacklist {
#        devnode "*"
#}
// Show your devices and paths, and record the WWID of the LUN
# multipath -v3
...
...
===== paths list =====
uuid                              hcil    dev dev_t pri dm_st  chk_st  vend/pr
360a98000486e576748345276376a4d41 4:0:0:0 sda 8:0   1   [undef][ready] NETAPP,
360a98000486e576748345276376a4d41 4:0:1:0 sdb 8:16  4   [undef][ready] NETAPP,
360a98000486e576748345276376a4d41 5:0:0:0 sdc 8:32  1   [undef][ready] NETAPP,
360a98000486e576748345276376a4d41 5:0:1:0 sdd 8:48  4   [undef][ready] NETAPP,
360a98000486e576748345276376a4d41 6:0:0:0 sde 8:64  1   [undef][ready] NETAPP,
360a98000486e576748345276376a4d41 6:0:1:0 sdf 8:80  4   [undef][ready] NETAPP,
...
...
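
You can also print the WWID of a single path device directly; on RHEL 5 this is the same scsi_id invocation that multipath.conf uses in getuid_callout:

# /sbin/scsi_id -g -u -s /block/sda
360a98000486e576748345276376a4d41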

11) Now you are ready to configure /etc/multipath.conf

Exclude (blacklist) all the devices that do not correspond to any LUNs configured on the storage controller and mapped to your Linux host. There are two methods:
Block by WWID
Block by devnode
In this case I am blocking by devnode, since I am using HP hardware and know my devnode regex. Also configure the device section and an alias (optional).
The full /etc/multipath.conf will look like this:


defaults {
        user_friendly_names yes
        max_fds max
        queue_without_daemon no
}
blacklist {
        ###devnode "*"
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^cciss!c[0-9]d[0-9]*"  # Note the cciss, usual in HP
}
multipaths {
        multipath {
                wwid    360a98000486e576748345276376a4d41    # The WWID you found in step 10
                alias   netapp # How you want to name the device on your host
                               # (server LUN on NETAPPFILER)
        }
}
devices {
        device {
                vendor "NETAPP"
                product "LUN"
                getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout "/sbin/mpath_prio_ontap /dev/%n"
                features "1 queue_if_no_path"
                hardware_handler "0"
                path_grouping_policy group_by_prio
                failback immediate
                rr_weight uniform
                rr_min_io 128
                path_checker directio
                flush_on_last_del yes
        }
}

12) Restart multipath and make sure it starts automatically:

// Restart multipath
# service multipathd restart
// Add to startup
# chkconfig --add multipathd
# chkconfig multipathd on

13) Verify multipath is working

//RHEL tools 
[root@server scli]# multipath -l
netapp (360a98000486e576748345276376a4d41) dm-2 NETAPP,LUN
[size=100G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 4:0:1:0 sdb 8:16  [active][undef]
 \_ 5:0:1:0 sdd 8:48  [active][undef]
 \_ 6:0:1:0 sdf 8:80  [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 4:0:0:0 sda 8:0   [active][undef]
 \_ 5:0:0:0 sdc 8:32  [active][undef]
 \_ 6:0:0:0 sde 8:64  [active][undef]
//NetApp utilities Tool 
 [root@server scli]# sanlun lun show -p
NETAPPFILER:/vol/servervol/serverlun (LUN 0)                Lun state: GOOD
Lun Size:    100g (107374182400) Controller_CF_State: Cluster Enabled
Protocol: FCP           Controller Partner: NETAPPFILER2
DM-MP DevName: netapp   (360a98000486e576748345276376a4d41)     dm-2
Multipath-provider: NATIVE
--------- ---------- ------- ------------ --------------------------------------------- ---------------
   sanlun Controller                                                            Primary         Partner
   path         Path   /dev/         Host                                    Controller      Controller
   state        type    node          HBA                                          port            port
--------- ---------- ------- ------------ --------------------------------------------- ---------------
     GOOD  primary       sdb        host4                                            0c              --
     GOOD  primary       sdd        host5                                            0d              --
     GOOD  primary       sdf        host6                                            0c              --
     GOOD  secondary     sda        host4                                            --              0c
     GOOD  secondary     sdc        host5                                            --              0d
     GOOD  secondary     sde        host6                                            --              0c
...

14) Now you can access the LUN by using the mapper

 [root@server scli]# ls -l /dev/mapper
total 0
crw------- 1 root root  10, 63 Sep 12 12:32 control
brw-rw---- 1 root disk 253,  2 Sep 16 10:54 netapp
brw-rw---- 1 root disk 253,  0 Sep 12 16:32 VolGroup00-LogVol00
brw-rw---- 1 root disk 253,  1 Sep 12 12:32 VolGroup00-LogVol01

15) Format it to your liking and mount it

# mkdir /mnt/netapp
# mkfs -t ext3 /dev/mapper/netapp
# mount /dev/mapper/netapp /mnt/netapp/
//verify it mounted
# mount
...
...
/dev/mapper/netapp on /mnt/netapp type ext3 (rw)
...

16) If you want it to be persistent across reboots, put it in /etc/fstab and make sure multipathd starts automatically.

# cat /etc/fstab
...
...
/dev/mapper/netapp      /mnt/netapp             ext3    defaults        0 0
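
Before rebooting, you can sanity-check the new fstab entry by unmounting the filesystem and letting mount re-read /etc/fstab:

# umount /mnt/netapp
# mount -a
# mount | grep netapp
/dev/mapper/netapp on /mnt/netapp type ext3 (rw)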

17) If possible, reboot to check that everything mounts correctly after a reboot.

You have added a new disk, increased the size of your LUN, or increased the size of a virtual disk in the case of virtual machines, and now you need to grow the partition, the logical volume, and the filesystem in order to be able to use the new space.

In this post I go through the steps necessary to make this happen in a RHEL 5.3 system.

The LUN I will increase was 20GB and had an LVM partition on it. I decided to increase the LUN size to 72GB, and this is how it looks now.

[root@server~]# fdisk -lu
Disk /dev/sdb: 77.3 GB, 77309411328 bytes
255 heads, 63 sectors/track, 9399 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1        2611    20971488   8e  Linux LVM

I need to perform the following steps in order to be able to use the new space.

1. Increase the size of the partition using fdisk

[root@server ~]# fdisk /dev/sdb

Command (m for help): u //Change the display to sectors
Changing display/entry units to sectors
Command (m for help): p //Print the current partition table for that drive
Disk /dev/sdb: 77.3 GB, 77309411328 bytes
255 heads, 63 sectors/track, 9399 cylinders, total 150994944 sectors
Units = sectors of 1 * 512 = 512 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *          64    41943039    20971488   8e  Linux LVM
Command (m for help): d //Delete the partition information; we will recreate it
Selected partition 1
Command (m for help): n //Create partition
Command action
e extended
p primary partition (1-4)
p //In this case it is primary
Partition number (1-4): 1 // In this case it is the first partition on the drive
First sector (63-150994943, default 63): 64 //Align partition if used on NetApp
Last sector or +size or +sizeM or +sizeK (64-150994943, default 150994943):
Using default value 150994943
Command (m for help): t //Change type from Linux(default) to Linux LVM
Selected partition 1
Hex code (type L to list codes): 8e //Linux LVM partition type
Changed system type of partition 1 to 8e (Linux LVM)
Command (m for help): p //Print again to double check
Disk /dev/sdb: 77.3 GB, 77309411328 bytes
255 heads, 63 sectors/track, 9399 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        9400    75497440   8e  Linux LVM
Command (m for help): w //Write the partition table
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

2. You need to reboot for the changes to take effect, or just run

server# partprobe

3. Make LVM acknowledge the new space

[root@server ~]# pvresize /dev/sdb1

4. Check that the Volume group shows the new space

[root@server ~]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  vg0    1   2   0 wz--n- 71.97G 52.00G

5. Extend the logical volume, making it a total of 28G in this example:

[root@server~]# lvresize -L 28G /dev/mapper/vg0-lvhome
Extending logical volume lvhome to 28.00 GB
Logical volume lvhome successfully resized

You can also take all the free space available

[root@server ~]# lvresize -l +100%FREE /dev/mapper/vg0-lvhome
Extending logical volume lvhome to 67.97 GB
Logical volume lvhome successfully resized

6. Use the rest for whatever partition you want

[root@server~]# lvcreate -l 100%FREE -n lvdata vg0
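
The new logical volume still needs a filesystem before you can mount it; a minimal sketch, using the lvdata volume created above:

[root@server~]# mkfs -t ext3 /dev/mapper/vg0-lvdata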

7. Resize the Filesystem

[root@server~]# resize2fs /dev/mapper/vg0-lvhome
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/mapper/vg0-lvhome is mounted on /home; on-line resizing required
Performing an on-line resize of /dev/mapper/vg0-lvhome to 9953280 (4k) blocks.
The filesystem on /dev/mapper/vg0-lvhome is now 9953280 blocks long.
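
To confirm the filesystem actually grew, check the mounted size:

[root@server~]# df -h /home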

When deploying a VMware virtual machine on VMFS on top of a NetApp SAN, you need to make sure to align it properly, otherwise you will end up with performance issues. Filesystem misalignment is a known issue when virtualizing. Also, when deploying LUNs from a NetApp appliance, make sure not to repartition the LUN or you will lose the alignment; just create a filesystem on top of the LUN.

NetApp provides a great technical paper about this at: http://media.netapp.com/documents/tr-3747.pdf

In this post I will show you how to align an empty vmdk disk/LUN using the open source utility GParted. This is for new vmdk disks/LUNs; don't do it on disks that contain data, as you will lose the data. This is meant for golden templates that you want aligned, so subsequent virtual machines inherit the right alignment, or for servers that need a NetApp LUN attached.

The resulting partition works for Linux and Windows; just create a filesystem on top of it.

You can find GParted at: http://sourceforge.net/projects/gparted/files/

1. Boot the VM from the GParted CD/ISO. Click on the terminal icon to open a terminal:

2. Check the partition starting offsets. In this case I have 3 disks; 2 are already aligned (starting at sector 64), and I will align the new disk as well.

3. Create an aligned partition on the drive using fdisk

gparted# fdisk /dev/sdc

Below is a screenshot of the answers given to fdisk; the important step is to start the partition at sector 64, as indicated.
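
In case the screenshot is hard to read, the sequence of answers is roughly the following; the sector numbers match the 100GB /dev/sdc shown in step 4, and the one answer that matters for alignment is the first sector of 64:

gparted# fdisk /dev/sdc
Command (m for help): u //Change the display to sectors
Command (m for help): n //Create partition
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First sector (63-209715199, default 63): 64 //Start at sector 64 to align
Last sector or +size or +sizeM or +sizeK (64-209715199, default 209715199):
Using default value 209715199
Command (m for help): w //Write the partition table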

4. Now check again and the partition should be aligned

[root@server ~]# fdisk -lu

Disk /dev/sda: 209 MB, 209715200 bytes
64 heads, 32 sectors/track, 200 cylinders, total 409600 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          64      409599      204768   83  Linux

Disk /dev/sdb: 77.3 GB, 77309411328 bytes
255 heads, 63 sectors/track, 9399 cylinders, total 150994944 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *          64    41943039    20971488   8e  Linux LVM

Disk /dev/sdc: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              64   209715199   104857568   83  Linux

VMware Update Manager (VUM) is a tool to automate and streamline the process of applying updates, patches, or upgrades to a new version. VUM is fully integrated with vCenter Server and offers the ability to scan and remediate ESX/ESXi hosts, virtual appliances, virtual machine templates, and online and offline virtual machines running certain versions of Windows and Linux, as well as some Windows applications.

In this post you will learn how to install VMware Update Manager.

  1. Create and configure Database
  2. Create an (ODBC) DSN for the Database
  3. Install VMware Update Manager
  4. Install the VUM plug-in for the vSphere Client

1. Create and configure Database

To create the database I use the free SQL Server Management Studio Express.

You can find it at:

http://www.microsoft.com/downloads/details.aspx?FamilyId=C243A5AE-4BD1-4E3D-94B8-5A0F62BF7796&displaylang=en

Open SQL Server Management Studio Express, right-click on Databases and click on New Database…, then enter the name of the database and its owner. (For the owner, you can use a Windows account or a SQL account.) Click OK and you should see the new database.

2) Create an (ODBC) DSN for the Database

Go to Start menu->Administrative Tools->Data Sources (ODBC)

Select the System DSN tab, click on Add…, and select SQL Native Client.

Give it a name and description and select the server that hosts the database; in this case it is localhost.

Select the authentication that matches the owner of the database; in this case I used SQL authentication. Click Next.

Make sure you select Change the default database to and pick the previously created database (VUM). Click Next…

Leave the defaults, unless you have a need to change any of these settings.

Click Finish and OK.

3) Install VMware Update Manager

From the vCenter installation ISO/ZIP package, select to install vCenter Update Manager and follow the wizard.

Select your vCenter, Username and Password.

Select the Existing Database you created previously, click Next…

Enter the username/password for the database user

Select the port settings; you can leave the defaults unless there is a need to change them.

Select where to install VUM and where to download the patches/updates

Make sure you have enough disk space and then click Install, then Finish.

4) Install the VUM plug-in for the vSphere Client

In order to be able to use VMware vCenter Update Manager, you need to install a plug-in into the vSphere client.

Open the vSphere Client and go to Plug-ins -> Manage Plug-ins…

You will see the VMware vCenter Update Manager plug-in available, click on Download and Install

Follow the simple wizard and you will have it installed!

You have ESX 3.5 hosts and you want to upgrade your VirtualCenter to the new vCenter 4.

vCenter 4 offers a lot more than the old VirtualCenter 2.5. vCenter 4 was built to manage vSphere hosts, but it can also manage ESX 3.5.

vCenter 4 does NOT include a license server, which ESX 3.5 hosts need. You need to get the license server from the VirtualCenter 2.5 installation ISO/ZIP package.

1) In the package you will find VMware-licenseserver.exe. Double-click it to begin the installation wizard:

2) Accept the agreement and select where to install the license server; leave the defaults.

3) Enter the location of your VMware License file

4) Click Next and Finish.

5) Open up your vSphere Client, go to Administration -> vCenter Server Settings -> Licensing, and enter the hostname/IP of the license server, which in this case is the same machine as the vCenter Server 4.

6) That's it; now you can add your ESX 3.5 hosts and use whatever features your license file allows.

This is the guide for installing a production ready vCenter Server 4.

vCenter Server is an application that runs on top of Windows to manage your ESX servers and provides extra functionality to your ESX farm, such as clustering, HA, DRS, Failover, and much more.

VMware recommends a 64-bit version of Windows.

For this guide I will use:

A virtual machine with 2 vCPUs, 8GB RAM, and a 50GB HDD

Windows 2003 Standard Edition 64-Bit

SQL Server 2005 Standard 32-bit (I don't have the 64-bit edition)

  1. Install Windows 2003 Standard 64 bit
  2. Prepare to install SQL server 2005
  3. Install SQL Server 2005 32bit
  4. Install SQL Server Management Studio Express(Optional)
  5. Create vCenter Database
  6. Create vCenter Schema
  7. Change Database Recovery Mode
  8. Create ODBC connection (32 bit)
  9. Install vCenter Server 4


1) Install Windows 2003 Standard Edition 64-Bit

Update it using Windows Update and make sure it has .NET 3.5. This should be straightforward; nothing here is related to vCenter at this point.


2) Prepare to install SQL Server 2005

Remove MSXML Core Services 6.0

“If Microsoft SQL Server 2005 is not already installed and the machine has MSXML Core Services 6.0 installed, remove MSXML Core Services 6.0 before installing Microsoft SQL Server 2005. If you cannot remove it using the Add or Remove Programs utility, use the Windows Installer CleanUp utility.” –http://support.microsoft.com/kb/968749

I am using the Windows Installer CleanUp utility to remove it. The utility can be found at http://support.microsoft.com/kb/290301

Install the Windows Installer CleanUp utility:

Go to All Programs and Run the Windows CleanUp utility:

Once it has been removed we can proceed with the installation of SQL Server 2005


3) Install SQL Server 2005 Standard 32-bit

a. Fire up the installation and click on Install Server and accept the End User Agreement

b. Click on install

c. Click next on the Wizard

d. You should see success and a couple of warnings that can be ignored since you don’t need IIS. SQL can use IIS for reporting, but you don’t need it for vCenter

e. Continue with the wizard and enter your license key and name information and click next

f. The only thing you need is “SQL Server Database Services”

g. Select the default instance unless you have already installed another instance of SQL Server

h. Select the service account. I am selecting the built-in system account, but you can use a domain service account if you want. Also ensure that SQL Server is checked, so that the service will be started.

i. Select Mixed Mode to allow both Windows and SQL authentication

j. Leave the default collation settings

k. Select whether you want to help Microsoft by sending reports, then click Next

l. You are done, click Install

m. You should see this screen, then click next

n. At the end you get a summary, and you can also use the "Surface Area Configuration Tool" to select how to log in to the SQL Server from remote locations

o. Now update SQL Server to the latest VMware-supported service pack; I updated to SP3.

p. vCenter will give you a warning about remote connections; VMware suggests using both TCP/IP and named pipes. Open SQL Server Surface Area Configuration, found in All Programs->Microsoft SQL Server->Configuration Tools

Click on Surface Area Configuration for Services and Connections

Click on Remote Connections and select: Using both TCP/IP and named pipes

Once you click OK, you will need to restart your database

q. You will also need to start SQL Server Agent (MSSQLSERVER) and automate its startup, under All Programs->Microsoft SQL Server->Configuration Tools->SQL Server Configuration Manager

Right-click and start the service, then go to Properties -> Service and set Start Mode to Automatic




4) Install SQL Server Management Studio Express (optional, it's free!)

I use it to run SQL scripts to automatically create the vCenter Database and Schema

I installed the 64bit version, located at:

http://www.microsoft.com/downloads/details.aspx?FamilyId=C243A5AE-4BD1-4E3D-94B8-5A0F62BF7796&displaylang=en


5) Create vCenter Database

VMware provides a SQL script where you need to change the location of the database files and the password (the FILENAME paths and the XXXXXXXXXX password placeholder below):

=========================

use [master]
go

CREATE DATABASE [VCDB] ON PRIMARY
(NAME = N'vcdb', FILENAME = N'C:\VCDB.mdf', SIZE = 2000KB, FILEGROWTH = 10%)
LOG ON
(NAME = N'vcdb_log', FILENAME = N'C:\VCDB.ldf', SIZE = 1000KB, FILEGROWTH = 10%)
COLLATE SQL_Latin1_General_CP1_CI_AS
go

use VCDB
go

sp_addlogin @loginame=[vpxuser], @passwd=N'XXXXXXXXXX', @defdb='VCDB', @deflanguage='us_english'
go

ALTER LOGIN [vpxuser] WITH CHECK_POLICY = OFF
go

CREATE USER [vpxuser] for LOGIN [vpxuser]
go

sp_addrolemember @rolename = 'db_owner', @membername = 'vpxuser'
go

use MSDB
go

CREATE USER [vpxuser] for LOGIN [vpxuser]
go

sp_addrolemember @rolename = 'db_owner', @membername = 'vpxuser'
go

=========================


6) Use the vCenter dbschema scripts provided by VMware in the ISO/ZIP file to create the necessary tables

Locate the dbschema scripts in the vCenter Server installation package VMware-VIMSetup-all-4.xxx\vpx\dbschema directory.

Run the scripts in the following sequence on the database (for example with sqlcmd, as sketched after the list):

  1. VCDB_mssql.SQL
  2. purge_stat1_proc_mssql.sql
  3. purge_stat2_proc_mssql.sql
  4. purge_stat3_proc_mssql.sql
  5. purge_usage_stats_proc_mssql.sql
  6. stats_rollup1_proc_mssql.sql
  7. stats_rollup2_proc_mssql.sql
  8. stats_rollup3_proc_mssql.sql
  9. cleanup_events_mssql.sql
  10. delete_stats_proc_mssql.sql
  11. upsert_last_event_proc_mssql.sql
  12. job_schedule1_mssql.sql
  13. job_schedule2_mssql.sql
  14. job_schedule3_mssql.sql
  15. job_cleanup_events_mssql.sql
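
A sketch of running them from the command line with sqlcmd (assuming SQL authentication with the vpxuser login created above; running each file from SQL Server Management Studio Express works just as well):

C:\> sqlcmd -S localhost -d VCDB -U vpxuser -P XXXXXXXXXX -i VCDB_mssql.SQL
C:\> sqlcmd -S localhost -d VCDB -U vpxuser -P XXXXXXXXXX -i purge_stat1_proc_mssql.sql

…and so on, in the order listed.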


7) Change the recovery mode from Full to Simple to reduce the database size footprint (optional)

This prevents the database from filling up too quickly. VMware suggests keeping it in Simple recovery mode unless you really need Full mode for some reason. Open SQL Server Management Studio Express and change the properties of the VCDB database:


8) Create ODBC Connection (32-bit)

“Even though vCenter Server is supported on 64-bit operating systems, the vCenter Server system must have a 32-bit DSN.
This requirement applies to all supported databases. By default, any DSN created on a 64-bit system is 64 bit.” –vmware

Run the 32-bit ODBC Administrator application, located at:

C:\WINDOWS\SysWOW64\odbcad32.exe
a. Go to the System DSN tab and add a new SQL Native Client data source:
b. Name it whatever you want and point it to the SQL Server (in this case local)
c. Authenticate via SQL authentication, using the user created previously
d. Make sure the default database is VCDB
e. Leave the rest as is and test the connection; if successful, click OK and OK again.


9) Install vCenter Server 4

Now that the Database has been provisioned, you can proceed with the installation of vCenter Server.
a. Use your ISO/ZIP file and run it to start the vCenter Server installation wizard, then select vCenter Server

b. Follow the Wizard
c. Enter your license information
d. Use the existing database you have created previously and the DSN (ODBC) you created
e. Select the SQL user and password you used in the SQL script
f. Select the user that will be used for the services to run under. I am using the local SYSTEM account, but if your organization uses a Domain Service account then use that
g. Select the location where to install vCenter Server
h. Create a standalone vCenter server. If you want Linked mode, then install more vCenter Servers and select Linked Mode on them.
i. Leave the default ports, unless you have a need to set them differently
j. Click Next, go drink some coffee/tea/beer, come back, and click Finish
That's it: you have installed vCenter Server running on Windows 2003 Standard 64-bit with SQL Server 2005 32-bit.
I recommend installing the vSphere Client on the server so you can test locally. It is not necessary, though; you can install it on your workstation and manage vCenter from there.

Problem:

I moved a virtual machine from one datacenter to a remote datacenter using SCP: I shut it down and SCP'ed it from the ESX host over to the new site. Then I fixed its network, and since I had many more resources in the new datacenter, I gave the virtual machine more RAM. When I powered on the virtual machine it was so slow that it was impossible to work with. I am also getting warnings about "fault.MemorySizeNotRecommended.summary".

Solution:

When you migrated the virtual machine and imported it into the new datacenter, the amount of RAM the virtual machine had at migration time was set as "sched.mem.max", thus limiting the amount of RAM. If you want to increase the amount of RAM a virtual machine has after migration, adjust the setting either from the .vmx configuration file or from the GUI, as follows:

From the .vmx configuration file:

sched.cpu.min = "0"
sched.cpu.max = "72320"
sched.cpu.units = "mhz"
sched.cpu.shares = "normal"
sched.mem.minsize = "0"
// sched.mem.max = "1024" // change to the amount of RAM you want

sched.mem.max = "4096"
sched.mem.shares = "normal"
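
If you prefer the command line, the same edit can be made from the ESX service console; the datastore path and VM name here are illustrative, and the virtual machine should be powered off first:

// Raise the memory limit to 4096 MB in the vmx file
sed -i 's/^sched.mem.max = .*/sched.mem.max = "4096"/' /vmfs/volumes/datastore1/myvm/myvm.vmx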

From the GUI:

Set the limit to what you want

Change RAM limit after migration