
In a previous post I wrote about how to Create a custom Vagrant Box from scratch.

In this post I will walk through using Packer to automate the creation of a CentOS 7 image that can be used with Vagrant or even vSphere.

Packer is a cool HashiCorp tool that automates the creation of machine images.

You can clone or download the project repo at https://github.com/parcejohn/packer-centos7

From their website:
“Packer is easy to use and automates the creation of any type of machine image. It embraces modern configuration management by encouraging you to use automated scripts to install and configure the software within your Packer-made images. Packer brings machine images into the modern age, unlocking untapped potential and opening new opportunities.”

I use Packer to continue the journey toward Infrastructure as Code, where even my golden images/templates are automated and source-controlled in Git.

Requirements:

* Packer // I installed on Mac using $ brew install packer
* Vagrant // I installed on Mac using $ brew install vagrant
* VMware Fusion // I installed using $ brew cask install vmware-fusion
* CentOS 7 ISO file

Packer uses a JSON template file to orchestrate the image creation. The process has several stages; I will concentrate on the three below, but you should familiarize yourself with all of them (https://www.packer.io/docs/basics/terminology.html)

Builders: Packer component that creates a machine image for a single platform; in this case I will be using the VMware builder.
Provisioners: Packer component that installs and configures software within a running machine prior to that machine being turned into a static image. Example provisioners include shell scripts, Chef, Puppet, etc. I will be using the shell provisioner.
Post-Processors: Packer component that takes the result of a builder or another post-processor and processes it to create a new artifact. Examples include compress to compress artifacts and upload to upload them. I will be creating a Vagrant box as the artifact, and I will also demonstrate how to upload the image to vSphere.

It’s time to walk through the process of creating a CentOS 7 image using Packer.

1) Create the directory structure
The http folder will host a kickstart file; Packer will serve it with its built-in web server.
The scripts folder will host the provisioning scripts that define the machine.

centos/
├── http
└── scripts
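
You can create this layout in one step (the brace expansion assumes a bash-compatible shell):

$ mkdir -p centos/{http,scripts}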

2) Populate it with the following configuration files and scripts (their contents are shown in the next steps)

centos/
├── centos-7.1-x64-vmware.json
├── http
│   └── ks.cfg
├── scripts
│   ├── base.sh
│   ├── cleanup.sh
│   ├── hgfs.sh
│   ├── vmware.sh
│   └── zerodisk.sh
├── template.json
└── vagrant_rsa_key

3) Create an SSH key pair; you will need it to log in to the server to complete the install and configuration

The public key will be injected into the ‘vagrant’ user’s authorized_keys as part of the kickstart.
The private key stays on your system and is referenced in the Packer JSON template so that Packer (and later Vagrant) can log in to the created machine.

$ ssh-keygen -t rsa -b 4096 -C "vagrant" -N '' -q -f ./vagrant_rsa_key
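
This writes the private key to vagrant_rsa_key and the public key to vagrant_rsa_key.pub. Print the public key so you can paste it into the kickstart file in step 5:

$ cat vagrant_rsa_key.pub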

4) Create a Packer JSON Template file (template.json)

{
  "variables": {
    "vm_name": "centos-7.1-vmware",
    "iso_url": "{{env `ISO_URL`}}",
    "iso_sha256": "f90e4d28fa377669b2db16cbcb451fcb9a89d2460e3645993e30e137ac37d284"
  },
  "builders": [
    {
      "headless": true,
      "type": "vmware-iso",
      "boot_command": [
        " text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg"
      ],
      "boot_wait": "10s",
      "disk_size": 8192,
      "guest_os_type": "centos-64",
      "http_directory": "http",
      "iso_url": "{{user `iso_url`}}",
      "iso_checksum_type": "sha256",
      "iso_checksum": "{{user `iso_sha256`}}",
      "ssh_username": "vagrant",
      "ssh_private_key_file": "vagrant_rsa",
      "ssh_port": 22,
      "ssh_wait_timeout": "10000s",
      "shutdown_command": "echo '/sbin/halt -h -p' > /tmp/shutdown.sh; echo 'vagrant'|sudo -S sh '/tmp/shutdown.sh'",
      "tools_upload_flavor": "linux",
      "tools_upload_path": "/tmp/vmware_tools_{{.Flavor}}.iso",
      "vmx_data": {
        "memsize": "1024",
        "numvcpus": "1",
        "cpuid.coresPerSocket": "1"
      }
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "echo 'vagrant'|sudo -S sh '{{.Path}}'",
      "override": {
        "vmware-iso": {
          "scripts": [
            "scripts/base.sh",
            "scripts/vmware.sh",
            "scripts/hgfs.sh",
            "scripts/cleanup.sh",
            "scripts/zerodisk.sh"
          ]
        }
      }
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "override": {
        "vmware": {
          "output": "centos-7.1-x64-vmware.box"
        }
      }
    }
  ]
}

Packer Components/Sections:

Variables:
This section centralizes the variables used by the other components (builders, provisioners, etc.).
A variable can also take its value from the command line or from an environment variable:
User-provided value:
"iso_url": "{{user `iso_url`}}",

Environment variable value:
"iso_url": "{{env `ISO_URL`}}",
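
With the env form, you can export the variable before the build instead of passing -var on the command line (the ISO path shown is illustrative):

$ export ISO_URL=/Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso
$ packer build -only=vmware-iso template.json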

Builders:
Here is an array of builders and the needed parameters for each builder, in this case the only builder is of type vmware-iso.
Parameters and some comments about them.

      "headless": true, // Means not to open the VMware Fusion Console
      "type": "vmware-iso", // VMware type builder
      "boot_command": [
        " text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg"  // Send these key combination to the VMWare Fusion Console
      ],
      "boot_wait": "10s",
      "disk_size": 8192,
      "guest_os_type": "centos-64",
      "http_directory": "http",  // Directory where the kickstart file is placed
      "iso_url": "{{user `iso_url`}}", // Location of ISO image, defined at runtime, more on that later
      "iso_checksum_type": "sha256", 
      "iso_checksum": "{{user `iso_sha256`}}",
      "ssh_username": "vagrant",  // Packer will log in to resulting machine for provisioning, this user must exist (user created from the Kickstart) 
      "ssh_private_key_file": "vagrant_rsa", // Packer will log in using ssh key created earlier
      "ssh_port": 22,
      "ssh_wait_timeout": "10000s",
      "shutdown_command": "echo '/sbin/halt -h -p' > /tmp/shutdown.sh; echo 'vagrant'|sudo -S sh '/tmp/shutdown.sh'",
      "tools_upload_flavor": "linux",
      "tools_upload_path": "/tmp/vmware_tools_{{.Flavor}}.iso", 
      "vmx_data": {
        "memsize": "1024",
        "numvcpus": "1",
        "cpuid.coresPerSocket": "1"

Provisioners:
This section uses the shell provisioner to run the listed scripts, which are located in the scripts folder.

      "type": "shell",
      "execute_command": "echo 'vagrant'|sudo -S sh '{{.Path}}'",
      "override": {
        "vmware-iso": {
          "scripts": [
            "scripts/base.sh",
            "scripts/vmware.sh",
            "scripts/hgfs.sh",
            "scripts/cleanup.sh",
            "scripts/zerodisk.sh"

Post-processor:
This section tells Packer I want a Vagrant box as the artifact and sets the output file name.

      "type": "vagrant",
      "override": {
        "vmware": {
          "output": "centos-7.1-x64-vmware.box"

5) Kickstart file (http/ks.cfg, the path referenced by template.json)
This is a minimal install of CentOS 7
Note: the ssh-rsa line (XXXX) given to the vagrant user is the public key we created in a previous step

install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot yes --device eth0 --bootproto dhcp --noipv6
rootpw  --plaintext vagrant
firewall --enabled --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --disabled
timezone --utc America/New_York
bootloader --location=mbr --driveorder=sda --append="crashkernel=auto rhgb quiet"

text
skipx
zerombr

clearpart --all --initlabel
autopart

auth  --useshadow  --enablemd5
firstboot --disabled
reboot

%packages --nobase --ignoremissing 
@core
bzip2
kernel-devel
kernel-headers
-ipw2100-firmware
-ipw2200-firmware
-ivtv-firmware
%end

%post
# Install SUDO
/usr/bin/yum -y install sudo

# Create vagrant user
/usr/sbin/useradd vagrant
/bin/mkdir /home/vagrant/.ssh
/bin/chmod 700 /home/vagrant/.ssh
cat > /home/vagrant/.ssh/authorized_keys <<'VAGRANT_RSA'
ssh-rsa XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX vagrant
VAGRANT_RSA

/bin/chmod 600 /home/vagrant/.ssh/authorized_keys
/bin/chown -R vagrant /home/vagrant/.ssh

# Add vagrant user to SUDO
echo "vagrant        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers.d/vagrant
echo "Defaults:vagrant !requiretty"                 >> /etc/sudoers.d/vagrant
chmod 0440 /etc/sudoers.d/vagrant
%end
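
If you have pykickstart installed somewhere handy (it provides ksvalidator), you can optionally syntax-check the kickstart before baking it into an image:

$ ksvalidator http/ks.cfg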

6) Provisioning Scripts
Packer runs these scripts (listed in the JSON template) on the created machine before turning it into a template/image.

base.sh // Install basic packages

#!/usr/bin/env bash
set -x

sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
yum -y install gcc make gcc-c++ kernel-devel-`uname -r` perl

vmware.sh // Install VMware tools; I separated this from base.sh because a VirtualBox or other builder would call different scripts.

#!/usr/bin/env bash
set -x

yum install -y fuse-libs open-vm-tools

hgfs.sh // Needed for Vagrant shared folders to work properly (VMware HGFS)

#!/usr/bin/env bash
set -x

VMWARE_ISO=/tmp/vmware_tools_linux.iso
VMWARE_MNTDIR=$(mktemp --tmpdir=/tmp -q -d -t vmware_mnt_XXXXXX)
VMWARE_TMPDIR=$(mktemp --tmpdir=/tmp -q -d -t vmware_XXXXXX)

# Extract tools
mount -o loop $VMWARE_ISO $VMWARE_MNTDIR
tar zxf $VMWARE_MNTDIR/VMwareTools*.tar.gz -C $VMWARE_TMPDIR
umount $VMWARE_MNTDIR

# Install tools
$VMWARE_TMPDIR/vmware-tools-distrib/vmware-install.pl -d

# Clean up
rm -f $VMWARE_ISO
rm -rf $VMWARE_MNTDIR
rm -rf $VMWARE_TMPDIR

cleanup.sh // Clean up before converting to image

#!/usr/bin/env bash
set -x

yum -y erase gtk2 libX11 hicolor-icon-theme avahi freetype bitstream-vera-fonts
rpm --rebuilddb
yum -y clean all

zerodisk.sh // Zero out free space so the resulting box compresses as small as possible

#!/usr/bin/env bash
set -x

dd if=/dev/zero of=/EMPTY bs=1M
rm -f /EMPTY

7) Run Packer against the template.json file created earlier and watch it build the machine image
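
(Optional) You can first check the template for syntax and configuration errors with Packer's built-in validator, passing the same -var as the build below:

$ packer validate -var 'iso_url=/Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso' template.json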

$ packer build -var 'iso_url=/Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso' -only=vmware-iso template.json
vmware-iso output will be in this color.

==> vmware-iso: Downloading or copying ISO
vmware-iso: Downloading or copying: file:///Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso
==> vmware-iso: Creating virtual machine disk
==> vmware-iso: Building and writing VMX file
==> vmware-iso: Starting HTTP server on port 8101
==> vmware-iso: Starting virtual machine...
vmware-iso: The VM will be run headless, without a GUI. If you want to
vmware-iso: view the screen of the VM, connect via VNC without a password to
vmware-iso: 127.0.0.1:5989
==> vmware-iso: Waiting 10s for boot...
==> vmware-iso: Connecting to VM via VNC
==> vmware-iso: Typing the boot command over VNC...
==> vmware-iso: Waiting for SSH to become available...
==> vmware-iso: Connected to SSH!
==> vmware-iso: Uploading the 'linux' VMware Tools
==> vmware-iso: Provisioning with shell script: scripts/base.sh
vmware-iso: + sed -i 's/^.*requiretty/#Defaults requiretty/' /etc/sudoers



==> vmware-iso: Provisioning with shell script: scripts/zerodisk.sh
vmware-iso: + dd if=/dev/zero of=/EMPTY bs=1M
vmware-iso: dd: error writing '/EMPTY': No space left on device
vmware-iso: 5488+0 records in
vmware-iso: 5487+0 records out
vmware-iso: 5754265600 bytes (5.8 GB) copied, 5.27091 s, 1.1 GB/s
vmware-iso: + rm -f /EMPTY
==> vmware-iso: Gracefully halting virtual machine...
vmware-iso: Waiting for VMware to clean up after itself...
==> vmware-iso: Deleting unnecessary VMware files...
vmware-iso: Deleting: output-vmware-iso/564de113-2fc5-2010-8e5d-63fec92f12f7.vmem
vmware-iso: Deleting: output-vmware-iso/packer-vmware-iso.plist
vmware-iso: Deleting: output-vmware-iso/vmware.log
==> vmware-iso: Cleaning VMX prior to finishing up...
vmware-iso: Unmounting floppy from VMX...
vmware-iso: Detaching ISO from CD-ROM device...
vmware-iso: Disabling VNC server...
==> vmware-iso: Compacting the disk image
==> vmware-iso: Running post-processor: vagrant
==> vmware-iso (vagrant): Creating Vagrant box for 'vmware' provider
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s001.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s002.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s003.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.nvram
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmsd
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmx
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmxf
vmware-iso (vagrant): Compressing: Vagrantfile
vmware-iso (vagrant): Compressing: disk-s001.vmdk
vmware-iso (vagrant): Compressing: disk-s002.vmdk
vmware-iso (vagrant): Compressing: disk-s003.vmdk
vmware-iso (vagrant): Compressing: disk.vmdk
vmware-iso (vagrant): Compressing: metadata.json
vmware-iso (vagrant): Compressing: packer-vmware-iso.nvram
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmsd
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmx
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmxf
Build 'vmware-iso' finished.

==> Builds finished. The artifacts of successful builds are:
--> vmware-iso: 'vmware' provider box: centos-7.1-x64-vmware.box

8) Add box to Vagrant

$ vagrant box add --name centos-7.1-010-x64-vmware.box centos-7.1-x64-vmware.box
==> box: Adding box 'centos-7.1-010-x64-vmware.box' (v0) for provider:
box: Downloading: file:///Users/john/packer/centos/centos-7.1-x64-vmware.box
==> box: Successfully added box 'centos-7.1-010-x64-vmware.box' (v0) for 'vmware_desktop'!
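
To try out the box (this assumes you have the commercial Vagrant VMware plugin, which the vmware_desktop provider requires):

$ mkdir vagrant-test && cd vagrant-test
$ vagrant init centos-7.1-010-x64-vmware.box
$ vagrant up --provider vmware_fusion
$ vagrant ssh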

That is all you need for your infrastructure to be code: the image definition is versioned, the build is automated, and the resulting box is ready for Vagrant.
The next section shows how to use Packer to push the same image to your vSphere environment.

9) Sending Image to vSphere
To send the image created by Packer to vSphere, you will need to add another entry to the post-processors JSON array:

    {
      "type": "vsphere",
      "host": "{{user `vm_host`}}",
      "username": "{{user `vm_user`}}",
      "password": "{{user `vm_pass`}}",
      "datacenter": "{{user `vm_dc`}}",
      "cluster": "{{user `vm_cluster`}}",
      "resource_pool": " ",
      "datastore": "{{user `vm_datastore`}}",
      "vm_folder": "{{user `vm_folder`}}",
      "vm_name": "{{user `vm_name`}}", 
      "vm_network": "{{user `vm_network`}}",
      "insecure" : "true"
    }

As you can see, I am using variables to make this portable, which means they also have to be declared in the variables section. Below is the new template that does both Vagrant and vSphere image provisioning.

{
  "variables": {
    "vm_name": "centos-7.1-vmware",
    "iso_url": "{{env `ISO_URL`}}",
    "iso_sha256": "f90e4d28fa377669b2db16cbcb451fcb9a89d2460e3645993e30e137ac37d284",
    "vm_host": "{{ user `vm_host` }}",
    "vm_user": "{{ user `vm_user` }}",
    "vm_pass": "{{ env `vm_pass` }}",
    "vm_dc":   "{{ user `vm_dc` }}",
    "vm_cluster": "{{user `vm_cluster`}}",
    "vm_datastore": "{{user `vm_datastore`}}",
    "vm_folder": "{{user `vm_folder`}}",
    "vm_name": "{{user `vm_name`}}", 
    "vm_network": "{{user `vm_network`}}"
  },
  "builders": [
    {
      "headless": true,
      "type": "vmware-iso",
      "boot_command": [
        " text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg"
      ],
      "boot_wait": "10s",
      "disk_size": 8192,
      "guest_os_type": "centos-64",
      "http_directory": "http",
      "iso_url": "{{user `iso_url`}}",
      "iso_checksum_type": "sha256",
      "iso_checksum": "{{user `iso_sha256`}}",
      "ssh_username": "vagrant",
      "ssh_private_key_file": "vagrant_rsa",
      "ssh_port": 22,
      "ssh_wait_timeout": "10000s",
      "shutdown_command": "echo '/sbin/halt -h -p' > /tmp/shutdown.sh; echo 'vagrant'|sudo -S sh '/tmp/shutdown.sh'",
      "tools_upload_flavor": "linux",
      "tools_upload_path": "/tmp/vmware_tools_{{.Flavor}}.iso",
      "vmx_data": {
        "memsize": "1024",
        "numvcpus": "1",
        "cpuid.coresPerSocket": "1"
      }
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "echo 'vagrant'|sudo -S sh '{{.Path}}'",
      "override": {
        "vmware-iso": {
          "scripts": [
            "scripts/base.sh",
            "scripts/vmware.sh",
            "scripts/hgfs.sh",
            "scripts/cleanup.sh",
            "scripts/zerodisk.sh"
          ]
        }
      }
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "override": {
        "vmware": {
          "output": "centos-7.1-x64-vmware.box"
        }
      }
    },
    {
      "type": "vsphere",
      "host": "{{user `vm_host`}}",
      "username": "{{user `vm_user`}}",
      "password": "{{user `vm_pass`}}",
      "datacenter": "{{user `vm_dc`}}",
      "cluster": "{{user `vm_cluster`}}",
      "resource_pool": " ",
      "datastore": "{{user `vm_datastore`}}",
      "vm_folder": "{{user `vm_folder`}}",
      "vm_name": "{{user `vm_name`}}", 
      "vm_network": "{{user `vm_network`}}",
      "insecure" : "true"
    }
  ]
}

10) Re-run Packer against the template.json file and watch it build the machine image and push it to vSphere

$ packer build \
> -var 'iso_url=/Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso' \
> -var 'vm_host=vc.example.com' \
> -var 'vm_user=john@example.com' \
> -var 'vm_pass=XXXXXXX' \
> -var 'vm_dc=vDC' \
> -var 'vm_cluster=Folder/Cluster' \
> -var 'vm_datastore=store' \
> -var 'vm_folder=Images' \
> -var 'vm_name=centos71' \
> -var 'vm_network=dvs-net1' \
> -only=vmware-iso template.json

vmware-iso output will be in this color.

==> vmware-iso: Downloading or copying ISO
vmware-iso: Downloading or copying: file:///Users/john/iso/CentOS-7-x86_64-Minimal-1511.iso
==> vmware-iso: Creating virtual machine disk


==> vmware-iso: Running post-processor: vagrant
==> vmware-iso (vagrant): Creating Vagrant box for 'vmware' provider
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s001.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s002.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk-s003.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/disk.vmdk
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.nvram
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmsd
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmx
vmware-iso (vagrant): Copying: output-vmware-iso/packer-vmware-iso.vmxf
vmware-iso (vagrant): Compressing: Vagrantfile
vmware-iso (vagrant): Compressing: disk-s001.vmdk
vmware-iso (vagrant): Compressing: disk-s002.vmdk
vmware-iso (vagrant): Compressing: disk-s003.vmdk
vmware-iso (vagrant): Compressing: disk.vmdk
vmware-iso (vagrant): Compressing: metadata.json
vmware-iso (vagrant): Compressing: packer-vmware-iso.nvram
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmsd
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmx
vmware-iso (vagrant): Compressing: packer-vmware-iso.vmxf
==> vmware-iso: Running post-processor: vsphere
vmware-iso (vsphere): Uploading output-vmware-iso/packer-vmware-iso.vmx to vSphere
vmware-iso (vsphere): Opening VMX source: output-vmware-iso/packer-vmware-iso.vmx
vmware-iso (vsphere): Opening VI target: vi://john%40example.com@vc.example.com:443/vDC/host/Cluster/Resources/
vmware-iso (vsphere): Deploying to VI: vi://john%40example.com@vc.example.com:443/vDC/host/Cluster/Resources/
Transfer Completed
vmware-iso (vsphere): Completed successfully
vmware-iso (vsphere):
Build 'vmware-iso' finished.

==> Builds finished. The artifacts of successful builds are:
--> vmware-iso: 'vmware' provider box: centos-7.1-x64-vmware.box
--> vmware-iso:

Now you should have the same image running on both Vagrant and vSphere.

This guide aims to help administrators bind Red Hat Enterprise Linux systems to a Sun ONE LDAP Directory Server.

This is assuming you already have a working and populated Sun One LDAP Directory Server.

For this guide I am using:

LDAP Server:
Sun One LDAP Directory Server 5.2

LDAP Client:
RHEL 5.3 64bit

Sun ONE LDAP Server setup:
You will need a unique UID and GID number for every user, so pick numbers that will be unique across your organization. Once you have settled on the unique number for each user:
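
To confirm a number is not already taken, you can search the directory for it first (the hostname, base DN, and uidNumber below are examples; the openldap-clients package provides ldapsearch):

$ ldapsearch -x -h ldap.example.com -b "dc=example,dc=com" "(uidNumber=2100)" uid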

1) Open your Sun ONE Server Console and log in

2) From the Sun ONE Console, go to "Users and Groups" and search for the user that should be able to log in to the RHEL system. Double-click the user, go to the Posix User option, and enter the following information:

Check "Enable Posix User Attributes" and enter the unique number for the UID and GID.
Also fill in the home directory (e.g. /home/john), the login shell (e.g. /bin/bash), and the Gecos field.

Click OK, and that should be it on the server side.

RHEL configuration:

1) Ensure the following packages are installed
mozldap.x86_64
nss_ldap.i386
nss_ldap.x86_64
openldap.i386
openldap.x86_64
openldap-clients.x86_64
python-ldap.x86_64

2) Backup the following files
[root@rhelclient ~]# cp /etc/ldap.conf /etc/ldap.conf.orig
[root@rhelclient ~]# cp /etc/openldap/ldap.conf /etc/openldap/ldap.conf.orig
[root@rhelclient ~]# cp /etc/nsswitch.conf /etc/nsswitch.conf.orig
[root@rhelclient ~]# cp /etc/pam.d/system-auth /etc/pam.d/system-auth.orig

3) Configure authconfig to use the LDAP server:
[root@rhelclient ~]#  authconfig --enableldap --enableldapauth --ldapserver="ip_of_LDAP_server" --ldapbasedn="dc=example,dc=com" --kickstart

4) Check the files to make sure the changes took place (optional)
a. sed -e '/^#.*/d' /etc/ldap.conf | sed -e '/^$/d'
b. sed -e '/^#.*/d' /etc/openldap/ldap.conf | sed -e '/^$/d'
c. sed -e '/^#.*/d' /etc/pam.d/system-auth | sed -e '/^$/d'
d. sed -e '/^#.*/d' /etc/nsswitch.conf | sed -e '/^$/d'
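
You can also confirm that NSS is resolving accounts from the directory before attempting a login (john is the example LDAP user):

# getent passwd john
# id john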

5) Add the following to /etc/ssh/sshd_config to allow PAM authentication, then restart sshd (service sshd restart)
PAMAuthenticationViaKbdInt yes

6) Now try to login the RHEL system using the LDAP user:
ssh john@rhelclient.example.com
Last login: Sat May 1 20:01:37 2010 from linuxbox.example.com
Could not chdir to home directory /home/john: No such file or directory
-bash-3.2$

The message "Could not chdir to home directory /home/john: No such file or directory" appears because the user has no home directory. You can create a directory under /home for the user on the RHEL client and change its ownership to the UID:GID of the LDAP user.
Also copy the default skeleton files to the new home directory.

[root@rhelclient ~]# mkdir /home/john
[root@rhelclient ~]# chown 2100:2100 /home/john
[root@rhelclient ~]# cp /etc/skel/.bash* /home/john/

A more elegant approach is to keep the /home/* directories in a centralized location, such as an NFS server, and map them on the client automatically at login using the automounter. For this approach please see:
Automount Home Directories on NFS server

This post will help you configure multipathing on RHEL 5.3 for LUNs carved from a NetApp SAN. For this guide I am using an HP C-Class blade system with QLogic HBA cards.

1) Make sure you have the needed device-mapper packages on RHEL; otherwise install them.

rpm -q device-mapper
rpm -q device-mapper-multipath
yum install device-mapper
yum install device-mapper-multipath

2) Install the QLogic drivers if needed, or use the RHEL built-in drivers. In my case I am using HP C-Class blades with QLogic HBA cards. HP drivers can be found on the HP site (the driver is called hp_sansurfer). I am using the RHEL built-in drivers, but you can install the HP/QLogic drivers as follows:

rpm -Uvh hp_sansurfer-5.0.1b45-1.x86_64.rpm

3) If you have QLogic HBAs, install the SANsurfer CLI; it is a very useful program for working with QLogic HBA cards. It can be downloaded from the QLogic website and installed as follows:

rpm -Uvh scli-1.7.3-14.i386.rpm

4) Install the NetApp Host Utilities Kit; the package is a tar.gz file you can find on the NetApp NOW site (http://now.netapp.com).

Extract it and run the install shell script:

tar zxf netapp_linux_host_utilities_5_0.tar.gz

5) Once everything is installed on the host, create the LUN on the NetApp and zone it through the Brocade (SAN fabric) to the host.

To find your WWPNs, use scli as follows:
# scli -i all
// Use the WWPN numbers for the NetApp igroup and the Brocade aliases

6) Once the LUN has been zoned and mapped correctly, verify that your RHEL host can see it.

// Rescan the HBAs for new SAN LUNs
# modprobe -r qla2xxx
# modprobe qla2xxx
// Check that the kernel can see it
# cat /proc/scsi/scsi
# fdisk -lu

7) Utilize NetApp tools to see LUN connectivity

// Check your host and utilities see the LUNs
[root@server ~]# sanlun lun show
controller:   lun-pathname              device filename  adapter  protocol  lun size             lun state
NETAPPFILER:  /vol/servervol/serverlun  /dev/sdf         host6    FCP       100g (107374182400)  GOOD
NETAPPFILER:  /vol/servervol/serverlun  /dev/sda         host4    FCP       100g (107374182400)  GOOD
NETAPPFILER:  /vol/servervol/serverlun  /dev/sde         host6    FCP       100g (107374182400)  GOOD
NETAPPFILER:  /vol/servervol/serverlun  /dev/sdc         host5    FCP       100g (107374182400)  GOOD
NETAPPFILER:  /vol/servervol/serverlun  /dev/sdd         host5    FCP       100g (107374182400)  GOOD
NETAPPFILER:  /vol/servervol/serverlun  /dev/sdb         host4    FCP       100g (107374182400)  GOOD

8) Use the NetApp tools to check multipathing; it is not configured yet

[root@server ~]# sanlun lun show -p
NETAPPFILER:/vol/servervol/serverlun (LUN 0)                Lun state: GOOD
Lun Size:    100g (107374182400) Controller_CF_State: Cluster Enabled
Protocol: FCP           Controller Partner: NETAPPFILER2
Multipath-provider: NONE
--------- ---------- ------- ------------ --------------------------------------------- ---------------
   sanlun Controller                                                            Primary         Partner
   path         Path   /dev/         Host                                    Controller      Controller
   state        type    node          HBA                                          port            port
--------- ---------- ------- ------------ --------------------------------------------- ---------------
     GOOD  primary       sdf        host6                                            0c              --
     GOOD  secondary     sda        host4                                            --              0c
     GOOD  secondary     sde        host6                                            --              0c
     GOOD  secondary     sdc        host5                                            --              0d
     GOOD  primary       sdd        host5                                            0d              --
     GOOD  primary       sdb        host4                                            0c              --

Time to configure multipathing

9) Start the multipath daemon

# service multipathd start

10) Find your WWID; you will need it in the configuration if you want to alias the device.

Comment out the catch-all blacklist in the default /etc/multipath.conf first, otherwise you will NOT see anything.

#blacklist {
#        devnode "*"
#}
// Show your devices and paths, and record the WWID of the LUN
# multipath -v3
...
...
===== paths list =====
uuid                              hcil    dev dev_t pri dm_st  chk_st  vend/pr
360a98000486e576748345276376a4d41 4:0:0:0 sda 8:0   1   [undef][ready] NETAPP,
360a98000486e576748345276376a4d41 4:0:1:0 sdb 8:16  4   [undef][ready] NETAPP,
360a98000486e576748345276376a4d41 5:0:0:0 sdc 8:32  1   [undef][ready] NETAPP,
360a98000486e576748345276376a4d41 5:0:1:0 sdd 8:48  4   [undef][ready] NETAPP,
360a98000486e576748345276376a4d41 6:0:0:0 sde 8:64  1   [undef][ready] NETAPP,
360a98000486e576748345276376a4d41 6:0:1:0 sdf 8:80  4   [undef][ready] NETAPP,
...
...

11) Now you are ready to configure /etc/multipath.conf

Blacklist (exclude) all the devices that do not correspond to LUNs configured on the storage controller and mapped to your Linux host. There are two methods:
Block by WWID
Block by devnode
In this case I am blocking by devnode, since I am on HP hardware and know the devnode regex for the local disks.
Also configure the device section and the alias (optional).
The full /etc/multipath.conf will look like this:


defaults {
        user_friendly_names yes
        max_fds max
        queue_without_daemon no
}
blacklist {
        ###devnode "*"
           devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
           devnode "^hd[a-z]"
           devnode "^cciss!c[0-9]d[0-9]*"  # Note the cciss, usual in HP
}
multipaths {
        multipath {
                wwid    360a98000486e576748345276376a4d41    # The WWID you found in step 10
                alias   netapp # How you want to name the device on your host
                               # server LUN on NETAPPFILER
        }
}
devices {
        device {
                vendor "NETAPP"
                product "LUN"
                getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout "/sbin/mpath_prio_ontap /dev/%n"
                features "1 queue_if_no_path"
                hardware_handler "0"
                path_grouping_policy group_by_prio
                failback immediate
                rr_weight uniform
                rr_min_io 128
                path_checker directio
                flush_on_last_del yes
        }
}

12) Restart multipath and make sure it starts automatically:

// Restart multipath
# service multipathd restart
// Add to startup
# chkconfig --add multipathd
# chkconfig multipathd on
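
To double-check that the service will come up at boot, list its runlevels:

# chkconfig --list multipathd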

13) Verify multipath is working

//RHEL tools 
[root@server scli]# multipath -l
netapp (360a98000486e576748345276376a4d41) dm-2 NETAPP,LUN
[size=100G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 4:0:1:0 sdb 8:16  [active][undef]
 \_ 5:0:1:0 sdd 8:48  [active][undef]
 \_ 6:0:1:0 sdf 8:80  [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 4:0:0:0 sda 8:0   [active][undef]
 \_ 5:0:0:0 sdc 8:32  [active][undef]
 \_ 6:0:0:0 sde 8:64  [active][undef]
//NetApp utilities Tool 
 [root@server scli]# sanlun lun show -p
NETAPPFILER:/vol/servervol/serverlun (LUN 0)                Lun state: GOOD
Lun Size:    100g (107374182400) Controller_CF_State: Cluster Enabled
Protocol: FCP           Controller Partner: NETAPPFILER2
DM-MP DevName: netapp   (360a98000486e576748345276376a4d41)     dm-2
Multipath-provider: NATIVE
--------- ---------- ------- ------------ --------------------------------------------- ---------------
   sanlun Controller                                                            Primary         Partner
   path         Path   /dev/         Host                                    Controller      Controller
   state        type    node          HBA                                          port            port
--------- ---------- ------- ------------ --------------------------------------------- ---------------
     GOOD  primary       sdb        host4                                            0c              --
     GOOD  primary       sdd        host5                                            0d              --
     GOOD  primary       sdf        host6                                            0c              --
     GOOD  secondary     sda        host4                                            --              0c
     GOOD  secondary     sdc        host5                                            --              0d
     GOOD  secondary     sde        host6                                            --              0c
...

14) Now you can access the LUN through the device mapper

 [root@server scli]# ls -l /dev/mapper
total 0
crw------- 1 root root  10, 63 Sep 12 12:32 control
brw-rw---- 1 root disk 253,  2 Sep 16 10:54 netapp
brw-rw---- 1 root disk 253,  0 Sep 12 16:32 VolGroup00-LogVol00
brw-rw---- 1 root disk 253,  1 Sep 12 12:32 VolGroup00-LogVol01

15) Format it to your liking and mount it

# mkdir /mnt/netapp
# mkfs -t ext3 /dev/mapper/netapp
# mount /dev/mapper/netapp /mnt/netapp/
//verify it mounted
# mount
...
...
/dev/mapper/netapp on /mnt/netapp type ext3 (rw)
...

16) If you want the mount to persist across reboots, put it in /etc/fstab and make sure multipathd starts automatically.

# cat /etc/fstab
...
...
/dev/mapper/netapp      /mnt/netapp             ext3    defaults        0 0
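
You can check the fstab entry without rebooting by unmounting the filesystem and letting mount -a pick it back up:

# umount /mnt/netapp
# mount -a
# df -h /mnt/netapp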

17) If possible, reboot to confirm it mounts correctly at boot.

You have added a new disk, increased the size of your LUN, or (for virtual machines) grown the virtual disk, and now you need to grow the partition, the logical volume, and the filesystem to use the new space.

In this post I go through the steps necessary to make this happen on a RHEL 5.3 system.

The LUN I will increase was 20GB with an LVM partition on it. I increased the LUN size to 72GB, and this is how it looks now.

[root@server~]# fdisk -lu
Disk /dev/sdb: 77.3 GB, 77309411328 bytes
255 heads, 63 sectors/track, 9399 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 * 1 2611 20971488 8e Linux LVM

I need to perform the following steps in order to be able to use the new space.

1. Increase the size of the partition using fdisk

[root@server ~]# fdisk /dev/sdb

Command (m for help): u //Change the display to sectors
Changing display/entry units to sectors
Command (m for help): p //Print the current partition table for that drive
Disk /dev/sdb: 77.3 GB, 77309411328 bytes
255 heads, 63 sectors/track, 9399 cylinders, total 150994944 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 * 64 41943039 20971488 8e Linux LVM
Command (m for help): d //Delete the partition information, we will recreate
Selected partition 1
Command (m for help): n //Create partition
Command action
e extended
p primary partition (1-4)
p //In this case it is primary
Partition number (1-4): 1 // In this case it is the first partition on the drive
First sector (63-150994943, default 63): 64 //Align partition if used on NetApp
Last sector or +size or +sizeM or +sizeK (64-150994943, default 150994943):
Using default value 150994943
Command (m for help): t //Change type from Linux(default) to Linux LVM
Selected partition 1
Hex code (type L to list codes): 8e //Linux LVM partition type
Changed system type of partition 1 to 8e (Linux LVM)
Command (m for help): p //Print again to double check
Disk /dev/sdb: 77.3 GB, 77309411328 bytes
255 heads, 63 sectors/track, 9399 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 9400 75497440 8e Linux LVM
Command (m for help): w //Write the partition table
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

2. Reboot for the changes to take effect, or just re-read the partition table with partprobe

[root@server ~]# partprobe

3. Make LVM acknowledge the new space

[root@server ~]# pvresize /dev/sdb1

4. Check that the Volume group shows the new space

[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg0 1 2 0 wz--n- 71.97G 52.00G

5. Extend the logical volume:
make it a total of 28G in this example

[root@server~]# lvresize -L 28G /dev/mapper/vg0-lvhome
Extending logical volume lvhome to 28.00 GB
Logical volume lvhome successfully resized

You can also take all the free space available

[root@server ~]# lvresize -l +100%FREE /dev/mapper/vg0-lvhome
Extending logical volume lvhome to 67.97 GB
Logical volume lvhome successfully resized

6. Or use the remaining space for another logical volume

[root@server~]# lvcreate -l 100%FREE -n lvdata vg0

7. Resize the Filesystem

[root@server~]# resize2fs /dev/mapper/vg0-lvhome
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/mapper/vg0-lvhome is mounted on /home; on-line resizing required
Performing an on-line resize of /dev/mapper/vg0-lvhome to 9953280 (4k) blocks.
The filesystem on /dev/mapper/vg0-lvhome is now 9953280 blocks long.
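
Finally, confirm that the mounted filesystem sees the new space:

[root@server~]# df -h /home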