Technologist

Tech stuff about Cloud, DevOps, SysAdmin, Virtualization, SAN, Hardware, Scripting, Automation and Development


In this guide I go over how to use Vagrant with AWS and, in the process, end up with an automated way to install Puppet Enterprise.
I am separating data and code: a generic Vagrantfile contains the code, and a servers.yaml file holds all the data that changes from user to user.
To install the Puppet Enterprise server I include the automated provisioning script I use with Vagrant, and I use AWS Tags to set the hostname of the launched server.

Pre-requisites:

  • Vagrant
  • vagrant-aws plug-in
  • AWS pre-requisites
  • While you can add your AWS credentials to the Vagrantfile, it is not recommended. A better way is to have the AWS CLI tools installed and configured (see the quick setup below).
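
If you still need the plug-in or the AWS CLI, the setup is quick. A minimal sketch, assuming Vagrant is already installed and you have an IAM access key at hand:

# Install the vagrant-aws provider plug-in
vagrant plugin install vagrant-aws

# Configure the AWS CLI with your access key, secret key and default region
aws configure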

TL;DR To get started right away you can download the project from GitHub (vagrant-aws-puppetserver); otherwise, follow the guide below.

    Create Vagrantfile

The Vagrantfile below utilizes a YAML file (servers.yaml) to provide the data; it lets you control the data through the YAML file without having to modify the Vagrantfile code, separating code and data.

    // Vagrantfile
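
The full Vagrantfile is in the vagrant-aws-puppetserver repo; the sketch below shows its general shape, and the servers.yaml key names used here are my own illustrative choices:

# Vagrantfile (sketch) - reads servers.yaml and hands the data to the vagrant-aws provider
require 'yaml'

servers = YAML.load_file(File.join(File.dirname(__FILE__), 'servers.yaml'))

Vagrant.configure('2') do |config|
  servers.each do |name, s|
    config.vm.define name do |srv|
      # vagrant-aws uses a placeholder box
      srv.vm.box = 'dummy'
      srv.vm.provider :aws do |aws, override|
        aws.region                    = s['region']
        aws.keypair_name              = s['keypair_name']
        aws.subnet_id                 = s['subnet_id']
        aws.associate_public_ip       = s['associate_public_ip']
        aws.security_groups           = s['security_groups']
        aws.ami                       = s['ami']
        aws.instance_type             = s['instance_type']
        aws.user_data                 = File.read(s['user_data']) if s['user_data']
        aws.tags                      = s['tags']
        aws.iam_instance_profile_name = s['iam_instance_profile_name'] if s['iam_instance_profile_name']
        override.ssh.username         = s['ssh_username']
        override.ssh.private_key_path = s['ssh_private_key_path']
      end
      # Run the provisioning script (e.g. install_puppetserver.sh) on the instance
      srv.vm.provision :shell, path: s['provision'] if s['provision']
    end
  end
end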

    Create servers.yaml

This file contains the information that will be used by the Vagrantfile. It includes:

  • AWS region: The region this EC2 instance will run in
  • AWS keypair: Key used to connect to your launched EC2 instance
  • AWS subnet id: Where this EC2 instance will sit in the AWS network
  • AWS associate public ip: Do you need a public IP? true or false
  • AWS security group: The AWS security group to associate; it should allow the ports the Puppet server needs and whatever else you need (SSH, etc.)
  • AWS ami: The AMI you will be using; I am using a CentOS 7 AMI
  • AWS instance type: The Puppet server needs enough CPU/RAM; during my testing m3.xlarge was appropriate
  • AWS SSH username: The EC2 instance user (depends on which AMI you choose); the CentOS AMI expects ec2-user
  • AWS SSH private key path: The local path to the SSH key pair
  • AWS User Data: I am adding user data that executes a bash script which allows Vagrant to interact with the launched EC2 instance
  • AWS Tags: Not required for Vagrant and AWS/EC2, but my provision script uses the AWS Name tag as the system's hostname; the other two tags are there for demonstration purposes
  • provision: A provisioning script that will run on the EC2 instance; this is the script that installs the Puppet Enterprise server
  • AWS IAM Role: You don't need to add a role when working with Vagrant and AWS/EC2, but I am using a specific IAM role so the launched EC2 instance can get information about its AWS Tags, so it is important that you provide it with a role that allows DescribeTags; see the IAM policy below:
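
A minimal policy that grants DescribeTags looks like this; attach it to the role/instance profile you reference in servers.yaml:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeTags",
      "Resource": "*"
    }
  ]
}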

    Now use your data in the servers.yaml file
    // servers.yaml
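
Here is an example servers.yaml that matches the Vagrantfile sketch above; every value is a placeholder to replace with your own:

# servers.yaml (example data only)
puppetserver:
  region: us-east-1
  keypair_name: my-keypair
  subnet_id: subnet-xxxxxxxx
  associate_public_ip: true
  security_groups:
    - sg-xxxxxxxx
  ami: ami-xxxxxxxx                  # a CentOS 7 AMI for your region
  instance_type: m3.xlarge
  ssh_username: ec2-user
  ssh_private_key_path: ~/.ssh/my-keypair.pem
  user_data: user_data.sh            # small script that lets Vagrant interact with the instance
  iam_instance_profile_name: describe-tags-role
  tags:
    Name: puppetserver.example.com
    environment: demo
    owner: demo
  provision: install_puppetserver.sh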

At this point you can spin up EC2 instances using the above Vagrantfile and servers.yaml file. If you add provision: install_puppetserver.sh to the servers.yaml file as I did, and add the below script, you will have a Puppet Enterprise server ready to go.

    // install_puppetserver.sh
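
The script below is a sketch of the approach: it looks up the instance's Name tag (using the IAM role above) to set the hostname, then runs the Puppet Enterprise installer. The tarball URL, the pe.conf style install (PE 2016.2 or later) and the console password are placeholders; the full script is in the vagrant-aws-puppetserver repo.

#!/bin/bash
# install_puppetserver.sh (sketch) - set hostname from the AWS Name tag, then install Puppet Enterprise
set -e

# Assumes the AWS CLI is available on the instance (install it first if your AMI lacks it)
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')

# The instance role only needs ec2:DescribeTags for this lookup
NEW_HOSTNAME=$(aws ec2 describe-tags --region "${REGION}" \
  --filters "Name=resource-id,Values=${INSTANCE_ID}" "Name=key,Values=Name" \
  --query 'Tags[0].Value' --output text)
hostnamectl set-hostname "${NEW_HOSTNAME}"

# Download and unpack the PE installer (replace the URL with the el-7 tarball you want)
PE_TARBALL_URL="<your-puppet-enterprise-el-7-x86_64-tarball-url>"
curl -L -o /tmp/pe.tar.gz "${PE_TARBALL_URL}"
mkdir -p /tmp/pe && tar -xzf /tmp/pe.tar.gz -C /tmp/pe --strip-components=1

# Minimal pe.conf for an all-in-one install (example password - change it)
cat > /tmp/pe.conf <<EOF
"console_admin_password": "ChangeMe123!"
"puppet_enterprise::puppet_master_host": "${NEW_HOSTNAME}"
EOF

# Run the installer non-interactively
/tmp/pe/puppet-enterprise-installer -c /tmp/pe.conf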

While working on my video library I noticed that I had to rename a bunch of files so they have correct file names that include the show name and the episode name and number.

I have about 275 files named from 000-274, but I needed to rename them to specify the season and episode they correspond to.

For example, files from 125 – 150 are from season 5 of said show, and that is what I will use for this post.

An example of a file in the 125 – 150 range:
‘SHOW – 135 – Some Episode _www.unneededStuff.mp4’

As you can see, the filename has the following structure (which I will modify later in this post):

(ShowName) – (filename #) – (EpisodeName)_(Extra unneeded stuff).(extension)

I will be using the Linux/Mac rename command to perform the renaming of these files (125-150).
The end result should be that these files are renamed to have numbers from 501-526 (for Season 5), so I will use arithmetic to manipulate the numbers into what I want.
I will also clean up the name a little bit so that it does not include the ‘extra unneeded stuff’.
The name of a file should end up in the form:
‘SHOW – 511 – Some Episode.mp4’

First make sure you work with the specific files you want to modify and nothing else:
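
A listing along these lines does it; the brace expansion is a sketch that reproduces the 125-150 range described in the comments below:

# Expands to the glob patterns *125* ... *129*, *13[0-9]*, *14[0-9]*, *150*
ls *{12{5..9},13[0-9],14[0-9],150}*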

// The above shows that only files between 125 – 150 will be touched/modified.
// Using Bash brace expansion will help create the list between 125 – 150
// the brace expansion will use anything that has numbers 125-129, and any 13*, 14*, 150

Use the rename command with a RegEx expression that will modify the name the way you want.
I have the following regex expression:
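
The expression is reconstructed from the breakdown below, so treat it as a sketch; the /e modifier lets Perl evaluate the arithmetic in the replacement:

s/(^.+)\s-\s(\d\d\d)(.+)_www.+(\..+$)/$1." - ".($2+376).$3.$4/e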

Explanation:
// (^.+)\s-\s is the Show name plus ‘ – ‘, I am grouping just the name of the show to use it later
// (\d\d\d) is the incorrect episode number used in the filename
// (.+)_www.+ is the Episode name plus ‘unneeded stuff’, I am grouping just the episode name to use it later
// (\..+$) is the file extension, I am grouping it to use it later

// Now what was found with the above Regex will be substituted as follows:
$1 will have the Show name
“ – ” I am concatenating literal “ – “ after the show name
($2+376) I am adding the episode number + 376, which will give me the correct episode number. For example for 135, now it will be 511 (which is Season 5 episode 11)
$3 The Episode name captured from the previous regex
$4 The file extension captured from the previous regex

Run rename with -n (dry run) using the regex and file list created above to verify it will give you what you want/expect.
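
Putting the file list and the expression together, the dry run looks something like this (a sketch; this is the Perl-based rename, which on a Mac comes from Homebrew):

rename -n 's/(^.+)\s-\s(\d\d\d)(.+)_www.+(\..+$)/$1." - ".($2+376).$3.$4/e' *{12{5..9},13[0-9],14[0-9],150}*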

Output:

‘SHOW – 135 – Some Episode_www.unneededStuff.mp4’ would be renamed to ‘SHOW – 511 – Some Episode.mp4’

Now run it again without -n:
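
That is, the same sketch minus the -n flag:

rename 's/(^.+)\s-\s(\d\d\d)(.+)_www.+(\..+$)/$1." - ".($2+376).$3.$4/e' *{12{5..9},13[0-9],14[0-9],150}*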

One of the most important things you should do to your systems is to ensure they have the right time.
In this post I will show how to check and ensure your systems have the correct time using PowerCLI.

==> Login to vCenter:

$admin = Get-Credential -Credential EXAMPLE\john
Connect-VIServer -Server vc.example.com -Credential $admin

==> Check time settings:
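
A one-liner along these lines shows each host's current time, NTP servers and ntpd state; the calculated properties are my own way of pulling the information together:

Get-VMHost | Sort-Object Name | Select-Object Name,
    @{N='CurrentTime';E={(Get-View $_.ExtensionData.ConfigManager.DateTimeSystem).QueryDateTime()}},
    @{N='NtpServers';E={$_ | Get-VMHostNtpServer}},
    @{N='NtpdRunning';E={($_ | Get-VMHostService | Where-Object {$_.Key -eq 'ntpd'}).Running}}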

Output:

==> Set time to correct time:
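
If a host has drifted, you can push the current UTC time to it; this sketch updates every host, so narrow it down with -Name if you only need one:

Get-VMHost | ForEach-Object {
    $dts = Get-View $_.ExtensionData.ConfigManager.DateTimeSystem
    # Push the current UTC time to the host
    $dts.UpdateDateTime([DateTime]::UtcNow)
}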

==> Remove old NTP servers (if any):
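
Something like this clears any NTP servers that are already configured:

Get-VMHost | ForEach-Object {
    $old = $_ | Get-VMHostNtpServer
    if ($old) { Remove-VMHostNtpServer -VMHost $_ -NtpServer $old -Confirm:$false }
}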

Output:

==> Change NTP to desired configuration:
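
Then add the NTP servers you actually want; the pool.ntp.org names here are just examples:

Get-VMHost | Add-VMHostNtpServer -NtpServer '0.pool.ntp.org','1.pool.ntp.org'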

Output:

==> Enable Firewall Exception
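
Open the NTP client exception on each host's firewall, for example:

Get-VMHost | Get-VMHostFirewallException | Where-Object {$_.Name -like 'NTP*'} | Set-VMHostFirewallException -Enabled:$true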

==> Start NTPd service
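
Start the service on every host:

Get-VMHost | Get-VMHostService | Where-Object {$_.Key -eq 'ntpd'} | Start-VMHostService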

==> Ensure NTPd service starts automatically (via policy)
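
And set its startup policy so it comes up with the host:

Get-VMHost | Get-VMHostService | Where-Object {$_.Key -eq 'ntpd'} | Set-VMHostService -Policy "On"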

==> Verify all is set the way you expected
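
A check similar to the first one confirms the configuration:

Get-VMHost | Select-Object Name,
    @{N='NtpServers';E={$_ | Get-VMHostNtpServer}},
    @{N='NtpdRunning';E={($_ | Get-VMHostService | Where-Object {$_.Key -eq 'ntpd'}).Running}},
    @{N='NtpdPolicy';E={($_ | Get-VMHostService | Where-Object {$_.Key -eq 'ntpd'}).Policy}}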

Output:

Snapshots are a great feature, probably one of the coolest in virtualization. They can become a problem if they are not used appropriately; unfortunately, we sometimes let them grow to an unmanageable size, which can bring performance issues and give us headaches when we need to delete them.

In this post, I will show you how to find out what snapshots are present in your environment, along with some other useful information, like size.

To run the commands below you will need to install PowerCLI (on Windows), which lets you manage a VMware environment programmatically using PowerShell scripting.

To get PowerCLI, go to: www.vmware.com/go/powercli

1) Once you have PowerCLI, open it up; a command prompt will appear. Connect to your vCenter:
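
For example (use your own vCenter name, and add -Credential if you are not connecting with your logged-on account):

Connect-VIServer -Server vc.example.com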

// At this point you have a session open with your vCenter

2) Query your vCenter to find out what snapshots are present:
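
A query along these lines does it:

Get-VM | Get-Snapshot | Format-List VM, Name, SizeGB, Created, PowerState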

Let me explain what is going on:
'Get-VM' asks for the VMs that are running on your vCenter, and PowerCLI returns an object for each VM. You then ask for the snapshots of each returned VM object using 'Get-Snapshot', take that output, and format it with 'Format-List', asking only for the 'VM, Name, SizeGB, Created, PowerState' properties.

You can request any of the following:
Description
Created
Quiesced
PowerState
VM
VMId
Parent
ParentSnapshotId
ParentSnapshot
Children
SizeMB
SizeGB
IsCurrent
IsReplaySupported
ExtensionData
Id
Name
Uid

3) The above will give you the info you want, but I prefer CSV reports that I can share with the team or management. To get a good CSV report run the following:
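
Something like the following produces a CSV you can open in Excel; the file name is just an example:

Get-VM | Get-Snapshot |
    Select-Object VM, Name, SizeGB, Created, PowerState |
    Export-Csv -Path .\snapshots.csv -NoTypeInformation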

I recommend taking a look at VMware’s best practices around snapshots:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1025279

The following PowerShell one-liners will help you get a list of files and/or folders in a given folder.
It can be useful to capture this information in a text file for later processing or even a spreadsheet.

PowerShell is very powerful in that it returns objects that you can manipulate.

For Example, to return the list of ONLY directories on the C:\ drive:

PS C:\> Get-ChildItem | where {$_.PsIsContainer}

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
d----         3/12/2009  12:28 PM            Projects
d----         6/10/2010   9:38 AM            cygwin
d----         9/21/2010   7:02 PM            Documents and Settings

Get-ChildItem can be run with no arguments to get all items, just like a ‘dir’ command.
In fact, dir, ls, gci are aliases of the Get-ChildItem cmdlet.

In the following example, I will get the list of ONLY the directories and then from this I will take only the name. Also, I will use the alias 'dir'.

PS C:\> dir | where {$_.PsIsContainer} | Select-Object Name

Name
----
Projects
cygwin
Documents and Settings

If you wish to get the list of ONLY files, you just need to negate the where condition:
where {!$_.PsIsContainer}

PS C:\> Get-ChildItem | where {!$_.PsIsContainer} | Select-Object Name

Name
----
.rnd
AUTOEXEC.BAT
CONFIG.SYS
cygwin.lnk
install_log

You can redirect this output to a text file for later processing:

PS C:\> Get-ChildItem | where {!$_.PsIsContainer} | Select-Object Name > onlyFiles.txt

Now let's take it one step further and send this output to a CSV (Comma-Separated Values) file.

PS C:\> Get-ChildItem | where {!$_.PsIsContainer} | Select-Object Name | Export-Csv onlyFiles.csv