Replace Red Hat Repo EPEL7 With EPEL6…….

April 30th, 2015

New job uses Red Hat (yuk!!) so I will need to store some notes here until they seep into long-term memory (or work switches to Ubuntu :o)

While trying to coax some new software onto the box from the EPEL repo using YUM, the system spat out a load of errors and dependency issues.

Running

yum repolist

showed that EPEL7 had been installed, but this was a Red Hat 6 box; it should have had EPEL6 installed.

I figured out that to remove the bad EPEL7 repo I needed to use

yum remove epel-release

I then installed the EPEL6 repo from an .rpm package with

rpm -ivh epel-release-6-8.noarch.rpm

As EPEL7 was no longer installed, this went without a problem. But when trying to add packages I was still getting dependency errors because YUM was still trying to use EPEL7.

It turns out that when you have removed and replaced the repo, you then need to clean up the cached metadata after you have installed the correct one.

So this does the trick:

yum remove epel-release
rpm -ivh epel-release-6-8.noarch.rpm
yum clean all

‘yum clean’ has multiple options available if you look into it……’all’ obviously cleans the lot.
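For reference, a narrower clean-up is also possible; in this case clearing just the cached metadata would probably have been enough:

yum clean metadata      # throw away cached repo metadata (mirror lists, package lists)
yum clean expire-cache  # lighter still: just mark the local cache as expired so it revalidates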

Job done.


MBRalign, not all offsets are equal…….

February 22nd, 2015

I recently had to deal with misaligned VMDK files for older legacy VMs on a NetApp. How old? Well, the VMs in question were running Windows Server 2003, but anything prior to Vista will most likely be misaligned unless you took steps when you created the VM. This article explains the misalignment issue better than I could, but in a nutshell, the misaligned blocks mean increased read and write operations owing to incorrect boundaries.
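If you want to check a guest for yourself, one quick way on Linux is to look at the partition start sector (a sketch; the device name is an example):

fdisk -lu /dev/sda
# a start sector of 63 (the pre-Vista default) is not divisible by 8,
# so the partition does not begin on a 4 KB boundary, i.e. it is misaligned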

As the backend was a NetApp, it was decided that we would use the temporary fix of creating an optimised datastore and migrating all affected VMs onto it (the optimised datastore is itself offset by a certain amount). The theory goes that two wrongs can make a right, and that by offsetting the offset, the additional read/write requests can be negated. You can read more about that here.

I followed the NetApp Virtual Storage Console plugin (VSC) wizard, selected my first VM, and completed the process. The wizard took my responses, created an offset-optimised datastore, and then moved the VM onto it. Job done…….or so I thought.

I proceeded to migrate additional VMs, and all was going swimmingly until I reached one that would not migrate. The wizard gave the following unhelpful response.

datastore-says-no

I was a little puzzled as to why this VM would not migrate with the rest. I bypassed it and carried on, until I met another VM that would not migrate, with the same error message. After a bit of comparison and head scratching, I spotted the difference. Hovering over the various VMDK files, the popup data indicated that the VMDK files themselves had different offsets! You can see these below.

vmdk-offset5-smudged

vmdk-offset6-smudged

vmdk-offset7-smudged

The first VM that failed, with an offset of 5, was a P2V of an older Dell server; the original server most likely had a recovery partition, which meant the offset of the actual boot partition was non-standard.

The second VM that failed, with an offset of 6, was an old Linux VM, which most likely had an old LILO partition with its own partition offset value.

All of the VMs that successfully migrated were plain Windows 2003 with an offset of 7.

You should note that when you run the console wizard to create the functionally aligned datastore, it creates the datastore with an offset that corrects the VMDK of that particular VM. The resulting datastore will then only be suitable for VMs with the same offset.


Format MediaWiki <PRE> With Wrap Around

October 24th, 2014

Been a little while since I used/worked with MediaWiki.

Trying to include script output in a <PRE> section, and of course by default it doesn’t wrap the content to fit the container.

Spent half an hour figuring this out *again* (last time it probably took twice that, so I'm getting better).

Making a note of it here for next time.

Adjust this

<pre style="white-space:nowrap;">

To this

<pre style="white-space:normal;">

This will cause the content of a <PRE> section to wrap around within the container.
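For example, applied to a chunk of script output (note that white-space:normal will also collapse the original line breaks; white-space:pre-wrap is an alternative that wraps while still preserving them):

<pre style="white-space:normal;">
one very long line of script output that would otherwise push the page sideways...
</pre>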


iLO2 passthrough authentication stops working after firmware upgrade…….

June 16th, 2014

If, after upgrading HP iLO2 firmware via the web administration console, your SSO passthrough authentication stops working, try accessing it via a different browser from the one you did the update work with.

If the other browser works OK, then you may need to purge the browser history for the browser that you did the work from.

Note to self: remember this in the future and don't make the same panic call to the datacenter engineer… :o/


Improving The Default Nagios/Icinga Remote_Procs Check…….

October 31st, 2013

When attempting to check processes on a remote Linux server using Nagios or Icinga, you will most likely use NRPE to call the built-in nagios-plugin ‘check_procs’ on the remote host.

This plugin, along with all the other nagios-plugins, will typically be installed into /usr/lib/nagios/plugins/ on the remote host file system.

But you don't actually call the plugin directly; you use the check_nrpe plugin on the server to execute a command defined in /etc/nagios/nrpe.cfg on the remote host.

By default, the file contains sample command definitions on or around line 196 as shown below:

# The following examples use hardcoded command arguments...

command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1
command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200


# The following examples allow user-supplied arguments and can
# only be used if the NRPE daemon was compiled with support for
# command arguments *AND* the dont_blame_nrpe directive in this
# config file is set to '1'.  This poses a potential security risk, so
# make sure you read the SECURITY file before doing this.

#command[check_users]=/usr/lib/nagios/plugins/check_users -w $ARG1$ -c $ARG2$
#command[check_load]=/usr/lib/nagios/plugins/check_load -w $ARG1$ -c $ARG2$
#command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$
#command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$

Looking at the ‘#command[check_procs]…’ entry on the last line, note how it only takes x3 arguments by default. This would be fine if we were only worried about overall system process numbers in a certain state (-s takes a value that filters the count to processes in that state).

But what if I want to monitor, say, only the number of Apache processes in a sleeping state? I need to filter on process name AND state. From the documentation for this plugin, it can take an array of strings of process names to search and filter on. So we rewrite the command definition to add another parameter, like so:

command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -a $ARG3$ -s $ARG4$

Now the first param -w is the warning threshold (here used as a minimum: warn if the count drops below it), the second param -c is the critical threshold (here used as a maximum: go critical if the count exceeds it), the third param -a is the process name filter (only count processes whose command line matches this string) and the fourth param -s is the state filter (zombie, sleeping, etc.).
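You can test the remote definition by hand from the Nagios server before wiring up a service (the host name and plugin path are examples, adjust to suit):

/usr/lib/nagios/plugins/check_nrpe -H web01 -c check_procs -a 1: :40 httpd S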

Now on the server side, we can define a command to make NRPE trigger this definition like this:

# 'check_remote_procs' command definition
define command{
        command_name    check_remote_procs
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_procs -a $ARG1$ $ARG2$ $ARG3$ $ARG4$
        }

And finally we can define the actual service check for the host like this:

# Check remote node apache proxy process
define service{
        use                     generic-service,service-pnp
        host_name               web01
        service_description     httpd Process
        check_command           check_remote_procs!1:!:40!httpd!S
        }

Note the ‘check_command’ line, which says to run the local ‘check_remote_procs’ command, which in turn causes the local server ‘check_nrpe’ plugin to be executed against the remote host and call the remote ‘-c check_procs’ command, passing $ARG1$ ‘1:’ (so if less than 1, alert), $ARG2$ ‘:40’ (so if more than 40, alert), $ARG3$ ‘httpd’ (the name of the apache process) and $ARG4$ ‘S’ (the process status for sleeping).

You can also adapt this to check all running processes for zombie processes by using ‘*’ for the third param and ‘Z’ for the fourth param, and adjusting the counts so the check fires when zombies appear, e.g. ‘:0’ to warn on any zombie at all and ‘:1’ to go critical on more than one (note the colons move to the front: these are now upper limits rather than lower ones), as sketched below.
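A sketch of what that might look like as a service definition (the host name is an example; thresholds as just described):

# Check remote node for zombie processes
define service{
        use                     generic-service
        host_name               web01
        service_description     Zombie Procs
        check_command           check_remote_procs!:0!:1!*!Z
        }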

Another thing I change in the default remote host nrpe.cfg file is to comment out the ‘check_load’ and ‘check_disk’ commands that use hardcoded values and uncomment the variable versions below them (far more useful).

Enjoy.


Working with JKS SSL Keystores…….

October 25th, 2013

When working with a Java Key Store (JKS), make sure to keep the initial .jks keystore file that you create the certificate signing request (CSR) from.

When you create a JKS, it gets seeded with the private key (which you cannot really see or get at, except with 3rd party tools/utilities). This private key is used to create the CSR; they are related. You cannot use the signed public certificate you get back with any other JKS! Something like this:

keytool -genkey -alias mystuff -keyalg RSA -keysize 2048 -keystore mystuff.jks -validity 1095 -dname "CN=*.mystuff.com,OU=IT,O=MyOrg,L=London,ST=London,C=GB"

The CN= part of the -dname parameter is the URL that you wish to encrypt/protect with SSL; make sure you get it correct or your SSL cert will be useless. Provided the above is all good, it will prompt you for a password of at least x6 characters, and then again to confirm. It will then prompt you twice for a password for the -alias that you specified. This should match the password you just used for the JKS keystore. It will then create a file ‘mystuff.jks’ that contains the embedded private key.

From the above JKS keystore, you will need to create a CSR to send away to be used to sign your public key cert. Something like this:

keytool -certreq -alias mystuff -keystore mystuff.jks -file mystuff.csr

The -alias must match the -alias used when creating the JKS keystore. Again, this will prompt for the JKS keystore password. When complete, this will produce a CSR file called ‘mystuff.csr’ that you can send to a vendor for signing.

When the signed public key cert comes back, it *may* have trust chain certificates with it. If so, you should simply paste the plain text contents of all the certificates into a single plain ASCII file using a text editor. You should paste them in the order below:

-----BEGIN CERTIFICATE-----
your public key in Base-64 encoded X.509
-----END CERTIFICATE-----

-----BEGIN CERTIFICATE-----
primary chain cert in Base-64 encoded X.509
-----END CERTIFICATE-----

-----BEGIN CERTIFICATE-----
secondary chain cert in Base-64 encoded X.509
-----END CERTIFICATE-----

Save the file as something like mystuff.cer.
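Before importing, you can sanity-check the combined file; keytool should print the details of the certificate(s) it finds:

keytool -printcert -file mystuff.cer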

When you import the public cert, you must supply the same alias name that you used to create the private key and CSR, and the passwords for the JKS and the alias must all match. To import use:

keytool -import -alias mystuff -file mystuff.cer -keystore mystuff.jks

If all goes well, you should have x1 certificate in the store with a chain length of 3. To verify:

keytool -list -keystore mystuff.jks -alias mystuff -v
Enter keystore password:
Alias name: mystuff
Creation date: 21-Oct-2013
Entry type: PrivateKeyEntry
Certificate chain length: 3
Certificate[1]:

The -v output should print out the details for all x3 certificates.

If you don't like command line methods, then this tool is rather good.


Chroot SFTP Permissions Reminder……

October 24th, 2013

Note to self: When chrooting SFTP users, the initial folder that the user gets chrooted into needs to be owned by root:root and have permissions of 755.

Upon landing in this folder, the user will be unable to do anything, so you need to create an initial top-level sub-folder here that their logon owns and has permissions on. Something like this:

mkdir /home/sftp/user
chown root:root /home/sftp/user
chmod 755 /home/sftp/user

mkdir /home/sftp/user/realuser
chown realuser:sftp-only /home/sftp/user/realuser
chmod 700 /home/sftp/user/realuser

You'll also need something along the following lines in sshd_config

#Subsystem sftp /usr/lib/openssh/sftp-server
Subsystem sftp internal-sftp -f auth -l info

Match Group sftp-only
    ChrootDirectory %h
    X11Forwarding no
    AllowTcpForwarding no
    ForceCommand internal-sftp -f auth -l info

You need to make sure each user has their home folder path set to /home/sftp/user/ and that they are a member of the sftp-only group in order to get matched for chrooting.
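A hypothetical example of setting that up for an existing account (user and group names match the listing above):

usermod -d /home/sftp/user -aG sftp-only realuser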

Now when the user logs in, they will land in /home/sftp/user, they will be unable to do anything in this location, they will be unable to browse outside of this location, and the first thing they should see is a ‘realuser’ folder within which they can create anything they like.


Think Deep Deep Thoughts…….

September 27th, 2013

A friend of mine has decided to get into blogging for similar reasons to myself. While I wanted somewhere to store the IT bits I picked up that aren't in the user manuals, Bruce wanted somewhere to record his thoughts on…….well, human thought, and so Electia was born.

He's one smart cookie, and I have to admit he sometimes loses me in places, but he does a very good job of noticing stuff that we do every day without realising it and then explaining why we do it.

For example, his lottery numbers. Mathematically, he has the same odds of winning as anyone else who plays with x6 random numbers. Yet when I first read his numbers, I fell into the thought pattern that he describes.

If you're clever, and you want to be made even cleverer, then go have a read.


Keeping NTP Clock Time On Servers…….

September 4th, 2013

Not sure why I never made notes about this before now……but just had to spend an hour tracking down how to make sure Windows syncs with an external NTP time source…..and may as well add Linux while I’m here so they are both in the same place.

Linux first as it’s easiest. Use a daily cronjob to resync time (in this case I use the public pool ntp server):

0 9 * * * /usr/sbin/ntpdate -s -b -p 4 -u 0.pool.ntp.org > /dev/null 2>&1

This is a bit of a hack fix though; it's better to properly set up ntpd on the system, but this will keep you going.
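If you do go the ntpd route, a minimal /etc/ntp.conf sketch using the same public pool might look like this:

server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
driftfile /var/lib/ntp/drift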

Windows is a little more involved. Use the w32tm program to configure and then check the time.

w32tm /config /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8" /syncfromflags:manual /update

To understand the ‘0x8’ flag, check out this page.

Once you have configured NTP, you can test it using the following:

w32tm /stripchart /computer:0.pool.ntp.org /samples:5 /dataonly

This should compare your clock time against the pool NTP server and display the offset/difference; it should be tiny.
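You can also ask the time service what it currently thinks of its configuration and sync state (available on Vista/Server 2008 and later):

w32tm /query /status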

 


Check SSL Key Files Match…….

September 4th, 2013

Came across the following commands to check that a CSR, private key, and public cert all relate to each other.

Works by extracting the modulus and then MD5’ing it for comparison.

-bash-3.00# openssl rsa -noout -modulus -in priv_key.txt | openssl md5                                    
adc0ac0d6e1a92df80ee84bf7aa5e987

-bash-3.00# openssl req -noout -modulus -in csr.txt | openssl md5                                         
adc0ac0d6e1a92df80ee84bf7aa5e987

-bash-3.00# openssl x509 -noout -modulus -in _pub.txt | openssl md5
d180bf08405617f1fd79a4b15795723e

Note in the example above that the private key and CSR hashes match, but the public cert's hash does not, meaning that cert was not issued from this key/CSR pair.
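If you just want a quick yes/no, a one-liner like this does the comparison for you (file names are examples):

[ "$(openssl rsa -noout -modulus -in priv_key.txt)" = "$(openssl x509 -noout -modulus -in _pub.txt)" ] && echo MATCH || echo MISMATCH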