Benchmark: Rackspace’s block storage SATA vs. SSD vs. VM disk

Rackspace offers 2 types of block storage:

Standard (SATA) @ $0.15/GB and High-Performance (SSD) @ $0.70/GB

Seeing that the SSD storage is roughly 4x the cost of SATA, I decided to see if the performance is also 4x.

Let's see.

The setup:

An 8GB (RAM) system running ubuntu 12.04, and two 100GB volumes with the xfs file system mounted with the following options:

/dev/xvdb   /fast     xfs   noatime,nodiratime,allocsize=512m   0   0
/dev/xvdd   /slow     xfs   noatime,nodiratime,allocsize=512m   0   0
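
For reference, here's a minimal sketch of how each volume might be prepared before it shows up in fstab; the mkfs step with default flags is my assumption, not from the original setup:

mkfs.xfs /dev/xvdb
mkdir -p /fast
mount /fast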

Basic test using dd:

I’ve benchmarked lots of storage systems in the past and I always like to start out with dd. I do this because it doesn’t take any time to set up and should give you some idea of how the storage performs.

In this test I create a 20GB file on each mounted filesystem using the following command:

dd if=/dev/zero of=20GB.file bs=1M count=20k
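
One caveat worth keeping in mind: without forcing a flush, dd can partly measure the page cache rather than the disk. A variant that makes dd sync the data to disk before reporting its numbers (assuming GNU dd) is:

dd if=/dev/zero of=20GB.file bs=1M count=20k conv=fdatasync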

The results are a little surprising:

Volume write performance:

standard                  105 MB/s
high-performance          103 MB/s
the host's own volume     135 MB/s

Wow, not what we were hoping for. I ran this test several times and the “high-performance” storage was always the slowest. To quote Homer Simpson: “D'oh!”

bonnie++

I ran bonnie++ with the following args; basically, I specified a test size of double the amount of RAM:

bonnie++ -s 16g

For sequential reads and writes they were about the same; this is expected, as dd already showed this:

Volume                sequential reads    sequential writes
standard              95981/sec           16564/sec
high-performance      95927/sec           15633/sec
localVM               108463/sec          1208/sec

The results do show where the high-performance storage excels: random seeks.

Volume                random seeks
standard              473.4/sec
high-performance      1969/sec
localVM               578.6/sec

Conclusion

The question was: does the high-performance storage, at 4x the cost, perform 4x better?

For random seeks, the answer is yes: 1969 seeks/sec is a little over 4x the standard volume's 473.4/sec.

Nice job, rackspace.

However, as the sequential numbers above show, it doesn’t always outperform standard or local disk. So before you decide on the more expensive option, benchmark your application on it.

Compiling and packaging php 5.4.10 for ubuntu 12.04

We are in the midst of upgrading from 5.3.10 to 5.4 and couldn’t find a debian package for it. The current stable version is 5.4.10, but this changes often, so I wanted to automate the compiling and packaging process. First, thanks to Jordan Sissel, who is a badass sysadmin/developer and who wrote fpm, which I’m using here to create the debian package.

The end result is a script that will install the prerequisite packages, then download, compile and package whichever php version you specify.

The basic process is:

1. install the prerequisite packages needed to compile
2. download the php source
3. uncompress
4. configure
5. make
6. make install, but do this while changing its destination directory
7. create the package

Step 6 is where php got a little tricky. The fpm wiki page which describes how to package something that uses make install ( https://github.com/jordansissel/fpm/wiki/PackageMakeInstall ) has you changing the destination directory in the make install process by specifying:

make install DESTDIR=/tmp/installdir

However, this didn’t work with php; instead I had to specify INSTALL_ROOT:

INSTALL_ROOT=/tmp/installdir make install

fpm is really simple to use, and because it’s a ruby gem it’s also easy to install; assuming rubygems is already on the box, it’s a one-liner:
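
gem install fpm

To create the package I’m using the following command: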

fpm -s dir -t deb -n php -v 5.4.10 -d "libmcrypt-dev" -d "libt1-dev" --deb-user root -p php-VERSION_ARCH.deb --description 'php 5.4.10 compiled for apache2' -C /tmp/installdir etc usr

It’s pretty self-explanatory, but a few things I’ll point out: the “-d” flags are packages that are dependencies, and “etc” & “usr” are the subdirectories of /tmp/installdir which you want packaged up.
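
Putting it all together, here’s a condensed sketch of what the script does. The mirror URL, prerequisite list and configure flags are illustrative assumptions on my part; the real script is linked below:

#!/bin/bash
# condensed sketch of the download/compile/package flow described above
set -e

PHP_VERSION=${1:-5.4.10}
INSTALL_DIR=/tmp/installdir

# 1. prerequisites (abbreviated; the real script installs more)
apt-get install -y build-essential apache2-prefork-dev libxml2-dev libmcrypt-dev libt1-dev

# 2-3. download and uncompress the source
wget -O php-${PHP_VERSION}.tar.gz "http://us.php.net/get/php-${PHP_VERSION}.tar.gz/from/this/mirror"
tar xzf php-${PHP_VERSION}.tar.gz
cd php-${PHP_VERSION}

# 4-5. configure and compile (flags are illustrative)
./configure --with-apxs2=/usr/bin/apxs2 --with-mysql
make

# 6. stage the install; note INSTALL_ROOT, not DESTDIR
rm -rf ${INSTALL_DIR}
INSTALL_ROOT=${INSTALL_DIR} make install

# 7. package the staged tree
fpm -s dir -t deb -n php -v ${PHP_VERSION} \
  -d "libmcrypt-dev" -d "libt1-dev" --deb-user root \
  -p php-VERSION_ARCH.deb \
  --description "php ${PHP_VERSION} compiled for apache2" \
  -C ${INSTALL_DIR} etc usr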

Script to download, compile and package php

php 5.4.10 debian package for ubuntu 12.04

What I do in DevOps

I take ownership of a company's infrastructure.
To manage it I write software.
My language of choice is ruby and the framework is chef.
Along with these skills I also bring an expertise in many technologies.
MySQL, apache, Linux, mongodb, data centers, cloud providers, etc.
I also pair with engineering to streamline processes such as deployments, metrics and performance, scaling, security and continuous integration and testing.

rackspace: their service blows but they have high uptime

I started using rackspace managed hosting back in 2005. This was before the “cloud”, so they were all physical hosts.
Since then I’ve used them for a mix of cloud and physical for several different companies.
I have also built data centers from the ground up and managed a lot of services on AWS (Amazon Web Services).
After being a customer of rackspace for the last seven years, I have formed the following opinion.

1. Their service blows
If you think the cloud is “scary” and mysql gives you the willies, then their support might seem like wizards, but in reality they are not.
I found out from an inside source that many of their support team are high school graduates who are run through a boot camp.

Instant wizards.

I have had countless dealings with support where I found zero value in the exchange.
This isn’t true of their whole team, of course; at some point you might get the contact information of someone who can actually help you.
When you do, hold on to it.
A great example of this is their “managed mysql”; what it amounts to is mysql installed on a supposedly faster file system and backed up.
Of course it’s tuned, which means the innodb_buffer_pool_size is changed per RAM size (see the snippet below).
That’s it.
If you really have a problem with MySQL, no way are the bootcamp kids going to be of much use to you.
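
To put that “tuning” in concrete terms, it amounts to a single line in my.cnf like the following; the 6G figure is an illustrative value for an 8GB box, not their actual formula:

[mysqld]
innodb_buffer_pool_size = 6G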

2. They have high uptime
In both cases, physical and cloud, rackspace has very good uptime.
Especially in comparison to EC2, and I’m not just talking about the big EC2 outages; I’m talking about the day to day.
It’s pretty common in EC2 to have a server freeze up and need to be rebooted, or to have it reboot on its own.
Of course, when you decide to move to the cloud this is something you need to plan for, but in the case of rackspace I can only think of a few times when one of my cloud instances went offline unexpectedly.

simple way to get notified when a cronjob fails

Every place I’ve ever worked had cronjobs running all over the place. Some are simple tasks like clearing out a temp directory. Others end up being a critical piece of the infrastructure that a developer wrote without telling anyone about. I like to call this type of scheduled job the glue, as it’s usually holding your company together.

True story: I once found a cronjob running on a cluster of 200 servers, named brett.sh, that restarted an app every 30 seconds!!

In most cases no one knows where the “glue” cronjob runs, how often, and most importantly when it fails. There are a few tools out there to put all of your scheduled jobs in one spot and take actions on failure. Some of those include opswise (http://www.opswise.com/), which I’ve used in the past and had a lot of success with, and Amazon’s Simple Workflow Service (http://aws.amazon.com/swf/), which I haven’t used yet.

There is also an open-source project sponsored by yelp called tron which does most of this already, except for notifying when a job fails. BTW, there is a feature request for this already ( https://github.com/Yelp/Tron/issues/25 ).

Anyway, as a quick workaround I just add a check for the exit code in my crontab, which will alert me if the job doesn’t exit zero.

Example:

1 0 * * * touch /home/dodell/foobar ; if [ $? -ne 0 ] ; then mail -s 'touch_file failed' dodell@workobee.com < /etc/hostname ; exit 1 ; fi
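
If you have more than a couple of jobs, a small wrapper script keeps the crontab readable. Here’s a hypothetical sketch; the script name, location and recipient address are placeholders of mine, not a standard tool:

#!/bin/bash
# cronmail.sh -- run the given command and send mail if it fails
# usage: cronmail.sh 'job name' command [args...]
NAME="$1"; shift
"$@"
STATUS=$?
if [ $STATUS -ne 0 ]; then
  mail -s "cronjob '${NAME}' failed (exit ${STATUS})" dodell@workobee.com < /etc/hostname
fi
exit $STATUS

which turns the crontab entry into:

1 0 * * * /usr/local/bin/cronmail.sh touch_file touch /home/dodell/foobar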

add timestamps to your standard out and standard error

A lot of the time, when executing a cronjob or a long running command, I capture the standard out and standard error to a log file. This works okay, but without timestamps it isn’t really useful, especially for a job that runs many times a day, which makes it difficult to tell which lines in the log match which run. What I do now is copy a script to all my systems (using chef, of course) which will annotate any output I pipe to it. A command line example:

dodell@spork/etc$ cat resolv.conf | /usr/local/bin/annotate.sh
Thu Sep  6 14:39:59 PDT 2012: # Automatically generated, do not edit
Thu Sep  6 14:39:59 PDT 2012: nameserver 173.203.4.8
Thu Sep  6 14:39:59 PDT 2012: nameserver 173.203.4.9

Okay, not a super useful example, but you get my point. This is even more useful when added to a cronjob:

1 0 * * * /usr/local/bin/percona_backup_and_restore.sh backup 2>&1 | /usr/local/bin/annotate.sh >> /var/log/mysql/xtrabackup.log

and the output:

Thu Sep  6 00:01:02 PDT 2012:
Thu Sep  6 00:01:02 PDT 2012: InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
Thu Sep  6 00:01:02 PDT 2012: and Percona Inc 2009-2012.  All Rights Reserved.
Thu Sep  6 00:01:02 PDT 2012:
Thu Sep  6 00:01:02 PDT 2012: This software is published under
Thu Sep  6 00:01:02 PDT 2012: the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.
Thu Sep  6 00:01:02 PDT 2012:
Thu Sep  6 00:01:02 PDT 2012: 120906 00:01:02  innobackupex: Starting mysql with options:  --password=xxxxxxxx --user='debian-sys-maint' --unbuffered --
Thu Sep  6 00:01:02 PDT 2012: 120906 00:01:02  innobackupex: Connected to database with mysql child process (pid=19867)
Thu Sep  6 00:01:08 PDT 2012: 120906 00:01:08  innobackupex: Connection to database server closed
Thu Sep  6 00:01:08 PDT 2012: IMPORTANT: Please check that the backup run completes successfully.
Thu Sep  6 00:01:08 PDT 2012: At the end of a successful backup run innobackupex
Thu Sep  6 00:01:08 PDT 2012: prints "completed OK!".

Ah, how beautiful: standard out and error with timestamps… magic.

The code:

#!/bin/bash
# prefix each line of stdin with a timestamp
while read line
do
    echo "$(date): ${line}"
done

Knowing when it's time to leave

I spent a brief amount of time at a start-up where the culture sucked.

Basically, unless you were there from day one, your opinion didn’t matter.

The reason for this rotten culture was that too many people in key positions hated their jobs.

They had been there too long and were miserable, but weren’t mature enough to leave, or had other reasons for staying.

If you work in tech, live in the bay area, and don’t like your job, the solution is easy.

Leave.

Most likely you will find a much better gig and end up in a culture where your opinion matters.

In the end we just want to help our company be successful through a combination of the skills we have and the lessons we learn along the way.

Life is way too short to spend your time, energy and brain on a company that doesn’t make you happy.