Tuesday, July 24, 2012

Backup your MySQL to Amazon S3

Motivation: if the database ever gets corrupted, or MySQL goes boink, this keeps the data loss to a minimum.

Prerequisites:
  1. Familiarity with shell scripting
  2. An S3 bucket and credentials
  3. Any S3 tool that you're familiar with. Here I have used s3cmd. You may use the weapon of your choice, maybe JetS3t or anything else.
Steps:
  1. Create a clean directory where all the operations will take place. Call it staging_dir.
  2. Take a mysqldump of the desired database, or of all databases. See the mysqldump usage document for the options:
    mysqldump --user MYSQL_USER --password=MYSQL_PASSWORD --add-locks --flush-privileges \
    --add-drop-table --complete-insert --extended-insert --single-transaction \
    --databases MYSQL_DB > STAGING_DIR/DUMP_FILE.sql

    • --add-locks: Surround each table dump with LOCK TABLES and UNLOCK TABLES statements. This results in faster inserts when the dump file is reloaded.
    • --flush-privileges: Send a FLUSH PRIVILEGES statement to the server after dumping the mysql database. This option should be used any time the dump contains the mysql database and any other database that depends on the data in the mysql database for proper restoration.
    • --complete-insert: Use complete INSERT statements that include column names. 
    • --extended-insert: Use multiple-row INSERT syntax that includes several VALUES lists. This results in a smaller dump file and speeds up inserts when the file is reloaded.

      You may also want to add -h HOSTNAME if your cron job runs on a machine other than the database server.
  3. Make a tarball of the dump file.
  4. Upload to S3 (I assume you have already configured whatever S3 tool you use). With s3cmd, it's simple: s3cmd put FILENAME s3://BUCKETNAME/SUB/DIRECTORY/
  5. The tricky part is: how do you know what file was uploaded X days ago? With richer scripting languages like Python or Groovy, it's a lot easier to enumerate the files in the backup folder and delete the excess ones when there are more than X. Doing that in a shell script is fiddly, so I ended up using smart filenames instead.

    I name the files www.mywebsite.com.YYYY-MM-DD.sql.tar. So, to delete the file that was created X days ago, all I have to do is generate the date of X days ago, plug it into the same filename structure, and call s3cmd del s3://BUCKETNAME/SUB/DIRECTORY/OLD_FILENAME
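The date arithmetic is a one-liner with GNU date (the -d flag is GNU-specific, so this sketch assumes Linux; on BSD/macOS you would use date -v instead). The prefix and retention period are placeholders from the naming scheme above:

```shell
# Sketch: build the name of the X-day-old backup so it can be deleted from S3.
# keep_for_days and the www.mywebsite.com prefix are illustrative placeholders.
keep_for_days=30
old_date=$( date -d "-${keep_for_days} days" '+%Y-%m-%d' )
old_file="www.mywebsite.com.${old_date}.sql.tar"
echo "${old_file}"
```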
Here is the complete script:
#!/bin/bash
set -e

# --- configuration: placeholder values, set these for your environment ---
mysql_user="MYSQL_USER"
mysql_pass="MYSQL_PASSWORD"
mysql_db="MYSQL_DB"
s3_bucket="BUCKETNAME/SUB/DIRECTORY"
staging_dir="/tmp/mysql_bkp_staging"
keep_for_days=30
is_quiet="false"
logfile="/var/log/mysql_backup.log"

today=$( date '+%Y-%m-%d' )
remove_bkp_on=$( date -d "${today} -${keep_for_days} days" '+%Y-%m-%d' )

# filenames follow the www.mywebsite.com.YYYY-MM-DD.sql.tar convention
write_to="www.mywebsite.com.${today}.sql"
fullpath_sqlfile="${staging_dir}/${write_to}"
fullpath_tarfile="${staging_dir}/${write_to}.tar"
delete_file="www.mywebsite.com.${remove_bkp_on}.sql.tar"

# conditional printing
function echome(){
    if [ "${is_quiet}" != "true" ] ; then
        echo -e "\033[0;34m\xE2\x9C\x93 $1\033[0m"
    fi
}

function start_bkp(){
    echome "--- MySQL Backup on $( date '+%b %d, %Y' ) ---"
    #delete staging dir if any
    echome "Cleaning staging directory: ${staging_dir}"
    rm -rf "${staging_dir}"
    mkdir "${staging_dir}"

    #take a MySQL dump (the password is deliberately left out of the logged line)
    echome "Starting MySQL backup..."
    echome "mysqldump --user ${mysql_user} --add-locks --flush-privileges --add-drop-table --complete-insert --extended-insert --single-transaction --databases ${mysql_db} > ${fullpath_sqlfile}"

    mysqldump --user ${mysql_user} --password=${mysql_pass} --add-locks --flush-privileges --add-drop-table --complete-insert --extended-insert --single-transaction --databases ${mysql_db} > "${fullpath_sqlfile}"

    echome "Creating tar file..."
    echome "tar cf ${fullpath_tarfile} --directory=${staging_dir} ${write_to}"
    tar cf "${fullpath_tarfile}" --directory="${staging_dir}" "${write_to}"

    #upload to S3
    echome "Uploading to S3..."
    echome "s3cmd put ${fullpath_tarfile} \"s3://${s3_bucket}/\""
    s3cmd put "${fullpath_tarfile}" "s3://${s3_bucket}/"

    #delete the backup uploaded keep_for_days ago
    echome "Deleting the file uploaded ${keep_for_days} days ago..."
    echome "s3cmd del \"s3://${s3_bucket}/${delete_file}\""
    s3cmd del "s3://${s3_bucket}/${delete_file}"

    #delete staging folder
    echome "Removing staging directory: ${staging_dir}"
    rm -rf "${staging_dir}"
    echome "--- MySQL Backup Completed ---"
    echome "   "
}

start_bkp >> "${logfile}"

Setting up the cron job: place this script somewhere safe and give it execute permission. I kept it under /opt/scripts/mysql_backup.sh and appended this line via crontab -e

0 4 * * * /opt/scripts/mysql_backup.sh
This runs the script daily at 4 AM.

Thursday, July 19, 2012

Updating Version in Maven pom.xml

Sure, you can do it manually. And it worked flawlessly until now. Wait until you have a multi-module project and still dare to do it manually. Sooner rather than later you will find you have made a typo -- you renamed the version to 1.2.3-SHAPSHOT.

Anyway, the most common Google search turns up mvn release:update-versions, which requires the pom.xml to be at a SNAPSHOT version, and it belongs to the Release plug-in -- you may not want to perform a release.

A better (and correct) alternative is the Maven Versions plug-in. It updates the versions of submodules too. Here is how to use it:
mvn versions:set -DnewVersion=1.2.3-SNAPSHOT
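If it goes wrong, the same plug-in can undo the change: versions:set leaves a pom.xml.versionsBackup beside each pom it touched. A short sketch of the full workflow (goal names are the Versions plug-in's own; the version number is just an example):

```shell
# Bump every module's version; backup copies (pom.xml.versionsBackup) are kept.
mvn versions:set -DnewVersion=1.2.4-SNAPSHOT

# Made a mess? Restore the backups.
mvn versions:revert

# Happy with the result? Remove the backups.
mvn versions:commit
```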

Saturday, July 7, 2012

Change Nagios' Default Home Page

My last install of Nagios on CentOS went fine, except that the Nagios home page was not loading the welcome page in the main frame -- it was blank. I had to change the default home page to something meaningful, so I changed it to the "tactical overview" page.

In the /usr/local/nagios/share/index.php or /usr/share/nagios/htdocs/index.php file, change the URL that the main frame loads by default.
If you do not find your index.php in above two locations, try using this command to locate it on your disc:
locate index.php | grep nagios

Here are some paths that you may be interested in:
"Tactical Summary": cgi-bin/tac.cgi
"Map"             : cgi-bin/statusmap.cgi?host=all
"Hosts"           : cgi-bin/status.cgi?hostgroup=all&style=hostdetail
"Services"        : cgi-bin/status.cgi?host=all
"Summary"         : cgi-bin/status.cgi?hostgroup=all&style=summary
"Problems"        : cgi-bin/status.cgi?host=all&servicestatustypes=28
"Availability"    : cgi-bin/avail.cgi
"Trends"          : cgi-bin/trends.cgi
"Summary"         : cgi-bin/summary.cgi

Tuesday, July 3, 2012

Convert EBS Backed AMI into Instance Store Backed

We had the following requirements:
  1. Have the test environment entirely on t1.micro instances. The problem with t1.micro at that time was that it was available for EBS-backed instances only.
  2. We did not want to use EBS-backed images for Cassandra or production machines, as some standard tests complain about EBS' IO performance. Refer this.
  3. Even in production, we had a mix of t1.micro and m1.large machines.
This forced the following design:
  1. Unified images -- all images have all the software installed; we start just the required services on any given instance.
  2. Two types of AMIs -- we had to maintain an identical EBS AMI and Instance Store AMI.
The obvious solution was to make every update on both AMIs separately, which was painfully frustrating. Fortunately, we could convert an EBS-backed AMI into an Instance Store backed one. It's pretty simple.

Prerequisites:
  1. You have the Amazon EC2 AMI Tools and Amazon EC2 API Tools installed and configured.
  2. Proper Access Key Id, Secret Access Key, X.509 Certificate, and Private Key. Refer.
Procedure: It's the same as creating a normal Instance Store backed AMI.

Upload the certificate and private key file to a location that does not go into the image file, such as /mnt, and make sure they have read-only permission.
[root@domU-12-34-56-AA-AA-78 ~]# chmod 400 /mnt/*pem
[root@domU-12-34-56-AA-AA-78 ~]# ls -l /mnt
total 24
-r-------- 1 root root   916 Jul  3 09:05 cert-xxxxxxxx.pem
-r-------- 1 root root   926 Jul  3 09:05 pk-xxxxxxxx.pem
Next, bundle the instance using ec2-bundle-vol command

[root@domU-12-34-56-AA-AA-78 ~]# ec2-bundle-vol -d /mnt -k /mnt/pk-xxxxxxxx.pem -c /mnt/cert-xxxxxxxx.pem -u 123456789012 -r x86_64 -p naishe_ami
Copying / into the image file /mnt/naishe_ami...
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.001876 seconds, 559 MB/s
mke2fs 1.39 (29-May-2006)
Bundling image file...
 Splitting /mnt/naishe_ami.tar.gz.enc...
Created naishe_ami.part.000
Created naishe_ami.part.001
Created naishe_ami.part.002
...
Created naishe_ami.part.140
Generating digests for each part...
Creating bundle manifest...
ec2-bundle-vol complete.

Upload to your S3 bucket, using ec2-upload-bundle

[root@domU-12-34-56-AA-AA-78 ~]#  ec2-upload-bundle -b amibucket/naishe_ami -m /mnt/naishe_ami.manifest.xml -a 123456789012 -s th3SecRETkey10nGGGs7rinG
Uploading bundled image parts to the S3 bucket brtctx00 ...
Uploaded naishe_ami.part.000
Uploaded naishe_ami.part.001
Uploaded naishe_ami.part.002
...
Uploaded naishe_ami.part.140
Uploading manifest ...
Uploaded manifest.
Bundle upload completed.

Finally, register your newly bundled AMI using ec2-register command

[root@domU-12-34-56-AA-AA-78 ~]# ec2-register -C /mnt/cert-xxxxxxxx.pem -K /mnt/pk-xxxxxxxx.pem  amibucket/naishe_ami/naishe_ami.manifest.xml -n naishe_ami_20120703
IMAGE    ami-flac07ka