How to Monitor Your Server Backup Using ServerOwl™ Reporting

Last modified: March 21, 2019

Normally there are two major components to your website: the database and the file system. With this typical arrangement, we would represent the production server in ServerOwl as the Account level, the DR server as a Campaign, and each of the scripts as a Property of the DR Campaign.

Monitoring your SQL Database Backup

The first step in backing up your database is to create a master-slave database replication between your production system and a remote disaster recovery (DR) server. This will create a real-time link between the two databases where moments after the data has been updated in the master, it will be replicated on the slave.

Master-slave replication is very good, with two caveats. The first is that the replication can halt for whatever reason without you noticing, so when it comes time to recover the master from the slave, the slave's data is months old. The second is that the disaster you are recovering from may be a command someone typed into the system - drop database, or delete * from table… - which was instantly reproduced on the slave and executed there. To protect against the second kind of disaster we can take a nightly snapshot from the slave.

To protect against each of these two potential problems, we can create two separate bash scripts that will be executed by the cron service to keep a close eye on what's happening and report any issues as soon as they arise. The first step is to create a bash script that executes every 5 minutes on the DR server to monitor the slave SQL database, as previously demonstrated.
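A minimal sketch of such a replication check follows, assuming a MySQL slave. The helper `parse_slave_status` and the embedded sample status text are illustrations only; in a real check the status would come from `mysql -e 'SHOW SLAVE STATUS\G'`, and the resulting TYPE would be reported with the same wget call used in the scripts on this page.

```shell
#!/bin/bash
# Sketch of a 5-minute slave health check (illustrative only).

# Extract one field (e.g. Slave_IO_Running) from `SHOW SLAVE STATUS\G` output.
parse_slave_status() {
    local field="$1"
    awk -F': ' -v f="$field" '$1 ~ f {print $2; exit}'
}

# In a real check this would be:
#   STATUS=$(mysql -u "$DB_USER" -p"$DB_PASS" -e 'SHOW SLAVE STATUS\G')
STATUS="             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0"

IO=$(printf '%s\n' "$STATUS" | parse_slave_status "Slave_IO_Running")
SQL=$(printf '%s\n' "$STATUS" | parse_slave_status "Slave_SQL_Running")

if [ "$IO" = "Yes" ] && [ "$SQL" = "Yes" ]; then
    TYPE="info"
else
    TYPE="error"   # replication has halted; report it immediately
fi
echo "$TYPE"
```

Both threads of the replication (the I/O thread and the SQL thread) must be running for the slave to be considered healthy, which is why the check requires both fields to be "Yes".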

The next step is to create another script that will execute on the slave every night to take a snapshot of the slave database and store it locally:-

#!/bin/bash

### VARIABLES ###
DB_NAME="<Your Database Name>"
DB_USERS="<Your Database Username>"
DB_PASS="<Your Database Password>"
BACKUP_PATH="/var/backups/"
TYPE="info"
COUNTER=30
# Dump the database; stdout goes to the .sql file while any error text
# is captured in MESSAGE (the 2>&1 must come before the file redirect).
MESSAGE=$(mysqldump -u $DB_USERS -p$DB_PASS $DB_NAME 2>&1 > $BACKUP_PATH$DB_NAME.sql)
# Drop the oldest snapshot (-f so a missing file is not treated as an error).
rm -f $BACKUP_PATH$DB_NAME.sql.$COUNTER.gz
# Rotate the existing snapshots up one slot (.0.gz → .1.gz … .29.gz → .30.gz);
# missing files are expected until 30 snapshots exist, so errors are silenced.
while [ $COUNTER -gt 0 ]; do
    let NEW=COUNTER-1
    mv $BACKUP_PATH$DB_NAME.sql.$NEW.gz $BACKUP_PATH$DB_NAME.sql.$COUNTER.gz 2>/dev/null
    let COUNTER=COUNTER-1
done
# Compress tonight's dump into slot zero ($DB_NAME.sql.0.gz).
gzip -S ".0.gz" -f $BACKUP_PATH$DB_NAME.sql
if [ ! -z "$MESSAGE" ]
then
    TYPE="error"
else
    TYPE="info"
fi

# Percent-encode any error message so it is safe to send in the POST body.
MESSAGE_CODED=""
MESSAGE_LENGTH="${#MESSAGE}"
for (( i = 0; i < MESSAGE_LENGTH; i++ )); do
    C="${MESSAGE:i:1}"
    case $C in
        [a-zA-Z0-9.~_-]) MESSAGE_CODED+="$C";;
        *) MESSAGE_CODED+=$(printf "%%%02X" "'$C");;
    esac
done

wget -O- --header="Connection: close" --header="Authorization: Basic <Your Authorization Code>" --post-data="type=${TYPE}&message=${MESSAGE_CODED}" https://www.serverowl.net/rest.php/pulse/<Your Account Slug>/<Your Campaign Slug>/<Your Property Slug> > /dev/null 2>&1

For the example above, you'll need to replace the values inside the angle brackets with your own details for it to work correctly.
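To run the snapshot script automatically each night, it can be registered with the cron service on the DR server. The script path and time below are only examples; use wherever you saved the script and whatever off-peak time suits you:-

```
# crontab entry on the DR server (edit with `crontab -e`)
# run the nightly database snapshot at 2:30 am
30 2 * * * /usr/local/bin/serverowl-db-snapshot.sh
```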

Monitoring your Source File Synchronization

Once you have properly backed up your database and protected it from being accidentally deleted, the next step is to create a cron script that runs on the production system each night and copies over all the files that were uploaded to the server during the day.

This step is fairly straightforward, as the console program rsync will do the majority of the work for us; we only need to monitor the success or failure of the program. This can be easily done with the following script:-

#!/bin/bash

### VARIABLES ###
LOCAL_DIR="/var/www/html/source/"
REMOTE_DIR="/var/www/html/source/"
REMOTE_USER="admin"
REMOTE_KEY="/etc/ssl/private/dr-server.pem"
REMOTE_SERVER="dr.server.com.au" # replace with your remote server's hostname
MESSAGE=""
TYPE="info"

# Take an exclusive lock so overlapping cron runs can't start a second rsync.
# MESSAGE must be assigned in this shell, not inside a ( ... ) subshell,
# or the error check below would never see it.
exec 20>/tmp/rsync-dr.lockfile
flock -x 20
# rsync's stderr is captured in MESSAGE; its normal output is discarded.
MESSAGE=$(rsync --exclude "app/etc/env.php" --exclude ".htaccess" --exclude "var/*" --exclude "pub/static/*" --exclude "pub/media/catalog/product/cache/*" --perms --chmod=Dugo=rwx,Fugo=rw --rsh="ssh -i $REMOTE_KEY" --recursive --delete $LOCAL_DIR $REMOTE_USER@$REMOTE_SERVER:$REMOTE_DIR 2>&1 >/dev/null)
exec 20>&-

if [ ! -z "$MESSAGE" ]
then
    TYPE="error"
fi

# Percent-encode any error message so it is safe to send in the POST body.
MESSAGE_CODED=""
MESSAGE_LENGTH="${#MESSAGE}"
for (( i = 0; i < MESSAGE_LENGTH; i++ )); do
    C="${MESSAGE:i:1}"
    case $C in
        [a-zA-Z0-9.~_-]) MESSAGE_CODED+="$C";;
        *) MESSAGE_CODED+=$(printf "%%%02X" "'$C");;
    esac
done

wget -O- --header="Connection: close" --header="Authorization: Basic <Your Authorization Code>" --post-data="type=${TYPE}&message=${MESSAGE_CODED}" https://www.serverowl.net/rest.php/pulse/<Your Account Slug>/<Your Campaign Slug>/<Your Property Slug> > /dev/null 2>&1

For the example above, you'll need to replace the values inside the angle brackets with your own details for it to work correctly.
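Both scripts percent-encode the captured output before posting it, since raw mysqldump or rsync messages contain spaces and punctuation that would break the POST body. The loop can be exercised on its own; the sample message here is just an illustration:-

```shell
#!/bin/bash
# Demonstrates the percent-encoding loop used in both scripts above.
MESSAGE='rsync error: some files/attrs were not transferred (code 23)'
MESSAGE_CODED=""
MESSAGE_LENGTH="${#MESSAGE}"
for (( i = 0; i < MESSAGE_LENGTH; i++ )); do
    C="${MESSAGE:i:1}"
    case $C in
        [a-zA-Z0-9.~_-]) MESSAGE_CODED+="$C";;        # unreserved: copy as-is
        *) MESSAGE_CODED+=$(printf '%%%02X' "'$C");;  # everything else: %XX
    esac
done
echo "$MESSAGE_CODED"
```

Only the unreserved URL characters (letters, digits, `.~_-`) pass through unchanged; everything else becomes a `%XX` hex escape, so a space comes out as `%20` and a colon as `%3A`.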
