Script to take table-wise mysqldump and store it in S3
In this post, we explain how to take a mysqldump of each individual table across all databases. As a bonus, we add a feature to store the dumps in an S3 bucket (with versioning enabled) and to encrypt the backups.
Prerequisites:
- The s3cmd command installed on the Linux machine.
- The MySQL root password.
- An S3 bucket with versioning enabled.
- An AWS access key & secret key with write privileges to the bucket.
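Before running the script, s3cmd needs credentials and the bucket needs versioning turned on. A sketch of that one-time setup follows; the bucket name is a placeholder, and the versioning commands assume the AWS CLI is installed:

```shell
# One-time setup (bucket name below is a placeholder).
# Configure s3cmd interactively with your access key and secret key:
s3cmd --configure

# Enable versioning on the bucket (requires the AWS CLI):
aws s3api put-bucket-versioning \
    --bucket s3-bucket-name \
    --versioning-configuration Status=Enabled

# Verify that versioning is now enabled:
aws s3api get-bucket-versioning --bucket s3-bucket-name
```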
#!/bin/bash
# support@pheonixsolutions.com
# Script to take table-wise backups and upload them to a versioning-enabled S3 bucket.
# Note: bash (not plain sh) is required for the $'\n' IFS assignment and == tests used below.
s3cmd --version &> /dev/null;
if [ $? -ne 0 ];
then
echo -e "s3cmd not found; Please install s3cmd"
exit 1;
fi
S3BACKUP="s3-bucket-name";
DB_USER=root
DB_PASS="-pDBPassword" # password prefixed with -p so it can be passed straight to mysql/mysqldump
BASE_BAK_FLDR=/backup/sqldailybackups
DATA=`date|shasum|base64|head -c 16` # random 16-character encryption key for today's backups
BACKUP_LOG=$BASE_BAK_FLDR/sqlbackuplog
FROM_EMAIL='from-email@domain.tld'
TO_EMAIL='to-email@domain.tld'
if [ -d /backup/sqldailybackups/ ];
then
echo -e "Backup folder : /backup/sqldailybackups/";
else
mkdir -p /backup/sqldailybackups/
fi
> $BACKUP_LOG
DBS_LIST=$(echo "show databases;"|mysql -u $DB_USER $DB_PASS -N 2>> $BACKUP_LOG)
if [ -z "$DATA" ];
then
echo -e "Base64 Failed: Hash Empty" | mail -r $FROM_EMAIL -s "SQL backup failed : `date`" $TO_EMAIL; exit 1;
fi
echo -e "`date` : $DATA" >> $BASE_BAK_FLDR/../logdate
OIFS=$IFS; IFS=$'\n'
for DB in $DBS_LIST;
do
DB_BKP_FLDR=$BASE_BAK_FLDR/$(date +%d-%m-%Y)/"$DB"
[ ! -d "$DB_BKP_FLDR" ] && mkdir -p "$DB_BKP_FLDR"
for table in $(echo "show tables;"|mysql "$DB" -u $DB_USER $DB_PASS -N 2>> $BACKUP_LOG);
do
if [ "$table" == event ];
then
mysqldump -u $DB_USER $DB_PASS --events "$DB" "$table" 2>> $BACKUP_LOG|gzip > "$DB_BKP_FLDR"/"$table".sql.gz
elif [ "$table" == general_log ] || [ "$table" == slow_log ];
then
mysqldump -u $DB_USER $DB_PASS --skip-lock-tables "$DB" "$table" 2>> $BACKUP_LOG|gzip > "$DB_BKP_FLDR"/"$table".sql.gz
else
mysqldump -u $DB_USER $DB_PASS "$DB" "$table" 2>> $BACKUP_LOG|gzip > "$DB_BKP_FLDR"/"$table".sql.gz
fi
openssl enc -aes-256-cbc -salt -a -in "$DB_BKP_FLDR"/"$table".sql.gz -out "$DB_BKP_FLDR"/"$table".sql.gz.enc -k $DATA
s3cmd put "$DB_BKP_FLDR"/"$table".sql.gz.enc s3://$S3BACKUP/"$DB"/"$table".sql.gz.enc
rm -f "$DB_BKP_FLDR"/"$table".sql.gz "$DB_BKP_FLDR"/"$table".sql.gz.enc
done
echo "$DB" >> $BACKUP_LOG
done
IFS=$OIFS
find $BASE_BAK_FLDR/ -mindepth 1 -maxdepth 1 -mtime +5 -type d -exec rm -rf {} \; # prune local dump folders older than 5 days
CUR_VER=`s3cmd info s3://$S3BACKUP/mysql/user.sql.gz.enc | grep "Last mod" | awk -F", " '{print $2}'`
echo -e "List of databases & mysqldump errors(if any):\n-----\n`grep -v "Using a password on the command line interface can be insecure" $BACKUP_LOG`\n-----\nBackup location: s3://$S3BACKUP/\n\nCurrent Version on s3: $CUR_VER\n\nData: $DATA" | mail -r $FROM_EMAIL -s "SQL Backup Completed @ `date`" $TO_EMAIL
The script uploads each table's backup in encrypted form to the S3 bucket.
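To run the backup automatically, the script can be scheduled via cron. The path and schedule below are examples, not part of the original setup:

```shell
# /etc/cron.d/mysql-s3-backup (path and schedule are examples)
# Run the table-wise backup every day at 01:30 as root
30 1 * * * root /root/scripts/tablewise-mysql-backup.sh
```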
Restoration Process:
In order to restore a table or database from S3, follow the steps below.
- Download the encrypted SQL backup from S3. You can use the AWS CLI, s3cmd, or any other method to download the backup.
- Once it is downloaded, execute the command below to decrypt the backup. The decryption key is logged to /backup/logdate along with the date it was generated.
openssl enc -aes-256-cbc -d -a -in filename.sql.gz.enc -out filename.sql.gz -k Data
Where,
filename.sql.gz.enc – the encrypted file (input; note that this file must exist in the directory where you run the command)
filename.sql.gz – the decrypted output file
Data – the encryption key, taken from /backup/logdate for the date of the backup
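Before decrypting a real backup, you can verify the openssl flags with a self-contained round trip. Everything here (filenames, key, table name) is made up for the demo; it simply mirrors the encrypt step the script performs and then reverses it:

```shell
#!/bin/bash
# Round-trip demo of the encrypt/decrypt commands used by the backup script.
# All filenames and the key below are placeholders for local testing.
set -e
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

KEY="testkey123"                      # in real restores, read the key from /backup/logdate
echo "CREATE TABLE demo (id INT);" > filename.sql
gzip filename.sql                     # produces filename.sql.gz, as mysqldump|gzip would

# Encrypt exactly as the backup script does:
openssl enc -aes-256-cbc -salt -a -in filename.sql.gz -out filename.sql.gz.enc -k "$KEY"

# Restore: decrypt, then gunzip back to plain SQL
openssl enc -aes-256-cbc -d -a -in filename.sql.gz.enc -out restored.sql.gz -k "$KEY"
gunzip restored.sql.gz

grep -q "CREATE TABLE demo" restored.sql && echo "round-trip OK"   # prints "round-trip OK"
```

Once the decrypted .sql file is in hand, it can be loaded back with the usual `mysql -u root -p dbname < filename.sql`.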
Wow. Congratulations 🙂 You have completed the backup setup of your MySQL databases to an S3 bucket.
