As I now host about 8 different sites, I wanted to make restoring from backups a bit easier and less manual. The ultimate goal is to log in to my second server and have it up and running with as few keystrokes as possible. Scripting was the answer. It took me several hours of finagling with syntax in a scratchpad and digging through Google to finally put this together. I am sharing my completed work here for anyone else who could make use of it as a springboard for automating their own restoration process. I just got a new/upgraded backup server, so it seemed like a good time to straighten it all out again and streamline things even more.

This is somewhat of a continuation from my earlier posts on automating “off-system” database backups. You can read those posts here:

Setting up automated database backups for your WordPress, Drupal, or Joomla website


I have two servers. One server is my primary server that hosts all of my live sites.

The second server is a backup server to which I pull a copy of my web directory (with sub-directories for each site) and a compressed database dump of each database.

I had the backups set up beforehand. I push the database dumps from the primary server to the backup server. I pull the web files from the primary server directly into the web directory on the backup server, so the web directory on the backup server mirrors the primary server.

I use ZIP to compress the database dumps (zip format is just easier to work with for me because I started out on Windows).

Each site has its own zip archive, so I have a site1 zip file, a site2 zip file, etc.

The zip archives are set up such that when they are decompressed they also include the folder paths to the actual .sql dump file. Newbie mistake, but I have almost a year's worth of daily backups at this point, so I am not changing it.

The naming convention for each zip archive is as follows (this is important):

databasename-backup-YYYYMMDD.zip (for example, kilorootdb-backup-20150709.zip)

So I have a folder full of these things; one per day for each database adds up to hundreds of files, as I have about 8 separate sites. The important thing to note as far as my bash script goes is that the exact database name is included in the file name.

Here is the script (some things have been changed for security purposes):

dte=`date -d "yesterday 13:00" '+%Y%m%d'`
zipfolder=/path/to/zip/archives
dumpfolder=/path/to/sql/dumps
cd $zipfolder
zips=`ls -la $zipfolder | grep $dte | awk '{print $9}'`
rm -rf $dumpfolder
for zipfile in $zips; do unzip $zipfile; done
cd $dumpfolder
dumps=`ls -la $dumpfolder | grep $dte | awk '{print $9}'`
dbnames=`for dbs in $dumps; do echo ${dbs%-*-*.*}; done`
for dbname in $dbnames; do mysqladmin -f drop $dbname; done
for dbname in $dbnames; do mysqladmin -f create $dbname; done
for sqlfile in $dumps; do cat $sqlfile | mysql ${sqlfile%-*-*.*}; done
cd $zipfolder
rm -rf $dumpfolder

Here is a high-level description of what it does, line by line…

1. declares a variable called “dte” which contains yesterday’s date in the format used in my file name convention. This script restores all of my sites databases from yesterday’s backups.
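The date arithmetic relies on GNU date. Pinned to a fixed example date (rather than the “yesterday 13:00” the script actually uses), it is easy to sanity-check:

```shell
# GNU date relative-date arithmetic. A fixed reference date is used here
# so the result is predictable; the real script uses "yesterday 13:00".
dte=$(date -d "20150710 -1 day" '+%Y%m%d')
echo "$dte"   # prints: 20150709
```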

2. declares a variable called “zipfolder” that contains the directory where my zip archive files are stored (this all runs on the backup server).

3. declares a variable called “dumpfolder” where the actual .sql files will end up when each archive is decompressed.

4. changes into the directory that is listed in the “zipfolder” variable

5. declares a new variable called “zips”. The backtick syntax means I want the variable to contain the output of the command between the backticks. That command lists all of the contents of the zip folder directory, pipes the listing to grep so that only the files with yesterday’s date in the name remain, and then pipes that to awk, which cuts the output down to just the file names. This could be simplified, I am sure (probably by using a less verbose form of the ls command and foregoing the need for awk). I guess I like to make things difficult.
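One such simplification (a sketch, not what I actually run): a shell glob matches the same files without ls, grep, or awk. The example date is hard-coded here just for illustration.

```shell
# Build the same file list with a glob; assumes the archives end in .zip
# and the date appears in the name, per the convention above.
dte=20150709   # example date; the real script computes this
zips=""
for zipfile in *"$dte"*.zip; do
  [ -e "$zipfile" ] && zips="$zips $zipfile"   # -e skips the unmatched-glob literal
done
echo "$zips"
```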

6. This command removes the existing dump folder, if it exists. This is a precautionary measure to ensure the script has a clean workspace.

7. This is a “for do” loop. I am not going to explain the syntax but basically it iterates through each filename stored in the “zips” variable and then runs the “unzip” command against the filename. Effectively, it unzips all of the zip files stored in the “zips” variable.

8. this command changes into the directory where my uncompressed .sql files are now stored.

9. declares a variable called “dumps”, similar to “zips” in that it stores the filenames of all of the .sql files stored in my directory. I kept the grep command with the date just to be safe even though we wiped this directory out a few commands earlier.

10. declares a variable called “dbnames” – this is important – As I stated earlier the exact database name is part of the filename of each archive backup. This strips off the rest of the filename so that I end up with an array of just database names… listed in the same order (alphabetically) as the filenames stored in the “dumps” variable. It uses a “for do” loop again to iterate through each filename, strip off the trailing junk, and leave me with just the database name.


If at this point in time I were to echo the “dumps” and “dbnames” array variables, I would get something like this:

echo $dumps
kilorootdb-backup-20150709.sql site2db-backup-20150709.sql site3db-backup-20150709.sql

echo $dbnames
kilorootdb site2db site3db
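The stripping itself is plain shell parameter expansion: “${var%pattern}” removes the shortest suffix of the variable that matches the pattern. A minimal check:

```shell
# The pattern -*-*.* matches the shortest trailing "-word-word.ext" chunk,
# which for these filenames is "-backup-YYYYMMDD.sql".
f=kilorootdb-backup-20150709.sql
echo "${f%-*-*.*}"   # prints: kilorootdb
```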


11. This is where it gets more fun, and why I needed the database names in their own list. This command is again a “for do” loop which iterates through each database name and feeds it to a mysqladmin command that drops the database if it already exists.

12. As with step 11, this command re-creates each database by name. So after steps 11 and 12 I end up with a bunch of empty databases with the correct names. Permissions in MySQL are “sticky” to the database name: as long as the database name is the same, the permissions for each database’s users are still present.

13. This is the money… this command took me a while to figure out. It uses a “for do” loop again to iterate through each .sql file in the “dumps” variable, “cat”s (reads) each one, and pumps it into the correct database. This was tricky because I initially wanted to use my other variable containing the database names for the mysql command. Everything I tried, at best, restored ALL tables from ALL sql files into ALL of the databases, so I ended up with a mess. Rather than go that route, I used the “dumps” list only, and to get the database name I used the “${sqlfile%-*-*.*}” syntax again to strip all the trailing junk from the filename. This worked well: as it looped, it would correctly move from one .sql file to the next and dump each into a separate database with the correct name.
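For what it’s worth, the same step can be written without the cat by redirecting the file straight into mysql. A sketch, assuming the same $dumps list and a reachable MySQL server:

```shell
# Equivalent form: redirecting the dump into mysql avoids the extra `cat`
# process. $dumps would be populated by the script above.
for sqlfile in $dumps; do
  mysql "${sqlfile%-*-*.*}" < "$sqlfile"
done
```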

14 – 15. These are just another change-directory command and then a cleanup step: deleting the directory of sql files I no longer needed.

The end result is that I now have the ability to quickly restore site databases for a bunch of different sites from individual backup files with one short command. All of my articles on site backup have been concerned with the database. However, for a truly turnkey solution you need to back up your web directory as well. I will try to put together another article in the near future on using rsync to perfectly (and very quickly) mirror your web directory to another server, which is what I do. It’s fast, secure, and rsync can intelligently keep things mirrored and only sync changes.

PS – Something you may have noticed is that the mysql command doesn’t have a username or password specified. To accomplish this, I used an extremely locked-down mysql config/answer/preferences file in “root’s” home directory that has the username and password stored. This isn’t the best security practice, but if you want automation and don’t want your username and password stored in your script and sitting in your process logs, it is a much better way to go. The answer file is permission-locked so only root can read or change it. As a result, I have to “sudo su” before I can run this script.
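The post doesn’t name the file, but the standard place for this is a MySQL option file such as /root/.my.cnf. A minimal sketch (the user and password below are placeholders):

```
[client]
user=restore_user
password=CHANGE_ME
```

Locking it down with “chmod 600 /root/.my.cnf” keeps it readable by root only, which matches the “sudo su” requirement above.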

PPS – Unlike most of my articles, there are no references for this one. The finalized script is an “original work”; I am providing it here without warranty or license. Take it, use it, abuse it.


