Scheduling database backups with cron jobs

We’re going to create a bash script which runs mysqldump (a MySQL backup program) to generate a database backup with a filename containing the date. We’re then going to set up a cron job (a utility for running scheduled tasks) to run the script at regular intervals. We’ll do all of this securely, so that other users on the server can’t access your MySQL username and password. This guide assumes that you’re using a Unix-based server (e.g. Ubuntu) with terminal access and a MySQL database.

Step 1: Configure mysqldump with a user and password

Before we can set up our cron job, we need to edit our MySQL configuration file so that mysqldump doesn’t require user and password arguments. We could include these arguments in the bash script, but it’s better to avoid exposing these details to other users. MySQL has a global configuration file, which usually lives in /etc/mysql/my.cnf, but in this case we want to create a user-specific configuration file, which we can do by creating a .my.cnf file (note the “.” at the start of the file name) in your user’s home directory. Open the file and enter the following:

[mysqldump]
user=root
password=my-root-password

Now, when our user runs the mysqldump command it will take the user and password values from our configuration file. We should then change the permissions of the .my.cnf file so that only our user has read and write permissions:

chmod 600 ~/.my.cnf
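If you’d like to double-check that the permissions took effect, you can print the file’s octal mode. This sketch re-applies the chmod from above and then checks it; note that stat -c is GNU coreutils syntax, so on macOS you’d use stat -f '%Lp' instead:

```shell
# Re-apply the lockdown (touch is a no-op if the file already exists)
# and confirm that only your user can read and write it.
touch ~/.my.cnf
chmod 600 ~/.my.cnf

# GNU coreutils; on macOS use: stat -f '%Lp' ~/.my.cnf
stat -c '%a' ~/.my.cnf    # prints 600
```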

We can now use the mysqldump command to create a database dump without providing user or password arguments:

mysqldump database > path.sql

We probably don’t want to overwrite the last backup by using the same file name every time the script runs, so it’s a good idea to include the date in the file name. We can do that using the date command in bash, passing a format string preceded by a “+”. There are many formatting options available, but in this case we’ll go with %F, which gives the full date in YYYY-MM-DD format. If you’re interested in using a different date format, the full list of format specifiers can be found in the manual page by entering “man date” in the terminal. We can interpolate the date into the filename using command substitution: a dollar sign followed by parentheses with the date command inside. So our final mysqldump command looks like this:

mysqldump my-database > /path/to/my/backups/database-backup-$(date +%F).sql
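If you’d like to see what the command substitution expands to before running the full command, you can echo the filename on its own; the only assumption here is that your date command supports the standard %F specifier:

```shell
# $(date +%F) expands to today's date in YYYY-MM-DD form,
# so the filename changes every day the script runs.
echo "database-backup-$(date +%F).sql"
```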

Now would be a good time to test the command by running it in the terminal before we set up our cron job.

Step 2: Create a bash script

Once we know the command works, all that remains is to create a bash script:

touch backup.sh

We can then edit the file and paste in the mysqldump command. We also need to make sure the file is executable, so that cron can run it, which we can do with the following:

chmod +x /path/to/your/backup.sh
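Creating the script can be sketched as below; the here-document simply writes the mysqldump command from Step 1 into backup.sh, and the database name and backup path are the same placeholders used earlier in this guide:

```shell
# Write a minimal backup.sh; "my-database" and the backup path are
# placeholders, so replace them with your own values.
cat > backup.sh <<'EOF'
#!/bin/bash
# Dump the database to a date-stamped file.
# Credentials are read from ~/.my.cnf, so none appear here.
mysqldump my-database > /path/to/my/backups/database-backup-$(date +%F).sql
EOF

bash -n backup.sh    # syntax-check the script without running it
```

bash -n parses the script without executing anything, which catches typos before cron ever runs the file.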

At this point, it would be a good idea to test your script by running it in the terminal. You can do that by entering:

bash /path/to/your/backup.sh

Step 3: Create the cron job

We’re going to create a new cron job using crontab, which we can open by entering:

crontab -e

You should find the comments in the file explain everything you need to know. You simply need to add a new line with values for the time you want the script to run, followed by the path to the script. You need values for the minute, hour, day of the month, month and day of the week, and you can use “*” to mean “any”. Let’s say we want our script to run at 00:00 every Monday, so we’ll add the following line to the bottom of our crontab file:

0 0 * * 1 /path/to/my/backup.sh

The first two 0s are the values for the minute and hour, the asterisks mean that we want the script to run every day of the month and every month, and the 1 is the day of the week. Days of the week run from 0 to 6, with 0 being Sunday, so a value of 1 means the script will run every Monday. Depending on how important your data is and how regularly it’s updated, you may want to adjust these values to run the backup script more often. You may also want to add a line to your backup.sh script that deletes backups older than a certain age, to prevent them taking up too much space on your server. The following will delete all .sql files more than 30 days old:

find /path/to/your/backups -type f -mtime +30 -name "*.sql" -execdir rm -- '{}' \;
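Before wiring that into the script, it’s worth a dry run against a scratch directory. This self-contained check creates one old file and one new file, runs the same find command, and confirms only the old one is removed (the relative-date form of touch -d assumes GNU coreutils):

```shell
# Dry-run the 30-day retention rule in a throwaway directory.
scratch=$(mktemp -d)
touch -d '40 days ago' "$scratch/old.sql"   # older than the 30-day cutoff
touch "$scratch/new.sql"                    # created just now

find "$scratch" -type f -mtime +30 -name "*.sql" -execdir rm -- '{}' \;

ls "$scratch"    # only new.sql should remain
rm -r "$scratch"
```

GNU find also supports -delete as a shorter alternative to -execdir rm.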

We can now sleep at night knowing that if something goes wrong with our database, it isn’t the end of the world because we should always have a recent backup. If you’d like to know how to transfer your database files from the server to your local machine then you can find out how in my post on SCP.
