Server Setup 14: Backups

If this is your first time looking at a post from this series, have a look at this summary: Server Setup 0: Contents

There are lots of ways you could lose your website – a hard drive can fail, the server could be damaged by a lightning strike, or even stolen in a burglary. To get back up and running you’ll need to fix or replace the hardware, but this post is about making sure you have the data you need to restore all the applications too.

Backup Options

There are a few ways to approach backups, but the key things are:

  • What to back up
  • Where to back up

I’ll be focussing on What, but will mention a few of your options for Where at the end.

What to back up

Whole system

The first thing I should mention is that you might have the option of backing up the whole system in one go. Many systems administrators don’t install their servers directly onto a computer; instead they use virtualisation. This means you first install a special kind of operating system called a hypervisor, whose only job is to create and run virtual machines (VMs) – computers within the computer. You can then install the OS you really want for your server into one of these virtual machines.

Virtualisation has many advantages, but I’ve brought it up now because many hypervisors can create snapshots. A snapshot is a complete picture of the installation, so if you need to restore your system, it’s just a case of loading the most recent snapshot and you’re back up and running.

However, a big disadvantage of this approach is performance, especially on older computers with limited resources, which is why I haven’t used it in this tutorial. Virtualisation is a huge topic, so if it sounds interesting, go and read up on it.

There are also utilities that can clone your entire hard drive, so restoring your system should just be a case of putting that cloned hard drive into a new server and starting it up. This approach has its disadvantages too – the backup will be as big as your hard drive, so your backup location needs a lot of storage space. It also works best if the system is off while the cloning occurs, so you’ll need to plan some regular downtime to use this backup strategy.

Specific Items

An advantage of backing up specific items is knowledge – you know what you’ve got. The tools for managing those pieces are often more common and more widely compatible than those for managing whole system backups.

Our strategy has two steps:

  • Create and populate a staging directory to hold all of the data we want to back up.
  • Actually do the backup to get the data off the server to somewhere else. (Backups stored on the same computer, especially on the same HDD, are a waste of time)

Create the backup staging area:

sudo mkdir /var/backup_staging
sudo chmod -R go-rwx /var/backup_staging
sudo mkdir /var/backup_staging/scripts

We’ll write a shell script. You might not realise it, but the shell is the program you’re interacting with whenever you enter commands at the terminal. This means our script can use exactly the same commands.

sudoedit /var/backup_staging/scripts/backup.sh

Enter the following as the first lines of this file:

#!/bin/sh
# Backup scripts

Like a lot of files we’ve worked on, a # at the beginning of a line means a comment, so it wouldn’t normally matter. However, the first line here is special – it tells the system which program to use to run this file. The second line is a real comment – a title for our script.

General

You probably want to back up these areas:

  • We’ve done a lot of work configuring Apache – we should definitely save those modified configuration files: /etc/apache2
  • Getting new SSL certificates isn’t too hard, but having them to hand will speed things along: /etc/letsencrypt
  • If you’re doing other things with this server you might have saved files in your home directory: /home/[your username].

Create backup directories for these:

sudo mkdir --parents /var/backup_staging/etc/apache2
sudo mkdir /var/backup_staging/etc/letsencrypt
sudo mkdir --parents /var/backup_staging/home/[your username]

The idea is that we’ll mirror the structure of the filesystem, which means you won’t get any clashes if you add to this in future. Add these lines to our script /var/backup_staging/scripts/backup.sh:

rsync -Aax --delete /etc/apache2/ /var/backup_staging/etc/apache2/
rsync -Aax --delete /etc/letsencrypt/ /var/backup_staging/etc/letsencrypt/
rsync -Aax --delete /home/[your username]/ /var/backup_staging/home/[your username]/

Each of these lines runs the rsync program, which basically copies one folder into another. The options we’ve selected mean that the backup should be an exact mirror of the original, but it’ll only copy or update the files that have changed. rsync can do a lot more than this, including transferring files to another computer, but you can research that on your own.
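If you’re curious what those options actually do, here’s the first command again with each flag annotated (check man rsync on your own system for the full details):

# -A       : preserve ACLs (also implies preserving normal permissions)
# -a       : "archive" mode - recurse, and keep ownership, timestamps, symlinks etc.
# -x       : don't cross filesystem boundaries
# --delete : remove files from the backup copy that no longer exist in the source,
#            so the copy stays an exact mirror
rsync -Aax --delete /etc/apache2/ /var/backup_staging/etc/apache2/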

WordPress

There are two main pieces to the backup, files and the database.

Files

  • WordPress uploads folder. Any images you upload to your site are saved in here, and that’s the only copy. Find it at /var/www/wordpress/wp-content/uploads.
  • At the time of writing the entire size of my wordpress folder is only 114 megabytes, including the uploads folder mentioned above. You might as well back up the whole thing. /var/www/wordpress.
  • We moved the WordPress configuration into a file outside the web root, so make sure that’s backed up: /var/www_config.

Create directories for these:

sudo mkdir --parents /var/backup_staging/var/www/wordpress
sudo mkdir /var/backup_staging/var/www_config

Add these lines to the script to copy them into the backup staging location:

rsync -Aax --delete /var/www/wordpress/ /var/backup_staging/var/www/wordpress/
rsync -Aax --delete /var/www_config/ /var/backup_staging/var/www_config/
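By this point the staging area should mirror the parts of the filesystem we care about. If you want to check the structure, a quick way is:

sudo find /var/backup_staging -type d

which should list something like this (your username will differ, and the copied folders will add more entries once the script has run):

/var/backup_staging
/var/backup_staging/scripts
/var/backup_staging/etc
/var/backup_staging/etc/apache2
/var/backup_staging/etc/letsencrypt
/var/backup_staging/home
/var/backup_staging/home/[your username]
/var/backup_staging/var
/var/backup_staging/var/www
/var/backup_staging/var/www/wordpress
/var/backup_staging/var/www_config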

Database

The database is a critical component of both WordPress and Nextcloud, so we need to back this up, but it takes a little planning.

We’re going to use mysqldump, which logs into your database server and saves the whole database (or even multiple databases) in an SQL file. The idea is that you can just execute this SQL file on a database server and the database will be recreated.

The command history is considered insecure, so we don’t want to type our database password on the command line. Instead, the database credentials go in a separate file:

sudoedit /var/backup_staging/scripts/wordpress.cnf

Add this to wordpress.cnf, filling in your database username and password (with single quotes around the password in case it contains special characters):

[mysqldump]
user=wp_usr
password='wordpress_database_password'
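The staging directory already had group and other permissions removed earlier, but since this file contains a password it doesn’t hurt to lock it down explicitly as well (optional):

sudo chmod 600 /var/backup_staging/scripts/wordpress.cnf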

And now add this line to your backup script:

mysqldump --defaults-file=/var/backup_staging/scripts/wordpress.cnf --skip-dump-date --add-drop-table sswp > /var/backup_staging/sswp-current.sql

Explanation:

  • mysqldump is the utility that spits out an SQL file that’s a backup of a database – here sswp is the name of the WordPress database.
    • --defaults-file tells mysqldump to read the username and password from the file we specified
    • --skip-dump-date means the SQL won’t include the date the dump was done, so if the database hasn’t changed between dumps, the SQL file will be identical
    • --add-drop-table adds a DROP TABLE statement before each CREATE TABLE, which makes restoring over an existing database more reliable, if riskier for any data already on the system you’re restoring onto
    • > means the output is written to the file specified.
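To restore from this dump later, you feed the file back into the mysql client on the new server – something like this, assuming the (empty) sswp database and the wp_usr user have already been created there:

mysql -u wp_usr -p sswp < /var/backup_staging/sswp-current.sql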

Nextcloud

Nextcloud also has files and a database that we want to back up. Because files are being synced and changed all the time, there’s a greater chance of inconsistencies between the files and the database, and it’s down to you how much care you take over this.

Maintenance Mode

Like we did for WordPress, create a file containing the Nextcloud database login details. I’d call it /var/backup_staging/scripts/nextcloud.cnf. Use the same format, just with the Nextcloud database details instead of the WordPress ones.

First, we’ll put Nextcloud into maintenance mode. Add this line to your script:

sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on

This runs Nextcloud’s occ tool, which performs various administrative tasks on a Nextcloud installation; maintenance mode stops clients making changes while we take the backup.
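If you want to check which state it’s in, I believe running the same command without --on or --off just reports the current status:

sudo -u www-data php /var/www/nextcloud/occ maintenance:mode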

Database

Now we can safely dump the database contents. Add this line to the backup script (the extra --single-transaction option takes the dump as a single consistent snapshot):

mysqldump --defaults-file=/var/backup_staging/scripts/nextcloud.cnf --single-transaction --skip-dump-date --add-drop-table nextcloud > /var/backup_staging/nextcloud.sql

Files

We have a choice here:

  1. Minimum downtime: copy the entire Nextcloud data folder into the staging folder, then turn off maintenance mode. The downside of this is that you need twice as much storage space on the server, because the data exists in both places.
  2. Minimum Storage: Back up the Nextcloud data folder in place, then turn off maintenance mode. Your site will be down for as long as it takes the backup to complete.

Pick an option, and add the relevant lines to the end of your backup script.

Option 1:

rsync -Aax --delete /var/www/nextcloud/ /var/backup_staging/var/www/nextcloud/
rsync -Aax --delete /var/nc_data/ /var/backup_staging/var/nc_data/
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off
# # # # # # # # # # # # # # # # # #
# Placeholder for backup commands
# Back up /var/backup_staging only
# # # # # # # # # # # # # # # # # #

Option 2:

rsync -Aax --delete /var/www/nextcloud/ /var/backup_staging/var/www/nextcloud/
# # # # # #
# Placeholder for backup commands
# Back up /var/backup_staging and /var/nc_data
# # # # # #
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off

Where to back up

All we’ve done so far is to stage all the useful data from our websites in a single folder.

We need to get it off your server to a different location, preferably several locations. A common rule about backups is the 3-2-1 rule:

  • 3 copies of your data
  • 2 of those copies on different types of media
  • 1 copy offsite.

The “2 different media” rule protects against hard drive failure, and the offsite rule can protect against a whole host of ways to lose your data: physical theft, fire, lightning damage and more.

So how can you do this?

  • A few portable HDDs can be a simple option – you’ll need to remember to plug them in, and you may need to configure your backup script to mount them (there’s a sketch of this after this list). Keep one at your mum’s house and swap it out every week or so.
  • Another server. If you (or a friend) have another server with plenty of space, you could back up to that. rsync doesn’t just copy folders on one computer – it’s really meant for transferring files to a remote computer over SSH. A sample command might look like this:
rsync -az /var/backup_staging/ username@remoteserver:/var/remote_backups
  • For either of these options, consider using software that encrypts the data. Duplicity works a bit like rsync, but allows for encryption and incremental backups.
  • Use a backup service. These companies usually have their own software and servers. Look for encryption and trustworthiness (I would never use a free service for something like backup). It’s also essential that they offer a Linux client with a command line interface so we can add it to our script. I use SpiderOak – there are plenty of others out there, do your research!
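For the portable HDD option, the extra lines in your backup script might look something like this. It’s only a sketch – the device name /dev/sdb1 and the mount point /mnt/backup_drive are assumptions, and you’d want to check the drive actually mounted before relying on it:

# These are examples - your device name and mount point will differ
mkdir -p /mnt/backup_drive
mount /dev/sdb1 /mnt/backup_drive
# Copy the staging area onto the drive
rsync -Aax --delete /var/backup_staging/ /mnt/backup_drive/server_backup/
# Unmount so the drive can be unplugged safely
umount /mnt/backup_drive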

Once you’ve picked your method, replace the placeholder in our backup script with whatever commands you need to make that run.

Run it regularly

Our backup script is great and all, but unless you run it regularly, it’s pretty pointless. We’ll use cron to run this script every night (or whenever you want).

Edit root’s crontab:

sudo crontab -e

Add this line to the bottom, then save and exit the file.

  5 2   *   *   *  /bin/sh /var/backup_staging/scripts/backup.sh

The comment lines at the top of the crontab tell you how to use it. The 5 and 2 mean the command will run at 5 minutes past 2am (02:05 on a 24-hour clock), and the stars mean the other fields (day of month, month and day of week) are ignored, so it runs every day. /bin/sh is the program that will run our script.
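You don’t have to wait until 2am to find out whether it works – you can run the script by hand first:

sudo sh /var/backup_staging/scripts/backup.sh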

Test!

This is critical – after you’ve done your first backup, take a good look at it (a couple of quick checks are sketched below):

  • Is everything you expected there?
  • Is anything corrupted?
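For example, you could browse the staging area and peek at the database dumps to make sure they aren’t empty or truncated – these are just suggestions, use whatever checks make sense for your data:

# Browse the staging area
sudo ls -lR /var/backup_staging | less
# A healthy dump starts with mysqldump's "-- MySQL dump ..." header comments
sudo head /var/backup_staging/sswp-current.sql
sudo head /var/backup_staging/nextcloud.sql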

IT professionals regularly do “recovery drills”, i.e. practise what they’d do to get IT systems up and running after a disaster, which helps find problems with the backups. Have a go at using your backups to rebuild your site, maybe in a VM on your client computer.

Good luck!
