Running duplicity backups again

Posted on Mon 27 December 2021 in servers

Some years back I had a rather complete puppet setup managing a relatively small set of servers for personal use and some small business customers. In that setup I had a completely automated way of configuring duplicity backups to a server I rented somewhere safe, using puppet code and puppet external resources.

As time went on the puppet based management became overkill and the nice automated setup went the way of the Dodo.

But as I have recently installed and racked my first physical server in ages, I needed to think about backups again. My existing backup setup did not fit this new host so some adjustments were needed.

The puppet code is still on my gitlab host as an archived project. This is a good thing, as I remember that getting the shell wrapper script to work exactly as intended took some time and testing.

So I used the old puppet code to jumpstart my ansible based reimplementation. The secrets are stored as variables, which I can set on a per-node or per-group basis, depending on my (security) needs.
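For illustration, a group_vars file for such a setup might look like this. All names and values here are invented; only the variable names match the ones used in the templated script further down.

```yaml
# group_vars/backupclients.yml -- illustrative values only
backup_transfer_method: webdav
backup_full_interval: 1M          # duplicity time format: 1M = one month
backup_gpg_key_id: 0xDEADBEEF
backup_webdav_user: u123456
backup_webdav_url: storagebox.example.net
backup_webdav_folder: duplicity
# backup_webdav_password and backup_gpg_key_passphrase are secrets:
# keep them in ansible-vault rather than in plain text.
```

Moving a host from the group file to its own host_vars entry is then enough to give it a different target or key.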

I just finished the code and testing today, so it will take me a couple of days to check the results and such.
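Checking the results can be partly automated. As a sketch (the log path is whatever the wrapper writes to; nothing here is duplicity-specific beyond the statistics format), a small shell function can inspect the Errors line of the statistics block duplicity prints at the end of a run:

```shell
# Sketch: inspect the "Errors" line of a duplicity statistics log.
# Returns 0 (success) only when the log reports zero errors.
check_duplicity_log() {
  local logfile=$1
  local errors
  # Pull the number from the "Errors N" line of the statistics block.
  errors=$(awk '/^Errors/ {print $2}' "$logfile")
  # Treat a missing or unreadable log as a failure.
  [ "${errors:-1}" -eq 0 ]
}
```

Hooked into cron mail or a monitoring check, this turns a silently failed backup into an alert.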

To accommodate the new backup data, I upgraded my Hetzner storage box from 2 to 5 TB. It was just one click with my mouse. Nice :-)

Transfers now run over webdav, as it is easier to set up with my current choice of backup storage.

Output from the first successful backup:

Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
Last full backup is too old, forcing full backup
--------------[ Backup Statistics ]--------------
StartTime 1640588894.01 (Mon Dec 27 07:08:14 2021)
EndTime 1640589832.62 (Mon Dec 27 07:23:52 2021)
ElapsedTime 938.61 (15 minutes 38.61 seconds)
SourceFiles 140717
SourceFileSize 9392663854 (8.75 GB)
NewFiles 140717
NewFileSize 9373177527 (8.73 GB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 140717
RawDeltaSize 9389049454 (8.74 GB)
TotalDestinationSizeChange 5525963543 (5.15 GB)
Errors 0

The ansible code itself is not that interesting, but sharing the shell script might help someone fiddling with duplicity. Please note that this is the ansible templated version, supporting both ssh and webdav as the shipping method. The ansible role is available on request.

#!/bin/bash
#
# Handle PGP encrypted remote backups
# using sftp and duplicity
# Based on puppet managed setup from 2016
# Can handle transfers over SSH and webdav currently

function mitreport()
{
  logger "$1"
  #echo "$1"
}

# Perform a random delay of 0-20 minutes, to not hammer the backup host.
sleep $(( RANDOM % 1200 ))

# Tool and log locations; adjust to your environment.
DUPLICITY=/usr/bin/duplicity
LOGFILE=/var/log/duplicity-backup.log

EXCLUDES=" --exclude /proc --exclude /dev --exclude /sys --exclude /home/backup --exclude=/mnt "

SIGN_KEY=" --sign-key={{ backup_gpg_key_id }} "

ENCRYPT_KEY=" --encrypt-key={{ backup_gpg_key_id }} "

FULLDUMP_INTERVAL="{{ backup_full_interval }}"

mitreport "$0 getting ready for work"

if [ ! -x "$DUPLICITY" ]; then
  mitreport "Unable to execute $DUPLICITY, fatal"
  exit 1
fi

# Keep the previous run's log around for inspection.
if [[ -e $LOGFILE ]]; then
  mv "$LOGFILE" "$LOGFILE.old"
fi

{% if backup_transfer_method == "ssh" %}

REMOTE_TARGET="pexpect+sftp://{{ inventory_hostname_short }}@{{ backup_targethost }}/data/"
SSH_OPTIONS="--ssh-options='-oIdentityFile=/root/.ssh/id_rsa_backup' "

SIGN_PASSPHRASE="{{ backup_gpg_key_passphrase }}" $DUPLICITY $SSH_OPTIONS $SIGN_KEY $ENCRYPT_KEY $EXCLUDES --full-if-older-than $FULLDUMP_INTERVAL / $REMOTE_TARGET > $LOGFILE

{% else %}

WEBDAV_URL="webdavs://{{ backup_webdav_user }}@{{ backup_webdav_url }}/{{ backup_webdav_folder }} "

FTP_PASSWORD="{{ backup_webdav_password }}" SIGN_PASSPHRASE="{{ backup_gpg_key_passphrase }}" $DUPLICITY $SIGN_KEY $ENCRYPT_KEY $EXCLUDES --full-if-older-than $FULLDUMP_INTERVAL / $WEBDAV_URL > $LOGFILE

{% endif %}

if [ $? -ne 0 ]; then
  mitreport "Duplicity run has non zero exit status, please check"
fi

mitreport "$0 all done"

exit 0
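The built-in random delay hints at unattended scheduling. A cron entry along these lines would run the wrapper nightly; the file path, script location, and time are made up for illustration:

```
# /etc/cron.d/duplicity-backup -- example entry, adjust paths and schedule
# The wrapper itself sleeps 0-20 minutes, so every host can share the same start time.
15 2 * * * root /usr/local/sbin/duplicity-backup.sh
```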