Hello Self-Hosters,

What is the best practice for backing up Docker data as a self-hoster who wants ease of maintenance and foolproof backups? (pick only one :D )

Assume directories with user data are mapped to a NAS share via NFS and backups are handled separately.

My bigger concern is how you handle all the other stuff that is stored locally on the server: caches, databases, etc. The backup target will eventually be the NAS, and from there everything gets backed up again to external drives.

  1. Is it better to run `cp -a /var/lib/docker/volumes/* /backupLocation` as root every once in a while, or is it preferable to define bind mounts for everything inside /home/user/Containers and then use a script to sync that tree to wherever you keep backups (see the compose/rsync sketch after this list)? What pros and cons have you seen or experienced with these approaches?

  2. How do you test your backups? I’m thinking about digging up an old PC to use as a restore target. I assume I can just edit the IP addresses in the compose files, mount my NFS dirs, and fail over to see if it runs (rough restore-test sketch below).

  3. I started documenting my system in my notes and making a checklist of what I need to back up and where it’s stored. Currently I’m trying to figure out if I want to move some directories for consistency. Can I just do `docker-compose down`, edit the mountpoints in docker-compose.yml, and run `docker-compose up` to get a working system (sketch below)?
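
To illustrate what I mean in #1, here’s the kind of setup I’m picturing for the bind-mount approach. The paths, the nginx service, and the NAS share at /mnt/nas/backups are all made-up examples:

```yaml
# docker-compose.yml — bind mounts instead of named volumes,
# so all container state lives in one tree you can sync
services:
  app:
    image: nginx:stable   # placeholder service
    volumes:
      - /home/user/Containers/app/config:/etc/nginx/conf.d
      - /home/user/Containers/app/data:/usr/share/nginx/html
```

```sh
#!/bin/sh
# sync-containers.sh (hypothetical): stop every stack so databases
# flush cleanly, sync the whole tree, then bring the stacks back up
set -e
for stack in /home/user/Containers/*/; do
  docker compose -f "$stack/docker-compose.yml" down
done
rsync -a --delete /home/user/Containers/ /mnt/nas/backups/containers/
for stack in /home/user/Containers/*/; do
  docker compose -f "$stack/docker-compose.yml" up -d
done
```

My understanding of the trade-off: copying /var/lib/docker/volumes/ works, but named volumes are opaque (ownership and metadata live inside Docker’s tree, and you’re copying live files unless the containers are stopped), while a single bind-mount tree is trivially rsync-able and easy to read during a restore.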
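For #2, roughly the test I’m imagining on the spare PC (the port and paths are made up):

```sh
#!/bin/sh
# restore-test.sh (hypothetical): run on the test machine after
# mounting the NAS backup share
set -e
# pull the latest backup into the same layout the compose files expect
rsync -a /mnt/nas/backups/containers/ /home/user/Containers/
cd /home/user/Containers/app
docker compose up -d
docker compose ps
# made-up health check; use whatever endpoint your app actually exposes
curl -fsS http://localhost:8080/ && echo "restore looks healthy"
```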
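For #3, the procedure I have in mind, with made-up paths (I assume the data has to move along with the mountpoint edit):

```sh
#!/bin/sh
set -e
cd /home/user/Containers/app
docker compose down
# move the data to the new, consistent location (hypothetical paths)
mv /srv/app-data /home/user/Containers/app/data
# then update the matching volumes: entry in docker-compose.yml
docker compose up -d
```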

  • dabe@lemmy.zip

    Yeah, this is the way. As part of my borgmatic script I bring down all the stacks that have databases, let the backup run, then bring those stacks back up. As long as the containers aren’t running (and as long as each container shuts itself down cleanly, which usually isn’t something to worry about), any method of data backup should be fine.
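
    Roughly what the hook setup looks like in borgmatic’s config (the stack path is made up, and depending on your borgmatic version these options live under hooks: or at the top level, so check the docs):

    ```yaml
    # borgmatic config excerpt — hypothetical stack path
    hooks:
      before_backup:
        - docker compose -f /home/user/Containers/app/docker-compose.yml down
      after_backup:
        - docker compose -f /home/user/Containers/app/docker-compose.yml up -d
    ```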

    I do this with quadlets and systemd targets now, but before that I was doing it with a bunch of docker compose down commands (rough sketch of the target setup below).
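
    A rough sketch of the quadlet/target version (unit and path names are placeholders; PartOf/WantedBy do the grouping, and the plain .target file goes in the normal systemd user directory rather than the quadlet one):

    ```ini
    # ~/.config/containers/systemd/db.container (quadlet, placeholder name)
    [Unit]
    # stopping stacks.target also stops this container
    PartOf=stacks.target

    [Container]
    Image=docker.io/library/postgres:16
    Volume=%h/Containers/db/data:/var/lib/postgresql/data

    [Install]
    # starting stacks.target starts this container
    WantedBy=stacks.target
    ```

    ```ini
    # ~/.config/systemd/user/stacks.target
    [Unit]
    Description=Container stacks that must stop during backups
    ```

    ```sh
    # backup wrapper: one stop/start instead of many compose commands
    systemctl --user stop stacks.target
    borgmatic
    systemctl --user start stacks.target
    ```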

    It is quite convenient for restoration, as you say.