I have many services running on my server, and about half of them use Postgres. As long as I was installing them manually, I would always create a new database and reuse the same Postgres instance for every service, which seems quite logical to me: the least overhead, fast startup, etc.
But since I started using Docker, most docker-compose files come with their own instance of Postgres. Until now I just let them do it and was running a couple of Postgres instances. But it’s getting kind of ridiculous how many Postgres instances I run on one server.
Do you guys run several dockerized instances of Postgres, or do you rewrite the docker-compose files to point them at your one central Postgres instance? And are there usually any problems with that, like version incompatibilities?
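To make the question concrete, this is roughly what I mean by rewriting a compose file to use the central instance (the service name, network name and credentials below are made-up placeholders, not from any real app):

services:
  someapp:
    image: example/someapp:latest
    environment:
      DB_HOST: postgres            # hostname of the central Postgres container
      DB_NAME: someapp
      DB_USER: someapp
      DB_PASSWORD: changeme        # placeholder
    networks:
      - shared-db                  # the network the central Postgres is attached to
    # the bundled db: service and its volume from the original file get dropped

networks:
  shared-db:
    external: true                 # created once, next to the central Postgres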
Not so long ago I had the same question myself, and I ended up setting up one Postgres instance and one MySQL instance for all services to share. In the long run I ran into so many version and settings incompatibilities across services that I moved back to one DB per service, tuned specifically for it. I also add a backup app to every docker-compose file that has a DB in it, so backups happen periodically and automatically.
Which db backup app do you use if you don’t mind me asking?
You don’t need a db backup app… bind mount the data to a location, then just stop the container and have borg take the backups (rough sketch below). You can do this with all your containers.
/docker/postgres
/docker/postgres/data
/docker/postgres/compose.yml
And do that with every container. Easy as fuck to back up and restore them.
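Roughly what that compose.yml looks like for the Postgres folder above (image tag and password are placeholders, and the borg repo path is whatever yours is):

# /docker/postgres/compose.yml
services:
  postgres:
    image: postgres:16             # whatever version the app expects
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: changeme  # placeholder
    volumes:
      - ./data:/var/lib/postgresql/data   # bind mount into /docker/postgres/data

# backup is just: stop, snapshot the folder, start again, e.g.
#   docker compose down
#   borg create /path/to/repo::postgres-{now} /docker/postgres
#   docker compose up -d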
Sorry! I know it’s been ages, but this is what I use: