
Re: Backup of databases on OpenShift


Thanks for your ideas, Aleksandar, Mateus and Louis. Replication to an off-cloud DB seems a bit of a hassle, since we would like backups to be enabled automatically for every database added on OpenShift. The code at https://github.com/getupcloud/backup will probably be what we end up using, though likely modified to match our infrastructure, since we have neither S3 nor Azure Blob storage available.

Thanks again,


On Fri, Jun 9, 2017 at 9:35 AM, Louis Santillan <lsantill redhat com> wrote:
My personal feeling is that, for at least MySQL/MariaDB and PostgreSQL, I would set up replication with compression to a DB hosted outside the cluster. Preferably your ODW/DW DB instance(s), or maybe a staging DB. With compression, you ship relatively small logs over the wire.
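To make the suggestion above concrete (a hedged sketch, not from the original mail; host names are placeholders): MySQL/MariaDB can compress replication traffic with a single replica-side option, while PostgreSQL streaming replication has no built-in wire compression, so a common workaround is to run the replication connection through a compressed SSH tunnel.

```
# my.cnf on the off-cluster MySQL/MariaDB replica:
[mysqld]
slave_compressed_protocol = 1   # compress master -> replica traffic

# PostgreSQL: no native compressed streaming replication, so tunnel it,
# e.g. forward the primary's port 5432 over ssh with compression (-C):
#   ssh -C -N -L 5433:localhost:5432 primary-host
# and point the standby's primary_conninfo at localhost:5433.
```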




Red Hat Consulting, NA US WEST

lpsantil gmail com    M: 3236334854    

On Thu, Jun 8, 2017 at 7:46 AM, Jens Geiregat <jens geiregat gmail com> wrote:

We recently set up an OpenShift Enterprise cloud and we're wondering what the best practices are for backing up databases running in an OpenShift cloud. I will focus on PostgreSQL here, but the same goes for MongoDB, MariaDB...

- Should we rely on backups of the persistent volumes (we're using NFS)? This would mean assuming the on-disk state is always recoverable. Which it *should* be, but it does feel like a hack...
- Should we have an admin-level oc script that filters out all running database containers and does some 'oc exec pg_dump ... > backup.sql' magic on them?
- Should we provide some simple templates to our users that contain nothing but a cron script that calls pg_dump?
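As a sketch of the second option (purely illustrative: the label selector `name=postgresql`, the `postgres` user, and `BACKUP_DIR` are assumptions about a typical deployment, not anything confirmed in this thread):

```shell
#!/bin/sh
# Hypothetical admin-level backup loop: find PostgreSQL pods across all
# namespaces and pg_dump each one. Requires a logged-in 'oc' client with
# cluster-admin rights.

BACKUP_DIR=${BACKUP_DIR:-/backups}

# Build the in-pod dump command for a given database name.
dump_cmd() {
    db=$1
    echo "pg_dump -U postgres -Fc $db"
}

backup_all() {
    # One "namespace pod" pair per line, no header row.
    oc get pods --all-namespaces -l name=postgresql --no-headers \
        -o custom-columns=NS:.metadata.namespace,POD:.metadata.name |
    while read -r ns pod; do
        oc exec -n "$ns" "$pod" -- sh -c "$(dump_cmd postgres)" \
            > "$BACKUP_DIR/$ns-$pod-$(date +%F).dump"
    done
}
```

The third option is essentially the same pg_dump invocation dropped into a user-owned cron entry inside the pod, e.g. `0 2 * * * pg_dump -U postgres -Fc mydb > /backups/mydb.dump`.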

Please share your solutions!

Kind Regards,


users mailing list
users lists openshift redhat com
