
Re: Openshift origin high availability and best practices



Hi Luke,

thank you for replying.
 
> 
> Package versions aren't significant, other than indicating which is newer. If you installed via puppet, you either got the latest from a "nightly" repo (most likely) or from a stable release (currently 3).

Okay, I had not checked the repo for a while. According to my repo I have version 1 installed. The documentation on github.io has improved quite a bit, congratulations. The changelog returns a 404, though (http://openshift.github.io/documentation/changelog.html). Since there might be schema changes between major versions: is it better to do a fresh install and clone the repos when doing a major update, or is it OK to update by switching the repo and running yum update? Rails apps typically ship some kind of migration routine; I did not find anything like "update Origin XX to YY".
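For the in-place route, I imagine it would boil down to something like the sketch below (the repo file path and the nightly/release URL layout are my assumptions; check the actual entry under /etc/yum.repos.d/ first):

```shell
# Hedged sketch of an in-place major-version update.
# Assumption: the repo file is named as below and its baseurl contains
# a "/nightly/" path segment that a stable release replaces with "/release/3/".
REPO_FILE=/etc/yum.repos.d/openshift-origin.repo

# Point the baseurl at the stable release instead of the nightly builds.
sudo sed -i 's|/nightly/|/release/3/|' "$REPO_FILE"

# Flush cached metadata and pull in the new packages.
sudo yum clean all && sudo yum update -y
```

Whether that is safe without an explicit migration step between schema versions is exactly my question.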

> 
> > 
> > For fault tolerance, I am using:
> > - two separate masters
> > - glusterfs + external mysql to keep the data in sync
> > 
> > What about districts, are they already an alternative to multiple
> > separate installations (http://brenton-leanhardt.rhcloud.com/?p=102)
> 
> No, they have nothing to do with HA or separate installations.

If not full HA, then at least failover / automatic node compensation, right?

I would imagine that one could use districts (since they reserve the resources of one gear across all district nodes) to launch the gears simultaneously on multiple nodes in the same district. Wouldn't this also enable HA?
 
> 
> 
> I understand why you'd want to do this, but in general you just cannot count on gear UIDs matching up. They are not meant to.
> 
> Shared storage is still a hard problem on OpenShift, at least if you want shared access to be confined to a particular set of gears.

Are there any plans to integrate HekaFS (which is multi-tenant GlusterFS) with applications (the application ID would be a tenant)? By providing some kind of per-application credential, any gear running an application could access its private FS.

> 
> > I saw that there was a blog post about master/slave database
> > replication, is this cartridge also available in origin? Also I need
> > to have the data tree in sync.
> > (https://www.openshift.com/blogs/cloudy-with-a-chance-of-data-master-slave-database-replication-on-openshift)
> 
> We are exploring ways to scale replicated DB gears within an application to make them HA. It looks promising, but I have no idea when you'll see it in Origin. I'm sure someone has a proof-of-concept out on a branch on github somewhere.
> 
> For now, if you want your app's DB to be HA, the advice is still to manage that DB separately from OpenShift. You can configure connection parameters with app-specific env variables.

Thanks, I would be interested in a discussion here on the dev list. I will try to automate some parts of my current setup.
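For anyone following along, the env-variable approach might look like the sketch below. The variable names here are purely illustrative (not official OpenShift ones); the point is that the app builds its connection string from the environment, so the DB can live outside OpenShift:

```shell
# Hedged sketch: point an app at an externally managed (HA) database
# via app-specific environment variables. Names are illustrative.
export EXT_DB_HOST="db.example.com"
export EXT_DB_PORT="3306"
export EXT_DB_USER="appuser"

# The app assembles its connection URL from the environment at startup:
DATABASE_URL="mysql://${EXT_DB_USER}@${EXT_DB_HOST}:${EXT_DB_PORT}/appdb"
echo "$DATABASE_URL"
```

Failing over then just means repointing EXT_DB_HOST at the new primary, without touching the gears themselves.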

> 
> > Furthermore although changing the valid gear sizes to small,medium
> > console only allows me to create small gears.
> 
> See:
>  http://openshift.github.io/documentation/oo_administration_guide.html#set-default-gear-quotas-and-sizes
>  http://openshift.github.io/documentation/oo_administration_guide.html#set-the-type-of-gears-a-specific-user-can-create 
> 

Thanks for the pointer. 

Best,
Jakob


