On Fri, Jan 24, 2014 at 12:45 AM, Mateus Caruccio <mateus caruccio getupcloud com> wrote:
Hi.

On Thu, Jan 23, 2014 at 4:13 AM, Andrew Lau <andrew andrewklau com> wrote:

On Thu, Jan 23, 2014 at 4:53 PM, Clayton Coleman <ccoleman redhat com> wrote:
On Jan 22, 2014, at 11:03 PM, Andrew Lau <andrew andrewklau com> wrote:
Hi,

I was dissecting the possibility of running a MariaDB Galera cluster within OpenShift a few days ago, but I got stuck on a few things I was hoping someone could help me with. I started here, if anyone is interested in helping: https://github.com/andrewklau/openshift-galera-mariadb

- Is there a way to have an environment variable of some sort which could list all the gears running? I would like to create the whole cartridge in a web-app style so that it can scale indefinitely. From what I understand, environment variables are set globally across all the gears upon first run, but does that mean they can be dynamically updated? I need this to write some logic to prevent a split brain, because the cluster requires the first "gear" to be brought up with a slightly different command.

Currently the way to get this information is to use a connection hook (which gives you variables from all of the gears) and then do the same configuration on each gear. If you look at the redis cart you can see an example of parsing that data, although we'd like to make this simpler in the future by piping JSON to the stdin of the hook. For now, the example below demonstrates breaking the info up and then using it to assign a master (the gear with the lowest alphabetically ordered uuid the first time the cluster starts up).

Thanks for the reply. I have no clue where to start with Ruby; tbh, none of it actually makes sense to me :( Would I just need to modify a few variables to grab the output? With the redis cartridge, what happens if all the nodes crash and are brought up at the same time, or if the master crashes while the other nodes are running? I haven't used redis, but with MariaDB this would cause issues which would need some sort of logic or manual intervention.

Take a look at these cartridge hooks:

- publish-es-node: runs on every gear when a new gear is added.
Everything printed to stdout is temporarily stored by OpenShift to feed all subscribers of this hook.

- set-es-node: runs on every gear, with stdin built from the publishers' stdout (what publish-es-node printed). Stdin on subscriber hooks is as follows:

[app-name] [namespace] [local-gear-uuid] [stdout-from-publisher]

Note there are only 4 parameters. Stdout from the publisher is a single parameter. Its format is "[gear-uuid]=[publisher-stdout-from-that-gear] [gear-uuid]=[publisher-stdout-from-that-gear] ..."
I suggest you use a non-space separator in the publisher. That makes it simpler to parse on the subscriber (see set-es-node:11). In fact, hooks can be written in any language, as long as it is available at the node level.

Thanks a lot for that example, mind if I reuse it?
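To illustrate the hook flow described above, here is a hedged sketch (not the actual es-node hooks; the function names, fields, and the ':' separator are assumptions): a publisher that prints its connection info with a non-space separator, and a subscriber that parses the combined "[gear-uuid]=[payload] ..." argument and elects the master as the lowest alphabetically sorted gear UUID, the redis-cart strategy Clayton mentions.

```shell
#!/bin/bash
# publish side (sketch): print this gear's connection info using ':'
# as the field separator so it survives as a single token.
publish_node() {
    echo "${OPENSHIFT_GEAR_DNS}:3306"
}

# subscribe side (sketch): arguments are
#   app-name namespace local-gear-uuid "uuid=payload uuid=payload ..."
# Prints every gear's endpoint, then elects the master as the lowest
# alphabetically sorted UUID.
subscribe_node() {
    local publishers="$4" entry uuid payload host port master=""
    for entry in $publishers; do          # entries are space-separated
        uuid="${entry%%=*}"               # part before the first '='
        payload="${entry#*=}"             # 'host:port'
        host="${payload%%:*}"
        port="${payload##*:}"
        echo "gear $uuid -> $host:$port"
        if [ -z "$master" ] || [[ "$uuid" < "$master" ]]; then
            master="$uuid"
        fi
    done
    echo "master: $master"
}
```

For example, `subscribe_node app ns local "bbb=h2:3306 aaa=h1:3306"` prints both endpoints followed by `master: aaa`.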
Two questions: how are you handling scaling down, and are you able to handle a crashed node?
- Can anyone explain a little further how the inter-gear networking works? Looking at this post on master-slave replication, there doesn't seem to be anything special required, which is really confusing me about the whole multi-tenancy concept: https://www.openshift.com/blogs/cloudy-with-a-chance-of-data-master-slave-database-replication-on-openshift

- Is it possible to use the haproxy routing to load-balance the requests to the cluster? I'm not so familiar with haproxy, but I'm assuming that since it can run as a TCP load balancer similar to LVS (which I currently use for my external cluster), I don't see why it couldn't work. It'd obviously need a custom port?

Scalable apps are load-balanced by default, but for HTTP only. There is a second haproxy on the node routing TCP connections between gears. The entry point is $OPENSHIFT_GEAR_DNS. I don't think it is possible to accept external connections other than http[s] on the app's haproxy.

This may be interesting: is $OPENSHIFT_GEAR_DNS unique per scaled gear? When you say external connections, does that include other apps within the OpenShift environment? e.g. app1's MariaDB and app2's PHP.
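If each database gear published its $OPENSHIFT_GEAR_DNS through a connection hook, the subscriber could assemble the Galera cluster address from the collected names. A minimal sketch: the gcomm:// URL format is standard Galera, but the assumption that the host list arrives space-separated from the hook is illustrative.

```shell
#!/bin/bash
# Sketch: build a Galera wsrep_cluster_address from a space-separated
# list of gear DNS names collected via a connection hook (assumption).
build_cluster_address() {
    local hosts="$1"
    # Collapse the space-separated list into a comma-separated gcomm URL.
    echo "gcomm://$(echo $hosts | tr ' ' ',')"
}

build_cluster_address "gear1.example.com gear2.example.com"
# gcomm://gear1.example.com,gear2.example.com
```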
- I think I read somewhere that by default the gears will try to apply negative affinity when scaling out, e.g. be hosted on a separate node. Is that the default behavior, or am I thinking of something else?

AFAIK, the broker chooses the least full node. Could someone please confirm this?

- Am I wrong to assume this won't work with Online, as it requires some external dependencies from the MariaDB repo (not SCL)?

You could deploy the mariadb binary along with your cartridge.

Where would I need to save these binary files? It also has a few dependencies; would I need to package those up too?
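On shipping the binaries: one common approach is to keep vendored files inside the cartridge's own directory and expose them through environment exports. A hedged sketch, assuming a cartridge-dir variable like $OPENSHIFT_MARIADB_DIR (the exact variable name and layout depend on your cartridge manifest, not a fixed rule); bundled shared-library dependencies would ride along under usr/lib the same way.

```shell
#!/bin/bash
# Sketch: emit env exports pointing at binaries vendored inside the
# cartridge directory. The usr/bin and usr/lib layout is an assumption.
vendored_env() {
    local cart_dir="$1"
    echo "export PATH=${cart_dir}/usr/bin:\$PATH"
    echo "export LD_LIBRARY_PATH=${cart_dir}/usr/lib:\$LD_LIBRARY_PATH"
}

# Emit exports for a hypothetical cartridge directory variable:
vendored_env "\$OPENSHIFT_MARIADB_DIR"
```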