
RE: Three-tier application deployment on OpenShift origin



Thank you very much for your help!

I will try to implement and see what can be done.

 

I am really learning a lot.

 

Regards,

 

Olga A.

 

From: Erik Jacobs [mailto:ejacobs redhat com]
Sent: Tuesday, May 10, 2016 04:22
To: ABDALA Olga
Cc: dev lists openshift redhat com
Subject: Re: Three-tier application deployment on OpenShift origin

 

 

On Mon, May 9, 2016 at 9:02 AM, ABDALA Olga <Olga ABDALA solucom fr> wrote:

 

 

From: Erik Jacobs [mailto:ejacobs redhat com]
Sent: Monday, May 9, 2016 14:31
To: ABDALA Olga
Cc: dev lists openshift redhat com
Subject: Re: Three-tier application deployment on OpenShift origin

 

On Mon, May 9, 2016 at 4:56 AM, ABDALA Olga <Olga ABDALA solucom fr> wrote:

Hello Erik,

 

Please find my comments inline

 

From: Erik Jacobs [mailto:ejacobs redhat com]
Sent: Wednesday, May 4, 2016 17:32
To: ABDALA Olga
Cc: dev lists openshift redhat com
Subject: Re: Three-tier application deployment on OpenShift origin

 

 

On Wed, May 4, 2016 at 8:30 AM, ABDALA Olga <Olga ABDALA solucom fr> wrote:

Hello Erik,

 

Thank you for your inputs.

However, while trying to update the label for my Nodes, here is what I get: 

 

 

labels are single key/value pairs. You are trying to add an additional zone label without specifying --overwrite. You cannot have multiple values for the same key. 
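
For example, to change the zone label on a node, something like this should work (the node name is just illustrative):

    oc label node sv5306.selfdeploy.loc zone=appserver --overwrite

Without --overwrite, oc refuses to change a key that is already set on the node.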

 

Same thing if I try to update my pods’ labels.

 

Changing a pod label is not what you want to do. You want to change the pod nodeselector.

> Yes, I guess that is what I will have to change.

 

Yes. 

 

For the nodeSelector, where can I find the pod configuration file in which to specify the Node, please?

Is it in the master-config.yaml file?

 

master-config.yaml is the master configuration, not a "pod configuration". "pod configuration" is kind of a strange statement. You probably mean "pod definition".

> By "pod definition", do you mean the pod yaml file?

 

That is one example, yes.

 

 

We'll ignore nodeselector and master-config because while it's a thing, it won't do what you want. If you're interested, docs here: https://docs.openshift.org/latest/admin_guide/managing_projects.html#setting-the-cluster-wide-default-node-selector.

> After checking the docs, my question is: if the defaultNodeSelector in the master config file is set for a specific region, does that mean that pods will never be placed on the Nodes of that specific region?

 

If the defaultNodeSelector is set, and you didn't somehow change it in the project, then the default node selector will *always* be applied, in addition to any pod-specific node selector. Whether that default nodeSelector is for "region", "zone", or any other arbitrary key/value pair is not relevant. The default is the default.
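
For reference, the cluster-wide default lives in master-config.yaml under projectConfig; the "region=primary" value below is only an example:

    projectConfig:
      defaultNodeSelector: "region=primary"

The master needs a restart for a change there to take effect.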

 

I think you meant to ask "if the default... is set for a region... does that mean the pods will always be placed". Not "never". Why would the selector mean never? That sounds more like an anti-selector...

 

 Always… yes, sorry, my bad

 

What you want to change is the pod nodeselector. I linked to the docs:

 

> Just to make sure: by setting a value for the "nodeSelector", will that put my pod on the specified Node?

 

If you set a value for the nodeSelector, your pod will attempt to be scheduled on nodes that have matching labels.
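
As a rough sketch (names and labels are placeholders), the relevant part of a pod definition looks like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod
    spec:
      nodeSelector:
        zone: east
      containers:
      - name: myapp
        image: openshift/hello-openshift

Here the pod would only be scheduled on nodes carrying the label zone=east.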

 

If you want to run a pod on a specific node I believe there is also a way to select a specific node by its hostname. It's in the docs somewhere. 

Ok thanks

 

I don't know how you created your pods, so how you change/add nodeselector depends.

> Actually, I did not really ‘create’ the pods. After creating a project and adding my application to it, one pod was automatically created. From there, I simply increased the number of pods (from the web console) to as many as I wanted.

 

Yes, so you have a deployment config that causes a replication controller to be created that then causes a pod to be created. As per below, "new-app" / "add to project" are basically the same thing. One is the UI and one is the CLI.

Oh ok I see.

> By the way, I wanted to get something clear in my head regarding the pods. Does the number of pods mean the number of the application’s ‘versions’?

I don't understand your question. The number of pods is the number of pods. What do you mean by "the application's 'versions'"?

What I meant by the application’s versions is a sort of ‘A/B testing’. I was wondering how HA works: when a pod goes down, how is another pod regenerated by the replication controller to keep the App running?

 

Simplest tl;dr kind of description:

* replication controller .spec.replicas says "how many pods do I want?"

* openshift then ensures that many pods are "running"

* if at any time OpenShift determines there is a "wrong" number of pods (too few, too many) it will "fix" the problem (add more, delete some)
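
A minimal illustration of that knob from the CLI (the replication controller name is hypothetical):

    # see the desired count in the RC definition
    oc get rc frontend-1 -o yaml | grep replicas
    # change the desired count; OpenShift adds or removes pods to match
    oc scale rc frontend-1 --replicas=3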

 

A/B is a little more complicated. Today there is not a way to do this with a single RC/DC. It is on the roadmap, but let's address what we can do today.

 

To do something like A/B testing today:

 

* you would get new pods running somehow (ex: use new-app / add-to-project again for a different branch of code)

* you would scale the new pods appropriately to do your test (eg: 10% of the total of existing pods or something)

* you would modify the definition of the new pods to have the same labels so that the service definition "finds" the new pods
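
A rough CLI sketch of those steps (all names are made up; the key point is that the service's selector, say app=myapp, must match the labels on the new pods):

    # get the "B" pods running from a different branch of the code
    oc new-app https://example.com/myapp.git#branch-b --name=myapp-b
    # scale it to a small fraction of the "A" pods
    oc scale dc/myapp-b --replicas=1
    # edit the B deployment config so its pod template carries the
    # same label the service selects on (e.g. app=myapp)
    oc edit dc/myapp-b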

 

Now some visitors would be directed to the new pods.

 

If you like the "B" version, you could scale up the B deployment and then delete the A deployment.

 

 

Here is a demo of A/B deployments and of blue/green deployments, if that's what you prefer:

 

 

Since you have builds, I am guessing that you used something like "new-app". new-app will have created a deploymentconfig. You would want to edit the deploymentconfig, find the pod template, and then add the nodeselector as shown in the docs above.
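
Roughly, that would look like this (the deployment config name is just an example):

    oc edit dc/myapp
    # then, in the pod template, add:
    #   spec:
    #     template:
    #       spec:
    #         nodeSelector:
    #           zone: appserver

On the next deployment the pods should only land on nodes labeled zone=appserver.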

 

 

Thank you!

 

Olga

 

From: Erik Jacobs [mailto:ejacobs redhat com]
Sent: Tuesday, May 3, 2016 16:57
To: ABDALA Olga
Cc: dev lists openshift redhat com
Subject: Re: Three-tier application deployment on OpenShift origin

 

Hi Olga,

 

Some responses inline.


 

Erik M Jacobs, RHCA

Principal Technical Marketing Manager, OpenShift Enterprise

Red Hat, Inc.

Phone: 646.462.3745

AOL Instant Messenger: ejacobsatredhat

Twitter: @ErikonOpen

Freenode: thoraxe

 

On Mon, Apr 25, 2016 at 9:34 AM, ABDALA Olga <Olga ABDALA solucom fr> wrote:

Hello all,

 

I am done with my Origin advanced installation (thanks to your useful help), whose architecture is composed of 4 virtualized servers (on the same network):

-       1 Master

-       2 Nodes

-       1 VM hosting Ansible

 

My next steps are to implement/test some use cases with a three-tier App (each of the App’s tiers being hosted on a different VM):

-       The horizontal scalability;

-       The load-balancing of the Nodes: keep the system running even if one of the VMs goes down;

-       App’s monitoring using the Origin API: allow the Origin API to “tell” the App on which VM each tier is hosted. (I still don’t know how to test that, though…)

 

There are some notions that are still not clear to me:

-       From my web console, how can I know on which Node my App has been deployed?

 

If you look in the Browse -> Pods -> select a pod, you should see the node where the pod is running.
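
From the command line, the same information is available with the wide output, which includes a NODE column:

    oc get pods -o wide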

 

-       How can I put each component of my App on a separate Node?

-       How does the “zones” concept in origin work?

 

These two are closely related.

 

1) In your case it sounds like you would want a zone for each tier: appserver, web server, db

2) This would require a node with a label of, for example, zone=appserver

3) When you create your pod (or replication controller, or deployment config) you would want to specify, via a nodeselector, which zone you want the pod(s) to land in
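
Pulling 2) and 3) together as a sketch (node and app names are illustrative):

    # label a node for the app-server tier
    oc label node sv5306.selfdeploy.loc zone=appserver --overwrite
    # point the deployment config's pod template at that zone
    oc patch dc/myapp -p '{"spec":{"template":{"spec":{"nodeSelector":{"zone":"appserver"}}}}}'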

 

This stuff is scattered throughout the docs:

 

 

I hope this helps.

 

 

Content of /etc/ansible/hosts of my Ansible hosting VM:

[masters]

sv5305.selfdeploy.loc

# host group for nodes, includes region info

[nodes]

sv5305.selfdeploy.loc openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=false

sv5306.selfdeploy.loc openshift_node_labels="{'region': 'primary', 'zone': 'east'}"

sv5307.selfdeploy.loc openshift_node_labels="{'region': 'primary', 'zone': 'west'}"

 

Thank you in advance.

 

Regards,

 

Olga

 


_______________________________________________
dev mailing list
dev lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

 

 

 

 

