
Re: Deployment Strategy: lifecycle hooks how to inject configuration



Hi Dan,

The discussion started with us trying to answer your questions. Thanks for your feedback.

The keyword for understanding the build and deployment hooks is 'process'. The build process does more than just create an application image using either a Dockerfile or S2I. The deployment process does more than just create the application pod from one or more existing container images (which may or may not have been created by an OpenShift build). The hooks allow you to add more steps to the process, and also to interrupt the process midway.

One example of a build hook is running unit tests or vulnerability scans on the generated container image. If these tests fail, the image is rejected and the build fails. The generated image will not be pushed to a registry server and no one will be able to create pods using the 'damaged' image, which is simply discarded.
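
For example, a post-commit build hook can be attached with the CLI. A rough sketch (untested, and 'myapp' plus the test script path are placeholders; the script has to exist inside the built image):

    oc set build-hook bc/myapp --post-commit --command -- /opt/app-root/run-tests.sh

The hook command runs in a temporary container launched from the freshly built image, and a non-zero exit code fails the build before the image is pushed.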

In a similar way, a deployment hook could run some sort of smoke or integration tests to make sure the application is functional for end users, before it is exposed through services and routes. If the tests fail, the deployment fails, and users keep accessing the previous deployment.
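
Something along these lines should do it (again a sketch, not tested here; 'myapp' and the smoke-test script are placeholders, and the script must be baked into the image because the hook pod is derived from the deployment template):

    oc set deployment-hook dc/myapp --pre --failure-policy=abort \
        -- /opt/app-root/smoke-test.sh

With --failure-policy=abort the rollout is aborted when the hook fails, so the previous deployment keeps serving traffic.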

A post-deployment hook would be an alternative way to implement database initialization. But anything that a deployment hook runs needs to be pre-baked into the image. Once I toyed with a hook that ran a shell script which downloaded a SQL script from an HTTP server and executed it, so the database was ready with initial data before being available to applications. I guess I could do the same with an init container.
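
From memory, it looked roughly like this (the URL, service name and credentials below are placeholders, and curl plus the psql client must already be in the database image, since the hook pod runs the same image):

    oc set deployment-hook dc/mydb --post --failure-policy=retry \
        -- /bin/sh -c 'curl -fsSL http://example.com/seed.sql | psql -h mydb -U "$POSTGRESQL_USER" "$POSTGRESQL_DATABASE"'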

Database initialization in the real world is actually much more complicated: do you really want to run the init script every time you start a database pod? Probably yes if this is a test database. But for a production database you probably want to preserve the data already in a persistent volume, and database initialization will be a one-time event for the first deployment only. Then you have application changes that require database schema changes. And so on...
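
If you go this route with a persistent volume, the init step should at least be guarded so it runs only once, whatever mechanism ends up executing it (hook, init container, or the image's own entrypoint). A sketch, with placeholder paths and names for a PostgreSQL-style image:

    # seed the database only once per persistent volume
    if [ ! -f /var/lib/pgsql/data/.seeded ]; then
        psql -h mydb -U "$POSTGRESQL_USER" "$POSTGRESQL_DATABASE" -f /tmp/seed.sql &&
            touch /var/lib/pgsql/data/.seeded
    fi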

OpenShift also supports Jenkins pipelines, which are another way to implement and manage custom build and deployment processes.

A hint for you: if you use a Dockerfile as input for an OpenShift build to create your application image, you might consider real S2I (from Java sources on Git). The main advantage of S2I compared to docker builds is that S2I adds a single layer on top of the base (Wildfly) image. This single layer contains all configuration changes and also your application binary, while with a Dockerfile each and every instruction adds a new layer, so the more you add to the Dockerfile, the more layers you get.
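
You can verify the layer counts yourself, for example with skopeo (registry and image names are placeholders):

    skopeo inspect docker://registry.example.com/myapp:latest | jq '.Layers | length'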

If you do not want to build your Java sources inside OpenShift, you can use 'binary s2i', which does the Maven build outside of OpenShift, then sends the final war to the s2i builder, together with configuration files, modules, etc. Check the fabric8 Maven plugins to see how to do that in your POM. Fabric8 can also manage regular Dockerfile builds if you want to do them outside of OpenShift.
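
The plain CLI flow behind binary s2i looks roughly like this sketch ('myapp' and the ocp/ staging directory are placeholders; the wildfly builder expects the war under a deployments/ subdirectory of the uploaded content):

    mvn clean package
    mkdir -p ocp/deployments && cp target/*.war ocp/deployments/
    oc new-build --name=myapp --image-stream=wildfly:latest --binary
    oc start-build myapp --from-dir=ocp/ --follow

Fabric8 essentially drives the same flow from your POM and can also create the deployment objects for you.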


[]s, Fernando Lozano



On Thu, Feb 22, 2018 at 2:18 PM, Dan Pungă <dan punga gmail com> wrote:
Thank you all for your considerations and advice!

I just wanted to get some idea about hook uses and how/if I should work with them at this point. I guess I first relied more on the naming of the option, "deployment lifecycle hook", and its description, "allow behavior to be injected into the deployment process".

Now, if you'd allow a newbie to make some considerations, this is a bit misleading. What I initially thought after reading this is that these are running environments somewhat similar to what Tomas linked in the first reply with the Kubernetes initContainer.
In fact these are separate, (even more...) ephemeral pods that get instantiated from what the DeploymentConfig states. They're not "hooks" (which I interpreted as "an attachment to") for the deployment, but rather volatile replicas used to do some "things" outside the scope of the deployment itself, after which they're gone... blink pods :)
Now, for the standard examples that I see online with database provisioning/replication etc., not one of them explicitly underlined that, in order for this to work, you need to use persistent volumes, because that external resource is where everything the pre/mid/post hook procedure does gets persisted. Or maybe that's just standard knowledge that I didn't have..
(Just as a side issue, and coming from the recent exchange between Graham and Fernando: https://blog.openshift.com/using-post-hook-to-initialize-a-database/ says at the very start of the post:

"You can solve this in multiple ways, such as:

  • You can create a custom database image and bake in the script into that image
  • You can do the DB initialization from the application container that runs the script on the database"
Now I wonder how your colleague would implement the first option. I'm guessing more or less Graham's approach.)

Thank you Graham for your examples! I've actually tried changing the start command for the pod, more or less in the same way. Not through a mounted ConfigMap, but through a script that made my changes and then started the pod (it was available to the image because I was not in your scenario with a standard image; I was/am using a custom one). However this failed. I haven't really checked the actual reason. It might be that the primary process was the script and at some point it exited (it didn't include the actual start command), or that the timeout for the readiness probe was exceeded.
The trick with the wrapper is greatly appreciated, thank you!

In the end I got it solved with Fernando's approach of pushing the configuration at build time. I was not bound by the restriction of not being able to create an extra layer/custom image. In fact I was already on the "extra" layer, composing the artifact image (built with S2I) with the runtime Wildfly instance. My inline Dockerfile got a bit more content than a FROM, COPY and CMD.
Another advantage here is that rolling out a new deployment is quicker, with the old pods being quickly switched to the new ones. In a stateless environment, such as mine, this is nice.

Thanks again,
Dan Pungă

PS: I'm kind of interfering in an ongoing discussion. Please, don't let my message stop you; this is first-hand knowledge! :)


On 22.02.2018 14:42, Fernando Lozano wrote:
Hi Graham,

If the image was designed to be configured using environment variables or configuration files that can be provided as volumes, then yes, you don't need a custom image. But from Dan's message I expect more extensive customizations, which would become cumbersome.

And the idea of forcing the image to run a different command than its entrypoint, then getting more files from a volume to customize the image or compensate for deficiencies in the original entrypoint command, also seems cumbersome to me. You are making extensive changes each time you start the container (to its ephemeral read/write layer). I don't see the advantage compared to just creating a child image with an extra layer that has the customizations.


[]s, Fernando Lozano



On Wed, Feb 21, 2018 at 7:40 PM, Graham Dumpleton <gdumplet redhat com> wrote:
Another example of where this can be useful is where the primary process in the container doesn't do what is required of process ID 1. That is, reap zombie processes. If that becomes an issue you can use a run script wrapper like:

#!/bin/sh

# forward SIGTERM/SIGINT to the application process
trap 'kill -TERM $PID' TERM INT

# start the real application in the background; this shell stays PID 1
/usr/libexec/s2i/run &

PID=$!
# returns when the child exits or when a trapped signal arrives
wait $PID
trap - TERM INT
# collect the child's real exit status in case a signal interrupted the first wait
wait $PID
STATUS=$?
exit $STATUS

This simple alternative to a mini init process manager such as tini will work fine in many cases.

Replace /usr/libexec/s2i/run with the actual program to run.

Graham

On 22 Feb 2018, at 9:33 am, Graham Dumpleton <gdumplet redhat com> wrote:

Badly worded perhaps.

In some cases you don't have the ability to modify an existing image with the application in it, plus you may not want to create a new custom image as a layer on top. In those cases, if all you need is some minor tweaks to the config prior to the application starting in the container, you can use the configmap trick as described. It will work as long as the config files you need to change can be modified by the user the container runs as.

So you can do:

    oc create configmap blog-run-script --from-file=run

    oc set volume dc/blog --add --type=configmap \
        --configmap-name=blog-run-script \
        --mount-path=/opt/app-root/scripts

    oc patch dc/blog --type=json --patch \
        '[{"op":"add",
           "path":"/spec/template/spec/containers/0/command",
           "value":["bash","/opt/app-root/scripts/run"]}]'

So the 'run' script makes the changes and then executes the original command to start the application in the container.
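
A minimal example of such a 'run' script (illustrative only; the sed edit and config path are made up, and /usr/libexec/s2i/run stands for whatever the image's original start command is):

    #!/bin/bash
    # tweak config files that are writable by the container user
    sed -i 's/8080/8081/' /opt/app-root/etc/server.cfg
    # replace this shell with the original command so signals reach the app
    exec /usr/libexec/s2i/run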

Graham

On 22 Feb 2018, at 9:22 am, Fernando Lozano <flozano redhat com> wrote:

Hi Graham,

This doesn't make sense to me:

>  3. If you don't want to create a new custom image.

If you wanna run your application in a container you have to create a custom image with the application. There's no way around it, because container images are immutable. You can only choose how you will build your custom image. This is the way containers are supposed to work, with or without OpenShift.


[]s, Fernando Lozano


On Wed, Feb 21, 2018 at 6:15 PM, Graham Dumpleton <gdumplet redhat com> wrote:


On 22 Feb 2018, at 3:21 am, Fernando Lozano <flozano redhat com> wrote:

Hi Dan,

As you learned, lifecycle hooks were not made to change anything inside a container image. Remember that container images are, by design, immutable. It looks like you want to build a custom container image that includes your customizations to the Wildfly configs plus your application. There are two ways to accomplish that with OpenShift:

1. Create a Dockerfile that uses the standard wildfly container image as the parent, and adds your customization.

2. Use the OpenShift source-to-image (s2i) process to add configurations and your application. See the OpenShift docs about the wildfly s2i builder image for details; this is easier than using a Dockerfile. The standard s2i process builds the application from sources, but it also supports feeding an application war/ear.

3. If you don't want to create a new custom image, but want to add additional actions before the application starts in the container, mount a shell script into the container from a config map. Override the command for the pod to run your script mounted from the config map. Do your work in the script, with your script then doing an exec on the original command for the application.

Graham

[]s, Fernando Lozano


On Wed, Feb 21, 2018 at 9:43 AM, Dan Pungă <dan punga gmail com> wrote:
Hello all!

Trying to build an OShift configuration for running a Java app with a Wildfly server.
I've set this up with chained builds where the app's artifacts are combined with a runtime image of Wildfly.

For this particular app, however, I need to do some configuration on the Wildfly environment, so that the app is properly deployed and works.
- update a server module (grabbing the contents from the web and copying them in the right location inside Wildfly)
- add system properties and some other configuration to Wildfly's standalone.xml configuration file
- create some directory structure

I've tried to run all this with the Recreate deployment strategy and as a mid-hook procedure (so the previous deployment pod is scaled down), but all these changes aren't reflected in the actual (new) deployment pod.

Taking a closer look at the docs, I've found this line: "Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a deployment configuration."
So whatever I'm doing in my hook is actually done in a different pod, the hook pod, and not in the actual deployment pod. Did I understand this correctly?
If so, how does the injection work here? Does it have to do with the fact that the deployment has to have persistent volumes? So the hooks actually make changes inside a volume that will be mounted by the deployment pod too...

Thank you!


_______________________________________________
users mailing list
users lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



