Thank you all for your
considerations and advice!
I just wanted to get some idea about hook usage and how/whether I should work with them at this point. I guess I first relied too much on the naming of the option, "deployment lifecycle hook", and on its description, "allow behavior to be injected into the deployment process".
Now, if you'll allow a newbie to offer some observations, this is a bit misleading. What I initially pictured after reading this is a running environment somewhat similar to what Tomas linked in the first reply, the Kubernetes initContainer.
In fact these are separate, (even more...) ephemeral pods that get instantiated from what the DeploymentConfig states. They're not "hooks" (which I interpreted as "an attachment to") on the deployment, but rather volatile replicas used to do some "things" outside the scope of the deployment itself, after which they're gone... blink pods :)
Now, of the standard examples I see online, with database provisioning/replication etc., not one explicitly underlined that, in order for this to work, you need to use persistent volumes, because that external resource is where everything the pre/mid/post hook does gets persisted. Or maybe that's just standard knowledge that I didn't have...
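To make that concrete for anyone reading along later, here is a minimal sketch of a post lifecycle hook in a DeploymentConfig; all the names and the script path are invented for illustration. The hook runs in a throwaway pod cloned from the named container, and the `volumes` list is what mounts the pod template's volumes into that hook pod, so anything the hook writes actually survives it:

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: mydb                    # hypothetical name
spec:
  strategy:
    type: Recreate
    recreateParams:
      post:
        failurePolicy: Abort    # fail the deployment if the hook fails
        execNewPod:
          containerName: mydb   # the hook pod is based on this container's spec
          command: ["/bin/sh", "-c", "/opt/init-db.sh"]  # hypothetical script
          volumes:              # names of pod-template volumes to mount;
            - mydb-data         # leave this out and the hook pod gets none
```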
(just as a side issue and coming from the recent exchange between Graham and Fernando: https://blog.openshift.com/using-post-hook-to-initialize-a-database/ at the very start of the post: " "
Now I wonder how your colleague would implement the first option. I'm guessing more or less Graham's approach.)
Thank you, Graham, for your examples! I've actually tried changing the start command for the pod in more or less the same way: not through a mounted ConfigMap, but through a script that applied my changes and then started the server (it was available to the image because I was not in your scenario with a standard image; I was/am using a custom one). However, this failed, and I haven't really checked the actual reason. It might be that the primary process was the script and at some point it exited (it didn't include the actual start command), or that the timeout for the readiness probe was exceeded.
The trick with the wrapper is greatly appreciated, thank you!
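For later readers, here's a minimal sketch of that wrapper pattern (the paths and the config edit are invented stand-ins, not anyone's actual image). The crucial detail, and likely what my earlier script-based attempt was missing, is `exec`: it replaces the shell with the real server command, so the container's primary process is the server itself rather than a script that eventually returns:

```shell
#!/bin/sh
set -e
# Hypothetical wrapper entrypoint (written to a file here so the sketch can
# demonstrate itself; in an image it would simply BE the entrypoint).
cat > /tmp/wrapper.sh <<'EOF'
#!/bin/sh
set -e
CONFIG=/tmp/standalone.xml               # invented path for this sketch
printf '<server/>\n' > "$CONFIG"         # stand-in for the real config file
sed -i 's|<server/>|<server modified="true"/>|' "$CONFIG"   # the "tweak"
# The important part: exec replaces this shell, so the server stays as the
# container's primary process. In a real image this would be something like:
#   exec /opt/wildfly/bin/standalone.sh -b 0.0.0.0
exec cat "$CONFIG"
EOF
chmod +x /tmp/wrapper.sh
/tmp/wrapper.sh      # prints: <server modified="true"/>
```

Without the `exec`, the shell would stay as PID 1 and the container would be considered dead as soon as the script finished, which matches the failure mode I described above.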
In the end I got it solved with Fernando's approach of pushing the configuration in at build time. I was not bound by an inability to create an extra layer/custom image; in fact I was already at the "extra" layer, composing the artifact image (built with S2I) with the runtime WildFly instance. My inline Dockerfile just got a bit more content than a FROM, a COPY and a CMD.
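As a sketch of what I mean (the image names, paths and config file here are all invented, not my actual build), the inline Dockerfile of such a chained build can bake the configuration in alongside the artifact:

```dockerfile
# Hypothetical chained build: take the S2I-built artifact, drop it into a
# runtime WildFly image, and apply the configuration at build time.
FROM quay.io/wildfly/wildfly:latest
COPY --from=myapp-artifact:latest /deployments/ROOT.war \
     /opt/jboss/wildfly/standalone/deployments/
# configuration lands here instead of in a deployment hook:
COPY standalone.xml /opt/jboss/wildfly/standalone/configuration/
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]
```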
Another advantage here is that rolling out a new deployment is quicker, since there's no hook pod to wait on and the old pods are swiftly switched over to the new ones. In a stateless environment such as mine, this is nice.
PS: I'm kind of butting into an ongoing discussion here. Please don't let my message stop you; this is first-hand knowledge! :)
On 22.02.2018 14:42, Fernando Lozano wrote: