
Re: Running an OpenShift / kubernetes deployment from a Docker image



On 11 November 2014 14:01, Clayton Coleman <ccoleman redhat com> wrote:
On Nov 11, 2014, at 3:40 AM, James Strachan <james strachan gmail com> wrote:

So I tried running OpenShift on my Mac using Docker and it worked great! Many thanks for the email Clayton!

I've written up how to use OpenShift V3 with Docker on a Mac with Fabric8 here:

Excellent - thanks for taking a look at it.


Making it really easy to run OpenShift locally on a laptop so folks can try things out & play around with local docker builds and JSON definitions is a really great goal. I can see lots of developers (myself included) wanting an easy way to try out code changes locally inside a little OpenShift environment, to check things look OK before pushing (and then doing the usual remote OpenShift build and so forth).


I found that to be able to use local docker builds (ones that are not in a public registry) I had to run a docker registry locally, then build images and push them to that local registry.
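
Roughly what I ended up doing looks like this (the image name is just an example, and the registry is the stock 'registry' image):

    # run a throwaway registry alongside the openshift container
    docker run -d -p 5000:5000 --name registry registry

    # build locally, tag for the local registry, then push
    docker build -t myapp .
    docker tag myapp localhost:5000/myapp
    docker push localhost:5000/myapp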

I wonder if, in this 'run openshift locally on your laptop inside docker' mode, we could avoid the need to also run a docker registry? The extra push adds time & complexity; things would feel a bit more agile if a standalone openshift could just reuse the images the local docker daemon already knows about.

In my first test I just tried referencing images I'd built locally, but openshift couldn't find them. I'm guessing it's trying to pull by default; or is it hard-wired to only talk to local or global registries?

It pulls by default; you can set the PullPolicy in your pod to PullIfNotPresent to skip that behavior.
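
e.g. something like this in the containers section of the pod JSON (assuming the current v1beta1 field names; the image name is just an example):

    "containers": [{
      "name": "myapp",
      "image": "localhost:5000/myapp",
      "imagePullPolicy": "PullIfNotPresent"
    }]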

Yeah; I was hoping to override that when in 'development mode'. Not a massive issue; it just felt it'd then be the simplest, smallest, fastest way for developers to hack on docker images and try them out in OpenShift on their laptop.

 
Either way, having some kind of environment variable that basically means it just defaults to using locally available docker images would be handy for folks using openshift in docker on their laptops; then they can just do a docker build and avoid the push.

We've discussed this in Kube once or twice; I don't think it will be easy to do until we have admission control.

OK
 
Should I raise an issue so we don't forget?

I also found my Mac couldn't see the pod IPs by default, so I added a route command:

    sudo route -n add 172.17.0.0/24 $DOCKER_IP


Is this vagrant rather than boot2docker? It should be part of our vagrant setup, I believe.

I'm just using vanilla Docker 1.3.1 on a Mac (OS X) with boot2docker, so I can't ping the pod IPs by default without the above telling my host to route those requests via the docker ip address.
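
For completeness, I get DOCKER_IP from boot2docker before adding the route, something like:

    # boot2docker ip prints the VM's host-only address on stdout
    export DOCKER_IP=$(boot2docker ip 2>/dev/null)
    sudo route -n add 172.17.0.0/24 $DOCKER_IP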

 
Finally, the other nit: using the kube command line via docker is great; the only downside is that when running it on your Mac host you can't use the local file system (as the command is actually running in a docker container inside boot2docker).

So we probably need a little shell script wrapper that scp's the file into somewhere visible to the docker container that's going to run the kube command.
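
Something along these lines maybe (very rough; the boot2docker ssh key, user and the /tmp path are just guesses at what we'd use):

    #!/bin/sh
    # copy the JSON file into the boot2docker VM so a containerised
    # kube CLI can see it via a volume mount
    FILE=$1
    DOCKER_IP=$(boot2docker ip 2>/dev/null)
    scp -i ~/.ssh/id_boot2docker "$FILE" docker@$DOCKER_IP:/tmp/$(basename "$FILE")
    # ...then run the kube CLI container with -v /tmp:/tmp and point it
    # at /tmp/<file> rather than the original path on the Mac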

We can make the OpenShift binary able to serve itself over http, so you can download a client from any server you can reach.

Aha of course! Thanks!

-- 
James
-------
Red Hat

Twitter: @jstrachan
Email: jstracha redhat com
Blog: http://macstrac.blogspot.com/

hawtio: http://hawt.io/
fabric8: http://fabric8.io/

Open Source Integration
