
Re: [v3] docker-registry deploy doesn't



I killed the node server and added that to the startup, and I now have containers running on the node :)

I really didn't see anything amiss in the logs, though. "osc get minions" reported the node, I could see health checks coming in on the node, and there were entries like this in the master log:
I0115 18:04:11.167747   22569 factory.go:289] Attempting to bind e8434f6e-9ce0-11e4-b0ed-fa163e33787b to 172.16.4.201
... but no follow-up "hey, what's that? sorry, can't bind it there after all, actually I have no idea who this node is" unless I'm missing something. So, an opportunity for more troubleshooting info somewhere!
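For anyone hitting the same thing, the checks that confirmed the fix are the same commands already mentioned in this thread (the pod ID below is from my cluster; substitute your own):

```shell
# Check that the master sees the node; its name must match the node's identity
osc get minions

# Inspect the stuck pod; scheduling/binding errors should show up here
osc describe pod e8434f6e-9ce0-11e4-b0ed-fa163e33787b

# On the node: confirm the containers actually landed
docker ps -a
```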

Thanks!

----- Original Message -----
From: "Clayton Coleman" <ccoleman redhat com>
To: "Luke Meyer" <lmeyer redhat com>
Cc: "OpenShift Development" <dev lists openshift redhat com>
Sent: Thursday, January 15, 2015 2:03:04 PM
Subject: Re: [v3] docker-registry deploy doesn't

Oh, I looked at what you wrote: add "--hostname 172.16.4.201" to your node startup.  The hostname of the node is the "identity" of the node, and it defaults to the output of "hostname -f".
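In other words, with the addresses from your setup, the working startup pair would look roughly like this (the key point being that --hostname matches the name passed to --nodes on the master):

```shell
# master: registers the node under the name 172.16.4.201
openshift start master --nodes=172.16.4.201 --loglevel=4

# node: pin the node's identity to that same name,
# instead of letting it default to `hostname -f`
openshift start node --master=172.16.4.200 --hostname 172.16.4.201 --loglevel=4
```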

----- Original Message -----
> Not making it out of unknown almost certainly means that the master can't
> find the nodes.
> 
> Do "osc describe pod e8434f6e-9ce0-11e4-b0ed-fa163e33787b" and I bet you'll
> see a bunch of errors about being unable to schedule.
> 
> The master log should probably have health check warnings on your node.
> 
> ----- Original Message -----
> > I'm trying to follow the sample-app deploy at
> > https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
> > and getting tripped up early on deploying the docker-registry. Pointers would
> > be appreciated, as I have basically no idea how to find out what's wrong.
> > 
> > The context is that I have compiled OpenShift from source and run it as
> > master on one host, node on another:
> > 
> > master# openshift start master --nodes=172.16.4.201 --loglevel=4
> > node# openshift start node --master=172.16.4.200 --loglevel=4
> > 
> > The hosts run Fedora 21, SELinux is permissive, firewalld isn't on, and I've
> > added the --insecure-registry flag, though it seems unlikely to be relevant yet.
> > 
> > I do pullimages on the node to get docker warmed up. Then I try to apply the
> > docker-registry config:
> > 
> > master [sample-app]# osc apply -f docker-registry-config.json
> > I0115 18:04:10.873990   22621 apply.go:59] Creation succeeded for Service
> > with name docker-registry
> > I0115 18:04:10.874574   22621 apply.go:59] Creation succeeded for
> > DeploymentConfig with name docker-registry
> > 
> > So this seems to be working fine, but the pod never makes it out of "Unknown"
> > state, and the container never appears on the node according to docker ps -a.
> > 
> > # osc get pods
> > POD                                    CONTAINER(S)   IMAGE(S)                    HOST            LABELS   STATUS
> > e8434f6e-9ce0-11e4-b0ed-fa163e33787b   deployment     openshift/origin-deployer   172.16.4.201/   <none>   Unknown
> > 
> > # osc get services
> > NAME              LABELS                                    SELECTOR           IP              PORT
> > kubernetes        component=apiserver,provider=kubernetes   <none>             172.30.17.92    443
> > kubernetes-ro     component=apiserver,provider=kubernetes   <none>             172.30.17.189   80
> > docker-registry   <none>                                    name=registrypod   172.30.17.3     5001
> > 
> > # osc get deploymentconfig
> > NAME                TRIGGERS            LATEST VERSION
> > docker-registry     ConfigChange        1
> > 
> > # osc get pods
> > POD                                    CONTAINER(S)   IMAGE(S)                    HOST            LABELS   STATUS
> > e8434f6e-9ce0-11e4-b0ed-fa163e33787b   deployment     openshift/origin-deployer   172.16.4.201/   <none>   Unknown
> > 
> > 
> > I'm attaching master and node logs... there's probably something indicative in
> > there, but I can't decipher it at this point. More importantly, is there any
> > way to query openshift/kubernetes itself to see what is going wrong with this
> > pod?
> > 
> > 
> > _______________________________________________
> > dev mailing list
> > dev lists openshift redhat com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> >

