Thanks for taking the time to reply, Joel.
I'm testing OCP 4.3 2019-11-19 nightly on OSP 13.
I added my CA cert to install-config.yaml and the installer now progresses. I can even `oc get nodes` and see the masters.
I still have the following errors and no worker nodes though.
This sounds like ingress isn't deploying because the worker nodes are not deployed, or your load balancer isn't making ingress available. Are your master nodes schedulable? That is, are your masters also workers? If not, then ingress won't deploy.
$ oc describe node osp-nightly-tfz6p-master-0 | grep -i schedul
They are schedulable, but there are no matching tolerations in openshift-ingress/router-default deployment, so those pods are indeed stuck in _pending_ without any worker nodes.
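For what it's worth, the ingress operator's IngressController resource does support node placement, so the default router can be told to tolerate the master taint. A minimal sketch only (field names from the 4.2/4.3 IngressController API; verify against your cluster before applying):

```yaml
# Sketch: schedule the default router onto (schedulable) masters.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/master: ""
    tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
```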
How is your load balancer configured for 80/443 traffic? If the masters aren't targets of that, then even if ingress deploys you still won't be able to use any routes.
This is likely a symptom of not yet having associated a floating IP to the app neutron port, and not having created an /etc/hosts entry on the installer host. I assume that's a nonfatal error.
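For reference, a sketch of the /etc/hosts entries the installer host needs once a floating IP is attached to the ingress port. The IP and domain below are placeholders I made up, not values from this thread; substitute your own:

```shell
# Placeholder values; replace with your own floating IP and cluster domain.
FIP=203.0.113.10                          # hypothetical ingress floating IP
APPS_DOMAIN=apps.osp-nightly.example.com  # hypothetical *.apps cluster domain
for host in console-openshift-console oauth-openshift; do
  echo "$FIP $host.$APPS_DOMAIN"          # append these lines to /etc/hosts
done
```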
I assume this one is fatal, however:
Have you added the CA that covers openstack.domain.com to install-config.yaml at .additionalTrustBundle, like you mentioned in your previous post?
I had to do this even though I don't have an explicit proxy. I do have a transparent proxy, though, which was doing MITM interception, essentially breaking TLS for anything trying to talk to the internet.
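For anyone searching later, the shape of that install-config.yaml setting looks roughly like this (a fragment only; the certificate content is elided and must be your own PEM CA chain):

```yaml
# install-config.yaml (fragment): trust bundle for a MITM/transparent proxy
# or a self-signed OpenStack endpoint certificate.
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ...your CA certificate...
  -----END CERTIFICATE-----
```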
Where did you make this change?
I did this before installation, mostly for convenience. After running "openshift-install create manifests --dir=ignition-files", I edited the ignition-files/manifests/cluster-proxy-01-config.yaml file.
Otherwise, it looks like you can do it after the fact using "oc edit proxies cluster"; then you'll need to wait for the masters to reboot, I think, which for me sometimes takes around 10 minutes until all of them are done.
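The manifest in question looks roughly like this (a sketch; as I understand it, "user-ca-bundle" is the ConfigMap name the installer generates in openshift-config from additionalTrustBundle):

```yaml
# ignition-files/manifests/cluster-proxy-01-config.yaml (sketch).
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  trustedCA:
    name: user-ca-bundle   # ConfigMap generated from additionalTrustBundle
```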
FYI, I managed to find out what name to use to edit that proxy config by running "oc api-resources --api-group=config.openshift.io" and then finding the name for API group "config.openshift.io" and kind "Proxy".
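That discovery step can also be scripted. The table below is an abridged, hypothetical rendering of the `oc api-resources` output (not captured from a real cluster), just to show the filtering pattern:

```shell
# Hypothetical, abridged sample of:
#   oc api-resources --api-group=config.openshift.io
# awk picks the resource NAME whose KIND column is "Proxy".
printf '%s\n' \
  'NAME      APIGROUP              NAMESPACED   KIND' \
  'proxies   config.openshift.io   false        Proxy' |
  awk '$4 == "Proxy" {print $1}'
# → proxies
```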
I was going to try the 12/02 4.3 nightly build, but based on the following 2 blockers it doesn't look like it will work:
There is a fair chance the above proxy config will fix this one.
It's disappointing that the 4.2 release notes claim that OpenStack is supported, when it does not seem to work in what I presume to be the majority of OSP configurations.
$ export KUBECONFIG=osp-nightly/auth/kubeconfig
$ oc get nodes
NAME                         STATUS   ROLES    AGE    VERSION
osp-nightly-tfz6p-master-0   Ready    master   102m   v1.16.2
osp-nightly-tfz6p-master-1   Ready    master   103m   v1.16.2
osp-nightly-tfz6p-master-2   Ready    master   103m   v1.16.2
users mailing list
users@lists.openshift.redhat.com