
Re: Ansible installation stopped with tcp4 0.0.0.0:8443: listen: address already in use



The load balancer and the master cannot be co-located as long as they
both use the same port. You could probably set the master to 8443 and
have the load balancer listen on 443; however, given that you have only
one master, I'd simply do away with the load balancer entirely. It's
only necessary when you have multiple masters. This load balancer is
not to be confused with the haproxy router that runs inside OpenShift,
which provides a route into your pods from the outside.
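To confirm the clash before changing anything, you can check what is already bound to 8443 on the master. A quick diagnostic sketch, not part of the playbook; if the co-located [lb] role is the culprit, the listener should turn out to be haproxy:

```shell
# List the process currently listening on TCP 8443; with the lb role
# co-located on the master this is expected to be haproxy.
ss -tlnp 'sport = :8443'

# lsof gives the same answer where iproute2's ss is unavailable
lsof -iTCP:8443 -sTCP:LISTEN
```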

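For a single-master cluster the simplest fix is to drop the [lb] group. A minimal sketch of the revised /etc/ansible/hosts, assuming the other settings from the quoted inventory below stay as-is:

```ini
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
debug_level=2
deployment_type=origin

[masters]
os-master

[etcd]
os-etcd

[nodes]
os-node-0[1:2] openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
os-master
```

The os-lb alias can then be removed from /etc/hosts as well, since nothing references it anymore.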
On Wed, Jun 22, 2016 at 4:27 AM, Olaf Radicke <o radicke meteocontrol de> wrote:
> Hi,
>
> I'm not sure whether the mistake is mine. I'm trying to install OpenShift
> with Ansible on 3 KVM virtual machines. The /etc/hosts looks like this:
>
> --------------- begin ----------------------
> 192.168.122.168 os-master os-etcd os-lb
> 192.168.122.56  os-node-01
> 192.168.122.74  os-node-02
>
> --------------- end ------------------------
>
>
> My /etc/ansible/hosts looks like this:
>
>
> --------------- begin ----------------------
> [OSEv3:children]
> masters
> nodes
> etcd
> lb
>
> [OSEv3:vars]
> ansible_ssh_user=root
> debug_level=2
> deployment_type=origin
> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
> 'filename': '/etc/origin/master/htpasswd'}]
>
>
> [masters]
> os-master
>
> [etcd]
> os-etcd
>
> [lb]
> os-lb
>
> [nodes]
> os-node-0[1:2] openshift_node_labels="{'region': 'primary', 'zone':
> 'default'}"
> os-master
>
>
> --------------- end ------------------------
>
>
> The ansible playbook stops here:
>
>
> --------------- begin ----------------------
>
> TASK: [openshift_master | Restore Master Proxy Config Options]
> ****************
> changed: [os-master] => (item=# Origin uses standard HTTP_PROXY environment
> variables. Be sure to set)
> changed: [os-master] => (item=# NO_PROXY for your master)
> changed: [os-master] => (item=#NO_PROXY=master.example.com)
> changed: [os-master] => (item=#HTTP_PROXY=http://USER:PASSWORD@IPADDR:PORT)
> changed: [os-master] => (item=#HTTPS_PROXY=https://USER:PASSWORD@IPADDR:PORT)
>
> TASK: [openshift_master | Create session secrets file]
> ************************
> ok: [os-master]
>
> TASK: [openshift_master | set_fact ]
> ******************************************
> ok: [os-master]
>
> TASK: [openshift_master | Create master config]
> *******************************
> ok: [os-master]
>
> TASK: [openshift_master | Test local loopback context]
> ************************
> ok: [os-master]
>
> TASK: [openshift_master | command {{ openshift.common.client_binary }}
> config set-cluster --certificate-authority={{ openshift_master_config_dir
> }}/ca.crt --embed-certs=true --server={{ openshift.master.loopback_api_url
> }} {{ openshift.master.loopback_cluster_name }} --config={{
> openshift_master_loopback_config }}
> ] ***
> skipping: [os-master]
>
> TASK: [openshift_master | command {{ openshift.common.client_binary }}
> config set-context --cluster={{ openshift.master.loopback_cluster_name }}
> --namespace=default --user={{ openshift.master.loopback_user }} {{
> openshift.master.loopback_context_name }} --config={{
> openshift_master_loopback_config }}
> ] ***
> skipping: [os-master]
>
> TASK: [openshift_master | command {{ openshift.common.client_binary }}
> config use-context {{ openshift.master.loopback_context_name }} --config={{
> openshift_master_loopback_config }}
> ] ***
> skipping: [os-master]
>
> TASK: [openshift_master | Start and enable master]
> ****************************
> failed: [os-master] => {"failed": true}
> msg: Job for origin-master.service failed because the control process exited
> with error code. See "systemctl status origin-master.service" and
> "journalctl -xe" for details.
>
>
> FATAL: all hosts have already failed -- aborting
>
> PLAY RECAP
> ********************************************************************
>
>
> --------------- end ------------------------
>
> The log says...
>
> --------------- begin ----------------------
> [output identical to the playbook run quoted above]
> --------------- end ------------------------
>
> Retrying the restart gives this:
>
>
>
> --------------- begin ----------------------
>
> [root@os-master ~]# systemctl status origin-master.service
> ● origin-master.service - Origin Master Service
>    Loaded: loaded (/usr/lib/systemd/system/origin-master.service; enabled;
> vendor preset: disabled)
>    Active: activating (auto-restart) (Result: exit-code) since Wed
> 2016-06-22 09:07:24 CEST; 4s ago
>      Docs: https://github.com/openshift/origin
>   Process: 32157 ExecStart=/usr/bin/openshift start master
> --config=${CONFIG_FILE} $OPTIONS (code=exited, status=255)
>  Main PID: 32157 (code=exited, status=255)
>
> Jun 22 09:07:24 os-master.localdomain systemd[1]: origin-master.service:
> main process exited, code=exited, status=255/n/a
> Jun 22 09:07:24 os-master.localdomain systemd[1]: Failed to start Origin
> Master Service.
> Jun 22 09:07:24 os-master.localdomain systemd[1]: Unit origin-master.service
> entered failed state.
> Jun 22 09:07:24 os-master.localdomain systemd[1]: origin-master.service
> failed.
> [root@os-master ~]# systemctl restart origin-master.service
> Job for origin-master.service failed because the control process exited with
> error code. See "systemctl status origin-master.service" and "journalctl
> -xe" for details.
>
> --------------- end ------------------------
>
> Now I try to start it from the command line:
>
>
> --------------- begin ----------------------
> [root@os-master ~]# cat /usr/lib/systemd/system/origin-master.service
> [Unit]
> Description=Origin Master Service
> Documentation=https://github.com/openshift/origin
> After=network.target
> After=etcd.service
> Before=origin-node.service
> Requires=network.target
>
> [Service]
> Type=notify
> EnvironmentFile=/etc/sysconfig/origin-master
> Environment=GOTRACEBACK=crash
> ExecStart=/usr/bin/openshift start master --config=${CONFIG_FILE} $OPTIONS
> LimitNOFILE=131072
> LimitCORE=infinity
> WorkingDirectory=/var/lib/origin/
> SyslogIdentifier=origin-master
> Restart=always
> RestartSec=5s
>
> [Install]
> WantedBy=multi-user.target
> WantedBy=origin-node.service
> [root@os-master ~]# . /etc/sysconfig/origin-master
> [root@os-master ~]# echo ${CONFIG_FILE} $OPTIONS
> /etc/origin/master/master-config.yaml --loglevel=2
> [root@os-master ~]# /usr/bin/openshift start master --config=${CONFIG_FILE}
> $OPTIONS
> W0622 09:09:58.770544   32371 start_master.go:270]
> assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated
> container logs in the console
> W0622 09:09:58.770610   32371 start_master.go:270]
> assetConfig.metricsPublicURL: Invalid value: "": required to view cluster
> metrics in the console
> I0622 09:09:59.364265   32371 plugins.go:71] No cloud provider specified.
> I0622 09:09:59.481956   32371 genericapiserver.go:81] Adding storage
> destination for group
> I0622 09:09:59.481993   32371 genericapiserver.go:81] Adding storage
> destination for group extensions
> I0622 09:09:59.482013   32371 start_master.go:383] Starting master on
> 0.0.0.0:8443 (v1.2.0-rc1-13-g2e62fab)
> I0622 09:09:59.482019   32371 start_master.go:384] Public master address is
> https://192.168.122.168:8443
> I0622 09:09:59.482031   32371 start_master.go:388] Using images from
> "openshift/origin-<component>:v1.2.0-rc1-13-g2e62fab"
> I0622 09:09:59.621562   32371 run_components.go:204] Using default project
> node label selector:
> W0622 09:10:00.567167   32371 controller.go:297] Resetting endpoints for
> master service "kubernetes" to &{{ } {kubernetes  default
> 72ae64f8-383e-11e6-bcb2-525400d7a122 8 0 2016-06-22 07:59:14 +0200 CEST
> <nil> <nil> map[] map[]} [{[{192.168.122.168 <nil>}] [] [{https 8443 TCP}
> {dns 8053 UDP} {dns-tcp 8053 TCP}]}]}
> I0622 09:10:00.866430   32371 master.go:262] Started Kubernetes API at
> 0.0.0.0:8443/api/v1
> I0622 09:10:00.866458   32371 master.go:262] Started Kubernetes API
> Extensions at 0.0.0.0:8443/apis/extensions/v1beta1
> I0622 09:10:00.866463   32371 master.go:262] Started Origin API at
> 0.0.0.0:8443/oapi/v1
> I0622 09:10:00.866466   32371 master.go:262] Started OAuth2 API at
> 0.0.0.0:8443/oauth
> I0622 09:10:00.866469   32371 master.go:262] Started Web Console
> 0.0.0.0:8443/console/
> I0622 09:10:00.866472   32371 master.go:262] Started Swagger Schema API at
> 0.0.0.0:8443/swaggerapi/
> F0622 09:10:00.989101   32371 master.go:277] listen tcp4 0.0.0.0:8443:
> listen: address already in use
> [root@os-master ~]#
>
> --------------- end ------------------------
>
> Is this a misconfiguration?
>
>
> Best regards,
>
> Olaf Radicke
>
>
>
>
> _______________________________________________
> users mailing list
> users lists openshift redhat com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users

