
Re: Issues with console - going on for a while






From: "Kyle Crumpton (kcrumpto)" <kcrumpto cisco com>
To: users lists openshift redhat com
Sent: Thursday, October 17, 2013 11:22:01 AM
Subject: Issues with console - going on for a while

Hi all,

I am running into countless issues with the guide: http://openshift.github.io/documentation/oo_deployment_guide_comprehensive.html

I followed the guide all the way. 
My output from mco ping: 
[root@broker1 ~]# mco ping
node1.dsx.org                            time=101.03 ms
---- ping statistics ----
1 replies max: 101.03 min: 101.03 avg: 101.03 

So I know MCollective is working. I also checked the ActiveMQ and MCollective logs to make sure everything was okay; no errors there.
So I perform the command to create a tunnel to the openshift-console -

sudo ssh -f -N -L 80:broker.example.com:80 -L 8161:broker.example.com:8161 -L 443:broker.example.com:443 root@10.4.59.x
This command worked. I go to http://127.0.0.1, get prompted to create a security exception, then again for my credentials. I enter them, and it loads a page with the beautiful message:

"An error has occurred

You can try refreshing the page, if the problem is temporary.


You can also try the following:



Reference #7307ff1a235900fa64f1958c8ef917df"

So I go poke around in the logs; here are some of the outputs:

OpenShift broker (/var/log/openshift/broker/production.log)

Started GET "/broker/rest/cartridges.json" for 127.0.0.1 at 2013-10-15 19:51:40 +0000
Processing by CartridgesController#index as JSON
Reference ID: 33b4f007e733649887e913cd3b4f4fc5 - Node execution failure (error getting result from node).


This indicates something went wrong on the node while it was trying to get the list of cartridges. Can you confirm that cartridges are installed on the node, and that you've rebooted since installing them? I've heard the same problem from someone else, but I haven't yet made time to run through the instructions and check whether they're still up to date...

You'll want to check /var/log/mcollective.log and /var/log/openshift/node/* on the node for clues. If that doesn't help, then a fair amount of tracking down remains.
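
Since every failure in this thread carries a reference ID, one way to start is to chase that ID across the logs on each host. Here's a minimal sketch, assuming the default log paths from the deployment guide; `trace_ref` is just an illustrative helper, not an OpenShift tool:

```shell
# Hypothetical helper: grep a console/broker "Reference #" out of one
# or more log files, skipping any that don't exist or aren't readable.
trace_ref() {
  ref=$1; shift
  for log in "$@"; do
    [ -r "$log" ] && grep -Hn "$ref" "$log"
  done
  return 0
}

# On the broker host, with the IDs from this thread:
#   trace_ref 7307ff1a235900fa64f1958c8ef917df \
#     /var/log/openshift/console/production.log \
#     /var/log/openshift/broker/production.log
# On the node host, grep for the broker-side reference instead:
#   trace_ref 33b4f007e733649887e913cd3b4f4fc5 /var/log/mcollective.log
```

Matching timestamps across the broker and node logs is usually enough to find the node-side failure that corresponds to the console's reference number.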


/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.13.0.1/lib/openshift/mcollective_application_container_proxy.rb:2529:in `parse_result'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.13.0.1/lib/openshift/mcollective_application_container_proxy.rb:187:in `get_available_cartridges'

plus much more on this stack trace …



Here is the log from the OpenShift console (/var/log/openshift/console/production.log)

2013-10-17 14:48:16.262 [INFO ] Started GET "/console/application_types" for 67.207.155.115 at 2013-10-17 14:48:16 +0000 (pid:22646)
2013-10-17 14:48:16.265 [INFO ] Processing by ApplicationTypesController#index as HTML (pid:22646)
2013-10-17 14:48:27.830 [ERROR] Unhandled exception reference #7307ff1a235900fa64f1958c8ef917df: Failed. Response code = 500. Response message = Internal Server Error.
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-console-1.13.0.2/lib/active_resource/persistent_connection.rb:218:in `handle_response'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-console-1.13.0.2/lib/active_resource/persistent_connection.rb:175:in `request'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-console-1.13.0.2/lib/active_resource/persistent_connection.rb:112:in `block in get'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-console-1.13.0.2/lib/active_resource/persistent_connection.rb:284:in `with_auth'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-console-1.13.0.2/lib/active_resource/persistent_connection.rb:112:in `get'

plus much more on this stack trace…


The console is just relaying the 500 error it got from the broker, which in turn is relaying the error it got from the node. Arguably the console should handle an error like this more gracefully, but the problem lies on the node.
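
To confirm that layering, you can take the console out of the picture and query the broker REST API directly; the URL below matches the request seen in the broker log. A sketch, where `broker_get` is just an illustrative wrapper and `user:password` is a placeholder for your own broker credentials:

```shell
# Hypothetical wrapper around curl for talking to the broker REST API.
# -s: quiet, -k: tolerate the self-signed cert, Accept: ask for JSON.
broker_get() {
  curl -sk -H 'Accept: application/json' "$@"
}

# Through the tunnel from this thread:
#   broker_get -u user:password https://127.0.0.1/broker/rest/cartridges.json
```

If that request also returns a 500, the broker (or the node behind it) is at fault and the console can be ruled out.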

