
Re: s3 registry 500 errors in Sydney



Hi Guys,

I've spent quite a bit more time on this and have been able to determine that the problem is specifically with the OpenShift registry image.
Creating a registry on the same nodes with the docker.io/registry:2 image, using the same configuration, works.
The service is able to write to the designated S3 folder.
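
For anyone wanting to reproduce that comparison, this is roughly the sort of thing I mean (the config filename is just an example; /etc/docker/registry/config.yml is the stock registry:2 image's default config path):

# stand up the stock registry image on one of the same nodes, pointed at the
# same S3 config, then push a test image through it and watch the logs
docker run -d --name s3-test-registry -p 5001:5000 \
  -v /root/registry-s3-config.yml:/etc/docker/registry/config.yml:ro \
  docker.io/registry:2
docker logs -f s3-test-registry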

I've adjusted the registry build to use the latest image:
oadm registry --config=/etc/origin/master/admin.kubeconfig --service-account=registry --credentials=/etc/origin/master/openshift-registry.kubeconfig --latest-images=true
which has pulled a newer image, but alas the error is much the same (log excerpt below)...
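
To double-check which image the redeployed registry has actually picked up (the dc name "docker-registry" and the deploymentconfig pod label are the defaults here, so adjust if yours differ):

# image referenced by the deployment config
oc get dc docker-registry -o yaml | grep 'image:'
# image the running pod actually pulled
oc describe pod -l deploymentconfig=docker-registry | grep -i image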

http.request.method=HEAD http.request.remoteaddr="10.1.1.1:51518" http.request.uri="/v2/bnz-uat/buzybox/blobs/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4" http.request.useragent="docker/1.9.1 go/go1.4.2 kernel/3.10.0-327.22.2.el7.x86_64 os/linux arch/amd64" instance.id=745d924f-3a9d-4613-bd1d-dd9c826d52e0 vars.digest="sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4" vars.name="bnz-uat/buzybox"
time="2016-07-20T02:03:20.167566754Z" level=error msg="response completed with error" err.code=unknown err.detail="s3aws: RequestError: send request failed\ncaused by: Get https://os3master-prod-os-aws-XXX-com-au-docker.s3-ap-southeast-2.amazonaws.com/?max-keys=1&prefix=registry%2Fdocker%2Fregistry%2Fv2%2Fblobs%2Fsha256%2Fa3%2Fa3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4%2Fdata: dial tcp 54.66.155.60:443: getsockopt: connection refused" err.message="unknown error" go.version=go1.6.2 http.request.host="172.30.160.215:5000" http.request.id=0cf8999f-21b6-4c8c-9ffb-258e607e4914 http.request.method=HEAD http.request.remoteaddr="10.1.1.1:51518" http.request.uri="/v2/bnz-uat/buzybox/blobs/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4" http.request.useragent="docker/1.9.1 go/go1.4.2 kernel/3.10.0-327.22.2.el7.x86_64 os/linux arch/amd64" http.response.contenttype="application/json; charset=utf-8" http.response.duration=308.013092ms http.response.status=500 http.response.written=104 instance.id=745d924f-3a9d-4613-bd1d-dd9c826d52e0 vars.digest="sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4" vars.name="bnz-uat/buzybox"

Is there anyone willing to help out here? I'm banging my head against the wall.

Cheers,

Lew

On 13 July 2016 at 00:13, Lewis Shobbrook <l shobbrook+origin base2services com> wrote:
Hi Guys,

I've spent the past two days looking at the problem of S3-backed storage from Sydney-based instances. The problem now appears to be more generally associated with S3-backed registries in the ap-southeast-2 region.

With configurations identical other than keys/bucket/region, the results are considerably different.

In Sydney I see this...

time="2016-07-12T13:03:33.859868304Z" level=error msg="response completed with error" err.code=UNKNOWN err.detail="s3: Get https://s3-external-1.amazonaws.com/os3master-prod-os-aws-XXX-com-au-docker-registry/registry/docker/registry/v2/repositories/bnz-uat/busybox/_layers/sha256/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4/link: dial tcp 54.66.155.60:443: getsockopt: connection refused" err.message="unknown error" go.version=go1.6 http.request.host="172.30.27.237:5000" http.request.id=f5a2a1cb-be54-4f31-a13d-1294fc74a3f8 http.request.method=HEAD http.request.remoteaddr="10.1.0.1:45204" http.request.uri="/v2/bnz-uat/busybox/blobs/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4" http.request.useragent="docker/1.9.1 go/go1.4.2 kernel/3.10.0-327.22.2.el7.x86_64 os/linux arch/amd64" http.response.contenttype="application/json; charset=utf-8" http.response.duration=4.863003309s http.response.status=500 http.response.written=487 instance.id=542dd6da-2901-4cc7-975c-4afd71077e19

Configuration as follows...

version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    layerinfo: inmemory
  s3:
    accesskey: XXX
    secretkey: XXX
    region: us-east-1
    bucket: os3master-prod-os-aws-XX-com-au-docker-registry
    encrypt: true
    secure: true
    v4auth: true
    rootdirectory: /registry
auth:
  openshift:
    realm: openshift
middleware:
  repository:
    - name: openshift
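
A quick way to sanity check the keys and bucket themselves, straight from the node with the same credentials (assuming the aws CLI is installed; substitute whichever region the bucket actually lives in):

export AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=XXX
# list the registry root directory in the bucket named in the config above
aws s3 ls s3://os3master-prod-os-aws-XX-com-au-docker-registry/registry/ --region ap-southeast-2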

While the working registry in us-east shows this...
time="2016-07-12T13:45:08.168492162Z" level=info msg="response completed" go.version=go1.6 http.request.host="172.30.1.78:5000" http.request.id=66047be1-0f4e-4045-b710-c94fdaca10d6 http.request.method=GET http.request.remoteaddr="10.1.0.1:56051" http.request.uri="/v2/paycorp-pty-ltd/paycorp-service-auth1501/manifests/sha256:78a3af175b9d7c653cd4fdb42a3ccf44095c6e07b912c2719a85d5568179129f" http.request.useragent="docker/1.9.1 go/go1.4.2 kernel/3.10.0-327.13.1.el7.x86_64 os/linux arch/amd64" http.response.contenttype="application/json; charset=utf-8" http.response.duration=49.599539ms http.response.status=200 http.response.written=66957 instance.id=04de8cf0-2b4e-4546-b623-43f05c771805

version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    layerinfo: inmemory
  s3:
    accesskey: XXX
    secretkey: XXX
    region: us-east-1
    bucket: os3master-test-openshift-XXX-com-au-docker
    encrypt: true
    secure: true
    v4auth: true
    rootdirectory: /registry
auth:
  openshift:
    realm: openshift
middleware:
  repository:
    - name: openshift

Looking inside the containers, the only difference appears to be the image ID, which is openshift/origin-docker-registry:v1.2.0-rc1 in the working region and 1.2.0 in the failing one.
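
If the image version really is the difference, one way to test that theory would be to pin the failing region's registry back to the rc1 tag and redeploy, something like this (dc and command names as per a default 1.2 install, so treat it as a sketch):

# point the registry dc at the known-good image tag, then redeploy
oc edit dc/docker-registry          # set image: openshift/origin-docker-registry:v1.2.0-rc1
oc deploy docker-registry --latest  # force a new deployment if a config-change trigger doesn't fire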

I'm hoping someone can assist here.

Cheers,

Lew

