
Re: Kubernetes Jenkins slaves in parallel



After much head-banging over this I finally got pods created in parallel.

  1. Creating multiple podTemplates, each with a different label, allows the pods to be created in parallel (it also allows concurrent jobs, which are otherwise serialised). But the plugin treats all podTemplates in a Run as nested templates even when they are not: every podTemplate in the pipeline effectively inherits from previously declared templates.

  2. The hacky workaround to stop this is to clear all previously stacked template names from the Run before creating a new template. Call this function before any occurrence of podTemplate:
@NonCPS
def clearTemplateNames() {
    // Assumes the plugin's PodTemplateAction
    // (org.csanchez.jenkins.plugins.kubernetes.pipeline.PodTemplateAction),
    // which holds the stacked template names on the Run.
    def r = script.currentBuild.rawBuild
    def action = r.getAction(PodTemplateAction.class)
    if (action) {
        action.names.clear()
    }
}

With this I can parallelise pods to my heart's content.
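Putting the two points together, here is a minimal sketch of what a parallelised pipeline could look like. The branch names, labels, and pod/container names are illustrative; the maven image is the one from this thread, and clearTemplateNames() is the helper above:

```groovy
// Each branch declares its own podTemplate with a unique label,
// so the plugin provisions a separate pod per branch.
def branches = [:]
['unit', 'integration'].each { suite ->
    branches[suite] = {
        clearTemplateNames()  // drop previously stacked template names first
        podTemplate(name: "slave-${suite}", label: "kube-${suite}",
            containers: [
                containerTemplate(
                    name: 'maven',
                    image: "${registry}:5000/jenkins/slave-maven",
                    command: '/bin/bash',
                    ttyEnabled: true)
            ]) {
            node("kube-${suite}") {  // label matches this branch's podTemplate
                container('maven') {
                    sleep(30)  // stand-in for the real test run
                }
            }
        }
    }
}
parallel branches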

Cheers

Lionel.

On 7 February 2017 at 00:06, Ben Parees <bparees redhat com> wrote:
I'm trying to understand if this is a general jenkins pipeline issue or a problem specifically w/ the kubernetes plugin.

Given that it seems to run properly (in parallel) when the two parallel steps are not dependent on launching distinct slaves, it sounds like a problem in the kubernetes plugin itself (this is the component responsible for launching slave pods on your cluster). I would start by opening an issue in that repo:

https://github.com/jenkinsci/kubernetes-plugin/
or a JIRA:

On Sun, Feb 5, 2017 at 6:56 PM, Lionel Orellana <lionelve gmail com> wrote:
The intended usage might be to create a pod with all the containers you'll need during the pipeline.

Defining the same container twice to run them in parallel works.

podTemplate(name: 'jenkins-slave', label: 'kube-node', instanceCap: 4,
    containers: [
        containerTemplate(
            name: 'maven1',
            image: "${registry}:5000/jenkins/slave-maven",
            command: '/bin/bash',
            args: '',
            alwaysPullImage: true,
            ttyEnabled: true),
        containerTemplate(
            name: 'maven2',
            image: "${registry}:5000/jenkins/slave-maven",
            command: '/bin/bash',
            args: '',
            alwaysPullImage: true,
            ttyEnabled: true)
    ]) {
    node('kube-node') {
        stage('Compile') {
            container('maven1') {
                sleep(5)
            }
        }
        parallel(
            "Unit": {
                container('maven1') {
                    sleep(30)
                }
            },
            "Integration": {
                container('maven2') {
                    sleep(30)
                }
            }
        )
    }
}


However, this won't work with splitTests, where the number of parallel branches is dynamic.
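For the dynamic case, the same per-branch technique can be generated in a loop. A hedged sketch, assuming 'splits' stands in for the output of the parallel-test-executor plugin's splitTests step (the split count, names, and labels here are illustrative):

```groovy
// One uniquely labelled podTemplate per split forces a separate pod per branch.
def splits = splitTests count(2)  // hypothetical: however you obtain your splits
def branches = [:]
for (int i = 0; i < splits.size(); i++) {
    def index = i                       // capture loop variable for the closure
    def label = "kube-split-${index}"   // unique label per split
    branches["split-${index}"] = {
        podTemplate(name: "slave-split-${index}", label: label,
            containers: [
                containerTemplate(
                    name: 'maven',
                    image: "${registry}:5000/jenkins/slave-maven",
                    command: '/bin/bash',
                    ttyEnabled: true)
            ]) {
            node(label) {
                container('maven') {
                    // run the subset of tests described by splits[index] here
                }
            }
        }
    }
}
parallel branches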


On 4 February 2017 at 09:04, Lionel Orellana <lionelve gmail com> wrote:
I just tried removing the stages altogether but the pods are still created in sequence.

parallel(
    "Unit Tests": {
        node('kube-node') {
            container('maven') {
                sleep(30)
            }
        }
    },
    "Integration Tests": {
        node('kube-node') {
            container('maven') {
                sleep(30)
            }
        }
    }
)



On 4 February 2017 at 07:58, Lionel Orellana <lionelve gmail com> wrote:
Hi Gabe.

I am running the latest version of Jenkins from jenkinsci. I even updated to v2.44 this week when the security advisory came out and updated all plugins while I was at it.

I'll try leaving one stage in parallel to see what happens.

Thanks.

Lionel.



On Sat., 4 Feb. 2017 at 4:34 am, Gabe Montero <gmontero redhat com> wrote:
This is certainly a rapidly evolving space on the jenkinsci side, but Cesar and I were just talking,
and at the most recent versions of jenkins (i.e. a version we have not gotten to yet in our openshift
jenkins image, but we do have a jenkins upgrade for our image in the works), you are allowed
*one* stage within a parallel block.  Before that, any stage forced serial execution.

My speculation is that you are hitting both restrictions from older versions of jenkins 2, and/or
perhaps the fact that you have 2 stages in your parallel block.

Cesar - please see if my interpretation here of what Lionel has matches our recent conversation and
your recent forays into this aspect of Jenkinsfiles.

On Thu, Feb 2, 2017 at 2:57 PM, Lionel Orellana <lionelve gmail com> wrote:
Hi

I have a pipeline with a podTemplate definition from the Kubernetes plugin to create slaves on OpenShift.

Something like this.

podTemplate(name: 'jenkins-slave', label: 'kube-node', instanceCap: 4,
    containers: [
        containerTemplate(
            name: 'maven',
            image: "${registry}:5000/jenkins/slave-maven",
            command: '/bin/bash',
            args: '',
            alwaysPullImage: true,
            ttyEnabled: true)
    ]) {
    node('kube-node') {
        stage('Compile') {
            container('maven') {
            }
        }
    }
    node('kube-node') {
        stage('Unit Tests') {
            container('maven') {
            }
        }
    }
    node('kube-node') {
        stage('Integration Tests') {
            container('maven') {
            }
        }
    }
}

That works fine, and 3 kube-node pods get created one after the other. But I want to parallelise the Unit Tests and Integration Tests stages. When I move those two nodes into a parallel step, the pods are still created in sequence.

parallel(
    "Unit": {
        node('kube-node') {
            stage('Unit Tests') {
                container('maven') {
                }
            }
        }
    },
    "Integration": {
        node('kube-node') {
            stage('Integration Tests') {
                container('maven') {
                }
            }
        }
    }
)

The console shows both branches waiting for a new pod to be created.


[Pipeline] parallel
[Pipeline] [Unit] { (Branch: Unit)
[Pipeline] [Integration] { (Branch: Integration)
[Pipeline] [Unit] node
[Pipeline] [Integration] node
[Unit] Still waiting to schedule task
[Integration] All nodes of label ‘kube-node’ are offline
[Unit] Still waiting to schedule task
[Integration] All nodes of label ‘kube-node’ are offline

At this point one pod is created and one of the branches runs. When that branch finishes, the pod is terminated, a new pod is created, and the other branch runs.

I have set -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 in my jenkins startup options, but it made no difference.

Any ideas how I can parallelise these pods?

Thanks

Lionel

_______________________________________________
users mailing list
users lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users







--
Ben Parees | OpenShift


