How to extend the size of a worker

Issue

One of the workers is running out of resources, or is close to the defined resource threshold, and you need to extend its disk size.

Environment

  • CloudBees Jenkins Enterprise (CJE)

Resolution

Depending on the type of worker, you will have to follow different steps:

Extending Master workers

1) Create a new worker with the desired configuration.

$ cje prepare worker-add
(edit the file worker-add.config and make sure you set the proper values, in particular workload_type to master)
$ cje apply

2) Mark the master worker you want to update as disabled. The worker will still be running but won’t provision new masters.

$ cje prepare worker-disable
(edit the file worker-disable.config setting the worker you want to disable.)
$ cje apply

3) Restart the managed masters running on the worker you want to update, one by one, to force them to be provisioned on the new worker.
* Access the CJOC and click on the Masters tab
* Hover over the name of the Master and a little arrow will show up. Click on it, then select Manage > Restart
* Wait until the master is up and running again and verify that it is running on the new worker
* Proceed with the next Master (if you have more than one)

4) If the CJOC is running on the disabled worker, it needs to be restarted in order to be reprovisioned on the new worker. If the CJOC is running on a different worker, you can skip this step.

$ dna stop cjoc
$ dna start cjoc

5) If you are running Elasticsearch on a master worker and want to keep the Elasticsearch data, you will have to perform the following operations. If that is not the case, you can skip this step and jump to step 6.

  • Execute the cluster elasticsearch-backup operation to back up all ES data

    $ cje prepare elasticsearch-backup
    $ cje apply
    

  • The backup process can take some time. To verify whether the backup finished successfully, execute the following commands:

    export ES_PASSWD=$(awk '/elasticsearch_password/ {print $3}' .dna/secrets)
    export DOMAIN=$(awk '/domain_name/ {print $3}' .dna/project.config)
    export ES_CREDS="admin:$ES_PASSWD"
    export ES_URL="https://elasticsearch.$DOMAIN"
    curl -f -s -XGET -u $ES_CREDS $ES_URL/_snapshot/tiger-backup/_all?pretty  > snapshots_backup.json
    

    Check the state and end_time fields in the returned JSON to confirm whether the backup has finished. Wait until the backup is complete before continuing.

  • Restart Elasticsearch

    $ dna stop elasticsearch
    $ dna start elasticsearch
    
  • Restore the Elasticsearch data

    $ cje prepare elasticsearch-restore
    $ cje apply
    

6) At this point the CJOC and all the masters should be running on the new worker, so you can proceed to remove the old one.

$ cje prepare worker-remove
(edit the file worker-remove.config setting the worker you want to remove)
$ cje apply

Extending Build workers

1) Create a new worker with the desired configuration.

$ cje prepare worker-add
(edit the file worker-add.config and make sure you set the proper values, in particular workload_type to build)
$ cje apply

2) Mark the build worker you want to update as disabled. The worker will still be running but won’t take new builds.

$ cje prepare worker-disable
(edit the file worker-disable.config setting the worker you want to disable.)
$ cje apply

3) Wait until all builds in progress on the masters finish. Newly triggered builds will be processed on the new worker. To verify whether there are builds still running on the old worker, connect directly to it and check the containers; only cloudbees/pse-logstash should be running.

$ dna connect worker-3
$ sudo docker ps
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS               NAMES
8afec729751c        cloudbees/pse-logstash   "/docker-entrypoin..."   3 days ago          Up 3 days                               cloudbees-logstash.service
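If your Docker version supports the `--format` flag of `docker ps`, this check can be condensed into counting images other than the logstash service. A sketch only; the file written below is a hypothetical capture, while on the real worker you would pipe `sudo docker ps --format '{{.Image}}'` directly:

```shell
# Hypothetical capture of the image column of `docker ps` on the worker;
# on the real worker: sudo docker ps --format '{{.Image}}' > images.txt
cat > images.txt <<'EOF'
cloudbees/pse-logstash
EOF

# Count containers other than the logstash service; 0 means no builds
# are running and the old worker can be removed safely.
busy=$(grep -cv '^cloudbees/pse-logstash$' images.txt || true)
echo "$busy"
```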

4) Remove the old worker

$ cje prepare worker-remove
(edit the file worker-remove.config setting the worker you want to remove)
$ cje apply

Extending Elasticsearch workers

1) Create a new worker with the desired configuration.

$ cje prepare worker-add
(edit the file worker-add.config and make sure you set the proper values, in particular workload_type to elasticsearch)
$ cje apply

2) Execute the cluster elasticsearch-backup operation to back up all ES data

$ cje prepare elasticsearch-backup
$ cje apply

3) The backup process can take some time. To verify whether the backup finished successfully, execute the following commands:

export ES_PASSWD=$(awk '/elasticsearch_password/ {print $3}' .dna/secrets)
export DOMAIN=$(awk '/domain_name/ {print $3}' .dna/project.config)
export ES_CREDS="admin:$ES_PASSWD"
export ES_URL="https://elasticsearch.$DOMAIN"
curl -f -s -XGET -u $ES_CREDS $ES_URL/_snapshot/tiger-backup/_all?pretty  > snapshots_backup.json

Check the state and end_time fields in the returned JSON to confirm whether the backup has finished. Wait until the backup is complete before continuing.
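This check can be scripted. A minimal sketch, assuming the JSON shape returned by the Elasticsearch snapshot API with ?pretty formatting; the sample file written below is hypothetical, whereas in practice snapshots_backup.json comes from the curl command above:

```shell
# Hypothetical sample of snapshots_backup.json; the real file is produced
# by the curl command shown earlier in this step.
cat > snapshots_backup.json <<'EOF'
{
  "snapshots" : [ {
    "snapshot" : "scheduled-backup-1",
    "state" : "SUCCESS",
    "end_time" : "2019-01-01T00:10:00.000Z"
  } ]
}
EOF

# A snapshot that is still running reports "state" : "IN_PROGRESS";
# once every snapshot reports SUCCESS the backup is complete.
if grep -q '"state" : "IN_PROGRESS"' snapshots_backup.json; then
  echo "backup still in progress"
else
  echo "backup finished"
fi
```

Re-run the curl command and this check until it reports that the backup finished.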

4) Restart Elasticsearch

$ dna stop elasticsearch
$ dna start elasticsearch

5) Restore the Elasticsearch data

$ cje prepare elasticsearch-restore
$ cje apply

Wait until the process finishes.

6) Remove the old worker

$ cje prepare worker-remove
(edit the file worker-remove.config setting the worker you want to remove)
$ cje apply