CloudBees CI (CloudBees Core) Performance Best Practices for Linux


  • Poor performance of a Jenkins instance is frequently due to misconfiguration during installation, and is often the result of not following best practices.

The following document describes some common issues encountered when getting started with Jenkins, and provides some best practices.



Open files and new processes

Jenkins usually opens more files than the default limits allow in almost any Linux distribution. When you migrate or install a fresh Jenkins instance, this is often the first issue you will face, because Jenkins cannot open as many files as it requires.

The stack trace seen in Jenkins when this happens looks like:

Caused by: java.io.IOException: Too many open files
	at ... (Native Method)

To check the current limits in the OS, use the command ulimit -a:

max user processes              (-u) 1024
open files                      (-n) 1024

The recommended values below should be set in /etc/security/limits.conf.

jenkins      soft   nofile  4096
jenkins      hard   nofile  8192
jenkins      soft   nproc   30654        
jenkins      hard   nproc   30654
  • Note that this assumes jenkins is the Unix user running the Jenkins process. If you’re running JOC, the user is probably jenkins-oc.
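To confirm that the new limits actually apply to the running master, you can inspect the process entry under /proc. A minimal sketch; the jenkins.war process name used in the PID lookup is an assumption, so adjust it to however your service is launched:

```shell
# Print the kernel-enforced limits for a given PID.
show_limits() {
  grep -E 'Max (open files|processes)' "/proc/$1/limits"
}

# Self-check against the current shell; for Jenkins, pass the master PID
# instead, e.g. show_limits "$(pgrep -f jenkins.war | head -n1)".
show_limits $$
```

If the output still shows the old values after editing limits.conf, the Jenkins process needs to be restarted from a fresh login session so pam_limits re-applies them.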

You can find detailed information about this problem in our KB article Too many open files.

Huge pages

Some Linux distributions have Transparent Huge Pages (THP) enabled by default, which is known to cause performance issues with Java workloads on big servers. For more background, see the related CentOS bug report and JDK issue.

The recommendation for Jenkins is to disable THP. For this, run this command as root (the redhat_transparent_hugepage path applies to RHEL 6; on most other distributions the path is /sys/kernel/mm/transparent_hugepage/enabled):

echo "never" > /sys/kernel/mm/redhat_transparent_hugepage/enabled

Detailed information about disabling THP can be found in the RHEL KB.
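Note that the echo above does not survive a reboot. One way to make it persistent is a boot-time script. The sketch below is a config fragment assuming the standard transparent_hugepage path (adjust for RHEL 6) and a root-run boot hook such as /etc/rc.local or a systemd oneshot unit:

```shell
# Disable THP at boot (run as root from /etc/rc.local or the ExecStart
# of a systemd oneshot unit). RHEL 6 uses
# /sys/kernel/mm/redhat_transparent_hugepage instead.
THP=/sys/kernel/mm/transparent_hugepage
echo never > "$THP/enabled"
echo never > "$THP/defrag"
```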

$JENKINS_HOME Shared Storage

This is perhaps the most common mistake made by Jenkins administrators, and it has a big performance impact. It is important that the .war file is not extracted to the $JENKINS_HOME/war directory on the shared filesystem. If it is, the application will perform its read operations through the shared filesystem.

Some configurations may do this by default, but .war extraction can easily be redirected to a local cache (ideally SSD, for better Jenkins core I/O) on the container/VM's local filesystem with the JENKINS_ARGS properties --webroot=$LOCAL_FILESYSTEM/war --pluginroot=$LOCAL_FILESYSTEM/plugins. For example, on Debian installations, where $NAME refers to the name of the Jenkins instance: --webroot=/var/cache/$NAME/war --pluginroot=/var/cache/$NAME/plugins.

  • Note (if Jenkins is running in a web container): The --pluginroot and --webroot options are specific to Winstone. The alternative to --pluginroot is to add the system property -Dhudson.PluginManager.workDir=$LOCAL_FILESYSTEM/plugins. There is no need for an alternative to --webroot since the .war is extracted in a local directory of the container manager. For example in Tomcat, if the application name is jenkins the .war is extracted under $CATALINA_HOME/webapps/jenkins.

  • Note: The --pluginroot option and the -Dhudson.PluginManager.workDir system property only work since Jenkins 1.649, so if the argument is added to an older Jenkins version, Jenkins might not be able to start.

$JENKINS_HOME is read intensively during start-up. If bandwidth to your shared storage is limited, you will see the biggest impact in startup performance. High latency causes a similar issue, but it can be mitigated somewhat by raising the bootup concurrency with the system property -Djenkins.InitReactorRunner.concurrency=8.
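Putting these settings together, a Debian-style service configuration might look like the config fragment below. The file location and the /var/cache/jenkins path are assumptions; adjust them to your packaging:

```shell
# /etc/default/jenkins (Debian/Ubuntu packaging) -- sketch
# Redirect .war and plugin extraction to a local (ideally SSD) cache
# instead of the shared $JENKINS_HOME.
JENKINS_ARGS="--webroot=/var/cache/jenkins/war --pluginroot=/var/cache/jenkins/plugins"

# Raise bootup concurrency if shared-storage latency slows startup.
JAVA_ARGS="$JAVA_ARGS -Djenkins.InitReactorRunner.concurrency=8"
```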


Use NFSv3, or NFSv4.1 or greater, as NFSv4.0 is NOT recommended due to known performance problems. Please follow the NFS best practices in the KB article below.


Heap memory

See JVM Best Practices

Garbage collector

As explained in Prepare Jenkins for support under the section A. Java Parameters.

  • G1GC (-XX:+UseG1GC) should be used.


Jenkins Best Practices is a great entry point and contains a collection of tips, advice and gotchas for getting the most from your Jenkins instance, including performance.

The following are some good best practices to follow on your Jenkins journey.

Master build executors

Never use the master to build jobs, as this puts an unnecessary strain on its resources. Disable building on the master by navigating to Manage Jenkins -> Manage Nodes -> master and doing the following:

  • Set # of executors to 0
  • Change the Usage strategy to Only build jobs with label expressions matching this node

SCM Triggers

When possible, use Webhooks as explained in SCM Best Practices > Triggers: Polling must die section.

If for some reason you cannot get away from SCM polling, then you should limit concurrent SCM polling to no more than 10 under Manage Jenkins -> Configure System [SCM Polling -> Max # of concurrent polling].

The problem is usually that Jenkins users create aggressive SCM polling schedules like * * * * * [poll every minute]; prefer hashed schedules such as H/15 * * * * so the load is spread out.

The below script can be executed under Manage Jenkins->Script Console to provide the SCM polling value of all the jobs configured in the instance.

import hudson.model.FreeStyleProject;
import hudson.maven.MavenModuleSet;
import hudson.triggers.SCMTrigger;
import jenkins.model.Jenkins;

println("--- SCM Polling for FreeStyle jobs ---");
List<FreeStyleProject> freeStyleProjectList = Jenkins.getInstance().getAllItems(FreeStyleProject.class);
for (FreeStyleProject freeStyleProject : freeStyleProjectList) {
  SCMTrigger scmTrigger = freeStyleProject.getTrigger(SCMTrigger.class);
  if (scmTrigger != null) {
    String spec = scmTrigger.getSpec();
    if (spec != null) {
      println(freeStyleProject.getFullName() + " with spec " + spec);
    }
  }
}

println("--- SCM Polling for Maven jobs ---");
List<MavenModuleSet> mavenModuleSetList = Jenkins.getInstance().getAllItems(MavenModuleSet.class);
for (MavenModuleSet mavenModuleSet : mavenModuleSetList) {
  SCMTrigger scmTrigger = mavenModuleSet.getTrigger(SCMTrigger.class);
  if (scmTrigger != null) {
    String spec = scmTrigger.getSpec();
    if (spec != null) {
      println(mavenModuleSet.getFullName() + " with spec " + spec);
    }
  }
}


JobConfigHistory Plugin

The JobConfigHistory Plugin is one of the most used plugins, and it is often not well configured, which can produce performance issues. Configure this plugin as explained in JobConfigHistory Plugin Best Practices.

Folder Plugin

Since version 5.14 (December 5, 2016) of the CloudBees Folders Plugin, a better caching method for folder health is implemented; however, we still recommend disabling the weather column for increased performance.

Post Build: Archive the Artifacts

Do not use the Archive the Artifacts post-build step for large artifacts (> 100KB). They should be sent to your favorite artifact repository manager (e.g. Artifactory, Nexus, S3, etc.) and should not be kept in the build directory of the job (e.g. ${ITEM_ROOTDIR}/builds/<build>/archive).

Discard Old Builds/Fingerprints

Builds and fingerprints are correlated: Jenkins maintains a database of fingerprints that records which builds of which projects used which files. This database is updated every time a build runs and files are fingerprinted. Therefore it is a best practice to manage old builds to prevent your $JENKINS_HOME directory from becoming unnecessarily large.


REST API

It is very common to see instances receiving a large number of REST API calls without the Jenkins administrators' knowledge.

The number of REST API calls received can be monitored with the CloudBees Monitoring Plugin and can be accessed via web browser at http(s)://<INSTANCE_URL>/monitoring.

The Jetty or Tomcat access logs should also be reviewed to monitor REST API activity.
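As a quick way to see who is generating that traffic, the access logs can be summarised per client. A minimal sketch, assuming an NCSA combined-format access log where field 1 is the client IP and field 7 is the request path; the example log path is an assumption, so adjust it to your Jetty/Tomcat configuration:

```shell
# top_api_callers: count REST API requests per client IP in a
# combined-format access log (field 1 = client IP, field 7 = path).
top_api_callers() {
  awk '$7 ~ /\/api\/(json|xml|python)/ { print $1 }' "$1" \
    | sort | uniq -c | sort -rn
}

# Example usage (log path is an assumption):
# top_api_callers /var/log/jenkins/access.log
```

The busiest client IPs appear at the top of the output, which usually identifies the script or dashboard responsible.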

To block all the REST API calls you can use the CloudBees Request Filter Plugin and follow the instructions found here: Block All API Calls.

If your business relies on the REST API, then you should follow the best practices in Best Practice For Using Jenkins REST API.

Are you Suffering From Performance Issues?

Please file a support ticket on the CloudBees Support Portal and attach the required information outlined in Required Data: High CPU On Linux.

If your archive is larger than 20MB, please use this service to send it to us. This service works best in Chrome or Firefox.
