How to Troubleshoot and Address Jenkins Startup Performance


  • Jenkins takes a long time to start


The following recommendations and techniques help troubleshoot and improve startup performance. They are based on experience with CloudBees customers.

Apply JVM Recommendations

The speed of Jenkins startup depends on the amount of memory initially allocated. It is important that you follow these recommendations:

  • Ensure that the recommended settings are applied to the process running Jenkins; see Prepare Jenkins for Support.
  • Ensure that an appropriate initial heap size is allocated directly on startup (i.e. -Xms). If this is not specified, the process starts with a small portion of the available physical memory (which can vary depending on the OS distribution and properties, as explained in Garbage Collector Ergonomics).
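For example, a controller started directly with java could be launched as follows. The war path and the 4g heap size below are illustrative, not recommendations; size the heap for your instance:

```shell
# Illustrative launch command: -Xms sets the initial heap, -Xmx the maximum.
# Setting both to the same value avoids heap resizing during startup.
# Adjust the heap size and war path to your own installation.
java -Xms4g -Xmx4g -jar /usr/share/jenkins/jenkins.war
```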

Remove Custom Loggers

If you have created custom loggers, remove them. Loggers are a good troubleshooting tool, but they can impact the performance of your instance if they do not target a specific component or if they are specified at a high-level package such as hudson.model.

Improve Logging on Startup

Set the system property -Djenkins.model.Jenkins.logStartupPerformance=true to record additional information about how much time Jenkins spends loading components on startup.
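For example, if Jenkins is started directly with java (the war path below is illustrative), the property can be passed on the command line:

```shell
# Illustrative: the property must be set before Jenkins starts,
# so it is passed to the JVM rather than configured at runtime.
java -Djenkins.model.Jenkins.logStartupPerformance=true \
     -jar /usr/share/jenkins/jenkins.war
```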

On the next startup, you will see log entries like the following:

2017-07-19 04:36:32.598+0000 [id=55]	INFO	jenkins.model.Jenkins$7#runTask: Took 5487ms for GroovyInitScript.init by pool-9-thread-10
2017-07-19 04:36:35.188+0000 [id=54]	INFO	jenkins.model.Jenkins$7#runTask: Took 8078ms for CloudBeesAssurance.initialize by pool-9-thread-9
2017-07-19 04:36:35.213+0000 [id=43]	INFO	jenkins.model.Jenkins#<init>: Took 0ms for item listener jenkins.triggers.ReverseBuildTrigger$ItemListenerImpl startup
2017-07-19 04:36:35.213+0000 [id=43]	INFO	jenkins.model.Jenkins#<init>: Took 0ms for item listener org.jenkinsci.main.modules.sshd.ItemListenerImpl startup
2017-07-19 04:36:35.213+0000 [id=43]	INFO	jenkins.model.Jenkins#<init>: Took 0ms for item listener org.jenkinsci.plugins.pipeline.milestone.MilestoneStepExecution$CleanupJobsOnDelete startup
2017-07-19 04:36:35.215+0000 [id=43]	INFO	jenkins.model.Jenkins#<init>: Took 2ms for item listener org.jenkinsci.plugins.workflow.flow.FlowExecutionList$ItemListenerImpl startup
2017-07-19 04:36:35.215+0000 [id=43]	INFO	jenkins.model.Jenkins#<init>: Took 0ms for item listener hudson.model.Job$LastItemListener startup
2017-07-19 04:36:35.215+0000 [id=43]	INFO	jenkins.model.Jenkins#<init>: Took 15796ms for complete Jenkins startup

Analyze Startup Logs

You can then extract relevant information from the Jenkins logs and find out which components and processes are most costly on startup. For example, the following command extracts each startup log record and sorts them by amount of time (ascending):

grep "Took [0-9]*ms" jenkins.log | sed 's/.*Took \([0-9]*\)ms \(.*\)/\1:\2/' | sort -t ':' -k1,1n

With this command we can identify costly components and plugins on startup. Example output:

11363:for Loading item FolderA by pool-9-thread-8
11519:for Initializing plugin cloudbees-license by pool-9-thread-10
13425:for Loading item SuperPipeline2 by pool-9-thread-12
16692:for Loading item SuperPipeline1 by pool-9-thread-2
27410:for Loading item SuperImportantMultibranch by pool-9-thread-15
52045:for Loading item StagingFolder by pool-9-thread-7
65006:for Loading item ProdFolder by pool-9-thread-1
102728:for item listener hudson.plugins.templateproject.ItemListenerImpl startup
296654:for item listener jenkins.plugins.hipchat.upgrade.ConfigurationMigrator startup
502376:for complete Jenkins startup

Note: You will always see the line ${STARTUP_TIME}:for complete Jenkins startup at the end. This is the overall startup time of your Jenkins instance.
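As a sketch, the same pipeline can be reversed to surface the most expensive records first. The sample log file and path below are fabricated for illustration, based on the log format shown earlier:

```shell
# Create a small sample log (fabricated for illustration)
cat > /tmp/jenkins-startup-sample.log <<'EOF'
2017-07-19 04:36:32.598+0000 [id=55] INFO jenkins.model.Jenkins$7#runTask: Took 5487ms for GroovyInitScript.init by pool-9-thread-10
2017-07-19 04:36:35.188+0000 [id=54] INFO jenkins.model.Jenkins$7#runTask: Took 8078ms for CloudBeesAssurance.initialize by pool-9-thread-9
2017-07-19 04:36:35.215+0000 [id=43] INFO jenkins.model.Jenkins#<init>: Took 15796ms for complete Jenkins startup
EOF

# Same extraction as above, but sorted descending and limited to the top 5
grep "Took [0-9]*ms" /tmp/jenkins-startup-sample.log \
  | sed 's/.*Took \([0-9]*\)ms \(.*\)/\1:\2/' \
  | sort -t ':' -k1,1nr \
  | head -5
```

The first line of the output is the overall startup time, followed by the slowest individual components.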


While this technique tells us which components take a long time to load on startup, it does not necessarily mean - although it often does - that these components are the direct culprits. The cause could be an underlying issue, in which case we need to find the correlation. For example, if many jobs are identified as taking a very long time to load, it does not necessarily mean that these jobs are bad; we should look for a correlation (job type, specific step/property, …). Another example is several plugin components taking a long time to load on startup because they share a dependency that impacts them - in that case the dependency is the problem.

Review Hook Scripts

Check whether there are any post-initialization scripts under $JENKINS_HOME/init.groovy.d/ that could impact the startup of your instance. See Groovy Hook Scripts.
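A quick way to review them is to list the directory. The default $JENKINS_HOME path below is an assumption; adjust it to your installation:

```shell
# List startup Groovy hook scripts, if any exist.
JENKINS_HOME="${JENKINS_HOME:-/var/lib/jenkins}"
if [ -d "$JENKINS_HOME/init.groovy.d" ]; then
  ls -l "$JENKINS_HOME/init.groovy.d"/*.groovy 2>/dev/null \
    || echo "init.groovy.d exists but contains no .groovy scripts"
else
  echo "No init.groovy.d directory under $JENKINS_HOME"
fi
```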

Note: Hook script execution times are captured by the startup logging mentioned above.

Cleanup Large Instances

Cleanup Strategy
Naturally, startup time is proportional to the size of your instance (number of items, nodes, and installed plugins). Ensure you have a cleanup strategy as explained in the article Deleting Old Builds - Best Strategy for Cleanup and disk space management.
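As a rough, illustrative sketch - assuming the default on-disk layout under $JENKINS_HOME/jobs - the following lists the jobs keeping the most build records on disk, which can help pick cleanup targets:

```shell
# Count build record directories per job and show the 10 largest.
# Nested folders repeat the jobs/ level, so we search for any "builds" dir.
JENKINS_HOME="${JENKINS_HOME:-/var/lib/jenkins}"
find "$JENKINS_HOME/jobs" -type d -name builds 2>/dev/null | while read -r dir; do
  printf '%s %s\n' "$(find "$dir" -mindepth 1 -maxdepth 1 -type d | wc -l)" "$dir"
done | sort -rn | head -10
```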

Obsolete plugins
An instance with a large number of plugins may take a lot of time to load all the components. It is good practice to review plugin usage and remove obsolete plugins. The CloudBees Plugin Usage plugin can help identify plugin usage.
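A simple first pass, assuming the default plugin directory layout, is to list the plugin archives on disk and review the result against what your teams actually use:

```shell
# List installed plugin archives; Jenkins uses both .jpi and .hpi extensions.
JENKINS_HOME="${JENKINS_HOME:-/var/lib/jenkins}"
ls "$JENKINS_HOME/plugins" 2>/dev/null | grep -E '\.[jh]pi$' | sort
```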

Adjust Storage File System

Jenkins configuration files are stored on disk. Startup performance - and overall performance, for that matter - can be severely impacted if the storage file system holding $JENKINS_HOME is not tuned properly. If $JENKINS_HOME does not reside on a local file system, ensure that the file system mount is tuned for performance. For NFS, see the recommendations in the article NFS Guide.
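Purely as an illustration - the server name, export path, and mount options below are placeholders, not tuning advice; the NFS Guide has the actual recommendations - an NFS mount for $JENKINS_HOME might look like:

```shell
# Placeholder example of mounting an NFS export for $JENKINS_HOME.
# Replace the server, export path, and options with values from the NFS Guide.
mount -t nfs -o rw,hard,vers=4.1,rsize=1048576,wsize=1048576 \
  nfs-server:/export/jenkins /var/lib/jenkins
```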

Comments

Jonathan Block:

    I believe the thing that causes the biggest difference between a 20 second Jenkins startup and a 90+ second start is the number of files in the JENKINS_HOME directory.

    Jenkins does not use a database - it uses the filesystem. In order to take a tally of what you see on the Jenkins homepage, the server appears to do a giant directory traversal of everything in JENKINS_HOME to discover which jobs exist, which have artifacts, and how many, at the moment the Jenkins master service starts.

    In order to keep Jenkins working as quickly as possible, do not keep huge numbers of job runs lying around. Set Jenkins jobs to only hold the last 100-150 job runs and keep the number of total jobs as low as possible in order to avoid the startup sequence being forced to do an insane directory scan across your JENKINS_HOME directory.
