Jenkins stops processing builds in the build queue after an error appears in the logs


Jenkins jobs sit in the build queue and never start, even when build agents are available for the chosen ‘label’, and the logs contain stack traces similar to the one shown below:

SEVERE  hudson.triggers.SafeTimerTask#run: Timer task hudson.model.Queue$MaintainTask@XXXX failed

How do we know what is causing this queue freeze?



The Queue.MaintainTask is a periodic task that runs in the instance and is responsible for queue maintenance operations such as adding elements to the queue and assigning queued items to nodes or executors. If this task fails for any reason, the queue becomes unresponsive and jobs eventually stop running because they stay stuck in the queue.
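For background on why a single failing periodic task is so disruptive: Jenkins wraps these tasks in SafeTimerTask, which catches the exception and emits the SEVERE log line shown above, but the maintain cycle still aborts before it can assign queued items to executors. The sketch below (plain JDK, not Jenkins code) illustrates the underlying hazard of periodic tasks: with a bare `ScheduledExecutorService`, one uncaught exception silently cancels all future runs.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicFailureDemo {
    // Counts how many times a periodic task ran before an uncaught exception halted it.
    static int runCount() {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        final int[] runs = {0};
        ses.scheduleAtFixedRate(() -> {
            runs[0]++;
            // An uncaught exception cancels every subsequent execution of this task.
            if (runs[0] == 2) throw new RuntimeException("maintenance failed");
        }, 0, 20, TimeUnit.MILLISECONDS);
        try {
            Thread.sleep(300); // plenty of time for many more 20 ms periods
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        ses.shutdownNow();
        return runs[0];
    }

    public static void main(String[] args) {
        System.out.println("task ran " + runCount() + " times before halting");
    }
}
```

Even though the schedule survives in Jenkins thanks to SafeTimerTask, a maintain task that throws on every cycle never completes its work, which is why the queue appears frozen until the underlying cause is removed.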

To determine the cause of this problem, pay special attention to the full stack trace of the error that shows up in the logs.

Some potential causes are listed below. The intent of the list is to help you understand the pattern you can follow to determine the root cause of the failure, so that you can recover the instance as quickly as possible. Wherever possible, we also include Workaround or Solution details.

A Nodes Plus Plugin

SEVERE  hudson.triggers.SafeTimerTask#run: Timer task hudson.model.Queue$MaintainTask@XXX failed
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from XXXX/XXX:XXX
                at hudson.remoting.Channel.attachCallSiteStackTrace(
                at hudson.remoting.UserRequest$ExceptionResponse.retrieve(
                at hudson.Launcher$RemoteLauncher.launch(
                at hudson.Launcher$ProcStarter.start(
                at com.cloudbees.jenkins.plugins.nodesplus.CustomNodeProbeBuildFilterProperty.getProbeResult(

In the stack trace we can clearly see a correlation between the getProbeResult method and the task failure.

A.1 Solution/Workaround

As an initial remediation step, check your nodes for any custom node probe and disable it.

Verify that you are using cloudbees-nodes-plus 1.18 or higher, as this version included extra verification that prevents faulty custom probes from causing a queue lock.

If the problem persists after upgrading the plugin, disabling any custom node probe should be the next step.

B Microfocus Plugin

 SEVERE  hudson.triggers.SafeTimerTask#run: Timer task hudson.model.Queue$MaintainTask@XXX failed
java.lang.NoClassDefFoundError: Could not initialize class org.apache.logging.log4j.core.impl.Log4jLogEvent
    at org.apache.logging.log4j.core.impl.DefaultLogEventFactory.createEvent(
    at org.apache.logging.log4j.core.config.LoggerConfig.log(
    at org.apache.logging.log4j.core.config.DefaultReliabilityStrategy.log(
    at org.apache.logging.log4j.core.Logger.logMessage(
    at org.apache.logging.log4j.spi.AbstractLogger.tryLogMessage(
    at org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(
    at org.apache.logging.log4j.spi.AbstractLogger.logMessage(
    at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(
    at org.apache.logging.log4j.spi.AbstractLogger.error(

In this case the exception thrown is different, but the effect is the same: the periodic task starts failing.

B.1 Solution/Workaround

For the Microfocus plugin, the recommended workaround is to upgrade the plugin to at least version 5.6.2, as previous versions of the plugin are also impacted by JENKINS-6070.

C Build Blocker Plugin

SEVERE  hudson.triggers.SafeTimerTask#run: Timer task hudson.model.Queue$MaintainTask@XXX failed
java.util.regex.PatternSyntaxException: Dangling meta character '*' near index 0
    at java.util.regex.Pattern.error(
    at java.util.regex.Pattern.sequence(
    at java.util.regex.Pattern.expr(
    at java.util.regex.Pattern.compile(
    at java.util.regex.Pattern.<init>(
    at java.util.regex.Pattern.compile(
    at java.util.regex.Pattern.matches(
    at java.lang.String.matches(
    at hudson.plugins.buildblocker.BlockingJobsMonitor.checkForPlannedBuilds(
    at hudson.plugins.buildblocker.BlockingJobsMonitor.checkForQueueEntries(
    at hudson.plugins.buildblocker.BuildBlockerQueueTaskDispatcher.checkAccordingToProperties(

Again, the stack trace allows us to determine the source of the problem affecting the queue.

C.1 Solution/Workaround

If you find yourself impacted by this issue, the remediation is either to verify that the regular expression configured on the page for the job shown in the referenced thread is valid, or to disable the plugin entirely. As this is an old plugin that was last released 5 years ago, we would recommend the latter.
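As background on the error above: in java.util.regex, a bare `*` is a quantifier with nothing to quantify, so `Pattern.compile` rejects it with exactly the "Dangling meta character" message seen in the trace, while `.*` is the wildcard the configuration usually intends. A minimal sketch for pre-checking a pattern (the `isValidBlockingPattern` helper name is illustrative, not part of the plugin):

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class BlockingPatternCheck {
    // Returns true if the pattern compiles as a Java regular expression.
    static boolean isValidBlockingPattern(String pattern) {
        try {
            Pattern.compile(pattern);
            return true;
        } catch (PatternSyntaxException e) {
            // e.g. "Dangling meta character '*' near index 0" for a bare "*"
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidBlockingPattern("*"));   // false: dangling quantifier
        System.out.println(isValidBlockingPattern(".*"));  // true: matches any job name
    }
}
```

Running each configured blocking-job expression through a check like this before saving it would catch the invalid pattern before it ever reaches the queue maintenance cycle.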

D Pipeline Graph Analysis Plugin

SEVERE    hudson.triggers.SafeTimerTask#run: Timer task hudson.model.Queue$MaintainTask@XXX failed
java.lang.IndexOutOfBoundsException: Index: 0
    at java.util.Collections$EmptyList.get(
    at org.jenkinsci.plugins.workflow.graph.StandardGraphLookupView.bruteForceScanForEnclosingBlock(
    at org.jenkinsci.plugins.workflow.graph.StandardGraphLookupView.findEnclosingBlockStart(
    at org.jenkinsci.plugins.workflow.graph.StandardGraphLookupView.findAllEnclosingBlockStarts(

D.1 Solution/Workaround

The solution for this error is to upgrade the workflow-api plugin to version 2.35 or higher, which includes the fix for the edge case that triggers this issue.

E Block Queued Job Plugin

SEVERE  hudson.triggers.SafeTimerTask#run: Timer task hudson.model.Queue$MaintainTask@XXX failed
    at org.jenkinsci.plugins.blockqueuedjob.condition.JobResultBlockQueueCondition.isBlocked(
    at org.jenkinsci.plugins.blockqueuedjob.BlockItemQueueTaskDispatcher.canRun(
    at hudson.model.Queue.getCauseOfBlockageForItem(

E.1 Solution/Workaround

The solution for this error is to disable the plugin. It was last released 5 years ago and does not have many installations, so if you can disable it, that is the most direct way to solve the problem.

F Kubernetes Plugin

The queue is blocked and no builds are being processed; shortly after, the instance goes down. After capturing a thread dump for the instance, you see a stack trace similar to the one shown below:

	java.lang.Object.wait(Native Method)

The queue is locked due to the KubernetesSlave._terminate() call.

F.1 Solution/Workaround

This is a known issue that was reported in JENKINS-54988 and is due to a problem with the Kubernetes plugin.

The fix for this issue was released in Kubernetes Plugin 1.21.1.

Comments

Ryan Campbell

    Please note that this general class of errors is tracked as

    The instances above are actually bugs in those particular plugins -- these RuntimeExceptions should be checked and appropriate responses made to the given extension point.

    Gregory Picot


    I have another case of Queue$MaintainTask failed:

    SEVERE: Timer task hudson.model.Queue$MaintainTask@XXXXX failed
    org.acegisecurity.userdetails.UsernameNotFoundException: XXXXX
    at hudson.model.User.getUserDetailsForImpersonation(
    at hudson.model.User.impersonate(
    at hudson.model.Queue$Item.authenticate(
    at hudson.model.Node.canTake(
    at hudson.model.Queue$JobOffer.getCauseOfBlockage(
    at hudson.model.Queue.maintain(
    at hudson.model.Queue$MaintainTask.doRun(
    at java.base/java.util.concurrent.Executors$
    at java.base/java.util.concurrent.FutureTask.runAndReset(
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(
    at java.base/java.util.concurrent.ThreadPoolExecutor$
    at java.base/

    The requested user exists in the LDAP, and is known by the master.

    It can happen when the LDAP is not reachable.

    In my case I hope to solve the issue by adding a cache to the LDAP configuration.
