Issue
Jenkins is unresponsive and/or the instance shows a growing number of BLOCKED threads like the following:
"Computer.threadPoolForRemoting [#...]" #XXXXX daemon [...] waiting for monitor entry [...] java.lang.Thread.State: BLOCKED (on object monitor) at hudson.plugins.sshslaves.SSHLauncher.afterDisconnect(SSHLauncher.java:1330) - waiting to lock <0x00000003c6da7ed0> (a hudson.plugins.sshslaves.SSHLauncher) at hudson.slaves.SlaveComputer$3.run(SlaveComputer.java:627) at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)
Environment
- CloudBees Jenkins Enterprise - Managed Master (CJE-MM)
- CloudBees Jenkins Platform - Client Master (CJP-CM)
- CloudBees Jenkins Team (CJT)
- Jenkins LTS
- SSH Slaves plugin
Resolution
threadPoolForRemoting is a widely used executor pool. Such a burst of threads is most likely caused by underlying bugs and issues that create threads so intensively that the executor service becomes clogged.
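To confirm that the pool is clogged, you can count how many of these threads are BLOCKED. The snippet below is a minimal diagnostic sketch using the standard ThreadMXBean API; the class name BlockedRemotingThreads is illustrative only, and since the check only sees the JVM it runs in, it must execute inside the Jenkins master process (a thread dump taken with jstack gives the same information):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class BlockedRemotingThreads {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        int blocked = 0;
        // Walk every live thread and count remoting pool threads that are
        // BLOCKED waiting on a monitor, as in the stack trace above.
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            if (info != null
                    && info.getThreadName().startsWith("Computer.threadPoolForRemoting")
                    && info.getThreadState() == Thread.State.BLOCKED) {
                blocked++;
            }
        }
        System.out.println("BLOCKED threadPoolForRemoting threads: " + blocked);
    }
}
```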
For this particular stack trace, which shows the problem is related to the SSH Launcher, there is an epic currently under investigation: JENKINS-27514. Until it is resolved, the following solutions and workarounds can help avoid the issue:
Upgrade SSH Agents plugin to version 1.24 or later
Many improvements have been made to the SSH Slaves plugin, so it is important to keep it up to date. In particular, version 1.24 contains fixes that prevent the thread spikes caused by SSH connections.
Configure SSH Agents with a launch timeout
By default, the SSH Launcher has no connection launch timeout, so a hung connection attempt can hold its thread indefinitely. One workaround for this issue is to specify a launch timeout in the Advanced configuration of your SSH Agents, which can also be applied in bulk as sketched below:
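If you have many agents, the same setting can be applied programmatically. The following is a minimal sketch, assuming a recent plugin version in which SSHLauncher exposes setLaunchTimeoutSeconds; the class name ApplyLaunchTimeout and the 60-second value are illustrative only:

```java
import java.io.IOException;

import hudson.model.Node;
import hudson.plugins.sshslaves.SSHLauncher;
import hudson.slaves.SlaveComputer;
import jenkins.model.Jenkins;

public class ApplyLaunchTimeout {
    /** Give every SSH-launched agent a 60-second launch timeout. */
    public static void apply() throws IOException {
        // Jenkins.get() requires a recent core; use Jenkins.getInstance() on older ones.
        for (Node node : Jenkins.get().getNodes()) {
            // Agents are backed by SlaveComputer; toComputer() can be null
            // for nodes that currently have no Computer object.
            if (!(node.toComputer() instanceof SlaveComputer)) {
                continue;
            }
            SlaveComputer computer = (SlaveComputer) node.toComputer();
            if (computer.getLauncher() instanceof SSHLauncher) {
                SSHLauncher launcher = (SSHLauncher) computer.getLauncher();
                // Example value: fail the launch after 60 seconds instead
                // of blocking a threadPoolForRemoting thread forever.
                launcher.setLaunchTimeoutSeconds(60);
                node.save(); // persist the change to the node's config.xml
            }
        }
    }
}
```

A finite timeout makes a stuck SSH handshake fail fast and release its threadPoolForRemoting thread instead of blocking it indefinitely.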
Use the NIO SSH Slaves plugin