Too many open files


You have received an error in the Jenkins application which contains Too many open files in the stack trace.


Caused by: ... Too many open files
	at ...(Native Method)

Or: ... Too many open files
	at ...(Native Method)


  • CloudBees Core
  • CloudBees Jenkins Platform
  • CloudBees Jenkins Enterprise
  • CloudBees Jenkins Operations Center


At the user level, check the limit on the number of open files currently allowed. To see the current limits on your system, run ulimit -a on the command line as the user running Jenkins (usually jenkins, or jenkins-oc if you are running CJOC). You should see something like this:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 30
file size               (blocks, -f) unlimited
pending signals                 (-i) 30654
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 99
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
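The open files line (-n) is the one that matters here. You can also query just that limit; per the shell's ulimit builtin, -S and -H select the soft and hard values:

```shell
# Soft limit (-S): enforced right now. Hard limit (-H): the ceiling an
# unprivileged user can raise the soft limit to.
ulimit -Sn
ulimit -Hn
```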

To increase the limits, add these lines to /etc/security/limits.conf, per our Best Practices:

jenkins      soft   nofile  4096
jenkins      hard   nofile  8192
jenkins      soft   nproc   30654
jenkins      hard   nproc   30654

Note that this assumes jenkins is the user running the Jenkins process. If you’re running JOC, the user is probably jenkins-oc.

You can now log out, log back in, and check that the limits were correctly modified with ulimit -a.
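Note that limits.conf changes apply only to new sessions, not to a process that is already running. On Linux you can check what a running process actually has by reading its limits file under /proc. A sketch, assuming the Jenkins JVM can be found by matching jenkins.war; adjust the pgrep pattern to your installation:

```shell
# Find the Jenkins process (adjust the pattern to your installation);
# fall back to "self" so the command still demonstrates the output format.
JENKINS_PID=$(pgrep -f 'jenkins.war' 2>/dev/null | head -n 1)
grep -i 'open files' "/proc/${JENKINS_PID:-self}/limits"
```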


Additionally, please take into consideration the fact that depending on your Operating System, you might need to perform some additional changes for these settings to take effect. You might need to check with your Operating System team on the specifics of these changes.

  • Example: If you are running SUSE Linux Enterprise, you might need to verify that inside your /etc/pam.d/login file, you have this line: session required pam_limits.so

Limits are applied when the user logs in, so you must restart Jenkins for the new limits to take effect.
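Also note that if Jenkins runs as a systemd service, the PAM limits in /etc/security/limits.conf are generally not applied to it: systemd sets the service's limits itself. A sketch of a drop-in override, assuming the unit is named jenkins (the file path and values here are examples); run systemctl daemon-reload and restart Jenkins afterwards:

```ini
# /etc/systemd/system/jenkins.service.d/override.conf
[Service]
LimitNOFILE=8192
LimitNPROC=30654
```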

If after setting this you still encounter open file descriptor issues, it is possible there is a file handle leak that causes the problem to reappear eventually regardless of the limit. To track these down, install the File Leak Detector plugin.

Configure the File Leak Detector Plugin

Once you install the File Leak Detector plugin, you can access it by going to Manage Jenkins > Open File Handles.


Once you click Open File Handles you are presented with some options. We want to focus on two of them: error and threshold.

Ideally, you want to pass in options like this: error=/tmp/file_leak_detector.txt,threshold=5000 (use a path that exists on your system).

This means that once 5000 open files are reached, the plugin will dump the data to a text file, which you can then provide to our team for investigation. Typically, we consider anything over 5,000-7,000 open files abnormal.

As an alternative, you can use the lsof command to output a list of currently open files. Running it as root, for example lsof > lsof_yyyymmdd_output.txt, will generate a file which can be reviewed in addition to (or as an alternative to) the File Leak Detector plugin data.
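If lsof is not available, a quick count of a process's open descriptors can be taken from /proc directly (Linux only). A sketch using the current shell's own PID ($$) as a stand-in; substitute the Jenkins PID in practice:

```shell
# Each entry under /proc/<pid>/fd is one open file descriptor.
# $$ is this shell's PID; replace it with the Jenkins PID.
ls /proc/$$/fd | wc -l
```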


If Jenkins or a Jenkins agent is running inside a container, you need to increase these limits inside the container. Before Docker 1.6, all containers inherited the ulimits of the docker daemon. Since Docker 1.6, it is possible to configure the user limits to apply to a container.

You can change the daemon default limits to apply them to all containers (on recent Docker versions the daemon binary is dockerd; older versions used docker -d):

dockerd --default-ulimit nofile=4096:8192

You can also override default values on a specific container:

docker run --name my-jenkins-container --ulimit nofile=4096:8192 -p 8080:8080 my-jenkins-image ...

Note: By default, Linux sets the nproc limit to the maximum value. It is possible to set nproc for a container, but be aware that nproc is a per-user value, not a "per container" value.

More information about Docker ulimits and the Docker daemon configuration can be found in the Docker documentation.

Comments


  • Jason Azze

    "Note that this assumes jenkins is the Unix user running the Jenkins process."
    Indeed. And if you're running JOC, the user is probably jenkins-oc. It only took me a couple of hours to figure out I was changing the limits for the wrong user.

  • Steven Christenson

    Based on our results, it may NOT be sufficient to increase the Jenkins limits if the system limit is lower.

    In summary:

    1. Check the system limit:   cat /proc/sys/fs/file-max
    2. Increase the system limit:   sudo sysctl -w fs.file-max=100000
    3. Make the increase permanent:  sudo vi /etc/sysctl.conf
         change or add the line "fs.file-max = 100000"
    4. If you are spawning processes that run as other users, you may need to add /etc/security/limits.conf for those user(s) as well.

    Jenkins may need to be restarted to make all changes take effect.

    If you have the Metrics plugin installed, you can monitor the file-handle usage ratio (called vm.file.descriptor.ratio).

    There is a known file handle leak in Jenkins 2.60.x that manifests as a constant leak of log files. We've watched Jenkins accumulate 1000 additional file handles per day.
