You have received an error in the Jenkins application that contains Too many open files in the stack trace:
Caused by: java.io.IOException: Too many open files
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createNewFile(File.java:1006)
    at java.io.File.createTempFile(File.java:1989)
java.net.SocketException: Too many open files
    at java.net.PlainSocketImpl.socketAccept(Native Method)
    at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
- CloudBees Core
- CloudBees Jenkins Platform
- CloudBees Jenkins Enterprise
- CloudBees Jenkins Operations Center
At the user level, check the numerical limit of open files currently allowed. To see the current limits of your system, run ulimit -a on the command line as the user running Jenkins (usually jenkins-oc if you’re running CJOC). You should see something like this:
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 30
file size               (blocks, -f) unlimited
pending signals                 (-i) 30654
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 99
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
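Note that ulimit reports the limits of your current shell, which may differ from those of the already-running Jenkins process. To inspect the process directly, you can query /proc. A quick sketch, assuming the process was started from jenkins.war (adjust the pgrep pattern to match your installation) and that you run these commands as the Jenkins user or root:

# PID of the Jenkins process (the jenkins.war pattern is an assumption;
# if pgrep returns several PIDs, pick the right one manually)
JENKINS_PID=$(pgrep -f jenkins.war)

# Open file limits in effect for the running process
grep 'open files' /proc/$JENKINS_PID/limits

# Number of file descriptors the process currently holds open
ls /proc/$JENKINS_PID/fd | wc -l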
To increase limits, add these lines to /etc/security/limits.conf per our Best Practices:
jenkins soft nofile 4096
jenkins hard nofile 8192
jenkins soft nproc 30654
jenkins hard nproc 30654
Note that this assumes jenkins is the user running the Jenkins process. If you’re running JOC, the user is probably jenkins-oc.
You can now log out and log back in, then check that the limits have been correctly modified with ulimit -a.
Limits are applied when the user logs in, so the running Jenkins process keeps its old limits: you must restart Jenkins for the new limits to take effect.
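You can also verify the new values without a full interactive login by starting a login shell for the Jenkins user. A quick check, assuming the user is jenkins:

# limits.conf is applied by the pam_limits PAM module at login;
# confirm the module is enabled for the session types you use
grep pam_limits /etc/pam.d/su /etc/pam.d/login

# Soft and hard open file limits as seen by the jenkins user
# (-s /bin/bash overrides a nologin shell, which Jenkins accounts often have)
sudo su -s /bin/bash - jenkins -c 'ulimit -Sn; ulimit -Hn'

Also be aware that /etc/security/limits.conf only applies to PAM login sessions: if your Jenkins is managed by systemd, set LimitNOFILE= (and LimitNPROC=) in the service unit instead.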
If you still encounter open file descriptor issues after raising these limits, it is possible that a file handle leak is causing the problem to reappear eventually despite any fixed limit. To track it down, install the File Leak Detector plugin.
Once you install the File Leak Detector plugin, you can access it by going to Manage Jenkins > Open File Handles.
Once you click on Open File Handles you are presented with some options. We want to focus on two of them: error and threshold. Ideally, we want to pass in options like this:

error=/tmp/file_leak_detector.txt,threshold=5000

(Use a path that exists on your system.)
This means that once 5000 open files are reached, the plugin dumps the data to a text file, which you can then provide to our team for investigation. Typically, we treat anything over 5,000-7,000 open files as abnormal and worth investigating.
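If the UI is not reachable (for example, because the instance is already close to exhausting its descriptors), the same instrumentation can be attached at startup: the plugin is built on the standalone file-leak-detector Java agent, which accepts the same threshold and error options. A sketch, assuming you have downloaded the agent jar; both paths below are illustrative:

# Attach the file-leak-detector agent when starting Jenkins
java -javaagent:/opt/file-leak-detector.jar=threshold=5000,error=/tmp/file_leak_detector.txt \
     -jar /usr/lib/jenkins/jenkins.war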
If Jenkins or a Jenkins slave is running inside a container, you need to increase these limits inside the container. Before Docker 1.6, all containers inherited the ulimits of the Docker daemon. Since Docker 1.6, it is possible to configure the user limits that apply to a container.
You can change the daemon default limit to apply it to all containers:
docker -d --default-ulimit nofile=4096:8192
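On current Docker releases the daemon binary is dockerd, and default ulimits are more commonly set in /etc/docker/daemon.json (Docker's default configuration location). A minimal sketch of the equivalent configuration:

{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Soft": 4096,
      "Hard": 8192
    }
  }
}

Restart the Docker daemon after editing this file for the change to take effect.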
You can also override default values on a specific container:
docker run --name my-jenkins-container --ulimit nofile=4096:8192 -p 8080:8080 my-jenkins-image ...
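If you start the container through Docker Compose instead, the same per-container override can be declared in the service definition. A minimal sketch, where my-jenkins-image is the placeholder image name from the example above:

services:
  jenkins:
    image: my-jenkins-image
    ports:
      - "8080:8080"
    ulimits:
      nofile:
        soft: 4096
        hard: 8192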
Note: By default, Linux sets the nproc limit to the maximum value. It is possible to set the nproc limit for a container, but be aware that nproc is a per-user value and not a “per container” value.
More information about Docker ulimits can be found here: https://docs.docker.com/engine/reference/commandline/run/
More information about the Docker daemon configuration can be found here: https://docs.docker.com/engine/reference/commandline/daemon/