Jenkins suddenly crashed and the following messages appear in the kernel logs:
[XXXXX] Out of memory: Kill process <JENKINS_PID> (java) score
[XXXXX] Killed process <JENKINS_PID> (java) total-vm:XXXkB, anon-rss:XXXkB, file-rss:XXXkB, shmem-rss:XXXkB
- CloudBees Core on modern cloud platforms - Managed Master
- CloudBees Core on modern cloud platforms - Operations Center
- CloudBees Core on traditional platforms - Client Master
- CloudBees Core on traditional platforms - Operations Center
- CloudBees Jenkins Distribution
- CloudBees Jenkins Enterprise - Managed Master
- CloudBees Jenkins Enterprise - Operations Center
- CloudBees Jenkins Team
- CloudBees Jenkins Platform - Client Master
- CloudBees Jenkins Platform - Operations Center
- Jenkins LTS
A Java process is made up of:
- the Java heap space (set via the -Xmx JVM option)
- the Metaspace (which replaced PermGen as of Java 8)
- the Native Memory area
Each one of these areas will use RAM. The memory footprint of Jenkins (a Java application) is the sum of the maximum Java heap size, the Metaspace size and the native memory area.
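As a sketch of how these areas are typically capped, the heap and the Metaspace can be bounded explicitly when launching Jenkins. The values below are illustrative placeholders, not recommendations:

```shell
# Hypothetical startup flags (values are illustrative only):
#   -Xmx                   caps the Java heap
#   -XX:MaxMetaspaceSize   caps the Metaspace
# Native memory (threads, buffers, JIT code cache) comes on top of both caps.
java -Xmx4g -XX:MaxMetaspaceSize=512m -jar jenkins.war
```

Note that the native memory area has no single hard cap, which is why the real footprint of the process is larger than -Xmx alone.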
It is important to understand that the Operating System itself and any other processes running on the machine have their own requirements regarding RAM and CPU. The Operating System uses a certain amount of RAM which leaves the remaining RAM to be split among Jenkins and any other processes on the machine.
(This does not indicate a problem with Jenkins. It indicates that the Operating System is unable to provide enough resources for all the programs it has been asked to run.)
The OOM Killer is a function of the Linux kernel that kills rogue processes requesting more memory than the OS can allocate, so that the system can survive. The function applies heuristics (it gives each process a score) to decide which process to kill when the system is in such a state. The process monopolizing the most memory and not releasing enough of it is the most likely to be killed.
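On a running Linux system you can inspect the score the kernel currently assigns to a process through the /proc filesystem (a sketch; shown here for the current shell):

```shell
# The kernel exposes each process's current "badness" score in /proc.
# A higher score means the process is more likely to be chosen by the
# OOM Killer. oom_score_adj (range -1000..1000) biases the score;
# -1000 exempts the process entirely.
cat /proc/self/oom_score
cat /proc/self/oom_score_adj
```

Replace `self` with the Jenkins PID to see how exposed the Jenkins process currently is.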
If you are affected by this error, there could be different causes:
- Too much memory is allocated to Jenkins
- Other processes are running on the same machine as Jenkins
The following are recommendations for each case.
You should not allocate as much memory to Jenkins as is available on the machine, because the Operating System needs resources to manage the system.
We recommend that the maximum memory allocated to the Jenkins JVM not exceed roughly 70% of the total memory available on the machine.
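As a worked example with hypothetical figures, on a machine with 16 GiB of RAM the 70% ratio works out as follows:

```shell
# Hypothetical sizing: with 16 GiB (16384 MiB) of total RAM, keep the
# memory allocated to the Jenkins JVM at or below ~70% of the total,
# leaving the remainder for the OS and any other processes.
total_mib=16384
max_jvm_mib=$((total_mib * 70 / 100))
echo "${max_jvm_mib}"    # 11468 MiB, roughly 11 GiB
```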
In this scenario, Jenkins is not the only process running on the machine but it is killed because it is the process consuming the most memory on the OS.
We strongly recommend that Jenkins be the only process running on the machine hosting it (apart from the Operating System itself). Should you run other processes, such as monitoring agents, ensure that they do not overload the system or otherwise that enough resources are available to handle the load on the machine.
How to find the culprit
It is possible to check the processes consuming the most memory at any time on the machine with commands like:
$ top -o %MEM
$ ps aux --sort -pmem
You can also dig into /var/log/kern.log or /var/log/dmesg.log. In these logs, locate the “Out of memory: Kill process <JENKINS_PID>” message. Just above that message, the kernel dumps the stats of the processes that were running. For example:
[...]
[XXXXX] [ pid ]   uid  tgid total_vm     rss nr_ptes swapents oom_score_adj name
[XXXXX] [  480]     0   480    13863     113      26        0         -1000 auditd
[XXXXX] [12345]   123 12345  4704977 3306330    6732        0             0 java
[XXXXX] [11939]     0 11939    46699     328      48        0             0 crond
[XXXXX] [11942]     0 11942    28282      45      12        0             0 sh
[XXXXX] [16789]   456 16789  1695936   38643     165        0             0 java
[...]
[XXXXX] Out of memory: Kill process 12345 (java) score 869 or sacrifice child
[XXXXX] Killed process 12345 (java) total-vm:18819908kB, anon-rss:13225320kB, file-rss:0kB, shmem-rss:0kB
[...]
In this example, the Jenkins PID was 12345 and it was killed. We can see in the summary (the Killed process line) that Jenkins was reserving ~18 GiB of memory (see total-vm), of which ~12.6 GiB resided in RAM (anon-rss). However, the table also shows another process, with PID 16789, reserving ~6.4 GiB of memory (note that the table's memory values are in 4 KiB pages). You can then investigate more about this other process and see what it does by running the following command:
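Since the table reports sizes in 4 KiB pages, a quick conversion of the total_vm figure for PID 12345 from the dump above recovers the kB value shown in the Killed process line:

```shell
# total_vm for PID 12345 is 4704977 pages of 4 KiB each:
pages=4704977
echo "$((pages * 4)) kB"    # 18819908 kB, matching "total-vm:18819908kB"
```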
$ ps -fp <pid>
For more details about the OOM Killer and this particular issue, have a look at the following links: