- Jenkins suddenly crashed and messages similar to the following appear in the kernel logs:
[XXXXX] Out of memory: Kill process <JENKINS_PID> (java) score <SCORE> or sacrifice child
[XXXXX] Killed process <JENKINS_PID> (java) total-vm:XXXkB, anon-rss:XXXkB, file-rss:XXXkB, shmem-rss:XXXkB
- CloudBees Jenkins Enterprise - Managed Master (CJEMM)
- CloudBees Jenkins Enterprise - Operations Center (CJEOC)
- CloudBees Jenkins Platform - Client Master (CJPCM)
- CloudBees Jenkins Platform - Operations Center (CJPOC)
- CloudBees Jenkins Team (CJT)
- Jenkins LTS
A Java process is made up of:
- the Java heap space (set via the -Xms and -Xmx JVM arguments)
- the Metaspace (which replaced the PermGen of Java 7 and earlier)
- the Native Memory area
Each one of these areas will use RAM. The memory footprint of Jenkins (a Java application) is the sum of the maximum Java heap size, the Metaspace size and the native memory area.
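As a rough illustration, the footprint can be estimated by adding the configured maximum heap, the Metaspace limit, and an allowance for native memory. The sizes below are made-up example values, not recommendations:

```shell
# Hypothetical sizing, in megabytes -- adjust to your own installation.
HEAP_MB=4096          # e.g. -Xmx4g
METASPACE_MB=512      # e.g. -XX:MaxMetaspaceSize=512m
NATIVE_MB=1024        # thread stacks, JIT code cache, NIO buffers, ...

TOTAL_MB=$((HEAP_MB + METASPACE_MB + NATIVE_MB))
echo "Approximate Jenkins JVM footprint: ${TOTAL_MB} MB"
```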
It is important to understand that the Operating System itself and any other processes running on the machine have their own requirements regarding RAM and CPU. The Operating System uses a certain amount of RAM which leaves the remaining RAM to be split among Jenkins and any other processes on the machine.
(This does not indicate a problem with Jenkins. It indicates that the Operating System is unable to provide enough resources for all the programs it has been asked to run.)
The OOM Killer is a function of the Linux kernel that kills processes when more memory is requested than the OS can allocate, so that the system itself can survive. It applies heuristics, giving each process a score, to decide which process to kill when the system is in such a state. A process that monopolizes a large amount of memory without releasing enough of it is the most likely to be killed.
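On Linux, you can inspect the score the kernel currently assigns to a process via /proc (shown here for the current shell; substitute the Jenkins PID):

```shell
# The kernel's current badness score for this process (higher = more likely to be killed)
cat /proc/$$/oom_score

# The adjustment applied on top of the heuristic. -1000 disables killing entirely,
# which is why system daemons such as auditd often show -1000.
cat /proc/$$/oom_score_adj
```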
If you are affected by this error, there could be different causes:
- Too much memory is allocated to Jenkins
- Other processes are running on the same machine as Jenkins
Following are recommendations for each case.
1) Too much memory allocated to Jenkins
You should not allocate to Jenkins as much memory as the machine has available, because the Operating System needs resources of its own to manage the system.
We recommend keeping the maximum memory allocated to the Jenkins JVM at or below roughly 70% of the total memory available on the machine.
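On Linux, a ceiling following that guideline can be derived from /proc/meminfo. This is a sketch of the 70% rule of thumb, not an exact sizing formula:

```shell
# MemTotal is reported in kB
TOTAL_KB=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)

# Keep the whole JVM (heap + Metaspace + native) within ~70% of physical RAM
JVM_CEILING_MB=$((TOTAL_KB * 70 / 100 / 1024))
echo "Suggested upper bound for the Jenkins JVM: ${JVM_CEILING_MB} MB"
```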
2) Other processes are impacting Jenkins
In this scenario, Jenkins is not the only process running on the machine but it is killed because it is the process consuming the most memory on the OS.
We strongly recommend that Jenkins be the only non-OS process running on the machine hosting it. Should you run other processes, such as monitoring agents, ensure that they do not overload the system, or at least that enough resources remain available to handle the load on the machine.
How to find the culprit
It is possible to check the processes consuming the most memory at any time on the machine with commands like:
$ top -o %MEM
$ ps aux --sort -pmem
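A quick way to keep the header while narrowing the output to the heaviest consumers (GNU ps syntax, as found on most Linux distributions):

```shell
# PID, resident set size (kB) and command of the five largest processes
ps -eo pid,rss,comm --sort=-rss | head -n 6
```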
You can also dig into the kernel logs, typically /var/log/kern.log or /var/log/dmesg.log. In these logs, locate the “Out of memory: Kill process” message:
[...]
[XXXXX] [ pid ]   uid  tgid total_vm     rss nr_ptes swapents oom_score_adj name
[XXXXX] [  480]     0   480    13863     113      26        0         -1000 auditd
[XXXXX] [12345]   123 12345  4704977 3306330    6732        0             0 java
[XXXXX] [11939]     0 11939    46699     328      48        0             0 crond
[XXXXX] [11942]     0 11942    28282      45      12        0             0 sh
[XXXXX] [16789]   456 16789  1695936   38643     165        0             0 java
[...]
[XXXXX] Out of memory: Kill process 12345 (java) score 869 or sacrifice child
[...]
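To extract just the OOM events from the logs, something like the following can help (log locations vary by distribution, so treat the paths as examples):

```shell
# Kernel ring buffer (may require root on some systems)
dmesg | grep -i "out of memory" || true

# Persisted kernel log, when present
grep -i "out of memory" /var/log/kern.log 2>/dev/null || true

# On systemd machines the kernel journal is often more convenient
journalctl -k | grep -i "out of memory" || true
```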
In this example, the Jenkins PID was 12345 and it is the process that was killed. Note that total_vm and rss in this report are counted in memory pages (typically 4 kB each), not kB: Jenkins had a virtual size of roughly 18 GB (total_vm) and a resident set of roughly 12.6 GB (rss). There is also another java process, PID 16789, with a virtual size of roughly 6.5 GB. You can then investigate more about this other process and see what it does by running the following command:
$ ps -fp <pid>
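The conversion from pages can be checked quickly in the shell. This assumes the common 4 kB page size (verifiable with getconf PAGESIZE):

```shell
# total_vm and rss in the OOM report are counted in pages, not kB
PAGE_KB=4
TOTAL_VM_PAGES=4704977   # the killed java process, PID 12345
RSS_PAGES=3306330

echo "virtual size:  $((TOTAL_VM_PAGES * PAGE_KB / 1024)) MB"   # prints 18378 MB
echo "resident size: $((RSS_PAGES * PAGE_KB / 1024)) MB"        # prints 12915 MB
```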
For more details about the OOM Killer and this particular issue, have a look at the following links: