Concurrent Humongous Allocations

Issue

  • CloudBees Jenkins resources (Operations Center, Masters, Agents, etc.) are unstable or failing
  • I found a number of concurrent humongous allocation messages in my garbage collection log

 59.584: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 1891631104 bytes, allocation request: 1972856 bytes, threshold: 966367620 bytes (45.00 %), source: concurrent humongous allocation]
 59.593: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 1893728256 bytes, allocation request: 1972760 bytes, threshold: 966367620 bytes (45.00 %), source: concurrent humongous allocation]
 59.595: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 1895825408 bytes, allocation request: 1971840 bytes, threshold: 966367620 bytes (45.00 %), source: concurrent humongous allocation]
 59.597: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 1897922560 bytes, allocation request: 1971584 bytes, threshold: 966367620 bytes (45.00 %), source: concurrent humongous allocation]

These messages indicate that G1 is receiving humongous allocation requests while mixed collections are still in progress, so it declines to start a new concurrent marking cycle and heap occupancy keeps climbing. To see these entries in your own GC logs, you need verbose GC logging enabled, as described in our article on how to prepare Jenkins for support.
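As a minimal sketch of the relevant JDK 8 HotSpot options (on JDK 9 and later, unified logging via -Xlog:gc* replaces these flags), the G1Ergonomics lines above are produced when adaptive size policy logging is enabled alongside the usual verbose GC flags:

 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintAdaptiveSizePolicy -Xloggc:/path/to/gc.log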

Under the G1 GC garbage collection policy, a humongous allocation is defined as follows:

> Whenever your application is using the G1 garbage collection algorithm, a phenomenon called humongous allocations can impact your application performance with regard to GC. To recap, humongous allocations are allocations that are larger than 50% of the region size in G1.
>
> Having frequent humongous allocations can trigger GC performance issues, considering the way that G1 handles such allocations:
>
> * If the regions contain humongous objects, space between the last humongous object in the region and the end of the region will be unused. If all the humongous objects are just a bit larger than a factor of the region size, this unused space can cause the heap to become fragmented.
>
> * Collection of the humongous objects is not as optimized by the G1 as with regular objects. It was especially troublesome with early Java 8 releases: until Java 1.8u40 the reclamation of humongous regions was only done during full GC events. More recent releases of the Hotspot JVM free the humongous regions at the end of the marking cycle during the cleanup phase, so the impact of the issue has been reduced significantly for newer JVMs.

And to read more on the topic generally, see this article.
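To make the 50% rule concrete, here is a minimal, hypothetical Java sketch. The 2 MB region size is an assumption chosen for illustration (when -XX:G1HeapRegionSize is not set, the JVM derives the region size from the heap size); the 1,972,856-byte request is taken from the log excerpt above:

 public class HumongousDemo {
     public static void main(String[] args) {
         long regionSize = 2L * 1024 * 1024; // assumed region size: -XX:G1HeapRegionSize=2m
         long request = 1_972_856;           // allocation request taken from the log above
         // G1 treats any single allocation larger than half a region as humongous
         boolean humongous = request > regionSize / 2;
         System.out.println("request=" + request + " bytes, half-region=" + (regionSize / 2)
                 + " bytes, humongous=" + humongous); // prints humongous=true
     }
 }

An allocation of that size, such as a byte array of 1,972,856 elements, would therefore be placed in a dedicated humongous region rather than in a normal young-generation region.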

Environment

Resolution

  1. Ensure that you are following the best practices for JVM heap settings

  2. We also recommend setting -XX:G1HeapRegionSize explicitly, for example -XX:G1HeapRegionSize=8m. Note that you have to review the GC logs manually to determine the size appropriate for your case, and it must satisfy the following requirements (see the worked example after this list):

  • The specified region size must be a power of two between 1 and 32 megabytes
  • The region size must be large enough that no concurrent humongous allocation request seen in the GC logs is greater than 50% of it
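As a worked example, using the largest allocation request from the log excerpt above (your own logs will show different numbers): the largest request is 1,972,856 bytes, so the region size must be at least 2 x 1,972,856 = 3,945,712 bytes for that request to no longer be humongous. The next power of two at or above that value is 4 MB, so either of the following settings would satisfy the rule, with 8m leaving extra headroom for occasional larger requests:

 -XX:G1HeapRegionSize=4m
 -XX:G1HeapRegionSize=8m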
