The largest factor in estimating Jenkins server disk usage is the artifacts generated by builds. An artifact is the binary output of a build that you want to retain for testing or deployment. As you can imagine, artifact size can range from a Java JAR of just a few kilobytes up to a DVD ISO of several gigabytes, so it depends entirely on the project being built. The second major factor is your retention policy for these artifacts: Jenkins lets you set a retention policy for build artifacts, measured in days or in number of builds. Using these two numbers you can estimate the disk space required for a given project. Roughly:
artifact disk usage per project = artifact size * number of artifacts retained
project size = artifact disk usage + report disk usage
You might add a (generous) gigabyte for all the historical reports you want to keep for a given project. You’d then add this up for each project, with plenty of extra room for growth.
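The arithmetic above can be sketched as a short script. The project names, artifact sizes, and retention counts below are purely hypothetical examples:

```python
def project_disk_usage(artifact_size_gb, artifacts_retained, report_overhead_gb=1.0):
    """Estimate disk usage for one project, in gigabytes.

    artifact disk usage = artifact size * number of artifacts retained,
    plus a (generous) gigabyte for historical reports.
    """
    return artifact_size_gb * artifacts_retained + report_overhead_gb

# Hypothetical projects: (artifact size in GB, number of builds retained)
projects = {
    "java-service": (0.05, 50),   # small JAR, keep 50 builds
    "installer-iso": (4.0, 10),   # DVD ISO, keep 10 builds
}

total_gb = sum(project_disk_usage(size, kept) for size, kept in projects.values())
# Leave plenty of extra room for growth, e.g. double the estimate.
planned_gb = total_gb * 2
print(f"estimated: {total_gb:.1f} GB, planned: {planned_gb:.1f} GB")
```

Doubling the estimate is just one way of building in headroom; the important point is that the plan leaves substantial room for growth.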
Disk storage on the Jenkins server should be:
- Easily expandable, perhaps using volume managers such as LVM
- Regularly backed up
In general it does not need to be especially fast, so favor size over speed on the Jenkins server. Disk speed matters more on the build executors, which are ideally separate machines. Executor storage does not usually need to be backed up, as long as you can recreate an executor's configuration easily.
You will want several gigabytes (4-6 GB) of memory and several (4-8) CPU cores, depending on how many users access the system simultaneously. The most important consideration, though, is that storage should be easily expandable. In practice this translates to configuring a server with at least 500 GB-1 TB of disk, 6-8 GB of memory, and at least a 4-core processor, with the ability to expand disk storage as needed.
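The per-project retention policy discussed above is typically configured with a build discarder. In a declarative Pipeline it looks roughly like this; the specific day and build counts are only example values:

```groovy
pipeline {
    agent any
    options {
        // Keep build records for 30 days or 20 builds, but retain
        // artifacts for only the 5 most recent builds.
        buildDiscarder(logRotator(daysToKeepStr: '30',
                                  numToKeepStr: '20',
                                  artifactNumToKeepStr: '5'))
    }
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
    }
}
```

Keeping artifacts for fewer builds than the build records themselves is a common way to bound artifact disk usage while preserving build history.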