
DevOptics performance impact in a CloudBees environment

Is there any impact in my CloudBees environment (CJOC, Masters, Agents) when using CloudBees DevOptics, in terms of performance and resources?


    Stephen Connolly Official comment

We can assess the performance impact of DevOptics (as of September 2018) against its two main feature sets.

NOTE: Users of the free DevOptics offering need only concern themselves with the Run Insights functionality, as the Value Stream functionality is only enabled in the Jenkins plugin for users whose subscription plan includes that service.

    Run Insights

    Run Insights was designed to minimise impact on the Jenkins instances that it collects data from:

• It leverages the Metrics plugin to perform the majority of the data collection. A significant percentage of Jenkins instances run this plugin anyway, and it has proven not to impact performance.
    • We perform a small amount of triage of the data on the Jenkins instance before sending it to the DevOptics service. This triage reduces the amount of data that needs to be sent.
    • We segment the data being sent in order to minimise network traffic: a small amount of data is sent every 15 seconds (to drive the top-level gauges, where usability testing showed that live information was the priority); the remainder of the data is sent once every 15 minutes.
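The two reporting cadences above can be sketched as follows. This is an illustrative model, not the plugin's actual code; the payload names and the simulation loop are assumptions made for the example.

```python
# Illustrative sketch of the segmented reporting cadence described above:
# a small "gauge" payload every 15 seconds, the full data set every 15 minutes.
GAUGE_PERIOD_S = 15        # live top-level gauges
FULL_PERIOD_S = 15 * 60    # remainder of the data

def payloads_due(tick_s):
    """Return which payload types are due at a given tick (in seconds)."""
    due = []
    if tick_s % GAUGE_PERIOD_S == 0:
        due.append("gauge")
    if tick_s % FULL_PERIOD_S == 0:
        due.append("full")
    return due

# Over one 15-minute window the small payload goes out 60 times,
# the full payload only once.
window = range(15, FULL_PERIOD_S + 1, 15)
gauges = sum("gauge" in payloads_due(t) for t in window)
fulls = sum("full" in payloads_due(t) for t in window)
```

The point of the split is that the frequently sent payload stays small, so the steady-state network cost is dominated by the once-per-15-minutes transfer.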

Our current measurements show that the CPU usage of this plugin is on the order of seconds per 15-minute interval for a typical master under typical load, though it is difficult to separate the CPU usage of the DevOptics plugin from the CPU usage of Jenkins in general.

The 15-minute data set size scales with the number of Jenkins build agents and the number of labels in use, so systems with many agents and many labels will send a larger data set than systems with few agents and/or labels. The data set size is independent of the number of jobs and the number of builds.
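The scaling behaviour can be made concrete with a toy model. The record sizes and the function below are hypothetical, used only to illustrate that payload size depends on agents and labels but not on jobs or builds.

```python
# Toy model (not the actual wire format): the full payload carries one
# usage record per agent and one per label, in arbitrary illustrative units.
def full_payload_size(num_agents, num_labels, num_jobs, num_builds):
    AGENT_RECORD = 1   # one executor-usage record per agent
    LABEL_RECORD = 1   # one queue/usage record per label
    # num_jobs and num_builds are deliberately unused: the payload size
    # does not depend on them.
    return num_agents * AGENT_RECORD + num_labels * LABEL_RECORD

small_master = full_payload_size(num_agents=5, num_labels=3,
                                 num_jobs=10, num_builds=100)
large_master = full_payload_size(num_agents=500, num_labels=80,
                                 num_jobs=10, num_builds=100)
```

Doubling the number of jobs or builds leaves the payload unchanged; adding agents or labels grows it linearly.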

    Value Streams

Value Streams was also designed to minimise impact; however, this functionality requires the collection of more data.

    Each build of a job generates events for certain actions, e.g.:

    • Start of a build
    • Checkout of source code from Git
    • Fingerprinting of artifacts consumed or produced by the build
    • End of a build and the build result

The network traffic is proportional to the number of events, which ultimately depends on how many builds your Jenkins master runs.

The majority of these events are essentially simple filtering of the event information provided by the Jenkins Pub-Sub "light" Bus plugin. A significant percentage of Jenkins instances run this plugin anyway, and it has proven not to impact performance.
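The filtering step can be sketched roughly as below. This is a hypothetical illustration, not the plugin's code; the event type names and record shape are assumptions for the example.

```python
# Illustrative sketch: most Value Stream events are simple filtering of the
# bus event stream. Event type names here are invented for the example.
INTERESTING = {"run.started", "scm.checkout",
               "artifact.fingerprint", "run.completed"}

def to_devoptics_event(bus_event):
    """Keep only the event types Value Streams cares about, trimmed down."""
    if bus_event.get("type") not in INTERESTING:
        return None  # dropped on the master, never sent over the network
    return {"type": bus_event["type"], "job": bus_event.get("job")}

incoming = [
    {"type": "run.started", "job": "app"},
    {"type": "queue.enter", "job": "app"},    # not interesting, filtered out
    {"type": "run.completed", "job": "app"},
]
sent = [e for e in (to_devoptics_event(ev) for ev in incoming)
        if e is not None]
```

Because the filter runs before anything leaves the master, uninteresting bus events cost almost nothing: a dictionary lookup and an early return.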

The only non-trivial event is the "Checkout of source code from Git" event, as it initiates an analysis of the commit-tree delta for the corresponding Git repository. This analysis is performed on the build agent, i.e. off-loaded from the main Jenkins instance (unless they are the same machine). Currently, the algorithm for computing the tree delta of a checkout requires a synchronization point; there is an RFE to explore whether this can be removed. The side effect is that when multiple concurrent builds check out the same Git repository, and a large number of commits are added between builds, those builds may encounter delays if they attempt to check out simultaneously. Most users following typical development patterns should be unaffected, as builds are generally triggered by individual commits.
