Recently, while working on a Jenkinsfile, I got stuck on this piece of Groovy code:
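The original listing is not preserved here, but the pattern can be sketched as follows (application names and the deploy body are assumptions):

```groovy
def jobs = [:]
for (app in ['app1', 'app2', 'app3']) {
    jobs[app] = { println "Deploying ${app}" }   // every closure captures the same `app`
}

stage('Deploy') {
    parallel jobs
}
```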
In the above code, the jobs variable is a map from String to Groovy Closure objects. It is intended for the parallel step, which programmatically creates a multi-fork stage in Jenkins.
In the Blue Ocean interface, the stage appears with one parallel branch per application.
As you can probably guess, the intention is to concurrently deploy/print multiple distinct applications, colorfully named app1, app2, and app3, in a Jenkins stage “Deploy”.
However, it does not work, as shown in the console log output below (NOTE: the deployment code has been replaced with println for simplicity).
Although the keys (used for display names) are correct, the values, which are Closure objects for actual execution such as deployment or simple prints, are wrong.
The bug is subtle and puzzling: only the last element in the application list, regardless of the list's size and content, will be deployed or printed out (app3 in this example).
As we look further into it, we'll see that this problem has nothing to do with Map or Groovy. It can happen in any language that has closures. For example, the same problem can be reproduced more simply with a list in Groovy:
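A minimal sketch of the list-based Groovy version (variable names assumed):

```groovy
def closures = []
for (i in [1, 2, 3, 4, 5]) {
    closures << { println i }   // all five closures share the same variable i
}
closures.each { it() }          // prints 5 five times
```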
The same problem can be seen in the Go language (at least before Go 1.22, which changed loop variables to be per-iteration):
or in JavaScript:
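A sketch of the JavaScript version; `var` (like Groovy's `for` loop variable) is a single shared binding, so every closure sees its final value:

```javascript
const fns = [];
for (var i of [1, 2, 3, 4, 5]) {
  fns.push(() => i);
}
console.log(fns.map((fn) => fn())); // [ 5, 5, 5, 5, 5 ]
```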
In these list-based examples, “5” (the last value of the list) is always printed five times. It turns out that this surprising problem is quite common.
In fact, it is so common that the “Go Programming Language” book dedicates a whole section (5.6.1: Caveat: Capturing iteration variables) in its Chapter 5 to discuss this gotcha.
The reason is related to scoping rules: as we create closures in a loop and use the iteration variable (i in the three list examples), all the Closure objects created in the loop “capture” and share the same variable i (i.e., the same addressable memory location), not its value at that particular iteration (such as 0 in the first iteration). At the end of the loop, the variable i has been updated several times and holds the final value 5. Thus, when the individual Closure objects are eventually executed, the value they all see is 5, instead of 0-4 for each.
Now that we understand what went wrong, the fix is pretty simple: we simply declare a new variable within the loop body before using it in the closure. By doing so, each Closure object will have a separate variable (with distinct memory address) and value.
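The fixed Jenkinsfile version can be sketched as follows (names assumed; note the extra local variable):

```groovy
def jobs = [:]
for (app in ['app1', 'app2', 'app3']) {
    def appName = app   // TRICKY: necessary! gives each closure its own variable
    jobs[appName] = { println "Deploying ${appName}" }
}

stage('Deploy') {
    parallel jobs
}
```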
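In JavaScript, the same fix falls out of using `let`, which creates a fresh binding per iteration:

```javascript
const fns = [];
for (let i of [1, 2, 3, 4, 5]) {
  fns.push(() => i);
}
console.log(fns.map((fn) => fn())); // [ 1, 2, 3, 4, 5 ]
```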
In general, I would recommend adding a comment such as TRICKY: necessary! next to that line. This cautions other team members against accidentally removing the apparently useless line, out of a desire for premature optimization, and producing the subtly incorrect variants seen above.
The feedback is overall positive.
Some are outright hilarious.
Twitter users generally like the clear, digestible format on Apple News. Others have also started appreciating human-curated content on Apple News, especially in the context of “fake news” and the deeply divided 2020 election.
After the election, it was reported that President-elect Joe Biden “relies on Apple News to help him get headlines from other reputable media sources.” Politico also mentioned Biden's media diet, noting he is a “devoted fan” of Apple News and keeps news app notifications on his iPhone turned on to keep up with stories throughout the day.
P.S.: I’m just a little proud of my work’s impact. This post is not meant to express my political views.
The issue has been extensively discussed in this bug report. This pull request supposedly fixes the issue in the v0.19.0 release. However, I still occasionally see the issue. I have attempted different approaches, which have had different degrees of convenience and success on different networks.
One of them is the --host-only-cidr option of minikube start. In this post, we will look into each approach in more detail.
OpenConnect is a CLI client alternative for Cisco’s AnyConnect VPN. Here’s how you setup OpenConnect on Mac OSX:
OpenConnect can be installed via homebrew:
brew update
brew install openconnect
Connect. The only thing you should be prompted for is your VPN password.
sudo openconnect --user=<VPN username> <your vpn hostname>
This approach is the most convenient and most reliable in my experience. All you need to do is set up a list of port-forwarding rules for minikube's VirtualBox VM:
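The rules can be sketched as follows (port choices and rule names are assumptions; the VM is assumed to be named minikube):

```shell
# Forward the Kubernetes API server and a couple of NodePort services to localhost
VBoxManage controlvm minikube natpf1 "apiserver,tcp,127.0.0.1,8443,,8443"
VBoxManage controlvm minikube natpf1 "dashboard,tcp,127.0.0.1,30000,,30000"
VBoxManage controlvm minikube natpf1 "jenkins,tcp,127.0.0.1,31000,,31000"
```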
Then, you can set up a new Kubernetes context for working with VPN:
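A sketch of the context setup (cluster and context names are assumptions):

```shell
kubectl config set-cluster minikube-vpn --server=https://127.0.0.1:8443 --insecure-skip-tls-verify=true
kubectl config set-context minikube-vpn --cluster=minikube-vpn --user=minikube
```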
When working on VPN, you can switch kubectl to the new context:
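Likely just (context name assumed):

```shell
kubectl config use-context minikube-vpn
```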
All Minikube URLs must now be accessed through localhost in the browser.
For example, the standard Kubernetes dashboard URL such as:
must now be accessed via localhost:30000.
The same applies to other services deployed to minikube, such as the jenkins service shown above.
In addition, the standard eval $(minikube docker-env) pattern to reuse minikube's Docker daemon no longer works.
Instead, you have to adjust DOCKER_HOST accordingly and use docker --tlsverify=false ....
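A sketch, assuming the Docker daemon port 2376 was also forwarded to localhost:

```shell
export DOCKER_HOST=tcp://127.0.0.1:2376
export DOCKER_CERT_PATH=$HOME/.minikube/certs
docker --tlsverify=false images
```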
Finally, when not working on VPN, you can switch kubectl back to the old context:
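Likely just (context name assumed):

```shell
kubectl config use-context minikube
```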
Using the --host-only-cidr option: this approach is the simplest, but it has had less success than I hoped.
The idea behind this approach is that the AnyConnect VPN client likely routes 192.168.96.0/19 through its tunnel. This may conflict with the default Minikube network of 192.168.99.0/24. Therefore, we use minikube start --host-only-cidr 10.254.254.1/24 to instruct minikube to use a different, unused network.
It is worth a try but it often does not work in my experience.
This post collects a few recipes for working with the GitHub API: curl commands embedded in Groovy-based Jenkinsfile code, along with some Jenkinsfile DSLs.
The equivalent curl command is as follows, with the JSON processing done in jq:
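A sketch of such a command (owner/repo and credential variables are assumptions; the endpoint is GitHub's create-a-comment API for issues/pull requests):

```shell
curl -s -u "$GITHUB_USER:$GITHUB_TOKEN" \
  -X POST \
  -d '{"body": "Build started..."}' \
  "https://api.github.com/repos/myorg/myrepo/issues/${CHANGE_ID}/comments" \
  | jq '.id'
```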
Reference: Create a comment.
Based on this article.
The Jenkins-provided environment variable $CHANGE_ID, in the case of a pull request, is the pull request number.
At the end of a Jenkins build for a feature branch (NOT develop/master), you may want to email some developer about its status, as opposed to blasting a whole distribution list.
Note that in Git, there is no metadata recording a branch's creator, as discussed here. Instead, it makes more sense to notify the latest/active committer, who is likely the owner of the branch.
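The idea can be sketched like this (emailext is from the Email Extension plugin; the log-parsing details are assumptions):

```groovy
node {
    checkout scm
    // The most recent committer's email on the current branch
    def committerEmail = sh(
        script: "git log -1 --format='%ce'",
        returnStdout: true
    ).trim()
    emailext(
        to: committerEmail,
        subject: "Build ${currentBuild.currentResult}: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
        body: "See ${env.BUILD_URL} for details."
    )
}
```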
Searching for how to delete a branch in the GitHub API's Branches reference does not return anything. In fact, to delete a branch, we have to delete its HEAD reference, as shown here.
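A sketch (owner/repo and branch name assumed; the endpoint deletes the branch's Git ref):

```shell
curl -s -u "$GITHUB_USER:$GITHUB_TOKEN" -X DELETE \
  "https://api.github.com/repos/myorg/myrepo/git/refs/heads/my-feature-branch"
```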
1) When processing data from Github API, note that any commit has an author and a committer, as shown below.
While the two fields are usually the same in normal commits (with the same associated email and timestamp), they have different meanings. In summary, the author is the one who created the content, and the committer is the one who committed it. The two fields can differ in some common GitHub workflows, for example when a patch or rebased commit is applied by someone other than its author.
Due to that subtle difference between committer and author in different scenarios, one has to be careful when using data sent by the GitHub API in a Jenkins pipeline. For example, you may want to send email to the repository owner (committer) at the end of a pull request build, but what if someone adds a commit via the GitHub web interface (the committer email would be “no-reply@github.com”, which is not helpful)?
2) There is an API rate limit for the free public Github API (note “X-RateLimit-Limit” and “X-RateLimit-Remaining” in output below).
You are likely to hit this rate limit quickly if you are polling the repos for updates. Instead of polling from your CI (e.g., Jenkins) system, it is recommended to use Github webhooks.
The Jsonnet tool can help reduce the hassle of maintaining such JSON data. Using Jsonnet templates, it is easier to organize data and reduce the repeated code present in such JSON data. This post goes over a few common Jsonnet code recipes for generating JSON data.
At a minimum, make sure your jsonnet template files compile. The following example bash script finds all the manifest files and tries to compile each of them:
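A sketch of such a script (the .jsonnet file extension is an assumption):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Try to compile every .jsonnet manifest under the current directory
find . -name '*.jsonnet' -print0 | while IFS= read -r -d '' manifest; do
  if jsonnet "$manifest" > /dev/null; then
    echo "OK:   $manifest"
  else
    echo "FAIL: $manifest" >&2
    exit 1
  fi
done
```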
Conditionally adding items to a list.
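A sketch of the recipe (field names are assumptions):

```jsonnet
local enableExtras = true;

{
  plugins: ['git', 'credentials']
           + (if enableExtras then ['blueocean'] else []),
}
```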
Conditionally adding attributes to an object/map.
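A sketch of the object variant (field names are assumptions):

```jsonnet
local debug = true;

{
  name: 'my-service',
} + (if debug then { logLevel: 'debug' } else {})
```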
For this tutorial, we look at the following Groovy build wrapper as the example under test:
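Since the original listing is not shown here, the wrapper can be sketched along these lines (file name follows the vars/ convention; parameter names and stages are assumptions):

```groovy
// vars/buildWrapper.groovy
def call(Closure body = {}) {
    def config = [buildType: 'maven']     // defaults, overridden by the body
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()

    node {
        stage('Checkout') {
            checkout scm
        }
        stage('Build') {
            echo "Building with ${config.buildType}"
        }
    }
}
```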
After the shared library is set up properly, you can call the above Groovy build wrapper in Jenkinsfile as follows to use default parameters:
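For example (library name assumed):

```groovy
@Library('demo-shared-library') _
buildWrapper {}
```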
or you can set the parameters in the wrapper’s body as follows:
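For example (parameter name from the sketch above is an assumption):

```groovy
@Library('demo-shared-library') _
buildWrapper {
    buildType = 'gradle'
}
```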
In the next section, we will look into automated testing of both use cases using JenkinsPipelineUnit.
To use JenkinsPipelineUnit, it is recommended to set up IntelliJ following this tutorial.
To test the above buildWrapper.groovy using the Jenkins Pipeline Unit, you can start with a unit test for the second use case, as follows:
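A sketch of the first attempt (class and method names come from JenkinsPipelineUnit; the test name is an assumption):

```groovy
import com.lesfurets.jenkins.unit.BaseRegressionTest
import org.junit.Test

class DemoTest extends BaseRegressionTest {
    @Test
    void testBuildWrapperConfigured() {
        def buildWrapper = loadScript('vars/buildWrapper.groovy')
        buildWrapper.call {
            buildType = 'gradle'
        }
        printCallStack()
    }
}
```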
Unfortunately, when executing that unit test, it is very likely that you will get various errors that are not well-explained by JenkinsPipelineUnit documentation.
The short explanation is that the mock execution environment is not properly set up.
First, we need to call setUp() from the base class BaseRegressionTest of JenkinsPipelineUnit to set up the mock execution environment.
In addition, since most Groovy scripts contain the statement checkout scm, we need to mock the Jenkins global variable scm, which represents the SCM state (e.g., Git commit) associated with the current Jenkinsfile.
The simplest way to mock it is to set it to an empty state, as follows:
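For example, using JenkinsPipelineUnit's binding:

```groovy
binding.setVariable('scm', [:])
```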
We can also set it to a more meaningful value such as a Git branch as follows:
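A sketch of a more meaningful value (branch name and repository URL are assumptions):

```groovy
binding.setVariable('scm', [
    $class           : 'GitSCM',
    branches         : [[name: 'feature/my-branch']],
    userRemoteConfigs: [[url: 'https://github.com/myorg/myrepo.git']],
])
```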
However, an empty scm will usually suffice.
Besides Jenkins variables, we can also register different Jenkins steps/commands as follows:
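For example, a mock sh step can be registered with JenkinsPipelineUnit's helper:

```groovy
helper.registerAllowedMethod('sh', [String.class], { cmd -> println "sh: ${cmd}" })
```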
After going through the setup steps above, you should end up with a setup method like this:
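A sketch combining the steps above (the call-stack path is an assumption):

```groovy
@Override
@Before
void setUp() {
    super.setUp()                                    // initialize the mock environment
    binding.setVariable('scm', [:])                  // mock the scm global variable
    helper.registerAllowedMethod('sh', [String.class], { cmd -> println "sh: ${cmd}" })
    callStackPath = 'test/resources/callstacks/'     // location of expected call stacks
}
```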
Rerunning the above unit test will show the full stack of execution:
For automated detection of regression, we need to save the expected call stack above into a file at a location known to JenkinsPipelineUnit.
You can specify the location of such call stacks by overriding the field callStackPath of BaseRegressionTest in the setUp method.
The file name should follow the convention ${ClassName}_${subname}.txt, where subname is specified by the testNonRegression method in each test case.
Then, you can update the above test case to perform regression check as follows:
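A sketch of the updated test case:

```groovy
@Test
void testBuildWrapperConfigured() {
    def buildWrapper = loadScript('vars/buildWrapper.groovy')
    buildWrapper.call {
        buildType = 'gradle'
    }
    testNonRegression('configured')   // compares against DemoTest_configured.txt
}
```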
In this example, the above call stack should be saved into the DemoTest_configured.txt file at the location specified by callStackPath.
Similarly, you can also have another unit test for the other use case of buildWrapper.
Any change in buildWrapper.groovy will be detected as a test failure, as shown in the screenshot below.
In IntelliJ, we can click on the Click to see difference link to compare the actual call stack with the expected one saved in the text file.
This test class shows a complete example, together with files of expected call stacks.
You can also use PipelineUnitTests to test Jenkinsfile.
In most cases, testing a Jenkinsfile is similar to testing Groovy files in the vars folder, as explained above.
The process is very similar: you need to mock out some global variables and functions corresponding to Jenkins pipeline steps.
You will need printCallStack to obtain the expected output and save it into a text file. Then, you can use testNonRegression for automated verification that the Jenkinsfile has not regressed.
This test class shows an example of testing Jenkinsfile using PipelineUnitTests.
Note that, unlike Groovy files in the vars folder, Jenkinsfiles are updated regularly and are usually NOT depended on or used by any other code. Therefore, automated tests for Jenkinsfile are not very common, because of the cost/effort required.
NOTE: this setup is NOT intended for Jenkins plugin or core development.
It is best to start a new project. Note that Gradle and Groovy are best installed with the sdk tool instead of Homebrew. Next, set up dependencies for Jenkins plugin files, which are of type .hpi or .jpi.
Modify build.gradle to add the following lines.
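A sketch of those lines (versions and the plugin list are assumptions; the Jenkins repository URL is the public one):

```groovy
repositories {
    mavenCentral()
    maven { url 'https://repo.jenkins-ci.org/public/' }
}

dependencies {
    compile group: 'org.jenkins-ci.main', name: 'jenkins-core', version: '2.107.3'
    compile group: 'org.jenkins-ci.plugins', name: 'matrix-auth', version: '2.2', ext: 'jar'
    // Last resort for plugins whose coordinates cannot be found
    compile fileTree(dir: 'lib', include: ['*.jar'])
}
```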
The above example will grab Jenkins core libraries, Matrix Authorization Plugin hpi, other plugin dependencies and javadocs for all imported libraries. Having these libraries imported will enable code auto-completion, syntax checks, easy refactoring when working with Groovy scripts for Jenkins. It will be a great productivity boost.
NOTE 1: The last line, compile fileTree, is the last resort for any Jenkins plugin for which you cannot find the right group ID and artifact ID. Such cases are rare these days but cannot be completely ruled out.
NOTE 2: The ext: 'jar' option is VERY important: it ensures that jar files, instead of hpi/jpi files, are downloaded and understood by IntelliJ. Without the ext option, IntelliJ won't find the JAR files nested inside the hpi/jpi files, which are the default binary format for Jenkins plugins.
The final build.gradle will look like this. All of the above setup should suffice for working with Groovy Init Scripts. For working with Jenkins Shared Pipeline Libraries, we should take one extra step shown as follows.
All Groovy files in Jenkins shared library for Pipelines have to follow this directory structure:
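The standard layout, per the Jenkins shared library documentation (package names are examples):

```
(root)
+- src/                     # Groovy source files
|   +- org/foo/Bar.groovy   # class org.foo.Bar
+- vars/
|   +- foo.groovy           # global variable "foo"
|   +- foo.txt              # help text for "foo"
+- resources/
|   +- org/foo/bar.json     # static helper data
```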
Note that Groovy code can live in both the src and vars folders. Therefore, you need to add the following lines to build.gradle to inform Gradle of the locations of the Groovy source code:
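A sketch of the sourceSets block:

```groovy
sourceSets {
    main {
        groovy {
            // Both folders contain Groovy source in a shared library
            srcDirs = ['src', 'vars']
        }
    }
}
```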
Optionally, for unit testing Jenkins shared library, we have to add the following dependencies into our build.gradle file.
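A sketch (versions are assumptions; the coordinates are JenkinsPipelineUnit's):

```groovy
dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.12'
    testCompile group: 'com.lesfurets', name: 'jenkins-pipeline-unit', version: '1.1'
}
```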
Please see this blog post for more details on unit testing. The final build.gradle will look like this.
IntelliJ can’t auto-complete Jenkins pipeline steps such as echo
or sh
out of the box.
We have to make it aware of those Jenkins pipeline DSLs, via a generic process explained here.
Fortunately, it is much easier than it looks, and you don't have to actually write a GroovyDSL script for tens of Jenkins pipeline steps. Jenkins makes it easy by auto-generating the GroovyDSL script, which is accessible via the “IntelliJ IDEA GDSL” link, as shown in the screenshot below.
The “IntelliJ IDEA GDSL” link can be found by accessing “Pipeline Syntax” section, which is visible in the left navigation menu of any Pipeline-based job (e.g., “Admin” job in the example above). After clicking on the “IntelliJ IDEA GDSL” link, you will be able to download a plain text file with content starting like this:
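The downloaded file starts roughly like this (abbreviated and illustrative; the exact content depends on your Jenkins instance and installed plugins):

```groovy
//The global script scope
def ctx = context(scope: scriptScope())
contributor(ctx) {
    method(name: 'echo', type: 'Object', params: [message: 'java.lang.String'], doc: 'Print Message')
    method(name: 'error', type: 'Object', params: [message: 'java.lang.String'], doc: 'Error signal')
}
```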
As you can see, it is a GroovyDSL file that describes the known pipeline steps, such as echo and error.
Note that GDSL files can be different for different Jenkins instances, depending on Pipeline-supported plugins currently installed on individual Jenkins instance.
To make IntelliJ aware of the pipeline steps available on our Jenkins, we need to place that GDSL file somewhere in the source folders. As shown in the last section, anywhere in the vars and src folders is eligible, although I personally prefer to put the GDSL file in the vars folder (for example).
After installing the GDSL file in a proper location, IntelliJ may complain with the message DSL descriptor file has been changed and isn't currently executed, and you have to click Activate back to make IntelliJ aware of the current DSLs. After that, you can enjoy auto-completion as well as documentation of the Jenkins Pipeline DSLs.
The container and the container image should be the abstractions for the development of distributed systems. Similar to what objects and classes did for OOP, thinking in terms of containers abstracts away the low-level details of code and allows us to think in higher-level design patterns. Based on how containers interact with other containers and get deployed onto the actual underlying machines, the authors divide the patterns into three main groups:
One example is a /health endpoint. The more contentious patterns are probably the single-node, multi-container patterns, especially the sidecar pattern. The most common anti-pattern is trying to merge the functionality of the sidecar container into the main container. Analogously, we have seen a similar anti-pattern in OOP that ends up with a large class trying to do many things at once. There are several benefits to using separate containers:
These scripts are written in Groovy, and get executed inside the same JVM as Jenkins, allowing full access to the domain model of Jenkins.
For a given hook HOOK, the following locations are searched:
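Per the Jenkins documentation on Groovy hook scripts, the searched locations are:

```
/WEB-INF/HOOK.groovy
/WEB-INF/HOOK.groovy.d/*.groovy
$JENKINS_HOME/HOOK.groovy
$JENKINS_HOME/HOOK.groovy.d/*.groovy
```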
The init hook is the most commonly used (i.e., HOOK=init).
The following sections show how some of the most common tasks and configurations in Jenkins can be achieved by using such Groovy scripts.
For example, in this project, many such scripts are added into a Dockerized Jenkins master and executed when starting a container, to replicate the configuration of the production Jenkins instance. This gives us the ability to quickly spin up local Jenkins instances for development, or for troubleshooting issues in the production Jenkins.
On a side note, IntelliJ IDEA is probably the best development tool for working with these Groovy Scripts. Check out these instructions on how to set it up in IntelliJ. UPDATED ON 2018/09/29: More on IntelliJ setup is discussed in this blog post.
This section shows how to enable different authorization strategies in Groovy code.
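For example, a “logged-in users can do anything” strategy can be sketched as follows (class names from Jenkins core; the anonymous-read choice is an assumption):

```groovy
import jenkins.model.Jenkins
import hudson.security.FullControlOnceLoggedInAuthorizationStrategy

def strategy = new FullControlOnceLoggedInAuthorizationStrategy()
strategy.setAllowAnonymousRead(false)
Jenkins.instance.setAuthorizationStrategy(strategy)
Jenkins.instance.save()
```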
Matrix-based authorization: Gives all authenticated users admin access:
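A sketch (class names from the matrix-auth plugin and Jenkins core):

```groovy
import jenkins.model.Jenkins
import hudson.security.GlobalMatrixAuthorizationStrategy

def strategy = new GlobalMatrixAuthorizationStrategy()
strategy.add(Jenkins.ADMINISTER, 'authenticated')   // all authenticated users get admin
Jenkins.instance.setAuthorizationStrategy(strategy)
Jenkins.instance.save()
```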
For importing the GlobalMatrixAuthorizationStrategy class, make sure that the matrix-auth plugin is installed.
For full list of standard permissions in the matrix, see this code snippet.
Note that the matrix can be different if different plugins are installed.
For example, the “Replay” permission for Runs is not simply hudson.model.Run.REPLAY, since there is no such static constant.
Such permission is only available after Workflow CPS plugin is installed.
Therefore, we can only set “Replay” permission for Runs with the following:
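A sketch, looking the permission up by its ID instead of a static constant (the ID string and the sid are assumptions):

```groovy
strategy.add(hudson.security.Permission.fromId('hudson.model.Run.Replay'), 'authenticated')
```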
References
In addition to enabling an authorization strategy, we should also set some basic configurations for hardening Jenkins. These include the various options you see in the Jenkins UI under Manage Jenkins > Configure Global Security.
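A partial sketch of such hardening (class names from Jenkins core; which options you need is situational):

```groovy
import jenkins.model.Jenkins
import hudson.security.csrf.DefaultCrumbIssuer

def jenkins = Jenkins.instance
jenkins.setCrumbIssuer(new DefaultCrumbIssuer(true))   // enable CSRF protection
jenkins.setSlaveAgentPort(-1)                          // disable the inbound agent port
jenkins.save()
```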
Some options do not work for versions before 2.46, according to this. For disabling the Jenkins CLI, you can simply add the Java argument -Djenkins.CLI.disabled=true on Jenkins startup.
References
Adding Credentials to a new, local Jenkins for development or troubleshooting can be a daunting task. However, with the following scripts and the right setup (NEVER commit your secrets into VCS), developers can automate adding the required Credentials into the new Jenkins.
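For example, adding a username/password credential can be sketched as follows (class names from the credentials plugin; the id and environment variables are assumptions):

```groovy
import com.cloudbees.plugins.credentials.CredentialsScope
import com.cloudbees.plugins.credentials.SystemCredentialsProvider
import com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl

// Read secrets from the environment; never hard-code them or commit them to VCS
def cred = new UsernamePasswordCredentialsImpl(
    CredentialsScope.GLOBAL, 'my-cred-id', 'demo credential',
    System.getenv('DEMO_USER'), System.getenv('DEMO_PASSWORD'))
SystemCredentialsProvider.instance.credentials.add(cred)
SystemCredentialsProvider.instance.save()
```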
This post discusses the src folder, as opposed to the vars folder, of a Jenkins shared library. All Groovy files in a Jenkins shared library for pipelines have to follow this directory structure:
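The standard layout, per the Jenkins shared library documentation (package names are examples):

```
(root)
+- src/                     # Groovy source files
|   +- org/foo/Bar.groovy   # class org.foo.Bar
+- vars/
|   +- foo.groovy           # global variable "foo"
+- resources/
|   +- org/foo/bar.json     # static helper data
```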
The src folder is set up with Groovy files in the standard directory structure, such as “src/org/foo/bar.groovy”. This folder is added to the classpath when Jenkins pipelines are executed.
Any custom function in a Jenkins shared library eventually has to use basic Pipeline steps, such as sh or git, which are made available through various Jenkins plugins. However, Groovy classes in a shared Jenkins library cannot simply call those basic steps directly. There are a few approaches for accessing those Pipeline steps indirectly.
Groovy scripts in the src folder can implement methods that invoke Pipeline steps, like this:
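A sketch (package, file name, and repository URL are assumptions; note the functions are not enclosed in a class):

```groovy
// src/org/demo/buildUtils.groovy
package org.demo

def checkoutAndBuild() {
    git url: 'https://github.com/myorg/myrepo.git'
    sh './gradlew build'
}
```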
The method is stored in the library's implicit class and can then be invoked from a Scripted Pipeline like this:
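For example:

```groovy
def utils = new org.demo.buildUtils()
utils.checkoutAndBuild()
```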
However, this approach requires that the methods defined in buildUtils.groovy not be enclosed in any class. The “implicit class” mentioned in this approach refers to the fact that any Groovy script, such as buildUtils.groovy, has an implicit class (e.g., org.demo.buildUtils) that contains all of the functions defined in it.
This approach has limitations; for example, it prevents the declaration of a superclass.
In the following example, we create an enclosing class that would facilitate things like defining a superclass.
In that case, to access standard DSL steps such as sh or git, we can explicitly pass the special global variables env and steps into a constructor or a method of the class. The global object env contains all current environment variables, while steps contains all standard pipeline steps.
Note that the class must also implement the Serializable interface, to support saving its state if the pipeline is stopped and resumed.
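A sketch of such a class (names and the build command are assumptions):

```groovy
// src/org/demo/Builder.groovy
package org.demo

class Builder implements Serializable {
    def env
    def steps

    Builder(env, steps) {
        this.env = env
        this.steps = steps
    }

    def build(String target) {
        steps.echo "Building ${target} on ${env.BRANCH_NAME}"
        steps.sh "make ${target}"
    }
}
```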
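And a sketch of how it is invoked from a Scripted Pipeline (library name assumed):

```groovy
@Library('demo-shared-library') _
def builder = new org.demo.Builder(env, steps)

node {
    builder.build('all')
}
```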
In the final example, we can also use a static method and pass in the script object, which already has access to everything, including environment variables (script.env) and Pipeline steps (such as script.sh).
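A sketch (names are assumptions):

```groovy
// src/org/demo/BuildUtils.groovy
package org.demo

class BuildUtils {
    static void build(script, String target) {
        script.echo "Branch: ${script.env.BRANCH_NAME}"
        script.sh "make ${target}"
    }
}
```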
The above example shows the script object being passed into one static method, invoked from a Scripted Pipeline as follows (note the import static):
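For example (library name assumed; `this` is the script object):

```groovy
@Library('demo-shared-library')
import static org.demo.BuildUtils.build

node {
    build(this, 'all')
}
```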
All three approaches shown in three examples above are valid in Scripted Jenkinsfile.
However, as recommended by CloudBees Inc., the src folder is best suited for utility classes that contain a bunch of static Groovy methods.
It is easier to use global variables in the vars directory instead of classes in the src directory, especially when you need to support declarative pipelines in your team.
The reason is that in declarative pipelines, the custom functions in Jenkins shared libraries must be callable in declarative syntax, e.g., the “myCustomFunction var1, var2” format. As you can see in the examples above, only in Method 3 (static methods in an explicit class), where custom functions are defined as static methods, is the invocation compatible with declarative pipeline syntax.
When using the src area's Groovy code with the library step, you should use a temporary variable to reduce verbosity, as follows:
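A sketch (library and package names assumed; the library step returns an object that can be navigated by package):

```groovy
def demo = library('demo-shared-library').org.demo

node {
    demo.BuildUtils.build(this, 'all')   // shorter than spelling out the full path each time
}
```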
There are times when we can ssh to our servers but simply can't ping those servers.
One possible explanation of a seemingly perplexing situation like the above is that ICMP requests (i.e., ping) are blocked. It is not unheard of for an ISP or a network administrator to block ICMP requests.
To work around that limitation, you can use a “TCP ping” against a port, using a tool like nmap. The following example checks whether a host can be reached via port 80:
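A sketch (hostname assumed; -Pn skips nmap's ICMP-based host discovery, so the TCP probe is sent regardless):

```shell
nmap -p 80 -Pn example.com
```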
Based on the official instructions and this, you need to add the following code snippet into your Maven pom.xml:
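A minimal sketch of the jacoco-maven-plugin configuration (the version is an assumption):

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.2</version>
  <executions>
    <execution>
      <id>prepare-agent</id>
      <goals><goal>prepare-agent</goal></goals>
    </execution>
    <execution>
      <id>report</id>
      <phase>test</phase>
      <goals><goal>report</goal></goals>
    </execution>
  </executions>
</plugin>
```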
At a minimum, you need “prepare-agent” before the test phase, for Jacoco instrumentation, and “report” after the test phase, for generating the report. Alternatively, you can get code coverage and generate the same report without making any changes to the pom file, by running the following command:
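The pom-free invocation uses the plugin prefix (sketch):

```shell
mvn jacoco:prepare-agent test jacoco:report
```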
You may get the following error:
There are two options to fix that error.
The easiest way is to specify the groupId and artifactId of the plugin explicitly. You can also pin a version to ensure the stability of your build pipeline.
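A sketch with fully qualified coordinates (the version is an example):

```shell
mvn org.jacoco:jacoco-maven-plugin:0.8.2:prepare-agent test org.jacoco:jacoco-maven-plugin:0.8.2:report
```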
The more long-term solution is to add the following to your Maven “settings.xml”:
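Registering the plugin group lets Maven resolve the jacoco prefix:

```xml
<pluginGroups>
  <pluginGroup>org.jacoco</pluginGroup>
</pluginGroups>
```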
If mocking is involved in unit tests, you need to use “instrument” and “restore-instrumented” steps.
Reference:
Officially, multi-module Maven projects are supported differently by Jacoco, as documented here. Instrumentation is similar, but the challenge of multi-module Maven projects lies in how to collect and report the code coverage of all modules correctly. The standard Jacoco Maven goals, as shown in the sections above, work on single modules only: tests are executed within a module and contribute coverage only to code within the same module, and coverage reports are created for each module separately.
In the past, there were some ad-hoc solutions, such as this (for Jacoco 0.5.x), to work around that limitation. However, those patterns are error-prone and hard to customize, especially when Jacoco is used with the Surefire plugin. Fortunately, Jacoco recently introduced a new Maven goal, “report-aggregate”, in its 0.7.7 release, which aggregates code coverage data across Maven modules. Its usage is documented in the same link (quoted below), but the documentation is too succinct and not very helpful for new users.
Create a dedicated module in your project for generation of the report. This module should depend on all or some other modules in the project.
Let's say you have a multi-module Maven project with this structure:
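For illustration (module names assumed):

```
my-project/
├── pom.xml      (root POM)
├── module-a/
└── module-b/
```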
To use Jacoco “report-aggregate” goal for these modules, you first need to add a dedicated “coverage” module. This “coverage” module should be added into the root POM. The multi-module Maven project should now look like this:
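With the same assumed names:

```
my-project/
├── pom.xml      (root POM, now also listing "coverage" as a module)
├── module-a/
├── module-b/
└── coverage/
```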
The POMs for the other modules do not need to change at all. The POM for the “coverage” module will look like this:
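An abbreviated sketch (group/artifact IDs assumed; the module depends on every module whose coverage should be aggregated):

```xml
<project>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.example</groupId>
    <artifactId>my-project</artifactId>
    <version>1.0-SNAPSHOT</version>
  </parent>
  <artifactId>coverage</artifactId>

  <dependencies>
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>module-a</artifactId>
      <version>${project.version}</version>
    </dependency>
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>module-b</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>report-aggregate</id>
            <phase>verify</phase>
            <goals><goal>report-aggregate</goal></goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
```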
Note that we still require “prepare-agent” step to run before the first test suite. Depending on what plugins are being used and how the modules are organized within the project, we might have different setup for that particular step. One option is to run from the command-line:
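One such invocation can be sketched as:

```shell
mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent verify
```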
Links:
In theory, a global threshold can be defined in coverage/pom.xml to enforce a code coverage standard across teams.
However, in practice, different teams are at different stages of module/service maturity and blindly having a global threshold will hamper teams working on newer services/modules.
In addition, it does not make sense to enforce code coverage on some Maven modules such as those generated in GRPC.
In Jacoco, you can set different coverage limits for individual modules instead of a global threshold for all modules. In the following example, you can specify a coverage threshold for module A by modifying module A’s pom.xml file:
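A sketch of a per-module “check” rule (the threshold and exclusion path are example values):

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>check</id>
      <goals><goal>check</goal></goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>LINE</counter>
                <value>COVEREDRATIO</value>
                <minimum>0.80</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
        <excludes>
          <exclude>**/grpc/**</exclude>
        </excludes>
      </configuration>
    </execution>
  </executions>
</plugin>
```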
As you can see, you can also specify files being excluded from coverage calculation.
Let’s say you start a new Node project.
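The initial steps can be sketched as follows (project and package names are assumptions):

```shell
mkdir my-app && cd my-app
npm init -y
npm install --save-dev grunt grunt-contrib-concat grunt-contrib-uglify grunt-contrib-watch
```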
At the end of these steps, you have a basic package.json and a Gruntfile. The basic Gruntfile would appear like this:
The load-grunt-tasks plugin. In the original basic Gruntfile, we have to manually load our Grunt plugins:
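That is, one grunt.loadNpmTasks call per plugin (plugin names assumed):

```javascript
grunt.loadNpmTasks('grunt-contrib-concat');
grunt.loadNpmTasks('grunt-contrib-uglify');
grunt.loadNpmTasks('grunt-contrib-watch');
```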
If you now uninstall a plugin via npm and update your package.json, but forget to update your Gruntfile, your build will break.
With the load-grunt-tasks plugin, you can collapse that down to the following one-liner:
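The one-liner, from the plugin's documented usage:

```javascript
require('load-grunt-tasks')(grunt);
```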
After requiring the plugin, it analyzes your package.json file, determines which of the dependencies are Grunt plugins, and loads them all automatically.
The load-grunt-config plugin. load-grunt-tasks shrank your Gruntfile a little, in code and complexity, but the task configurations still remain in the Gruntfile (defined in grunt.initConfig).
As you configure a large application, it will still become a very large file. This is where load-grunt-config comes into play: it lets you break up your Gruntfile config by task.
With load-grunt-config, your Gruntfile may look like this:
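A sketch (configPath points at this example's custom task folder):

```javascript
module.exports = function (grunt) {
  require('load-grunt-config')(grunt, {
    configPath: require('path').join(process.cwd(), 'grunt/tasks'),
  });
};
```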
Note that load-grunt-config also includes load-grunt-tasks's functionality.
The task configurations live in files in the ./grunt/tasks folder. By default, the ./grunt folder is used, but this example demonstrates a custom path. In other words, our directory structure should look like this:
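For illustration (file names follow the examples below):

```
my-app/
├── Gruntfile.js
├── package.json
└── grunt/
    └── tasks/
        ├── aliases.js
        └── concat.js
```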
The configuration for each task is defined in a file of the same name. For example, the concat task is defined in “grunt/tasks/concat.js”:
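A sketch (src/dest paths are assumptions):

```javascript
// grunt/tasks/concat.js
module.exports = {
  dist: {
    src: ['src/**/*.js'],
    dest: 'dist/app.js',
  },
};
```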
The list of registered task aliases, such as default, is defined in the aliases.js file:
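A sketch (task names assumed):

```javascript
// grunt/tasks/aliases.js
module.exports = {
  default: ['concat', 'uglify'],
};
```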
grunt-init
This blog post offers my simplistic view of how an internal DSL is implemented in Groovy via closure delegation. It shows the progression from a standard Java-like implementation, to its fluent version, to the final DSL form. This might help in understanding the inner workings of a DSL such as Jenkins's Pipeline steps. There are probably more advanced methods/frameworks for creating DSLs, but those are not in the scope of this post.
We want to implement a simple DSL that is similar to Pipeline steps in Jenkinsfile.
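The target mini-language can be sketched as a bare sequence of steps (step arguments are assumptions):

```groovy
echo 'Starting build'
sh 'make all'
echo 'Done'
```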
In this DSL example, users write a sequence of steps using a small, pre-defined set of custom statements, such as echo and sh above.
For each step in the DSL, the backend classes and objects will perform some execution in the background, using the relevant context specific to the domain (e.g., Jenkins domain).
For simplicity, println statements will be used in the following examples.
The advantage of a DSL is that developers can implement the backend in some fully-featured language such as Java, but the users don't need to know that language to use it. Such a separation is common in DevOps and automation frameworks, where the users want the flexibility of configuring based on their needs but don't want to be exposed to the implementation details (which are usually ugly and complicated). Instead, the users only need to learn the DSL to use it, while still having the flexibility to do what they want. One example can be found in the data science domain, where data scientists are usually more comfortable developing in R or SQL, but automated deployment frameworks or tools can be in another language such as Java.
First, we show a standard implementation in Java to show how backend execution can be implemented. In the advanced versions, the difference is only in its public interface to make it more user-friendly but the backend execution will be similar.
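A sketch of the plain Java-style backend (class and method bodies are assumptions; println stands in for real execution):

```groovy
class JavaStyleDsl {
    void echo(String message) {
        println "echo: ${message}"
    }

    void sh(String command) {
        println "sh: ${command}"
    }
}

def dsl = new JavaStyleDsl()
dsl.echo('Starting build')
dsl.sh('make all')
```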
The problem of this approach is that users have to write Java (or Groovy) code directly to use it.
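The fluent version can be sketched like this (each method returns `this` so calls can be chained):

```groovy
class BuilderDsl {
    BuilderDsl echo(String message) {
        println "echo: ${message}"
        return this
    }

    BuilderDsl sh(String command) {
        println "sh: ${command}"
        return this
    }
}

new BuilderDsl()
    .echo('Starting build')
    .sh('make all')
    .echo('Done')
```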
In this version, the Builder design pattern is used in the implementation.
As shown above, the code is much more fluent, with the object name builderDsl not repeated on every single line. As a result, the code is less verbose and much more user-friendly.
This first version of the Groovy implementation is presented here to show the connection with its Java counterparts.
As shown below, the input variable dsl in the closure can be abstracted away using delegate.
In this final version, only a very small piece of boilerplate code, GroovyDsl.executeBest, remains. The following lines form a mini language (i.e., a DSL) that can be exposed to users. Users can start using the DSL without having to learn Groovy or Java.
Note that executeBest is the equivalent but less straightforward way to do the same thing with delegate. Compared with execute, it has the benefit of NOT modifying the input reference closure. A more detailed discussion is here.
It is possible to take a Maven/JUnit-based test suite that takes too long to run on a single node and parallelize the test execution across multiple nodes instead. The Parallel Test Executor Plugin is exactly for that purpose.
Note that this is different from modifying the test harness (e.g., JUnit, TestNG) to parallelize test execution on a single node. That could be time-consuming and risks destabilizing the tests, while the chance of success is usually small.
More details can be found in the following links:
splitTests step defined by this plugin.

List of basic Jenkinsfile steps in this post:
checkout/git, emailext, findFiles, input, junit, parameters/properties, podTemplate, slackSend, stash/unstash, withCredentials.
checkout/git step

scm is the global variable for the current commit, branch, and repository of the Jenkinsfile.
checkout scm means checking out all other files at the same version as the Jenkinsfile associated with the running pipeline.
To check out another repository, you need to specify the parameters to the checkout step.
Reference:
emailext step

To send an email as an HTML page, set the content type to HTML and use ${FILE,path="email.html"} as the content.
In Jenkinsfile, the code should look like this:
Note that single-quoted strings, not double-quoted ones, are used for the body and presendScript parameters in the example code above.
Reference:
findFiles step

Doing it in Bash:
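A sketch of the equivalent with findFiles (from the Pipeline Utility Steps plugin); the glob pattern is an assumption.

```groovy
// findFiles returns an array of file descriptors with fields such as
// name, path, directory, length, and lastModified.
def files = findFiles(glob: 'target/**/*.jar')
files.each { f ->
    echo "Found ${f.path} (${f.length} bytes)"
}
```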
Reference:
findFiles step; related steps: readFile, writeFile.

input step

A simple input step can be used to ask for approval to proceed.
To ask for input from a list of multiple choices, you can use the advanced version of input.
Reference:
junit step

JUnit tests + PMD, FindBugs, CheckStyle. In the Blue Ocean interface, these will be displayed in a separate tab.
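A minimal sketch of publishing test results; the report path is an assumption. The PMD/FindBugs/CheckStyle reports are published by their own plugin-provided steps, which I omit here.

```groovy
node {
    // Keep the build going on test failures so results still get published
    sh 'mvn -B -Dmaven.test.failure.ignore=true verify'
    junit 'target/surefire-reports/*.xml'
}
```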
parameters/properties step

The parameters step adds certain job parameters for the overall pipeline job.
In Scripted Pipeline, its equivalent counterpart is the properties step, as shown below.
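A sketch of the Scripted-Pipeline form; the parameter names and defaults are placeholders.

```groovy
properties([
    parameters([
        string(name: 'DEPLOY_ENV', defaultValue: 'dev',
               description: 'Target environment'),
        booleanParam(name: 'DRY_RUN', defaultValue: true,
                     description: 'Skip the actual deployment')
    ])
])
```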
In the Jenkins UI, this will be converted into the job configuration shown when you click "View Configuration" for that job, as in the screenshot below. Note that the configuration on this page is read-only when using a Jenkinsfile. Any modifications made to the page will be ignored, leaving the configuration set in the Jenkinsfile final ("Infrastructure as Code").
Reference:
podTemplate step

This step is used to specify a new pod template for running jobs on a Kubernetes cluster.
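A sketch of the Kubernetes plugin's API; the image and container names are assumptions, and POD_LABEL is available in newer plugin versions (older versions used an explicit label parameter).

```groovy
podTemplate(containers: [
    containerTemplate(name: 'maven', image: 'maven:3-jdk-8',
                      command: 'cat', ttyEnabled: true)
]) {
    node(POD_LABEL) {
        container('maven') {
            sh 'mvn -B -DskipTests package'
        }
    }
}
```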
Reference:
slackSend step

Standard Jenkinsfile for testing Slack notifications (note: the step name provided by the Slack plugin is slackSend, not sendSlack).
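A minimal sketch of a Slack notification; the channel name is a placeholder.

```groovy
slackSend(channel: '#ci',
          color: 'good',
          message: "Build ${env.JOB_NAME} #${env.BUILD_NUMBER} succeeded")
```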
stash/unstash steps

The stash step can be used to save a set of files, to be unstashed later in the same build, generally for use in another workspace. unstash will restore the files into the same relative locations as when they were stashed. If you want to change the base directory of the stashed files, wrap the stash step in a dir step.
We should use stash/unstash to avoid the common anti-pattern of copying files into some special, globally visible directory such as the Jenkins home or one of its subdirectories. That anti-pattern makes it hard to support many jobs for many users: eventually there will be name clashes and, subsequently, convoluted file naming to avoid them.
Note that the stash and unstash steps are designed for use with small files. If the size is above 5 MB, we should consider an alternative, such as Nexus/Artifactory for jar files or a blob store for images.
Example usage of stash and unstash:
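A sketch of handing files between two workspaces; the node labels and file names are placeholders.

```groovy
node('builder') {
    sh 'mkdir -p out && echo "built artifact" > out/app.txt'
    // A dir() step could wrap this stash to change the base directory
    stash name: 'build-output', includes: 'out/**'
}
node('deployer') {
    unstash 'build-output'   // restores out/app.txt at the same relative path
    sh 'cat out/app.txt'
}
```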
Example output:
withCredentials step

There are different variations of the withCredentials step. The most common ones are:
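A sketch of the three common bindings (username/password, secret text, secret file); all credential ids are placeholders.

```groovy
withCredentials([
    usernamePassword(credentialsId: 'nexus-creds',
                     usernameVariable: 'NEXUS_USER',
                     passwordVariable: 'NEXUS_PASS'),
    string(credentialsId: 'api-token', variable: 'API_TOKEN'),
    file(credentialsId: 'service-config', variable: 'CONFIG_FILE')
]) {
    // The bound values are masked if they appear in the build log.
    sh 'ls -l "$CONFIG_FILE"'
}
```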
For a secret file, the file will be placed in a temporary secret location, and that location will be bound to a variable. If you want the secret files in specific locations, the workaround is to create symlinks to those secret files.
For the "private key with passphrase" credential type, sshagent is the only usage that I know of (the credential ID is jenkins_ssh_key in this example):
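A sketch of such a block; the remote host is a placeholder, while jenkins_ssh_key is the credential id from the text.

```groovy
sshagent(credentials: ['jenkins_ssh_key']) {
    // The key (with its passphrase handled by the agent) is available to ssh
    sh 'ssh -o StrictHostKeyChecking=no deploy@target.example.com uptime'
}
```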
Reference:
Problem: Loading Groovy methods from a file with the load step does not work inside a Declarative Pipeline step, as reported in this issue.

Workaround: There are a few workarounds. The most straightforward one is to use the script step.
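A sketch of the script-step workaround; the helper file name and method are placeholders. The script block drops into Scripted Pipeline, where load works.

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    def utils = load 'ci/utils.groovy'   // hypothetical helper file
                    utils.printVersion()
                }
            }
        }
    }
}
```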
You can also define Groovy methods from inside the Jenkinsfile.
For Declarative Pipeline, to reuse the code from a Groovy script, you must use Shared Libraries. Shared Libraries are not specific to Declarative; they were released some time ago and were already used in Scripted Pipeline. This blog post discusses an older mechanism for Shared Libraries; for the newer mechanism of importing a library, please check out this blog post. Due to Declarative Pipeline's lack of support for defining methods, Shared Libraries take on a vital role for code reuse in Jenkinsfile.
File reading and writing not supported

Java/Groovy file reading and writing using the "java.io.File" class is not directly supported.
In fact, using that class in Jenkinsfile must go through “In-Process Script Approval” with this warning.
new java.io.File java.lang.String Approving this signature may introduce a security vulnerability! You are advised to deny it.
Even then, "java.io.File" will refer to files on the master (where Jenkins is running), not the current workspace on the Jenkins slave (or slave container). As a result, it will report that the file does not exist, even though the file is present in the filesystem on the slave (relevant Stackoverflow).
That also means related classes such as FileWriter will NOT work as expected: they report no error during execution, but you will find no file on the slave, since the files are created on the Jenkins master.
Workaround:

- For reading files, use the readFile step.
- For simple file writing, use the writeFile step. However, Pipeline steps (such as writeFile) are NOT allowed in @NonCPS methods. For more complex file writing, you might want to export the file content as a String and write it out through the sh step.
The idea is to construct a here-document-formatted command for writing the multi-line string (stored in a variable such as mCommand) before passing it to the sh step for execution.
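A sketch of that technique, mirroring the text's mCommand idea; the file name and content are placeholders. The quoted 'EOF' delimiter prevents the shell from expanding anything inside the body.

```groovy
def content = 'line one\nline two\nline three'
def mCommand = """cat > report.txt <<'EOF'
${content}
EOF
"""
sh mCommand   // the multi-line string is written by the shell, not by java.io.File
```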
You will often encounter serialization errors of this type when using non-serializable classes from Groovy/Java libraries.
There are also known issues with JsonSlurper. These problems come from the fact that variables in Jenkins pipelines must be serializable. Since a pipeline must survive a Jenkins restart, the state of the running program is periodically saved to disk for a possible resume later. Any "live" object, such as a network connection, is not serializable.
Workaround: Explicitly discard non-serializable objects or use @NonCPS methods.
Quoted from here: @NonCPS methods may safely use non-Serializable objects as local variables, though they should NOT accept non-serializable parameters or return or store non-serializable values. You may NOT call regular (CPS-transformed) methods, or Pipeline steps, from a @NonCPS method, so they are best used for performing some calculations before passing a summary back to the main script.
In summary, if possible, use another scripting language (e.g., Python) for file manipulation in Jenkinsfile. It is time-consuming to navigate all the tricky aspects of the Groovy implementation in Jenkins, such as @NonCPS.
Some notes:

- import statements must be at the top, right after the shebang and before anything else.
- Use @NonCPS, or Jenkins will report the error "java.io.NotSerializableException".
- The method cannot be defined inside a step block. It must be defined at the top.
- @NonCPS is required since the Groovy method uses several non-serializable objects.
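As an illustration of the @NonCPS pattern (the method name, field, and file name below are placeholders, not the original script): keep the non-serializable JsonSlurper result inside the annotated method and return only a plain, serializable value.

```groovy
import groovy.json.JsonSlurper

// Defined at the top level, outside any step block.
@NonCPS
def parseVersion(String jsonText) {
    def parsed = new JsonSlurper().parseText(jsonText)   // non-serializable lazy map
    return parsed.version.toString()                     // return a plain String
}

node {
    // readFile is a Pipeline step, so it stays outside the @NonCPS method
    def json = readFile 'build-info.json'
    echo "Version: ${parseVersion(json)}"
}
```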
The above Nexus authentication code is likely repeated across multiple Maven builds. Therefore, it is worth converting it into a DSL step in a Jenkins Shared Library. The DSL takes two parameters:
The example usage is as follows:
The Jenkinsfile is much cleaner since most of the implementation details have been moved inside the DSL:
We first look at a typical Jenkins setup, where the Jenkins instance is installed directly on a host machine (VM or bare metal) and communicates directly with the SMTP server. On a corporate network, you may have to use an SMTP relay server instead. In those cases, you can configure SMTP communication by setting up Postfix. On CentOS, it could be as simple as "sudo yum install -y mailx".
After installing, update /etc/postfix/main.cf with correct relay information: myhostname, myorigin, mydestination, relayhost, alias_maps, alias_database. An example /etc/postfix/main.cf is shown below:
We can test the setup by sending a test email with the following command:
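A sketch of such a test (the recipient address is a placeholder; this assumes the mailx client and a working Postfix relay are in place):

```shell
echo "This is a test email body" | mail -s "Postfix relay test" someone@example.com
```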
After the postfix service is up, Jenkins can be configured to send email with the Mailer plugin.
Mail server can be configured in Manage Jenkins page, E-mail Notification section.
Please visit Kohei Nozaki’s blog post for more detailed instructions and screenshots.
We can also test the configuration by sending test e-mail in the same E-mail Notification section.
Many Jenkins-based CI systems have been containerized and deployed on a Kubernetes cluster (in conjunction with the Kubernetes plugin).
For email notifications in such CI systems, one option is to reuse the postfix service, which is usually configured and ready on the Kubernetes nodes, and expose it to the Docker containers.
There are two changes that need to be made to Postfix to expose it to the Docker containers on a host. The Docker bridge (docker0) acts as a bridge between your Ethernet port and the Docker containers so that data can go back and forth. We achieve the first requirement by adding the IP of docker0 to inet_interfaces.
For the second requirement, the whole Docker network as well as localhost should be added to mynetworks. In our Kubernetes setup, the Docker network is flannel0, and its subnet's CIDR notation is added to the mynetworks line:
Note the differences in inet_interfaces and mynetworks from the last section.
One can simply enter the Docker container/Kubernetes pod to verify the setup. Note that the mailx application may not be available in a container, since we tend to keep containers lightweight. Instead, prepare a sendmail.txt file (based on this) with the raw SMTP commands and use nc to send out the email as shown below.
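A sketch of such a session; the docker0 IP and email addresses below are placeholders.

```shell
# sendmail.txt holds raw SMTP commands; the lone "." ends the DATA section.
cat > sendmail.txt <<'EOF'
HELO jenkins.example.com
MAIL FROM:<jenkins@example.com>
RCPT TO:<dev-team@example.com>
DATA
Subject: Test from container

Hello from the Jenkins pod.
.
QUIT
EOF
nc 172.17.0.1 25 < sendmail.txt
```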
For a containerized Jenkins system, the mail server can also be configured on the same Manage Jenkins page, E-mail Notification section. The only difference is the IP/hostname provided to the SMTP server option. Instead of providing the known SMTP server's IP and host, one should use the IP of docker0, as explained above. If the Kubernetes cluster has many nodes with different docker0 IPs, the Docker container of the Jenkins master should reside on only one host, and that host's docker0 IP should be used.
withCredentials step.
Maven builds in corporations usually use private repositories on Nexus instead of the public ones in the Maven Central Repository. To do that, we usually configure Maven to check Nexus instead of the default, built-in connection to Maven Central. These configurations are stored in the ~/.m2/settings.xml file.
For authentication with Nexus and for deployment, we must provide credentials accordingly. We usually add the credentials to our Maven settings in the settings.xml file.
However, for automated build and deployment in Jenkins pipelines, it is not safe to store credentials in plain-text files. Instead, one should store Nexus credentials as secrets in Jenkins and pass them into the Jenkinsfile using their IDs (credentialsId).
See this article for the full picture of related plugins used for storing and passing secrets in Jenkins.
The withCredentials step not only provides a secure way of injecting secrets (e.g., Nexus credentials) into the Jenkins pipeline, but also scrubs such sensitive information from the log files if we happen to print it out. transformXml is my Groovy function that generates settings.xml from the redacted Maven settings.xml template (no credentials) and the provided Nexus credentials.
Since Maven 3.0, the above problem has become much easier, since environment variables can be referenced inside the settings.xml file using the special expression ${env.VAR_NAME}, based on this doc. Nexus authentication for Maven 3.0 in a Jenkins pipeline can be done as follows:
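A sketch of that binding; the credential id is an assumption, and settings.xml is assumed to reference the variables as ${env.NEXUS_USER} and ${env.NEXUS_PASS}.

```groovy
withCredentials([usernamePassword(
        credentialsId: 'nexus-credentials',
        usernameVariable: 'NEXUS_USER',
        passwordVariable: 'NEXUS_PASS')]) {
    // Maven resolves ${env.NEXUS_USER}/${env.NEXUS_PASS} inside settings.xml
    sh 'mvn -B -s settings.xml deploy'
}
```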
However, note that it is still tricky even in Maven 3.0 since this is not always applicable, as noted in the same doc.
Note that properties defined in profiles within the settings.xml cannot be used for interpolation.
In Gradle, Nexus authentication can be specified in both the build.gradle and gradle.properties files, where build.gradle should be checked into VCS (e.g., git) while gradle.properties contains the sensitive credential information.
The default location of the gradle.properties file is ~/.gradle. This is because the environment variable GRADLE_USER_HOME is usually set to ~/.gradle. For a custom location of gradle.properties (i.e., other than ~/.gradle), ensure that GRADLE_USER_HOME is set accordingly.
However, similar to Maven, for Jenkins pipeline automation it is not safe to store credentials in the plain-text file gradle.properties, no matter how "hidden" its location is. For that purpose, the credentials should instead be injected by Jenkins through the withCredentials step.
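A sketch of that injection; the credential id is an assumption. The variable names must follow the ORG_GRADLE_PROJECT_&lt;property&gt; convention for Gradle to pick them up, as explained below.

```groovy
withCredentials([usernamePassword(
        credentialsId: 'nexus-credentials',
        usernameVariable: 'ORG_GRADLE_PROJECT_nexusUsername',
        passwordVariable: 'ORG_GRADLE_PROJECT_nexusPassword')]) {
    // Gradle exposes these as the nexusUsername/nexusPassword project properties
    sh './gradlew build'
}
```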
Note that, in Gradle, the solution is much simpler because Gradle respects properties set through environment variables. Based on its doc, if an environment variable name looks like ORG_GRADLE_PROJECT_prop=somevalue, then Gradle will set a prop property on your project object with the value somevalue.
Therefore, in the withCredentials step, we specifically bind the secrets nexusUsername and nexusPassword to the environment variables ORG_GRADLE_PROJECT_nexusUsername and ORG_GRADLE_PROJECT_nexusPassword, not to arbitrary variable names. These environment variables should match the ones used in build.gradle and, in the following closure, we simply call the standard Gradle wrapper command ./gradlew <target>.
Compared with the Maven solution in the last section, there is no intermediate step to generate settings.xml based on the provided secrets.
If Maven/Gradle builds are used in multiple repositories across the organization, it is recommended to move the above Groovy code into a shared Jenkins library, as shown in the last post. For example, the Gradle builds can be simplified by defining a useNexus step (see here) and adding it to the shared library workflow-lib.
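A sketch of what such a shared-library step might look like; the file path, credential id, and exact signature are my assumptions, not the post's actual code.

```groovy
// vars/useNexus.groovy in the workflow-lib shared library
def call(Closure body) {
    withCredentials([usernamePassword(
            credentialsId: 'nexus-credentials',
            usernameVariable: 'ORG_GRADLE_PROJECT_nexusUsername',
            passwordVariable: 'ORG_GRADLE_PROJECT_nexusPassword')]) {
        body()   // run the caller's build steps with the secrets bound
    }
}
```

A Jenkinsfile would then wrap its build in a short block such as useNexus { sh './gradlew build' }.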
After that, all Gradle builds with Nexus authentication in a Jenkinsfile are reduced to a simple useNexus block.
As shown above, this removes a lot of redundant code for Gradle builds that would otherwise be repeated again and again in Jenkinsfiles across multiple repositories in an organization.