— jenkins, jenkinsx, kubernetes, helm, ci/cd — 2 min read
Jenkins X is the self-described CI/CD solution for modern cloud applications on Kubernetes.
Jenkins X uses Helm to manage Kubernetes deployments. This short post is about leveraging the dynamism of Helm charts to allow different environment variables per environment and even per pull request.
This post assumes you have a cluster up and running that is managed by Jenkins X. There are plenty of getting-started tutorials on the Jenkins X website, and since this post is about how to use Helm with Jenkins X, we won't be covering installation.
Jenkins X subscribes to the GitOps methodology, which means Git is used as the single source of truth for continuous delivery. With Jenkins X, Helm charts are kept in your code repository, builds are triggered by PRs or branch merges, and Jenkins X handles deploying to either a preview or an environment based upon your configuration.
It's likely that you would want different environment variables based upon which environment your application is deployed to. I'm going to show you how to do this.
For our contrived use of environment variables, we're going to use a slightly modified version of the Jenkins X Python quickstart application. You can find the finished product on GitHub.
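At runtime, all the app needs to do is read the variable and display it. A minimal sketch of what that might look like (the function name and the "unknown" fallback are illustrative assumptions, not the quickstart's exact code):

```python
import os

def branch_banner():
    """Return the message the app displays, read from the
    BRANCH_NAME environment variable injected by the chart."""
    branch = os.environ.get("BRANCH_NAME", "unknown")
    return f"Built from branch: {branch}"

print(branch_banner())
```

Everything else in this post is about making sure the right BRANCH_NAME value reaches the container in each environment.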
For pull requests, we want our app to show the Jenkins branch name as provided in the built-in BRANCH_NAME environment variable. If we merge to master and deploy to staging, we'll just set the branch to master.
For building previews of pull requests, Jenkins X uses a simple Makefile to inject values into a few of the Helm YAML files. We're going to follow this approach: take the BRANCH_NAME variable from Jenkins and use it in our deployment definition.

We're going to add a branchName key under the preview key in the charts/preview/values.yaml file. preview will look like this:
```yaml
preview:
  image:
    repository:
    tag:
    pullPolicy: IfNotPresent
  branchName:
```
Now, in charts/preview/Makefile, we'll grab the BRANCH_NAME environment variable from Jenkins. After we add the two branchName lines, the Makefile will write our new environment variable into our preview values.
```makefile
OS := $(shell uname)

preview:
ifeq ($(OS),Darwin)
	sed -i "" -e "s/version:.*/version: $(PREVIEW_VERSION)/" Chart.yaml
	sed -i "" -e "s/version:.*/version: $(PREVIEW_VERSION)/" ../*/Chart.yaml
	sed -i "" -e "s/tag:.*/tag: $(PREVIEW_VERSION)/" values.yaml
	# This is a new line.
	sed -i "" -e "s/branchName:.*/branchName: $(BRANCH_NAME)/" values.yaml
else ifeq ($(OS),Linux)
	sed -i -e "s/version:.*/version: $(PREVIEW_VERSION)/" Chart.yaml
	sed -i -e "s/version:.*/version: $(PREVIEW_VERSION)/" ../*/Chart.yaml
	sed -i -e "s|repository:.*|repository: $(DOCKER_REGISTRY)\/duffn\/jx-environment-variables|" values.yaml
	sed -i -e "s/tag:.*/tag: $(PREVIEW_VERSION)/" values.yaml
	# This is a new line.
	sed -i -e "s/branchName:.*/branchName: $(BRANCH_NAME)/" values.yaml
else
	echo "platform $(OS) not supported to release from"
	exit -1
endif
	echo "  version: $(PREVIEW_VERSION)" >> requirements.yaml
	jx step helm build
```
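Each of those sed edits is just a line-oriented find-and-replace on a YAML key. If it's easier to reason about outside the Makefile, here's a rough Python equivalent of one substitution (the sample YAML and values are illustrative, not pulled from the repo):

```python
import re

def set_chart_value(yaml_text, key, value):
    """Mirror the Makefile's `sed -e "s/key:.*/key: value/"` edit:
    rewrite everything after `key:` on its line with the new value."""
    return re.sub(rf"{key}:.*", f"{key}: {value}", yaml_text)

values = "preview:\n  image:\n    tag:\n  branchName:\n"
print(set_chart_value(values, "branchName", "PR-7"))
```

Like the sed version, this is a blunt textual replace rather than a YAML-aware edit, which is fine for the simple, known-shape files in the chart.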
Now, we're going to add an env section at the bottom of charts/jx-environment-variables/values.yaml.
```yaml
env:
  - name: BRANCH_NAME
    value: "master"
```
Finally, in charts/jx-environment-variables/templates/deployment.yaml, we'll use the variables that we set up. If our values.yaml contains branchName, we'll display that in our app; otherwise, we'll fall back to the env section that we created above.
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    draft: {{ default "draft-app" .Values.draft }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        draft: {{ default "draft-app" .Values.draft }}
        app: {{ template "fullname" . }}
{{- if .Values.podAnnotations }}
      annotations:
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - containerPort: {{ .Values.service.internalPort }}
{{/*
Here's the section we added.
*/}}
{{- if .Values.env }}
        env:
{{- if .Values.branchName }}
        - name: BRANCH_NAME
          value: {{ .Values.branchName | quote }}
{{- else }}
{{ toYaml .Values.env | indent 10 }}
{{- end }}
{{- end }}
        resources:
{{ toYaml .Values.resources | indent 12 }}
```
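The added section boils down to two conditionals: only emit an env block at all if .Values.env exists, and within it let branchName (injected for previews by the Makefile) win over the static env list. A small Python stand-in for that rendering logic (an illustration of the precedence rules, not how Helm actually renders templates):

```python
def render_env(values):
    """Mimic the template's conditionals: branchName, when present,
    takes precedence over the env list from values.yaml."""
    if not values.get("env"):
        return []  # outer {{- if .Values.env }} guard
    if values.get("branchName"):
        return [{"name": "BRANCH_NAME", "value": values["branchName"]}]
    return values["env"]

# Preview build: the Makefile injected branchName, so it wins.
print(render_env({"env": [{"name": "BRANCH_NAME", "value": "master"}],
                  "branchName": "PR-3"}))
# Staging build: no branchName, so the env list is used as-is.
print(render_env({"env": [{"name": "BRANCH_NAME", "value": "master"}]}))
```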
When we build a PR, we can see that we'll have PR-XX output by our app, and when we build staging, we'll see master. It works!
Okay, but now we want to display something different when we promote our staging image to production. How do we do that, since production is going to use exactly the same image and the same values.yaml file?
Thankfully, you can override values per environment using the production repository that Jenkins X creates for us when we initialize our cluster. Mine happens to be called environment-razorfortune-production. This is the repository that Jenkins X will use when we promote an image to our production environment.
In the env/values.yaml file in this repository, let's add our production variable.
```yaml
jx-environment-variables:
  env:
    - name: BRANCH_NAME
      value: "i am in production!"
```
Note: You need to add any overriding values underneath a key that is the exact name of your application. Don't let this trip you up!
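The reason for that nesting: in the environment repository, your app is a dependency (a subchart), so Helm only applies overrides found under the subchart's name, layered over the chart's own defaults. A rough Python sketch of that scoping (a deliberate simplification of Helm's real value-coalescing rules):

```python
def coalesce_values(chart_name, chart_defaults, env_overrides):
    """Overrides count only when nested under the chart's own name;
    anything else in the environment values.yaml is ignored by this chart."""
    merged = dict(chart_defaults)
    merged.update(env_overrides.get(chart_name, {}))
    return merged

defaults = {"env": [{"name": "BRANCH_NAME", "value": "master"}]}
overrides = {"jx-environment-variables":
             {"env": [{"name": "BRANCH_NAME", "value": "i am in production!"}]}}
print(coalesce_values("jx-environment-variables", defaults, overrides))
```

If the top-level key were misspelled, the override would silently do nothing and the chart's default "master" would ship to production, which is exactly the trip-up the note warns about.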
And now when we promote our image to production, we'll get i am in production!.
This example was a bit contrived, but it shows some of the power of using Jenkins X for continuous delivery in Kubernetes. You now have environment-specific variables (or anything else you can put in values.yaml, for that matter) in Jenkins X. Get to it!