CI/CD: Setting up a Jenkins pipeline to automatically pull code from GitHub and deploy it to Kubernetes

Felipe Lujan
11 min read · Jul 23, 2019


Edited 2021: Unfortunately, this procedure can no longer be replicated due to incompatibilities between the Jenkins plugins used to deploy to Kubernetes.

Hello guys, first of all, I want to make clear that I’m by no means an expert in DevOps. Still, I hope this is useful to you.

This guide is heavily influenced by Will Boyd’s learning activity and course, Implementing Fully-Automated Deployment in a CD Pipeline, on Linux Academy.

The whole idea of this procedure is to show you how to set up a CI/CD pipeline that connects your source code to its final deployment, meaning that every time you push code to a GitHub repository a new deployment with the latest code will be launched in a Kubernetes cluster. If you’ve used Heroku, this will definitely sound familiar. Note that the deployment won’t be updated in “real time”; the whole building and deployment process can take ~3 minutes or more from the moment you push your code to the repo, depending on the size of your Node.js app.

Although there are managed services like Google App Engine that make deployments a breeze, it never hurts to expand your field of view and include modern technologies such as containerization and container orchestration.

Oh, I’ll be using Google Cloud Platform (GCP), but this should definitely work with whichever cloud provider you prefer, so feel free to deviate and explore. And yes, GCP’s free trial is enough for what we’ll be doing here.

For this tutorial, you’ll need:

  • A basic understanding of Node.js apps, and one ready to deploy (I’ll use a React.js app)
  • Basic Git concepts and a GitHub.com account.
  • A basic understanding of Docker containers and a Docker Hub account.
  • Access to a Kubernetes cluster and an additional VM with Jenkins, Docker, Git, and Node.js installed.

Basic understanding of Kubernetes and Linux systems is desired but not required.

Let’s begin by going to Google Cloud Platform’s Cloud Console.

In http://cloud.google.com, click on the Console link located at the top right corner.

Once you’re on the Google Cloud Console, hover over the menu icon (the three horizontal lines) in the top left corner and go to Compute Engine > VM instances.

Here we want to have

  • 1 Kubernetes Cluster
  • 1 VM with Jenkins, Docker, and GIT.

Note: A GKE cluster won’t work here since we can’t communicate directly with Kubernetes’ API server. At least not with this procedure.

Still, we can easily deploy a Kubernetes cluster by creating a VM with Bitnami’s Certified Kubernetes Sandbox.

So, in Google Compute Engine’s main menu, click Create VM and go to the Marketplace option,

and search for “Kubernetes”. Bitnami’s image should come up within the search results, if not as the first result.

Click on it and then on the Launch on Compute Engine button.

In this VM’s overview, you can see it contains kubectl, kubeadm, and kubelet, as well as the cri-containerd plugin.

Scroll down and click Deploy to deploy this VM in your GCP project. After the VM is successfully created, wait 5 more minutes while Bitnami’s scripts bring your Kubernetes cluster to life; in the VM overview panel you should be able to see the progress of those tasks.

Once those tasks are completed, go back to Google Compute Engine’s main view and SSH into your VM.

There you should be able to communicate with your newly created Kubernetes cluster using the kubectl command.

Try entering

kubectl get nodes

The result is a list of the nodes that make up this cluster and their state. In our case, we have only one node that acts as both master and worker, which is enough.
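If everything is fine, the output should look roughly like the following (the node name, age, and exact version will differ in your cluster):

NAME         STATUS   ROLES    AGE   VERSION
kubernetes   Ready    master   10m   v1.x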

Now let’s focus on the continuous deployment (CD) side of the project. We are going to need another virtual machine with Jenkins, Docker, and Git.

For that, let’s just create a new VM instance in GCE. You’re free to pick whichever flavor of Linux you prefer; I’ll be using CentOS 7 for this one. Again, make sure to enable HTTP and HTTPS traffic and, if you can afford to, give this machine an SSD drive, as we’ll be installing quite a bit of stuff on it.

First of all, I need to have Java installed on my VM. This particular distribution of CentOS doesn’t have Java preinstalled, so let’s install Java 1.8.0 (OpenJDK):

sudo yum install -y java-1.8.0-openjdk

And now Jenkins. First, I’ll add the repo and import its key:

curl --silent --location http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo | sudo tee /etc/yum.repos.d/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key

and now Jenkins itself

sudo yum install -y jenkins
sudo systemctl enable jenkins
sudo systemctl start jenkins

With those commands, we added the Jenkins repo and imported the necessary key for that repository, installed Jenkins, enabled the Jenkins service to make sure it runs every time we start this VM, and finally started the actual service.

The Jenkins service should now be running. Run this command to verify.

sudo tail -f /var/log/jenkins/jenkins.log
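You can also check the service status directly with systemd; this is a generic check, not something specific to this setup:

sudo systemctl status jenkins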

Now let’s install Docker, beginning with some of its dependencies:

sudo yum update && sudo yum -y install yum-utils device-mapper-persistent-data lvm2

and now Docker itself

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce
sudo systemctl start docker
sudo systemctl enable docker

With Docker installed and running, you can run a container by executing the following command.

sudo docker run hello-world

Finally, let’s install Git by running

sudo yum install -y git

Make sure the docker group exists by running

sudo groupadd docker

Later on, we’ll need Jenkins to be able to run Docker commands, so we need to add its user (and our own) to the docker group by executing

sudo gpasswd -a jenkins docker
sudo gpasswd -a $USER docker
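Keep in mind that group membership changes only apply to new sessions, so restart the Jenkins service (and log out and back in, or run newgrp docker, for your own user) so the docker group is actually picked up:

sudo systemctl restart jenkins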

Now that our Jenkins VM is ready, we need to access the Jenkins web interface, which is already running on port 8080. For that, we could either set up a firewall rule in Google Cloud Platform or just redirect traffic from another port to port 8080. Not wanting to deviate too much, I’m going to choose the latter option by running this command:

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 31000 -j REDIRECT --to-port 8080

Here, everything coming in on port 31000 will be redirected to port 8080 on my Jenkins VM. If port 31000 is not open for you, try redirecting traffic from port 80 instead (which should be accessible by default since we enabled HTTP traffic) by changing the --dport value.
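If you’d rather go the firewall route instead, a rule along these lines should open port 8080 directly (the rule name allow-jenkins-8080 is just an example, and you may want to restrict it with target tags or source ranges):

gcloud compute firewall-rules create allow-jenkins-8080 --allow=tcp:8080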

Put aside your SSH web console for a while, go back to the GCE main menu, and grab the Jenkins VM’s external IP. In a new browser tab, enter that external IP followed by :<port_number>

In my case, it’d be http://34.67.253.61:31000

You should see the Jenkins welcome screen that looks similar to this

Go back to your SSH session and run:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

It should return a temporary password. Copy and paste it into the Administrator Password text box in the Jenkins web interface.

Click continue

Once inside, you’ll be prompted to install various plugins on your Jenkins machine and then to create an administrator’s account

If you see the following screen, congratulations, you’ve now installed and configured Jenkins.

Once in Jenkins, we need the Kubernetes Continuous Deploy plugin, so head over to the left-hand side panel and select Manage Jenkins and then Manage Plugins. Once you’re in the plugin manager, go to the Available tab and search for “kubernetes”; when the plugin shows up, go ahead and install it and restart Jenkins.

Once you’ve logged back into Jenkins, go to the Credentials link on the left-hand side and click (global) > Add Credentials.

First, let’s add the Docker Hub credentials. For that, select “Username with password” under the Kind field and enter your Docker Hub username and password in the respective fields. For ID, it’s very important to type in docker_hub_login, as we are going to reference these credentials later on. Click Save.

Click Add Credentials again; for GitHub it’s going to be a little different. You want to put in your GitHub username, but for the password you need to use an access token. So head over to your GitHub account, click on your avatar, select Settings > Developer settings, and generate a new personal access token. Call it whatever you want and select the admin:repo_hook scope so this token can manage your GitHub webhooks. Once you click Generate, you’ll be given a token; put it in the Password field of your Jenkins GitHub credentials. Don’t close the GitHub page, because we are going to need the token again later. For the ID field, enter github_key and finish by clicking OK.

Click Add Credentials one more time and under Kind select Kubernetes configuration (kubeconfig). Give it an ID of kubeconfig, select the “Enter directly” option, and a text area will appear in the form. Here we are going to paste the kubeconfig that Jenkins will use to communicate with your Kubernetes cluster in order to set up the deployment and services. To obtain the kubeconfig, SSH into your Kubernetes VM and execute the following command:

sudo cat ~/.kube/config

or try

sudo cat /etc/kubernetes/admin.conf

in case the kubeconfig is not in ~/.kube/config.

The content of your kubeconfig file will appear in the terminal. Make sure to copy all of it, go back to Jenkins, paste it into the text area, and click OK.

Finally, go back to the Jenkins main page, click Manage Jenkins and then Configure System, and scroll down until you see the GitHub Servers section. Click Add GitHub Server > GitHub Server.

Call it whatever you want and leave the API URL as https://api.github.com.

Add a new credential of type Secret text, enter your token in the Secret text box, and give it an ID of github_secret. Finally, click Add.

These credentials should now appear in the Credentials drop-down. Select them, check “Manage hooks”, and click Save.

We are now ready to build a Jenkins pipeline that takes code from GitHub, builds a Docker image, pushes it to Docker Hub, and finally launches a deployment in our Kubernetes cluster.

If you want to follow along, there’s something you need to do first: head over to this GitHub repo, https://github.com/FelipeLujan/gradle-test, and create a fork of it. In the root of your forked repo you’ll find a file called Jenkinsfile. Open it and modify the environment variable by removing felipelujan from the value of DOCKER_IMAGE_NAME and replacing it with your Docker Hub username. This file is very important; we’ll explain what it does later.
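If you prefer working from the command line, a rough sketch of that edit would look like this, assuming the fork lives under your own GitHub account (the placeholders and commit message are just examples):

git clone https://github.com/<your_github_username>/gradle-test.git
cd gradle-test
# edit the Jenkinsfile so DOCKER_IMAGE_NAME starts with your Docker Hub username
git add Jenkinsfile
git commit -m "Use my Docker Hub username in DOCKER_IMAGE_NAME"
git push origin master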

Also, while we are on your forked repo’s page, go to Settings.

On the left side panel go to webhooks and click on the Add webhook button.

In the Payload URL input field, enter your Jenkins server’s external IP and port, followed by /github-webhook/ (in my case, http://34.67.253.61:31000/github-webhook/).

This is how GitHub is going to notify your Jenkins server whenever new code is pushed to your repo. Click Add webhook.

Finally, we can set up the pipeline that is going to take care of the heavy lifting for us. On the Jenkins main page, click New Item, enter a name for your project, select Multibranch Pipeline, and hit OK.

In Branch Sources, click Add source > GitHub. The following form will appear.

In Credentials, select your github_key credential.

Under Owner, enter your GitHub account name.

The Repository dropdown will auto-populate with your projects; select the recently forked repo.

Under Behaviors, we only need the one that excludes branches that are also filed as PRs; you can explore other strategies later. Click Save.

Your automated Jenkins pipeline is now complete. To see what it’s doing, click on your pipeline’s name at the top left of the screen to see the branches of your repo.

If you haven’t created any new branches, all the work should have already started on the master branch. Click on it and you should see the steps contained in the Jenkinsfile manifest.

While the pipeline is working, let’s see what’s in that Jenkinsfile:

  1. We declare an environment variable called DOCKER_IMAGE_NAME; as its name implies, it’s the tag of the Docker image that we are going to build.
  2. Between lines 9 and 15 we run the build.gradle file that is in the root of the project. This file basically runs npm install and npm build. Although this step is not strictly necessary, I’ve left it in so you can see how to integrate your Gradle scripts with your Jenkins pipelines, including dependencies between different commands.
  3. Next up, we build the Docker image of this project. For that, we use the .dockerignore file and the Dockerfile. In the Dockerfile I’ve decided to use an Alpine version of Node because it’s very small; although good for testing, this server and setup are not recommended for production.
  4. In lines 27 to 39, we push the Docker image to the Docker registry using the docker_hub_login credentials.
  5. And finally, we create a Kubernetes deployment using the credential with ID kubeconfig and the k8s_svc_deploy.yaml file that is also in the root of your project. This YAML file is a Kubernetes manifest for a deployment of 2 containers (replicas) running the Docker image we’ve just created and a service exposing it on port 31000.

You might have noticed that we are effectively running npm install twice, which extends the time this pipeline takes. This was just to demonstrate the inclusion of a Gradle build in the process, so if you want to speed the pipeline up, just comment out step 2 of the Jenkinsfile.
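As a rough mental model, the remaining stages are approximately equivalent to running the following commands by hand. This is only an approximation with <your_dockerhub_username> as a placeholder, not the literal contents of the Jenkinsfile:

# build the image using the Dockerfile in the repo root
docker build -t <your_dockerhub_username>/gradle-test .
# push it to Docker Hub (the pipeline authenticates with the docker_hub_login credentials)
docker push <your_dockerhub_username>/gradle-test
# create or update the deployment and service (the pipeline uses the kubeconfig credential for this)
kubectl apply -f k8s_svc_deploy.yaml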

Now you can edit any of your source code files, or even the Dockerfile, Jenkinsfile, or build.gradle, and the Jenkins pipeline will run again and redeploy as soon as the changes are pushed to your GitHub repo.
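Once a build goes green, you can confirm the result from your Kubernetes VM. Assuming the manifest exposes the service on NodePort 31000 as described above, the app should then answer on the Kubernetes VM’s external IP at that port:

kubectl get pods
kubectl get svc
# then browse to http://<kubernetes_vm_external_ip>:31000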


Written by Felipe Lujan

Google Developer Expert — Google Cloud.