Introduction
Azure Kubernetes Service (AKS) is a fully managed Kubernetes container orchestration service. It streamlines deployment and operation, and auto scaling of the application infrastructure gives us the required agility. Because Azure manages the control plane, a large team is not needed to run our Kubernetes cluster.
A Kubernetes cluster offers added advantages to Jenkins. AKS ensures that resources are used efficiently and that no node becomes overloaded. In addition, Kubernetes orchestrates container deployment and ensures that Jenkins always has the right amount of resources available.
Jenkins is an open-source automation server for Continuous Integration (CI) and Continuous Delivery (CD). It accelerates software development by automating it, and it drives the software delivery process through every stage of the lifecycle.
Deployment of Jenkins in AKS
Traditional Deployment
Jenkins follows a master-agent architecture, in which the master hosts the Jenkins server and UI and organizes the execution of jobs. Jenkins can be deployed in different ways and with different configurations; for instance, it can be hosted on a single Virtual Machine (VM) or on many. Although the Jenkins installation process is straightforward, keeping it secure and scalable is still a challenge.
Below are a few disadvantages of the traditional deployment methodology:
- Scalability: A single server or machine has a fixed amount of resources. Adding more resources to that machine may improve the performance of the Jenkins cluster, but it is not the right way to scale a cluster that has to handle hundreds of jobs, because there is a limit to the resources that can be attached to one server. It is much easier to scale the Jenkins cluster by adding more workers/agents.
- Maintainability: Jobs executed on Jenkins generally need specific third-party software. Installing all the required packages on one server complicates the setup, and some packages, libraries, and software conflict with one another, so they cannot all be installed on the same machine. On a cluster, however, we can run multiple dedicated agents, each set up for a different purpose.
- Security: A single-server Jenkins carries security risks, since all production and testing environments must be accessed from the same server. A multi-node Jenkins cluster improves security: the master node is kept separate from the agents, and a separate agent can be used for each environment.
Why Use Jenkins in AKS?
For Jenkins on AKS, it is recommended to use dynamic agents instead of static virtual machines. The default Jenkins configuration is not recommended for production environments. Enabling security, distributing builds across multiple pods or agents, using pipelines, and keeping pipeline scripts under version control are a few of the good practices for improving a Jenkins installation.
Since an AKS cluster helps meet these best-practice requirements, running Jenkins on AKS is recommended. Jenkins scalability provides benefits such as:
- Running multiple builds in parallel
- Automatic creation and removal of agents to save costs
- Load distribution across agents
Jenkins Installation on AKS using Helm v3
Helm is a package manager that eases the installation, configuration, upgrade, and uninstallation of complex Kubernetes applications. Without it, kubectl commands and Kubernetes manifests are needed to create and configure each resource. A Helm chart bundles several Kubernetes resources together as templates that can be shared with the community and customized for specific installations. Since all the resources of an application are deployed with just one command, Helm makes deployments easier and reusable. A default Jenkins Helm chart is publicly available (originally in the stable repository, now published by the Jenkins community at charts.jenkins.io).
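For reference, a minimal sequence to fetch the default chart for local customization might look like this (assuming the community repository URL above and Helm v3):
$ helm repo add jenkins https://charts.jenkins.io
$ helm repo update
$ helm pull jenkins/jenkins --untar    # download and unpack the chart so it can be customized before installation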
Pre-Requisites for Jenkins Installation on AKS Using a Helm Chart
- Jenkins Namespace: A dedicated Jenkins namespace must be created in the Kubernetes cluster.
- Storage Class: There are multiple options for the type of storage that can be mounted in the Jenkins pod; Azure Disk or Azure Files can be used.
- Persistent Volume Claim (PVC): All Jenkins pipeline data must be persisted to disk or file storage, so that no pipeline data is lost if the pod fails. A PVC is created to map a storage volume to the required storage class (a sketch of these resources follows this list).
- Base Helm chart for Jenkins: The base Jenkins Helm chart needs to be pulled and pushed into Azure Container Registry (ACR).
- Agent: An agent is created so that it listens for build requests.
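As a minimal sketch, the namespace, storage class, and PVC could be created as follows (the names jenkins-ns, jenkins-azure-disk, and jenkins-pvc, the disk SKU, and the volume size are illustrative assumptions; the storage class uses the Azure Disk CSI provisioner available on recent AKS versions):
$ kubectl create namespace jenkins-ns

# storage-class.yaml - Azure Disk backed storage for the Jenkins home volume
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: jenkins-azure-disk
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS
reclaimPolicy: Retain

# jenkins-pvc.yaml - claim that maps the Jenkins home volume to the storage class above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  namespace: jenkins-ns
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: jenkins-azure-disk
  resources:
    requests:
      storage: 10Gi

$ kubectl apply -f storage-class.yaml -f jenkins-pvc.yaml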
Customized Jenkins Chart
The base Jenkins Helm chart can be used to deploy Jenkins on AKS, but additional configuration such as the service type, password, username, plugins, customized image, and persistence values would then have to be applied manually, and the same effort would be needed every time Jenkins is installed. It is recommended to customize the Helm chart so that all required configuration lives in a custom values file and little or no configuration is needed after the chart is installed.
This customized values file is passed to Helm as shown in the command below:
$ helm install jenkins . --values customised-jenkins.yml -n jenkins-ns
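A minimal customised-jenkins.yml might look like the sketch below. The key names follow the layout of a widely used version of the public Jenkins chart (older chart versions use master: instead of controller:), and the credentials, plugin versions, and claim name are assumptions to be replaced with your own values and checked against the chart's values.yaml:
controller:
  adminUser: admin                 # hypothetical credentials; prefer referencing an existing secret in production
  adminPassword: changeme
  serviceType: ClusterIP           # exposed later through an Ingress resource
  installPlugins:
    - kubernetes:latest            # pin exact plugin versions in practice
    - workflow-aggregator:latest
    - git:latest
    - maven-plugin:latest          # Maven Integration
    - kubernetes-cd:latest         # Kubernetes Continuous Deploy
persistence:
  existingClaim: jenkins-pvc       # PVC created as a pre-requisite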
Creation of Ingress Resource for Jenkins Application
By default, the installed Jenkins is exposed internally with the 'ClusterIP' service type. It needs to be exposed to users through an Ingress controller and an Ingress resource file. You can either create a VM in your VNet and reach the Jenkins URL through a bastion host, or, if Jenkins is to be exposed outside the VNet, publish it via an ingress backed by a LoadBalancer.
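A minimal Ingress resource for Jenkins might look like the sketch below (the host name and ingress class are assumptions, and an NGINX ingress controller is assumed to be installed; the backend service name and port must match what the Helm release created):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  namespace: jenkins-ns
spec:
  ingressClassName: nginx          # assumes an NGINX ingress controller in the cluster
  rules:
    - host: jenkins.example.com    # hypothetical host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jenkins      # service created by the Helm release named 'jenkins'
                port:
                  number: 8080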
Update the Chart in Repo
Once the chart has been updated with the customized values and Jenkins has been tested, push and store the chart in ACR so that it can be reused later with the same configuration. To push the chart, a connection to ACR must be established using a secret; the service principal credentials are embedded in this secret for the connection to work.
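One way to package and push the customized chart, using Helm v3.8+ OCI support and the same service principal credentials (registry and credential variables are placeholders), is:
$ helm registry login ${ACR_LOGIN_SERVER} --username ${CLIENT_ID} --password ${SP_PASSWD}
$ helm package .                                          # produces jenkins-<version>.tgz from the customized chart
$ helm push jenkins-<version>.tgz oci://${ACR_LOGIN_SERVER}/helm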
CI/CD
To build application images with Jenkins on the AKS cluster, Kaniko needs to be used.
Kaniko
Kaniko is a tool used to build images from a Dockerfile inside a container or a Kubernetes cluster. It builds images in environments that cannot easily or securely run a Docker daemon, such as a standard Kubernetes cluster. Kaniko was created by Google and is distributed through Google Container Registry (GCR) as the image gcr.io/kaniko-project/executor.
The Kaniko pod is configured to run the executor with the required settings. With this configuration, as soon as a CI job is triggered, the agent creates the pod from a YAML file, and the code and Dockerfile are pulled from the repository. The image is built in this pod and pushed to ACR.
To authenticate the Kaniko pod with ACR, a secret is created as a pre-requisite and referenced in the Kaniko YAML file. Creating the secret requires a Service Principal's (SP) client ID and secret with pull and push permissions on ACR, along with the ACR login server details.
$ kubectl create secret docker-registry registry-credentialsacrd01 --docker-server ${ACR_LOGIN_SERVER} --docker-username ${CLIENT_ID} --docker-password ${SP_PASSWD} --docker-email ${EMAIL}
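The Kaniko pod definition referenced by the agent might look like the sketch below. The secret name matches the command above; the debug image and entrypoint override follow the limitations listed later in this article, since a shell is required when the pod is used as a Jenkins agent:
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:debug   # debug image includes a shell, required for use with Jenkins
      command:
        - /busybox/cat                               # override the entrypoint and keep the container alive
      tty: true
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker                 # Kaniko reads registry credentials from config.json here
  volumes:
    - name: docker-config
      secret:
        secretName: registry-credentialsacrd01       # docker-registry secret created with the command above
        items:
          - key: .dockerconfigjson
            path: config.json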
CI Using a Jenkins Pod as an Agent
Though most of the configuration required for CI through Jenkins is done in the customized Helm values file, it is recommended to verify that the required plugins, such as Maven Integration, Kubernetes, and Kubernetes Continuous Deploy, are correctly configured. A Jenkins CI pipeline then needs to be created for each application and configured with your code repository and the Kaniko YAML file.
By default, an agent listens for build requests. As soon as CI is triggered, a pod is created from the Kaniko YAML file and the image is built in that pod from the Dockerfile. Because the pod is already authenticated with ACR, the application image is pushed to ACR after the build, and the pod is destroyed once the CI job completes.
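As an illustration, a CI pipeline that uses this pod as its agent might be sketched as follows. It assumes the job is configured as a pipeline from SCM so that the repository (including the Dockerfile and the hypothetical kaniko-pod.yaml shown earlier) is checked out automatically, and the destination image is a placeholder ACR repository:
pipeline {
  agent {
    kubernetes {
      yamlFile 'kaniko-pod.yaml'   // hypothetical path to the pod definition, stored in the code repository
    }
  }
  stages {
    stage('Build and push image') {
      steps {
        // run the Kaniko executor in the kaniko container; the checked-out workspace with the Dockerfile is shared
        container(name: 'kaniko', shell: '/busybox/sh') {
          sh '''#!/busybox/sh
            /kaniko/executor \
              --context=`pwd` \
              --dockerfile=Dockerfile \
              --destination=myregistry.azurecr.io/my-app:${BUILD_NUMBER}
          '''
        }
      }
    }
  }
}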
CD to Deploy Application on AKS Cluster
Jenkins allows continuous delivery pipelines to be executed as code. Using a Jenkinsfile, the different phases that build, bundle, test, and deploy an application can be defined and stored in source control.
After CI has built the image and pushed it to ACR, CD should be set up so that it uses the deployment.yaml file referenced in the Jenkinsfile and deploys the required AKS components to the cluster. To deploy a deployment.yaml file, the cluster must be authenticated with Jenkins using a kubeconfigId (a credential created from the .kube/config file) and the kubernetesDeploy step of the Kubernetes Continuous Deploy plugin. The repository URL and the path to deployment.yaml must be set correctly.
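A deployment stage using the kubernetesDeploy step might then be sketched as follows (the credential ID aks-kubeconfig and the manifest path are assumptions; kubeconfigId points at the credential created from the .kube/config file):
stage('Deploy to AKS') {
  steps {
    // apply the manifest to the cluster using the kubeconfig credential
    kubernetesDeploy(
      kubeconfigId: 'aks-kubeconfig',       // hypothetical Jenkins credential ID holding the kubeconfig
      configs: 'k8s/deployment.yaml',       // hypothetical path to the deployment manifest in the repository
      enableConfigSubstitution: true        // substitute environment variables (e.g. the new image tag) in the manifest
    )
  }
}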
Best Practices for Jenkins Integration
The best practices recommended by industry leaders for integrating Jenkins with AKS are based on the following:
- Allow only the firewall rules required to fetch Helm charts and images for Jenkins and Kaniko
- Grant only the required permissions to the service principal used for Kaniko's ACR authentication
- Review updates to the Jenkins image used in the Helm chart and assess the impact of changes
- Use a customized Helm chart instead of the stock stable chart
Limitations
The limitations are:
- Building Windows containers is not supported by Kaniko
- The entrypoint needs to be overridden; otherwise, the build script will not run
- For authentication against the desired container registry, a Docker config.json file needs to be created
- The v1 Registry API (Registry v1 API Deprecation) is not supported by Kaniko
- Only the official debug image (gcr.io/kaniko-project/executor:debug) is recommended for Kaniko with Jenkins, because it includes a shell, and a shell is required for an image to be used as a Jenkins agent
- Running the Kaniko executor binary in another image, or copying the Kaniko executables from the official image into another image, is not recommended, as it might not work
Conclusion
Running Jenkins on AKS ensures that resources are used efficiently, the infrastructure is not overloaded, and the right amount of resources is always available. Because only the required resources are consumed, it also helps save cost. A customized Jenkins Helm chart makes it easy to install, uninstall, and upgrade Jenkins, and the chart can easily be shared with the community. Kaniko is used to build the image from the Dockerfile and push it to ACR by spinning up a pod that performs this task securely. The pod is destroyed once the job completes, which again helps save cost.
A Jenkins pod can be spun up on any managed Kubernetes service offered by any cloud provider. The only underlying objects that need to change in the Helm chart are the storage class and PVC, which depend on the cloud provider.