Continuous Delivery - Setting up advanced deployment strategies on OpenShift
August 07, 2020

One of the key challenges in software development is rolling out new features to users, gathering their feedback, and responding to it quickly. Over the last few years, continuous delivery has placed a high degree of importance on keeping every feature, once built, in a ready-to-deploy state for production. It has also introduced newer deployment strategies such as blue-green deployments and canary releases, which help address some of these challenges.

Here, we are looking at two such deployment strategies and how to set them up on the OpenShift container platform:

  • Blue-Green
  • Canary

Blue-Green Deployments

In the blue-green deployment approach, the new version of an application is deployed alongside the current one in an identical environment, and a router selects which version is exposed to users. This strategy makes it possible to deploy new features to production rapidly, minimize disruption during the deployment window, and revert to the older version in a manageable way.

This setup can be enabled at the application level using load balancers to route requests to the right servers, or, in a microservice architecture, it can be used to rapidly roll out new features and enhancements for an individual service. In this use case, we will look at enabling a blue-green deployment for an application set up on OpenShift.

The setup

We are going to use an application, 'quote-demo', for this setup. Multiple versions of the application are pre-built, with the images hosted on Docker Hub. We will set up the two latest versions of this application in blue-green mode.

This application exposes two end points:

  • /version - returns the app version
  • /quote - returns a stock quote

The two versions of the app return different responses on both endpoints: /version returns 0.2 or 0.3, and /quote returns a concise or a long-format stock quote, respectively.

On OpenShift, create a new project and a new app:

oc new-project bluegreen

oc new-app vijayraghavan/quote-demo:v0.2 --name=v2

This creates the project and a new image stream, pulls the quote-demo v0.2 image from Docker Hub, and deploys it through the deployment config 'v2'.
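To confirm what was created, the image stream and deployment config can be listed using the standard oc abbreviations (is and dc); the resource name 'v2' below comes from the --name flag used above:

oc get is v2
oc get dc v2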

Use oc status to view the app:

C:\Users\vijay>oc status
In project bluegreen on server https://console.pro-eu-west-1.openshift.com:443
dc/v2 deploys istag/v2:v0.2
  deployment #1 running for 11 seconds - 0/1 pods

Create a service:

C:\Users\vijay>oc expose dc/v2 --port=8082
service/v2 exposed
C:\Users\vijay>oc get svc
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
v2        ClusterIP   172.30.249.95   <none>        8082/TCP   6s
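If the service does not seem to pick up the pod, one way to check which pods back it is to list its endpoints (endpoints is a standard Kubernetes resource that oc can query):

oc get endpoints v2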

Create a route to expose the service to the outside world:

C:\Users\vijay>oc expose svc v2 --name=bluegreen
route.route.openshift.io/bluegreen exposed
C:\Users\vijay>oc get route
NAME        HOST/PORT                                                  PATH      SERVICES  PORT      TERMINATION  WILDCARD
bluegreen  bluegreen-bluegreen.e4ff.pro-eu-west-1.openshiftapps.com            v2        8082                    None

This creates a route named 'bluegreen' and exposes the service.
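To double-check which service and weight the route currently points to, describing the route is a quick option (output details vary by OpenShift version, so only the command is shown here):

oc describe route bluegreen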


The deployed 'quote-demo' application exposes its version at /version. The URL http://bluegreen-bluegreen.e4ff.pro-eu-west-1.openshiftapps.com/version returns the current version of the app.

C:\Users\vijay>curl http://bluegreen-bluegreen.e4ff.pro-eu-west-1.openshiftapps.com/version
App Version : 0.2

 

Next, we will deploy the new version of the quote-demo app, v0.3, alongside the existing v0.2 version and re-point the route to it.

 

Create a new app for v0.3 and check the status:

C:\Users\vijay>oc new-app vijayraghavan/quote-demo:v0.3 --name=v3
C:\Users\vijay>oc status
In project bluegreen on server https://console.pro-eu-west-1.openshift.com:443
http://bluegreen-bluegreen.e4ff.pro-eu-west-1.openshiftapps.com to pod port 8082 (svc/v2)
  dc/v2 deploys istag/v2:v0.2
    deployment #1 deployed 19 minutes ago - 1 pod

dc/v3 deploys istag/v3:v0.3
  deployment #1 deployed 47 seconds ago - 1 pod

Create a new service for the new version:

C:\Users\vijay>oc expose dc/v3 --port=8082
service/v3 exposed
C:\Users\vijay>oc get svc
NAME      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
v2        ClusterIP   172.30.249.95    <none>        8082/TCP   22m
v3        ClusterIP   172.30.242.180   <none>        8082/TCP   32s

We now have two versions of the app, v2 and v3, deployed, two services which expose them, and one route.
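To see the whole picture in one command, the deployment configs, services, and route can be listed together using oc's comma-separated resource syntax:

oc get dc,svc,route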


Edit the route to point to the v3 service:

C:\Users\vijay>oc get route
NAME        HOST/PORT                                                  PATH      SERVICES  PORT      TERMINATION  WILDCARD
bluegreen  bluegreen-bluegreen.e4ff.pro-eu-west-1.openshiftapps.com            v2        8082                    None
C:\Users\vijay>oc edit route bluegreen

In the yaml file, under spec, change the service name from v2 to v3:

spec:
  host: bluegreen-bluegreen.e4ff.pro-eu-west-1.openshiftapps.com
  port:
    targetPort: 8082
  to:
    kind: Service
    name: v3
    weight: 100
  wildcardPolicy: None

On saving the file, the route is updated:

route.route.openshift.io/bluegreen edited

The same change can also be made directly on the command line:

oc patch route/bluegreen -p '{"spec":{"to":{"name":"v3"}}}'
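Note that the single-quoted JSON above assumes a bash-style shell. From the Windows command prompt used elsewhere in this walkthrough, the inner quotes would need to be escaped, roughly like this:

oc patch route/bluegreen -p "{\"spec\":{\"to\":{\"name\":\"v3\"}}}"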

The change can even be made from the web console.


The route is now switched to the newer version of the app, and the endpoints can be verified for the updated response:

C:\Users\vijay>curl http://bluegreen-bluegreen.e4ff.pro-eu-west-1.openshiftapps.com/version
App Version : 0.3

If the app needs to be reverted to v0.2, we only need to patch the route again. We could host multiple versions of the app, and the route can be modified at runtime to serve the desired version with no restarts and no downtime.
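For example, switching back to v0.2 is the same patch with the original service name (bash-style quoting as above):

oc patch route/bluegreen -p '{"spec":{"to":{"name":"v2"}}}'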

Canary Deployments

In the canary deployment approach, newer versions of the app are rolled out to users in a phased manner. New features are initially released to a small group of users to minimize risk, detect problems, and weed out regressions, and the percentage of users served the newer version is gradually increased over time. This approach is also useful for A/B testing scenarios, where multiple versions of the application are served simultaneously and user response can be assessed.

The setup

On OpenShift, canary releases can be set up by configuring the route. In the previous example, we set up two versions of the application with the route pointing to one version with a weight of 100%. The route can also be configured to point to two services (here, v2 and v3), with a configurable percentage of traffic routed to each version.

We will configure a canary release for v3 by setting up the route to serve v2 and v3 at an 80/20 ratio. As before, this is done by editing the route configuration, either in its YAML or from the console.

In the console, edit the route, select the option to split traffic across multiple services, and assign 20% of the traffic to v3 and 80% to v2. Saving this change updates the route's YAML, which can also be edited manually to achieve the same result:

C:\Users\vijay>oc edit route bluegreen
route.route.openshift.io/bluegreen edited
------------
...
spec:
  alternateBackends:
  - kind: Service
    name: v2
    weight: 80
  host: bluegreen-bluegreen.e4ff.pro-eu-west-1.openshiftapps.com
  port:
    targetPort: 8082
  to:
    kind: Service
    name: v3
    weight: 20
  wildcardPolicy: None
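The traffic split can also be managed entirely from the command line. The sketch below assumes the oc set route-backends subcommand is available in your oc client version; the first command sets the v2/v3 weights directly, while the --adjust form is one way to grow the canary share gradually, as described earlier:

oc set route-backends bluegreen v2=80 v3=20
oc set route-backends bluegreen --adjust v3=+10%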

Testing the canary release

Once the changes are in place, we would expect the app to respond with v0.2 and v0.3 in roughly an 80/20 ratio. We can use a simple curl command running in a loop to test this:

file:versioncheck.bat
--------
@ECHO OFF
REM Call the /version endpoint ten times; each response is printed on its own line
for /l %%N in (1 1 10) do (
  curl "http://bluegreen-bluegreen.e4ff.pro-eu-west-1.openshiftapps.com/version"
  echo.
)

 

C:\Users\vijay\dev\github>versioncheck.bat
App Version : 0.2
App Version : 0.2
App Version : 0.2
App Version : 0.2
App Version : 0.2
App Version : 0.3
App Version : 0.2
App Version : 0.2
App Version : 0.3
App Version : 0.2

We can observe that the OpenShift router distributes requests across the two services in roughly the configured ratio.
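For completeness, an equivalent check from a Linux or macOS shell could be a simple loop (a sketch, assuming curl is available):

for i in $(seq 1 10); do curl -s http://bluegreen-bluegreen.e4ff.pro-eu-west-1.openshiftapps.com/version; echo; done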

In summary, deployment strategies that would have been difficult to set up in older on-premises configurations have been made far easier by container technologies. Historically, a blue-green deployment would require two completely identical infrastructure stacks, with the load balancer pointed at the version to be delivered.

Similarly, a canary deployment would require the load balancer to be configured so that a certain percentage of network traffic is routed to a specific version. However, such setups work only at the application entry point and are not easily applied at the level of an individual service.

Container platforms have made this easier. Platforms such as OpenShift provide out-of-the-box features that enable these deployment strategies both at the application level and for a specific microservice. Deployment strategies can be fine-tuned so that microservice deployments are seamless, with individual services having their own strategies. It is also possible to run A/B tests for multiple features in parallel, and deployed versions can be reverted or reconfigured with no downtime or service interruption. Using such container technologies, we can ship features faster, gain better control over deployments and continuous delivery, and manage the risk of introducing new features in a controlled manner.