Maximizing DevOps ROI with Modern Application Practices | HCL Blogs

Maximizing DevOps ROI with Modern Application Practices

August 03, 2020


An ever-increasing number of organizations are embracing enterprise-wide transformation initiatives to meet business imperatives. Scaled Agile delivery models, DevOps practices, microservices, and cloud-first and API-first strategies have been the most popular implementation tactics.

Organizations that undertake transformation projects typically do have a sound DevOps optimization strategy carved out for the modernization path. To secure ROI from such projects, reduce time-to-market, and drive agility at scale, enterprises focus mainly on the tools and processes that implement DevOps and a microservices-/API-driven architecture. DevOps automation strategies concentrate on automating code check-in and deployment through CI/CD tools, and emphasize the use of container registries and container orchestration tools (e.g., Quay, Docker Hub, AWS ECR).

Automated monitoring of operations, execution of automated test scripts in CI/CD pipelines, and automated creation and teardown of infrastructure using tools such as Chef, Puppet, ARM templates, and AWS CloudFormation are some of the well-known DevOps practices.

Modern Application Architecture and Practices for DevOps

One reason the ROI of modernization projects falls short is that the applications architected and designed to drive modernization fail to support the nonfunctional aspects intended to facilitate DevOps.

In addition to having a well-thought-out DevOps strategy that includes industry-standard tools and processes, the modern EA practice needs to factor in concerns such as the following while implementing DevOps as part of a scaled agile framework:

  • Poorly optimized containers that take hours to build in CI/CD pipelines, resulting in lost developer productivity
  • Lack of standard practices in application coding for tracing and monitoring, which causes hours being wasted on support calls to resolve latency issues
  • Lack of well-thought-out strategy for handling database-related changes in release pipelines
  • Architectural deficiencies that prevent blue-green/canary rollout deployment strategies

Here are some of the standard aspects that need to be baked into application architecture and EA governance for maximizing the DevOps ROI:

A well-governed strategy for distributed tracing

Distributed traces that are generated without any standards can end up being just any other log file or time-series data record. To maximize the value obtained from the traces, the tracing solution must provide insights to put them in the right context for the issues being investigated.

The trace records should enable DevOps to focus on recovering from and resolving service-related issues while making them less dependent on the core developers in the team. This will free up core developers and allow them to focus on design and functional capabilities development. Tracing needs to be designed to answer specific DevOps questions during support, such as:

  • When did the customer start experiencing an error on the mobile app?
  • Did the recently deployed change cause the issue?
  • What type of customers are experiencing delays in accessing the account information on the website?

Tracing tags help in the conditional tracking of header attributes. The tracing strategy must include practices for capturing important system behaviors and context, such as cache misses, code versions, user persona type, originating channel type, data center, and hardware type. Tracing practices should also address storage costs, overheads, and sampling rates.
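A tagging standard like this can be enforced at span-creation time. The helper and tag names below are assumptions for illustration; with a real tracer such as OpenTelemetry, the same keys would be passed to a span's `set_attribute()`:

```python
# Sketch of enforcing a trace-tagging standard. `start_span` and the
# tag names are hypothetical, chosen to mirror the context listed above.
STANDARD_TAGS = ("code.version", "user.persona", "channel.type",
                 "datacenter", "hardware.type")

def start_span(operation, **tags):
    """Create a span record, rejecting spans that miss standard tags."""
    missing = [t for t in STANDARD_TAGS if t not in tags]
    if missing:
        raise ValueError(f"span '{operation}' is missing standard tags: {missing}")
    return {"operation": operation, "tags": dict(tags)}

span = start_span(
    "get-account-info",
    **{
        "code.version": "2.4.1",
        "user.persona": "retail",
        "channel.type": "mobile",
        "datacenter": "us-east-1",
        "hardware.type": "m5.large",
    },
)
# Conditional tag: record a cache miss only when one actually happens.
span["tags"]["cache.miss"] = True
```

With tags like these in every trace, the support questions above ("what type of customers are experiencing delays on the website?") become simple queries over `channel.type` and `user.persona` rather than a call to the core developers.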

Established practices for handling immutable versioned binaries

Container binary images are built by standard container build scripts as part of CI/CD pipelines and stored in registries before being used in the release and deployment process. When CI/CD pipelines build and deploy multiple containers as part of large-scale deployments, the containers need to be optimized.

Large container images with multiple applications can cause start-up and shutdown issues and are prone to failures, forfeiting the gains made through the container model of deployment. The governance strategy should focus on:

  • Having smaller container sizes
  • Applying container build practices like multistage build and utilizing container build-cache
  • Utilizing industry best practices geared toward container size optimization
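As a sketch of these practices, a multistage build keeps the build toolchain out of the shipped image and lets the builder cache the dependency layer between runs; the base images and paths below are assumptions, not a prescription:

```dockerfile
# Stage 1: build with the full toolchain. The dependency layer is
# cached and only rebuilt when go.mod/go.sum change, which keeps
# repeated CI/CD builds fast.
FROM golang:1.20 AS builder
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app ./cmd/app

# Stage 2: ship only the static binary on a minimal base image,
# keeping the final container small and quick to start.
FROM gcr.io/distroless/static
COPY --from=builder /bin/app /app
ENTRYPOINT ["/app"]
```

The final image contains the binary alone, not compilers or package caches, which addresses both the size and the start-up concerns above.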

Support for multiple deployment strategies

Having an architecture strategy around DevOps practices to enable automated release management that minimizes downtime and release risk is one of the pillars of the modern application practice. Hence, it is important to consider best practices around deployment and understand various patterns.

The governance strategy must include guidelines and infrastructure support to implement:

  • Support for canary releases and shadow/dark canary releases to implement low-risk deployment strategies for unknown performance behaviors and new feature validations
  • Support for blue-green deployments to minimize downtime during the rollout
  • Support for feature toggling to centrally manage the release process
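Feature toggling, the last item above, can be sketched in a few lines. The flag store and flag names here are assumptions; in practice, flags are fetched from a central service so a release can be dialed up or rolled back without redeploying code:

```python
import zlib

# Minimal feature-toggle sketch with a deterministic percentage rollout.
# FLAGS stands in for a central flag service; "new-checkout" is a
# hypothetical flag name.
FLAGS = {"new-checkout": {"enabled": True, "rollout_percent": 10}}

def is_enabled(flag, user_id):
    cfg = FLAGS.get(flag)
    if cfg is None or not cfg["enabled"]:
        return False
    # Hashing the user id keeps each user in the same cohort across
    # requests, so a canary population stays stable.
    return zlib.crc32(user_id.encode()) % 100 < cfg["rollout_percent"]

def checkout(user_id):
    if is_enabled("new-checkout", user_id):
        return "new checkout flow"
    return "legacy checkout flow"
```

Flipping `enabled` off (or lowering `rollout_percent`) acts as an instant rollback of the feature, independent of the deployment itself.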

Handling database changes

Database-related changes are hard to manage in canary releases and blue-green deployments, and changes that are not backward compatible break older versions of the services.

Changes made to a database are difficult to roll back. Deployment strategies focused on low-risk feature deployment need to address database changes with a carefully crafted strategy to maximize the DevOps investments. Sometimes canary services may even need a full-fledged clone of the production database.

Backward-compatible schema changes

Treat the schema as an interface to the database by making schema changes that remain compatible with older versions of the services. When new columns are added, define them as optional and populate them with default values when no value is provided.

When the application reads entries from the table, the new column's value can be ignored before the data is presented to higher-level services. Columns should be dropped only in the final release, after everything else has been validated and the new service is fully in effect.
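The additive ("expand") step can be demonstrated with SQLite from the Python standard library; the table and column names are illustrative:

```python
import sqlite3

# Backward-compatible schema change: the new column is added as optional
# with a default, so the old service version, which does not know about
# it, can keep inserting rows unchanged.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")

# Old service writes a row before the migration.
conn.execute("INSERT INTO accounts (name) VALUES ('alice')")

# Expand step: purely additive change with a default value.
conn.execute("ALTER TABLE accounts ADD COLUMN tier TEXT DEFAULT 'standard'")

# Old service keeps working: it still omits the new column.
conn.execute("INSERT INTO accounts (name) VALUES ('bob')")

# New service reads the column; rows without a value get the default.
rows = conn.execute("SELECT name, tier FROM accounts ORDER BY id").fetchall()
```

Because both writes succeed and both rows read back with a usable `tier` value, old and new service versions can run side by side during a canary or blue-green rollout.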

Separate data sources

The best option is to separate the data sources and use a different schema, if not a different database, for each deployment, though this incurs additional storage costs.

Tracking database changes as code in version control

Tracking DDL and DML changes as codified scripts and versioning them in version control is ideal, so that changes can easily be reverted when a rollback is triggered.

Automated tools for tracking schema changes and database migration

Since an RDBMS has a fixed schema, canary releases require schema changes and data migration to be handled in an automated, less error-prone fashion. This can be achieved using automated schema tracking and migration tools such as Flyway, Liquibase, or DbMaintain. Migration scripts should be included in the CI/CD pipeline or canary release scripts so that changes can be applied and rolled back.
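The mechanism these tools automate can be sketched in a few lines: versioned scripts are applied in order, and a history table records what has run so a pipeline can replay or revert them. This is an illustration under assumed names and SQL, not the real Flyway or Liquibase API:

```python
import sqlite3

# Each migration pairs an "up" script with a rollback script.
MIGRATIONS = [
    ("V1__create_accounts",
     "CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)",
     "DROP TABLE accounts"),
    ("V2__index_accounts_name",
     "CREATE INDEX idx_accounts_name ON accounts (name)",
     "DROP INDEX idx_accounts_name"),
]

def migrate(conn):
    """Apply any migrations not yet recorded in the history table."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_history (version TEXT PRIMARY KEY)")
    applied = {v for (v,) in conn.execute("SELECT version FROM schema_history")}
    for version, up, _down in MIGRATIONS:
        if version not in applied:
            conn.execute(up)
            conn.execute("INSERT INTO schema_history (version) VALUES (?)", (version,))

def rollback_last(conn):
    """Revert the most recently applied migration, if any."""
    row = conn.execute(
        "SELECT version FROM schema_history ORDER BY version DESC LIMIT 1"
    ).fetchone()
    if row is None:
        return
    down = next(d for v, _u, d in MIGRATIONS if v == row[0])
    conn.execute(down)
    conn.execute("DELETE FROM schema_history WHERE version = ?", (row[0],))

conn = sqlite3.connect(":memory:")
migrate(conn)        # applies V1 and V2
rollback_last(conn)  # reverts V2, dropping the index again
```

In a release pipeline, `migrate` would run before the canary rollout and `rollback_last` would be wired into the automated rollback path, which is exactly the bookkeeping Flyway's schema history table provides out of the box.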

Modern application practices are the key to optimizing DevOps and, in turn, maximizing its ROI. The considerations above will help you adopt modern application practices effectively and realize their benefits.