Co-author: Amit Sharma
Take a few moments to imagine an ideal IT environment: one where the enterprise runs on cloud-native apps, applications live in containers, and all processes and operations are streamlined. Now, let's return to reality. Enterprises are still running on legacy systems, applications exist only partly in containers, and major hardware and software resources remain in traditional virtual machines (VMs). Enterprises continue to spend a lot of money, time, and effort on migration and transformation for each application. So the question is, how do we build a bridge from this reality to an ideal environment of containerized applications?
What makes containerizing applications challenging?
First, the good news. Enterprises are increasingly recognizing the potential benefits of containerized applications. Gartner expects that by 2022, over 75% of organizations worldwide will run their containerized applications in production. This is a significant jump from the current estimate of under 30%.
While the data indicates a positive trend, a key bottleneck is the speed of refactoring and/or replacing applications. The presence of dependencies and complexities slows down migration. This bottleneck is exacerbated by the cost of re-platforming or re-architecting an application and by the shortage of cloud and container skill sets.
These factors pile on to an enterprise’s ongoing application backlog, budget constraints, and technical debt. Gartner, in the same report, considers these factors to be responsible for the low adoption (less than 5%) of containerized enterprise applications. But the report also goes on to say that by 2024, the situation is expected to improve, and organizations will run up to 15% of enterprise applications in a container environment.
But making this leap isn’t that simple, because organizations would like this process to be completed in a single seamless step. Unfortunately, application migration from VMs to containers can potentially take years of dedicated developer time since containerized applications need to work concurrently with or require concurrent support from application components in the VMs. So the question is, how can they accelerate this transformation to fully realize the potential?
What are the alternatives to fully containerizing applications?
Owing to the time-intensive migration process, some organizations are exploring alternatives such as the lift-and-shift approach. With this approach, you can migrate closely coupled legacy applications to a container platform without redesigning the app. It avoids rewriting application code and allows developers to keep programming with patterns that are consistent with the existing structure.
The lift-and-shift approach works best with smaller applications. But, real legacy projects contain hundreds of modules, kits, and packages for every application. So, a logical step would be to find a way to unify both the existing applications in the VMs and new applications in the containers into a single platform based on containers. This is where Red Hat OpenShift, the industry’s leading enterprise Kubernetes platform, comes in.
For this unification, we must carefully investigate the path of migration to the cloud. It's not as simple as lift and transfer, because cloud adoption means moving into a flexible, distributed, and scalable environment, modernizing apps, and switching to cloud-native architectures. If you think that this indicates a shift toward building infrastructure for microservices, you are correct.
We are, after all, breaking down a monolith to a degree where applications gain functional scalability, mobility, and elasticity in the cloud. In this situation, container-based deployments and container orchestration, such as Kubernetes, are important. The process involves two orchestrations: that of VMs and that of containers. But handling two orchestrations seems like a lot of time and cost-intensive work, right? So, how about an alternative?
A more pragmatic, commercially viable, and less disruptive approach is to first move workloads as VMs onto a Kubernetes platform such as OpenShift. You can then disentangle them into microservices, which run as containers on the same platform and interface with the existing virtualized workloads. This is the basic concept of container-native virtualization (CNV).
Figure: 6 Rs approach coupled with Container-native Virtualization
What does container-native virtualization do?
CNV, now known as OpenShift Virtualization, is based on the KubeVirt project, an open source framework that deploys and manages VMs using Kubernetes constructs such as persistent volume claims (PVCs) and pods. The focus is on helping enterprises move their applications from a VM-based infrastructure to a container-based Kubernetes platform.
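To make the idea of "VMs as Kubernetes constructs" concrete, here is a minimal sketch of a KubeVirt VirtualMachine custom resource, built as a Python dict for illustration. The overall shape follows the KubeVirt API (`kubevirt.io/v1`), but the VM name, labels, and PVC name are hypothetical, and a real manifest would carry more fields.

```python
import json

# A minimal KubeVirt VirtualMachine custom resource, sketched as a
# Python dict. The VM name, label, and claim name are hypothetical;
# the apiVersion/kind and overall shape follow the KubeVirt API.
vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm"},  # hypothetical name
    "spec": {
        "running": False,  # create the VM stopped; start it later
        "template": {
            "metadata": {"labels": {"kubevirt.io/domain": "demo-vm"}},
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [
                            {"name": "rootdisk", "disk": {"bus": "virtio"}}
                        ]
                    },
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        # Backed by a PersistentVolumeClaim, one of the
                        # Kubernetes constructs mentioned above.
                        "persistentVolumeClaim": {
                            "claimName": "demo-vm-rootdisk"
                        },
                    }
                ],
            },
        },
    },
}

print(json.dumps(vm_manifest, indent=2))
```

Applied to a cluster (for example with `oc apply`), a resource of this shape tells KubeVirt to run the VM inside a pod, with its disk served from the named PVC, so the VM is scheduled and managed like any other Kubernetes workload.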
Figure: KubeVirt’s container-native virtualization
OpenShift Virtualization makes it easier to bring conventional virtualized workloads into OpenShift development workflows. This capability accelerates application modernization by:
- Supporting the development of modern, microservices-focused applications in containers that communicate with existing virtualized applications
- Simplifying the decomposition of monolithic, virtualized workloads into containers by running traditional virtualized workloads alongside modern container workloads on the same network
By running VM-based workloads on the same network as container-based applications, OpenShift Virtualization helps teams build containerized applications more quickly. This facilitates the gradual migration of current workloads, as well as the continued use of virtualized infrastructure dependencies by cloud-native, containerized applications.
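One way to picture the "same network" point: because a KubeVirt VM runs inside a pod, a regular Kubernetes Service can select it by label, and containers then reach the VM through ordinary cluster DNS. The sketch below builds such a Service as a Python dict; the service name, selector label, and port are hypothetical.

```python
import json

# Sketch of how a containerized app can reach a VM on the same platform:
# a regular Kubernetes Service selects the VM's launcher pod by label,
# so containers call it via cluster DNS. Names, labels, and the port
# are hypothetical.
service_manifest = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "legacy-db"},  # hypothetical service name
    "spec": {
        # Matches a label set on the VM's pod template (hypothetical label).
        "selector": {"kubevirt.io/domain": "legacy-db-vm"},
        "ports": [{"port": 5432, "targetPort": 5432}],
    },
}

# A container in the same namespace would then connect to the VM-hosted
# database at "legacy-db:5432", exactly as it would to another container.
print(json.dumps(service_manifest, indent=2))
```

This is what lets new microservices consume a VM-hosted dependency unchanged while its migration to containers proceeds in the background.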
Figure: OpenShift Virtualization workflow
As a result, teams find it easier to deploy and manage applications that include both VMs and containers directly from OpenShift. They manage the virtualized and containerized workloads as part of a single product development and lifecycle workflow. This increases the likelihood of moving more elements of the application to containers over time.
How does Red Hat help and work internally?
Work on CNV began in 2016, and it was released as open source the following year. OpenShift Virtualization was formally released in mid-July 2020 with the introduction of OpenShift 4.5. This is visible in the version number: the CNV naming scheme was retained, and the released version was 2.4.
OpenShift Virtualization 2.4 enhanced VM management. With this capability, companies could simultaneously deploy and manage VM and container workloads in OpenShift. How? Through Kubernetes custom resources. New tools are under continuous development and can be introduced through the OpenShift cluster.
Figure: VMs into OpenShift run side by side with containers
The new OpenShift Virtualization allows companies to perform the activities of various server virtualization systems, such as:
- Building and controlling VMs for Linux and Windows
- Connecting to VMs through the web management console and command-line tools
- Importing and cloning existing VMs
- Managing network interface controllers, and
- Managing disks attached to VMs
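The day-to-day operations listed above are typically driven from the CLI. The sketch below only assembles the corresponding commands as strings rather than executing them; `virtctl` is KubeVirt's VM client and `oc` is the OpenShift client, while the VM name `demo-vm` and the manifest filename are hypothetical.

```python
# Sketch of the VM operations listed above, expressed as the CLI
# invocations an operator would typically use. virtctl is KubeVirt's
# client, oc is the OpenShift client; "demo-vm" is a hypothetical VM
# name. The commands are only assembled here, not executed.
vm = "demo-vm"
operations = {
    "create":  f"oc create -f {vm}.yaml",  # build a VM from a manifest
    "start":   f"virtctl start {vm}",      # power the VM on
    "console": f"virtctl console {vm}",    # attach to the serial console
    "stop":    f"virtctl stop {vm}",       # power the VM off
    "list":    "oc get vmis",              # show running VM instances
}

for task, cmd in operations.items():
    print(f"{task:8}-> {cmd}")
```

Because these are ordinary cluster commands, VM administration folds into the same tooling and RBAC model that teams already use for their container workloads.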
By bringing typical VM workloads into the Kubernetes environment, OpenShift Virtualization aims to unify the workflows, and break down the development silos, of conventional and cloud-native apps.
Figure: Red Hat OpenShift console showing the orchestrated VMs
Container-native virtualization is another example of Red Hat's commitment to Kubernetes as the future of application orchestration and as a shared practice in the open hybrid cloud. The demand for effective and secure portability of software across environments and operating systems has pushed the industry to search for stronger virtualization designs.
A major earlier innovation was the hypervisor, the software that provides the foundation for a virtualization platform. It enables effective creation, monitoring, and management of multiple VMs. But while the hypervisor paradigm is now used to deploy countless applications across a wide spectrum of settings, it has some built-in limitations. Container virtualization helps resolve some key problems, such as:
- Limited ability to concurrently run workloads of virtual guest operating systems (OS)
- Inability to freely move workloads between systems
- Decreased application performance owing to the high overhead of calls from the guest OS to the hypervisor
How can HCLTech help in your endeavor through collaboration with Red Hat?
HCLTech has a long-standing collaborative relationship with Red Hat and a skilled team of engineers who have worked dedicatedly on Red Hat technologies, bringing years of cross-domain expertise to the fore. Our Center of Excellence focuses on Red Hat technologies, and specifically on OpenShift. We have led the way for multiple global organizations on their container journeys and Red Hat OpenShift adoptions.
No matter where you are in your container journey, whether you are evaluating a container platform or wondering how to transform your current one, HCLTech is here to help.
Reach out to us to maximize the benefits of adopting Red Hat OpenShift Virtualization.
Conclusion
Container-native virtualization has the potential to accelerate the adoption of container platforms in your organization. Not only will it improve the availability of applications hosted on VMs, it will also ensure that containerized applications can communicate better with the virtualized ones running on the same platform. The agility this brings to the containerization of legacy workloads is remarkable. Red Hat enables organizations to realize the many benefits this evolution can bring to large enterprises struggling with legacy workloads, and HCLTech has the resources and expertise to help in this journey.
Next Steps
To know more, write to us at HCLEcosystem@hcltech.com