Everyone is migrating to some form of cloud. Every physical and virtual machine (VM) running applications is being containerized. In fact, those who moved to containers on the public cloud earlier are now planning serverless apps. At least, that is how it feels when you attend a technology conference, talk to sales teams, or watch recorded technology videos.
As the Lead of the NxtGen Data Center – Global Practice, I would say this holds true only for a certain set of customers. The reality is different.
Moreover, if you haven't taken that step yet, you have not made a mistake.
We hear from companies struggling to move from Dev to Prod, and about their challenges in demonstrating production cases to business groups. This usually stems from not having a clearly defined strategy, and from not being aware of the fast-growing, fast-changing container tooling, which must be aligned with the rest of the business environment.
While adopting containers, we also need to understand how they are being consumed. The most crucial aspect when leveraging containers is how the products and tools around them support the ecosystem. The best route to identify this is to start from the OCI (Open Container Initiative) specifications.
Container, Container Engine, and Container Image
Containers are regular processes running on an operating system (OS), isolated by Linux kernel features such as namespaces and cgroups. It is important to remember that Docker is not a container, but a container engine that makes it easy to run containers.
Docker, the first widely adopted container engine, made container consumption simpler, which led people to equate it with the container ecosystem itself.
A container engine exposes APIs that make it easy to start and stop containers locally. Containers consume the underlying OS and libraries via container images. Technology professionals who say "Docker containers" usually mean the container engine: a privileged set of APIs running on an OS that makes it easy to run those containers.
A container image holds the application files and the attributes required to run the container.
Container technology packages the application's runtime, including its libraries, binaries, and configuration files, while abstracting away the platform and, beneath it, the infrastructure, allowing the application to run anywhere.
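As an illustration, a container image is typically described by a Dockerfile. The sketch below is hypothetical (base image, file paths, and port are assumptions, not taken from the article), but it shows how the application files, libraries, and run attributes travel together in the image:

```dockerfile
# Base image supplies the OS libraries and language runtime
FROM python:3.11-slim

# Application files and configuration are baked into the image
COPY app/ /opt/app/
COPY config.yaml /etc/app/config.yaml

# Attributes required to run the container: working dir, port, entrypoint
WORKDIR /opt/app
EXPOSE 8080
CMD ["python", "main.py"]
```

Because everything the process needs is inside the image, the same artifact runs unchanged on a laptop, an in-house server, or a public cloud.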
With horizontal scaling capabilities, we can spread containers across dozens of nodes. An orchestrator manages containers across multiple hosts, acting as a shepherd for large numbers of containers. It decides which tasks should run on which server, based on availability, hardware, and application requirements. A DevOps engineer typically operates at the orchestration layer, leaving individual containers for the orchestrator to manage.
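The scheduling decision described above can be sketched in a few lines. This is a deliberately minimal, hypothetical model (node names, resource units, and the "most free CPU" policy are assumptions for illustration); real orchestrators such as Kubernetes weigh many more factors:

```python
# Toy orchestrator scheduler: place a task on a node that satisfies
# its CPU and memory requests, preferring the node with the most
# free CPU (a simple "least loaded" policy).

def schedule(task, nodes):
    """task: dict with 'cpu' and 'mem' requests.
    nodes: dict of node name -> {'cpu': free_cpu, 'mem': free_mem_mb}."""
    candidates = [
        (name, free) for name, free in nodes.items()
        if free["cpu"] >= task["cpu"] and free["mem"] >= task["mem"]
    ]
    if not candidates:
        return None  # no node can fit the task right now
    return max(candidates, key=lambda c: c[1]["cpu"])[0]

nodes = {
    "node-a": {"cpu": 2.0, "mem": 4096},
    "node-b": {"cpu": 6.0, "mem": 2048},
}
print(schedule({"cpu": 1.0, "mem": 1024}, nodes))  # node-b: more free CPU
```

When a node fails, the orchestrator simply re-runs this placement decision for the displaced containers, which is why individual servers stop mattering to the application.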
HCL has identified, tested, and defined products and solutions within our offerings, including Mesosphere, OpenShift, and Pivotal Cloud Foundry/PKS.
Choosing container infrastructure networking should always start from CNI- and CNM-compatible product specs. As containers and their ecosystem become more software defined, other layers, such as the network, also need to be aligned. A software-defined network (SDN) brings service discovery and a network runtime into the overall container infrastructure so containers can communicate with each other and with the rest of the world (ROW). Software-defined networks move today's plug-and-play network scenarios to API-based calls that attach, detach, and swap networks between old and new containers.
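For reference, the CNI model drives those API-based attach/detach calls from a small JSON network configuration. The fragment below uses the standard CNI `bridge` and `host-local` IPAM plugins; the network name, bridge name, and subnet are example values:

```json
{
  "cniVersion": "0.4.0",
  "name": "container-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

The container runtime hands this configuration to the plugin on every container start and stop, which is exactly the attach/detach/swap behavior described above.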
Local storage works well only on a developer's laptop and on in-house servers.
Software-defined storage should come with storage drivers and block-storage compatibility. Container images carry everything related to the software, including the data. Data isolation becomes important when the data is critical; this isolation model has traditionally been followed since the RDBMS days.
Both software-based and hardware-based options exist to handle container storage. While both provide a container storage engine, hardware OEM solutions are recommended for larger environments: visualize the number of containers that would exist there, and the traffic they would generate, and the case for dedicated hardware becomes clear.
Tracing, Logging, and Monitoring Aspects
An application is a group of microservices. Remember: application uptime, not physical server uptime, is what matters to businesses. Hence tracing, logging, and monitoring need to work seamlessly. Applications are microservices, and containers are short-lived (ephemeral) by nature, so in horizontal-scaling scenarios container-centric monitoring matters more than server availability. The focus of monitoring should be on the microservices. When a container cannot run on a particular piece of hardware, the orchestrator brings it up on another. In the container world, physical servers matter less and have almost become invisible.
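To make the "application uptime, not server uptime" point concrete, here is a toy sketch (service names, host names, and health flags are invented): a microservice counts as up as long as any replica is healthy, regardless of which servers have failed:

```python
# Container-centric health view: a microservice is "up" if at least
# one of its ephemeral container replicas is healthy, no matter
# which physical servers those replicas landed on.

replicas = [
    {"service": "cart", "host": "srv-1", "healthy": False},      # srv-1 died
    {"service": "cart", "host": "srv-2", "healthy": True},       # rescheduled
    {"service": "payments", "host": "srv-1", "healthy": False},  # no copy left
]

def service_status(replicas):
    status = {}
    for r in replicas:
        status[r["service"]] = status.get(r["service"], False) or r["healthy"]
    return status

print(service_status(replicas))  # {'cart': True, 'payments': False}
```

Note that "cart" is healthy even though srv-1 is down, while a server-centric view would have raised an alert for both services.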
The products we onboard for monitoring should help us understand how containers interact with the rest of the world. Monitoring responsibility also goes beyond Ops professionals. Container logs are voluminous, so tools with visualization and AI capabilities should be used to reduce service-level tickets.
We are going to have numerous microservices defining each application, and service-to-service communication will turn out to be complex.
A service mesh is an agent sitting in front of the available services within a cluster to normalize service-to-service communication. In the microservices world, applications running on containers are mostly web-based, using HTTP or HTTPS as their major protocol. Because the service mesh is aware of HTTP(S)-based applications, it can route requests directly to their destinations based on its policy definitions. This reduces TCP/IP-layer traffic at the data plane.
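A minimal sketch of that policy-based routing, assuming invented rule fields, paths, and service names (real meshes such as Istio or Linkerd express this in declarative configuration, not application code):

```python
# Toy service-mesh data plane: a sidecar-like proxy matches an
# incoming HTTP request path against ordered policy rules and
# returns the destination service to forward to.

POLICIES = [
    {"path_prefix": "/checkout", "destination": "checkout-v2"},
    {"path_prefix": "/",         "destination": "frontend-v1"},  # catch-all
]

def route(path, policies=POLICIES):
    for rule in policies:           # first matching rule wins
        if path.startswith(rule["path_prefix"]):
            return rule["destination"]
    return None                     # no rule matched

print(route("/checkout/cart"))  # checkout-v2
print(route("/home"))           # frontend-v1
```

Because the mesh understands the application protocol, decisions like the one above happen per request, without the application itself knowing where its peers run.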
Security is an integral part of every layer of the container stack. The open-source nature of containers, applications with several services opening multiple ports, and faster deployment cycles bring complex levels of security access control. Hence, containers are going to bring unique security challenges as we adopt them in production. HCL has partnered with several vendors, aligned to the right set of environments, to address these security concerns.
Before the 2010s, the world used to buy a complete stack from a single vendor. With the container stack, single-vendor solutions cannot answer everything. From the different aspects of container technology described above, it is clear that the container platform is not Docker and/or Kubernetes alone; other components play a relevant role across the compute, storage, network, security, monitoring, and DevOps topology of the container platform.
Welcome to the new application age. HCL can empower you technologically and make applications run realistically together.