A group of students was taking a computer science course at a community college. After a few weeks of classes, the professor decided to have some fun one day as a small learning activity. He divided the class into two halves: all the men in one group and all the women in another. The professor told the class that they would spend the next 10 minutes on a quick project to determine what ‘gender’ computers should be categorized as.
All the men unanimously voted that the computers should be categorized as the ‘feminine’ gender. Hearing this, the professor asked the group to share their points as to why they thought so. The group presented their points as follows:
- Only the creator understands the internal logic of the system.
- When computers communicate with each other, they speak in a code language that only they or their experts can understand.
- Every mistake you make is stored on the hard drive for later retrieval.
- After you buy one, you must spend money on its accessories to keep it running smoothly.
The women’s group had the last word on the subject. They unanimously voted that computers must be categorized as the ‘masculine’ gender. As they gave their report, they supported their arguments with the following points:
- They have a lot of data, but still, they cannot think for themselves.
- They are supposed to help you solve the problems, but half of the time they themselves are the problems.
- As soon as you commit to one, you realize that if you had waited a little longer you could have got a better model.
While the above might have brought a few giggles, there are takeaways we can connect to our customers’ problems: the effort, time, and money required to keep the lights on. Despite having expensive, modern technology in their IT ecosystems, businesses are still unable to improve their time-to-market. Every time a customer does a technology refresh, a newer technology appears in the market, forcing the customer to question whether the previous investments will deliver the desired return.
Most I&O leaders would agree that refreshing or transforming hardware infrastructure is easier to accomplish than doing the same for applications. With the datacenters of today becoming centers of data, and businesses getting closer to their customers, many amazing things are expected to happen in the consumer space. This requires modern applications and faster time-to-market for application release cycles. While server virtualization matured as a step in that direction by providing improved infrastructure utilization and better resiliency, application modernization and development saw a paradigm shift of their own.
DevOps has evolved at a significant pace since it was first conceptualized in the early 2000s. In a recent finding from Puppet Labs in their State of DevOps Report, companies that adopt DevOps experience “60x fewer failures and recover from failure 168x faster than their lower-performing peers. They also deploy 30x more frequently with 200x shorter lead times.”
As application architecture has evolved from monolithic to microservices architecture (services built from multiple self-contained components), these components are in many cases perfect candidates for containerization. Developers are considered the primary adopters of container technology. For developers, the application code is what matters most; they don’t want to get into the nuts and bolts of how the application should be hosted. Put another way, developers like containers because they are convenient for building microservices architectures, and one reason for the popularity of microservices is that containers make them easy to implement.
Compared to virtual machines, containers are very lightweight and may be only tens of megabytes in size, whereas a virtual machine with its own dedicated operating system may be several gigabytes. Containers share an operating system, and on top of it one can run many containers, depending on the compute and memory resources of the underlying host.
Based on the application’s requirements, each container packages the application’s runtime components, such as libraries and Dynamic Link Libraries (DLLs), and keeps them isolated from other containers running on the same host. Each container shares the parent operating system but has only its own runtime and application. Since the underlying operating system is shared by all the containers running on that host, the application and container runtime need to be compatible with the host operating system.
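To make this concrete, here is a minimal Dockerfile sketch showing how a container image packages only the application and its runtime while relying on the host’s shared operating system underneath. The base image, file names, and commands are illustrative assumptions, not taken from the text:

```dockerfile
# Start from a slim base image that carries only the language runtime,
# not a full guest operating system -- the kernel is shared with the host.
FROM python:3.12-slim

# Copy just the application code into the image.
WORKDIR /app
COPY app.py .

# Declare how the container runs the application.
CMD ["python", "app.py"]
```

Because the image holds only the application and its runtime dependencies, it can weigh in at tens of megabytes, in contrast to the gigabytes a full virtual machine image with its own dedicated operating system would require.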
However, container hosts can themselves be virtualized; a container host does not need to run on a bare-metal server. Because the host can be a virtual machine, each virtual Operating System Environment (OSE) can run the operating system the application requires, even if it differs from the operating system of the underlying physical host (which uses some form of hypervisor to run multiple virtual OSEs). This means you can run any number of different operating systems and container solutions on a single physical server as well.
There also appears to be some ambiguity when people discuss PaaS and containers together. Some think that containers are a form of PaaS. In simpler terms, when you hear of a PaaS solution, think of it as a developer-oriented application delivery solution that bundles programming tools, deployment tools, and an application-hosting platform into a single package. While most container solutions do not include development tools, they do provide a set of tools to deploy and manage containers, including a container runtime, a container image registry, an orchestrator, and the infrastructure to host the platform.
While container technology adoption is still at a nascent stage and the ecosystem is maturing, the majority of customers prefer to adopt an integrated container orchestration platform to help them manage and run application containers at scale. A few early adopters have built out their own container stacks (i.e., selected and integrated many open-source components) because they had the skills and resources to build and manage them. However, most organizations do not have the breadth or depth of expertise to do all this on their own.
If we go by mainstream adoption of container technology, it is best suited for microservices-based applications, and much of the activity is happening around Linux-powered workloads. Though enterprises have started showing interest in containerizing Windows-based applications, the Windows container ecosystem is not yet mature.
In its current state, the container ecosystem is quite complicated because of the sheer number of tools that Independent Software Vendors (ISVs) have built and integrated to address their own unique problems.
The saying, “One size does not fit all,” is true for the container solutions available in the market as well. Some are good at API integrations, whereas others excel at scheduling, orchestration, or cluster management, among other things. One should weigh these capabilities prior to deciding on a container-management platform.
One should measure a container-management platform on its scale, resiliency, and extensibility before bringing it into the mainstream. The industry has witnessed some standardization in the container orchestration ecosystem. Docker is by far the most widely used container engine, largely due to its image repository and community. For automating the deployment, scaling, and management of containerized applications, Kubernetes has emerged as the de facto standard. It is supported by hyperscale cloud providers such as Amazon’s AWS, Google Compute Engine (GCE), and Microsoft’s Azure. Compared to other container orchestration and scheduling solutions, Kubernetes is the most portable, extensible open-source platform (originally designed by Google, now maintained by the Cloud Native Computing Foundation). It has one of the largest and fastest-growing ecosystems, and it helps customers avoid vendor lock-in. Connecting back to the customer’s problem of technology-refresh dilemmas, containers are emerging as a solution to the problem of getting applications to run reliably when moved from one computing environment to another.
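As a sketch of what Kubernetes orchestration looks like in practice, the following minimal Deployment manifest asks Kubernetes to keep three replicas of a containerized application running. The names, labels, image reference, and port are illustrative assumptions, not from the text:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                      # illustrative name
spec:
  replicas: 3                        # Kubernetes keeps three replicas running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: example/web-app:1.0   # illustrative container image
        ports:
        - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`, the same manifest works on any conformant Kubernetes cluster, whether it runs on AWS, GCE, Azure, or on-premises hardware, which is one way Kubernetes helps customers avoid vendor lock-in and move applications reliably between computing environments.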
Containers give developers the ability to 'do' microservices, and microservices architecture works well with containers. Also, containers can help enterprises modernize legacy applications and create scalable and agile cloud-native applications.