
An Overview of Kubernetes
May 09, 2022

What is Kubernetes?

Container-based microservices architectures have profoundly changed the way development and operations teams test and deploy modern software. Containers help companies modernize by making it easier to scale and deploy applications, but they have also introduced new challenges and complexity by creating an entirely new infrastructure ecosystem. Kubernetes, an open-source platform for automating the deployment, scaling, and management of containerized applications, emerged to manage that complexity.

The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s, as an abbreviation, results from counting the eight letters between the "K" and "s".

Kubernetes components

A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every Kubernetes cluster has at least one worker node.

The worker node(s) hosts the pods, which are the components of the application workload. The control plane manages the worker nodes and the pods in the cluster. In production environments, the control plane usually runs across multiple computers, and a cluster usually runs multiple nodes, providing fault tolerance and high availability.
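As a quick illustration (assuming `kubectl` is installed and configured against a running cluster), you can see this node/control-plane split directly:

```shell
# List all nodes in the cluster; the ROLES column distinguishes
# control-plane nodes from workers.
kubectl get nodes -o wide

# Control-plane components (API server, scheduler, controller manager,
# etcd) typically run as pods in the kube-system namespace.
kubectl get pods -n kube-system
```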


Features of Kubernetes

The following are some of the important features of Kubernetes:

  • Continuous development, integration, and deployment
  • Containerized infrastructure
  • Application-centric management
  • Auto-scalable infrastructure
  • Environment consistency across development, testing, and production
  • Loosely coupled infrastructure, where each component can act as a separate unit
  • Higher density of resource utilization
  • Predictable infrastructure, declared up front before it is created

One of the key capabilities of Kubernetes is that it can run applications on clusters of physical and virtual machine infrastructure, as well as in the cloud. It helps in moving from a host-centric infrastructure to a container-centric infrastructure.


i) High availability

Availability is about setting up Kubernetes, along with its supporting components, so that there is no single point of failure. A single-master cluster can easily fail, while a multi-master cluster uses multiple master nodes, each of which has access to the same worker nodes.

ii) Scalability

Kubernetes auto-scaling helps optimize resource usage and costs by automatically scaling a cluster up and down in line with demand.
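As a sketch of pod-level auto-scaling (assuming a Deployment named `web` already exists and the cluster's metrics server is running — both are assumptions, not part of this article's setup), a HorizontalPodAutoscaler can be created from the command line:

```shell
# Create a HorizontalPodAutoscaler that targets ~80% average CPU usage,
# scaling the "web" deployment between 2 and 10 replicas.
kubectl autoscale deployment web --cpu-percent=80 --min=2 --max=10

# Check the autoscaler's current metrics and replica counts.
kubectl get hpa
```

Cluster-level scaling (adding or removing nodes) is handled separately, typically by a cluster autoscaler provided by the cloud platform.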

iii) Disaster recovery

The most reliable way to back up Kubernetes workloads is to take application-aware, cloud-native backups that don't hold you back from migrating to new infrastructure. Manual backup and restoration are also possible, and there is plenty of documentation available on forums that organizations can use to perform effective manual disaster recovery.
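As a minimal manual-backup sketch (assuming cluster-admin access; the `testapp` namespace, file paths, and etcd endpoint below are illustrative, not prescribed by this article):

```shell
# Export the standard resources in a namespace to a YAML file.
kubectl get all -n testapp -o yaml > testapp-backup.yaml

# On a self-managed control plane, the etcd datastore can also be
# snapshotted (endpoint, certificates, and path vary per cluster).
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  snapshot save /backup/etcd-snapshot.db

# Re-apply the exported resources into a recovered or new cluster.
kubectl apply -f testapp-backup.yaml
```

Note that `kubectl get all` does not cover every resource type (e.g., ConfigMaps, Secrets, PersistentVolumeClaims must be exported separately), which is one reason application-aware backup tooling is preferred.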


Will Kubernetes remain massively important to businesses five or ten years from now? That’s anyone’s guess. The container ecosystem evolves rapidly. If you predicted in 2014 that Kubernetes would become as popular as it is today, many folks might not have believed you.

Still, for today, Kubernetes stands apart from the crowd of container orchestration solutions in several key ways. It’s the clear choice for managing modern container deployments in an efficient, flexible, and business-friendly way.

[Figure: Kubernetes configuration]

Kubernetes commands

Namespace command

kubectl create namespace <namespace name>

Ex: kubectl create namespace testapp

Deploy YAML file command

kubectl -n testapp apply -f <name of the yml file>

Ex: kubectl -n testapp apply -f nodejsAdd.yml
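For context, a manifest like `nodejsAdd.yml` might look roughly like the sketch below. This is a hypothetical, minimal Deployment — the image, labels, replica count, and port are all assumptions for illustration, not the contents of the author's actual file:

```yaml
# Illustrative Deployment manifest; all names and values are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-add
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejs-add
  template:
    metadata:
      labels:
        app: nodejs-add
    spec:
      containers:
        - name: nodejs-add
          image: node:18-alpine   # placeholder image
          ports:
            - containerPort: 3000
```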

Verify how many pods are up and running

kubectl -n testapp get po

Note: "po" is an abbreviation for "pods".

Verify the logs

kubectl -n testapp logs <pod name>

kubectl -n testapp describe po <pod name>

Delete the yaml file command

kubectl -n testapp delete -f <name of the yml file>

Ex: kubectl -n testapp delete -f nodejsAdd.yml

Reference for installing Kubernetes on a CentOS Linux box



