Google open sourced the Kubernetes project in 2014. Kubernetes is a portable, extensible open source platform for managing containerized workloads and services, and it has a large, rapidly growing ecosystem.
The name Kubernetes originates from Greek, meaning helmsman or pilot, and is the root of the words governor and cybernetic. K8s is an abbreviation derived by replacing the eight letters “ubernete” with “8”.
Of course, containerized workloads can be complicated. Early adopters of Docker found they were soon running hundreds or even thousands of workloads inside containers. This quickly became an operational challenge; scalability isn’t free. If you have 10 containers and four applications, container orchestration isn’t that big of a deal. But if you have 1,000 containers and 400 services, it’s a much more complicated picture. Keep increasing those numbers, and the complexity grows with them.
Enter Kubernetes and container orchestration. Kubernetes is a tool for automating the deployment, management, and scaling of containerized applications. Automation is the key; it’s one reason why Kubernetes is increasingly a go-to tool for site reliability engineers and other infrastructure and operations pros.
Kubernetes is becoming to orchestration what Docker became to containers: virtually synonymous.
If you want an example of the power of Kubernetes, look no further than the launch of the Pokémon Go app. Despite some early troubles, that app is a prime example of the ability of Kubernetes and containers to scale rapidly. You divide the complete application into services and assign each service to a container. This approach is known as microservices. These containers, in turn, are managed by Kubernetes.
If you want to run multiple containers across multiple machines – which you’ll need to do if you’re using microservices – there is still a lot of work left to do. You need to start the right containers at the right time, figure out how they can talk to each other, handle storage considerations, and deal with failed containers or hardware. Doing all this manually would be a nightmare. But Kubernetes does all that for us.
Let’s look at some important terms before diving into the concepts:
Pod
A pod is a group of one or more containers with shared storage and network. Each pod contains specific information on how the containers should be run. Pods are also a unit of scaling: if you need to scale an app component up or down, you can do so by adding or removing pods.
Kubernetes targets the management of multiple microservices communicating with each other. Often those microservices are tightly coupled, forming a group of containers that would typically, in a non-containerized setup, run together on one server. This group, the smallest unit that can be scheduled for deployment through K8s, is called a pod.
The containers in a pod share storage, Linux namespaces, cgroups, and IP addresses. Because they are co-located, they share resources and are always scheduled together. Pods are not meant to live long: they are created, destroyed, and recreated on demand.
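Pods are declared to Kubernetes as manifests. Real manifests are usually written in YAML; as a rough sketch, here is the same structure expressed as a Python dict. The field names follow the Kubernetes Pod schema, but the pod name, labels, and image are placeholders:

```python
# A minimal Pod manifest, expressed as a Python dict for illustration.
# Real manifests are usually YAML; the name, labels, and image below
# are hypothetical.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "my-app-pod",
        "labels": {"app": "my-app"},  # services select pods via labels
    },
    "spec": {
        "containers": [
            {
                "name": "web",
                "image": "nginx:1.25",  # placeholder image
                "ports": [{"containerPort": 80}],
            }
        ]
    },
}


def container_names(manifest):
    """Return the names of all containers declared in a pod manifest."""
    return [c["name"] for c in manifest["spec"]["containers"]]
```

Note the `containers` field is a list: a pod may hold several tightly coupled containers that are always scheduled together.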
Service
As pods have a short lifetime, there is no guarantee about the IP address they are served on, which could make communication between microservices hard. Hence, K8s introduced the concept of a service: an abstraction on top of a number of pods, typically with a proxy running on top, so that other services can communicate with it via a virtual IP address. This is where you can configure load balancing for your numerous pods and expose them via a service.
In short, a service is an object that describes a set of pods providing a useful function. Services are typically used to define clusters of uniform pods.
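As a rough sketch of the idea, here is a toy service in Python: a stable virtual IP in front of short-lived pods, with naive round-robin balancing standing in for what a real service and its proxy do. All IP addresses below are made up:

```python
class Service:
    """Toy model of a Kubernetes service: a stable virtual IP in front
    of short-lived pods. Pod IPs come and go; clients keep talking to
    the same virtual IP. This is a sketch, not real kube-proxy logic."""

    def __init__(self, virtual_ip, pod_ips):
        self.virtual_ip = virtual_ip
        self.pod_ips = list(pod_ips)
        self._next = 0

    def route(self):
        """Pick the next pod IP, round-robin (a crude stand-in for the
        load balancing a real service performs)."""
        ip = self.pod_ips[self._next % len(self.pod_ips)]
        self._next += 1
        return ip

    def replace_pod(self, old_ip, new_ip):
        """Simulate a pod being destroyed and recreated with a new IP;
        the service's virtual IP stays the same."""
        self.pod_ips[self.pod_ips.index(old_ip)] = new_ip


svc = Service("10.0.0.1", ["10.1.0.4", "10.1.0.7"])
```

Even when a pod is recreated with a new IP, consumers of the service never see the change; they only ever know the virtual IP.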
Components of Kubernetes
Master Node
The master node is responsible for the management of the Kubernetes cluster. It is the entry point of all administrative tasks and takes care of orchestrating the worker nodes, where the actual services are running.
Components of Master Node
1. API Server
- Entry point for all the REST commands used to control the cluster
- Processes REST requests, validates them, and executes the bound business logic
2. etcd storage
- A simple, distributed, consistent key-value store
- Mainly used for shared configuration and service discovery
- Provides REST API for CRUD Operations
- E.g. jobs being scheduled, created, and deployed; pod/service details and state; namespaces; replication information; etc.
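To make the role of etcd concrete, here is a toy key-value store with the same CRUD surface. Real etcd is distributed and consistent and is accessed over the network; this sketch only illustrates how cluster state might be keyed and updated, and the registry-style paths are illustrative:

```python
class TinyKV:
    """Toy stand-in for etcd: a key-value store holding cluster state
    (pod specs, service details, namespaces, ...). Real etcd is
    distributed and consistent; this only mimics the CRUD surface."""

    def __init__(self):
        self._data = {}

    def create(self, key, value):
        if key in self._data:
            raise KeyError(f"{key} already exists")
        self._data[key] = value

    def read(self, key):
        return self._data[key]

    def update(self, key, value):
        if key not in self._data:
            raise KeyError(key)
        self._data[key] = value

    def delete(self, key):
        del self._data[key]


store = TinyKV()
# Hypothetical keys: record a pod's state, then update it as it starts.
store.create("/registry/pods/default/web", {"phase": "Pending"})
store.update("/registry/pods/default/web", {"phase": "Running"})
```

Every other master component reads from and writes to this shared store rather than talking to each other directly.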
3. Scheduler
- Deployment of configured pods and services onto the nodes happens thanks to this component.
- It has information about the resources available on the members of the cluster, as well as those required for the configured service to run, and hence can decide where to deploy a specific service.
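A heavily simplified sketch of that decision in Python: filter out nodes without enough free resources, then prefer the one with the most headroom. The real scheduler applies many more filters and scoring rules; the node names and capacities here are invented:

```python
def schedule(pod_cpu, nodes):
    """Toy scheduler: keep only nodes with enough free CPU for the pod,
    then pick the one with the most headroom. A sketch only; the real
    scheduler uses many filtering and scoring plugins."""
    fits = {name: free for name, free in nodes.items() if free >= pod_cpu}
    if not fits:
        return None  # pod stays pending until resources free up
    return max(fits, key=fits.get)


# Free CPU cores per node (made-up numbers).
nodes = {"node-a": 0.5, "node-b": 2.0, "node-c": 1.0}
```

If no node fits, the pod simply stays pending, which matches how Kubernetes leaves unschedulable pods waiting rather than failing them outright.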
4. Controller Manager
- A controller uses the apiserver to watch the shared state of the cluster and makes corrective changes to move the current state toward the desired one.
- You can run different types of controllers inside the master node; the controller-manager is a daemon embedding them.
- E.g. the Replication Controller, which takes care of the number of pods in the system. The replication factor is configured by the user, and it is the controller’s responsibility to recreate a failed pod or remove an extra-scheduled one.
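The Replication Controller’s job can be sketched as a single reconciliation step: compare the desired replica count with what is actually running, and compute the corrective actions. This is a toy model with made-up pod names, not the real controller logic:

```python
def reconcile(desired_replicas, running_pods):
    """One reconciliation step of a toy replication controller: compare
    the desired replica count with the pods actually running and return
    (actions, new_state) needed to converge. Pod names are illustrative."""
    actions = []
    running = list(running_pods)
    while len(running) < desired_replicas:
        # Too few pods: create replacements (e.g. after a pod failed).
        name = f"pod-{len(running)}"
        running.append(name)
        actions.append(("create", name))
    while len(running) > desired_replicas:
        # Too many pods: remove the extra-scheduled ones.
        name = running.pop()
        actions.append(("delete", name))
    return actions, running
```

Running this loop continuously against the observed state is, in miniature, how controllers keep the cluster converging on the desired configuration.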
Worker Node
The pods are run here, so the worker node contains all the necessary services to manage the networking between the containers, communicate with the master node, and assign resources to the scheduled containers.
Components of Worker Node:
1. Docker
- Docker runs on each worker node. It runs the configured pods and takes care of downloading the images and starting the containers.
- To read more about Docker, you can refer to my previous post on VMs, Containers and Dockers.
2. kubelet
- It gets the configuration of a pod from the apiserver and ensures that the described containers are up and running.
- This is the worker service responsible for communicating with the master node and etcd, to get information about services and to write details about newly created ones.
3. kube-proxy
- It acts as a network proxy and a load balancer for a service on a single worker node.
- It takes care of the network routing for TCP and UDP packets.
4. kubectl
- A command line tool to communicate with the API server and send commands to the master node.
Main Functions of Kubernetes
- Running containers across many different machines
- Scaling up or down by adding or removing containers when demand changes
- Keeping storage consistent across multiple instances of an application
- Distributing load between the containers
- Launching new containers on different machines if something fails
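The last point, relaunching containers when something fails, can be sketched as a toy failover step: the containers from a dead node are redistributed onto the least-loaded healthy nodes. A real cluster also has to detect the failure and re-pull images; this only shows the placement decision, and all node and container names are made up:

```python
def fail_over(placements, failed_node, healthy_nodes):
    """Toy failover: move the containers of a dead node onto the
    remaining healthy nodes, least-loaded first. A sketch only."""
    # Copy so the caller's view of the old cluster state is untouched.
    placements = {node: list(cs) for node, cs in placements.items()}
    orphans = placements.pop(failed_node, [])
    for container in orphans:
        # Pick the healthy node currently running the fewest containers.
        target = min(healthy_nodes, key=lambda n: len(placements[n]))
        placements[target].append(container)
    return placements


placements = {"node-a": ["web"], "node-b": ["db", "cache"], "node-c": []}
new_placements = fail_over(placements, "node-a", ["node-b", "node-c"])
```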
How Does Kubernetes Bulletproof Itself?
Kubernetes simultaneously runs and controls a set of nodes on virtual or physical machines. It achieves this by running an agent on each node.
The agent talks to the master via the same API used to send the blueprint to Kubernetes, and it registers itself with the master.
The agent determines which containers are required to run on its node and how they are to be configured.
The master node makes all control decisions about which container needs to be started on which node and how it should be configured.
Docker vs Kubernetes
Docker and Kubernetes aren’t direct competitors. In containers, isolation is done at the kernel level without the need for a guest OS, so containers are much more efficient, fast, and lightweight. This allows apps to become encapsulated in self-contained environments. Docker is currently the most popular container platform.
Docker provides an open standard for packaging and distributing containerized applications, but with it arose a new problem: how would all these containers be coordinated, scheduled, scaled, and made to communicate with each other? Solutions for orchestration soon emerged. Kubernetes, Mesos, and Docker Swarm are some of the most popular options for providing an abstraction that makes a cluster of machines behave like one big machine, which is vital in a large-scale environment.
So the real comparison is “Kubernetes vs. Docker Swarm.”
Docker Swarm is Docker’s own native clustering solution for Docker Containers, which has the advantage of being tightly integrated into the ecosystem of Docker and uses its own API.
Kubernetes and Docker are both solutions to intelligently manage containerized applications and provide powerful capabilities.
Docker is a platform and tool for building, distributing, and running Docker containers. It offers its own native clustering tool that can be used to orchestrate and schedule containers on machine clusters.
Kubernetes is a container orchestration system for Docker containers that is more extensive than Docker Swarm and is meant to coordinate clusters of nodes at scale in production in an efficient manner.
One can easily run a Docker build on a Kubernetes cluster, but Kubernetes itself is not a complete solution out of the box and is meant to be extended with custom plugins.
Docker and Kubernetes work on different levels. Under the hood, Kubernetes can integrate with the Docker engine to coordinate the scheduling and execution of Docker containers on worker nodes via the kubelet. The Docker engine itself is responsible for running the actual container image. Higher-level concepts such as service discovery, load balancing, and network policies are handled by Kubernetes as well.
Not only does Kubernetes have everything you need to support your complex container apps, it’s also the most convenient framework on the market for both developers and operations.
Kubernetes works by grouping containers that make up an application into logical units for easy management and discovery. It’s particularly useful for microservice applications, apps made up of small and independent services that come together to create a more meaningful app.
Simply tell Kubernetes what you want to happen, and it does the rest. A useful analogy is hiring a contractor to renovate your kitchen. You don’t need to know stage-by-stage what they’re doing. You just specify the outcome, approve the blueprint, and let them handle the rest. Kubernetes works in the same way: DevOps teams can deploy, manage, and operate applications with ease. Just send your blueprints to Kubernetes via the API server on the master node.
I hope you are now equipped with the topics we discussed in the post. Don’t forget to hit like and pass it on to people who want to understand Kubernetes! 😊