TASK-16

Mangal Hansdah
Sep 30, 2021 · 10 min read

Task Description: Research how Kubernetes is used in industry and which use cases it solves.

WHAT IS KUBERNETES?

Kubernetes is a system for managing containerized applications across a cluster of nodes. In simple terms, you have a group of machines (e.g. VMs) and containerized applications (e.g. Dockerized applications), and Kubernetes will help you to easily manage those apps across those machines.

CONCEPTS: Kubernetes follows the primary/replica architecture. The components of Kubernetes can be divided into those that manage an individual node and those that are part of the control plane.

Introduction to Containers

Containers offer a way to package code, runtime, system tools, system libraries, and configs altogether. The result is a lightweight, standalone executable package. This way, your application will behave the same every time, no matter where it runs (e.g. Ubuntu, Windows, etc.).

Containerization is a modern virtualization method that accesses a single OS kernel to power multiple distributed applications that are each developed and run in their own container.

What is container orchestration?

Containers support VM-like separation of concerns but with far less overhead and far greater flexibility. As a result, containers have reshaped the way people think about developing, deploying, and maintaining software. In a containerized architecture, the different services that constitute an application are packaged into separate containers and deployed across a cluster of physical or virtual machines. But this gives rise to the need for container orchestration — a tool that automates the deployment, management, scaling, networking, and availability of container-based applications.

What is Kubernetes?

Kubernetes (also known as K8s) is an open-source system used for managing containerized applications across multiple hosts. It provides basic mechanisms for the deployment, maintenance, and scaling of applications.

Why do you need Kubernetes, and what can it do?

Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system?

That’s how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
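
To make that canary pattern concrete, here is a rough sketch using the official Kubernetes Python client. The names (`web-stable`, `web-canary`) and the stock nginx images are hypothetical stand-ins for a real application; the idea is simply that both Deployments share the same `app` label, so a Service selecting `app=web` would split traffic between them roughly in proportion to their replica counts.

```python
from kubernetes import client, config

def web_deployment(name, track, image, replicas):
    """Build a Deployment as a plain dict (mirrors the equivalent YAML manifest)."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": "web", "track": track}},
            "template": {
                "metadata": {"labels": {"app": "web", "track": track}},
                "spec": {"containers": [{"name": "web", "image": image}]},
            },
        },
    }

config.load_kube_config()  # assumes a kubeconfig pointing at a reachable cluster
apps = client.AppsV1Api()

# 9 stable replicas plus 1 canary: a Service selecting app=web would send
# roughly 10% of requests to the new image while you watch how it behaves.
apps.create_namespaced_deployment("default", web_deployment("web-stable", "stable", "nginx:1.21", 9))
apps.create_namespaced_deployment("default", web_deployment("web-canary", "canary", "nginx:1.22", 1))
```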

Kubernetes Dropping Docker Runtime Support 🤔

You do not need to panic. It’s not as dramatic as it sounds.

tl;dr: Docker as an underlying runtime is being deprecated in favor of runtimes that use the Container Runtime Interface (CRI) created for Kubernetes. Docker-produced images will continue to work in your cluster with all runtimes, as they always have.

If you’re an end-user of Kubernetes, not a whole lot will be changing for you. This doesn’t mean the death of Docker, and it doesn’t mean you can’t, or shouldn’t, use Docker as a development tool anymore. Docker is still a useful tool for building containers, and the images that result from running docker build can still run in your Kubernetes cluster.

If you’re using a managed Kubernetes service like GKE, EKS, or AKS (which defaults to containerd) you will need to make sure your worker nodes are using a supported container runtime before Docker support is removed in a future version of Kubernetes. If you have node customizations you may need to update them based on your environment and runtime requirements. Please work with your service provider to ensure proper upgrade testing and planning.

If you’re rolling your own clusters, you will also need to make changes to avoid your clusters breaking. At v1.20, you will get a deprecation warning for Docker. When Docker runtime support is removed in a future Kubernetes release (currently planned for the 1.22 release in late 2021), you will need to switch to one of the other compliant container runtimes, like containerd or CRI-O. Just make sure that the runtime you choose supports the Docker daemon configurations you currently use (e.g. logging).
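
One quick way to see what your worker nodes are actually running is to read the container runtime version each node reports. Below is a minimal sketch using the official Kubernetes Python client; it assumes the `kubernetes` package is installed and a kubeconfig is available locally.

```python
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig

for node in client.CoreV1Api().list_node().items:
    runtime = node.status.node_info.container_runtime_version
    # e.g. "containerd://1.4.x" or "docker://20.10.x"; the latter is the
    # dockershim-backed setup affected by the deprecation described above.
    print(node.metadata.name, runtime)
```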

Kubernetes Architecture

A simplified architecture diagram covers the most-used components we typically talk about (the control plane and the worker nodes), but this just scratches the surface. Now that we have a basic understanding of containers and Kubernetes, let us look at the main steps involved in getting your application to run on Kubernetes (K8s).

Working with Kubernetes consists of four main steps:

  1. Develop an application.
  2. Containerize your application.
  3. Create a Kubernetes cluster.
  4. Deploy your container to the cluster.

Wrapping your head around Kubernetes requires an understanding of many abstract concepts, lots of reading, and, most importantly, trying it out for yourself. The best way to dive in is to get your hands dirty and play around with this revolutionary technology stack that is taking the world by storm. I would highly recommend having a look at the official tutorials on the Kubernetes website.
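
To make steps 3 and 4 a little more concrete, here is a minimal sketch using the official Kubernetes Python client. It assumes a cluster and kubeconfig already exist (step 3) and deploys a stock `nginx` image; the name `hello-web` and the image are hypothetical stand-ins for your own containerized application from step 2.

```python
from kubernetes import client, config

config.load_kube_config()  # step 3 done elsewhere: a cluster and kubeconfig already exist

# Step 4: describe the desired state as a Deployment (a plain dict mirroring the YAML).
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "hello-web"},        # hypothetical name
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "hello-web"}},
        "template": {
            "metadata": {"labels": {"app": "hello-web"}},
            "spec": {"containers": [{
                "name": "web",
                "image": "nginx:1.21",        # stand-in for your own image from step 2
                "ports": [{"containerPort": 80}],
            }]},
        },
    },
}
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```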

Key Features of Kubernetes:

1. Self-Healing

The platform heals many problems on its own: restarting failed containers, replacing and rescheduling containers as nodes die, killing containers that don’t respond to your user-defined health check, and waiting to advertise containers to clients until they’re ready.
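
Self-healing is driven by the health checks you declare. The sketch below (hypothetical Pod name, stock nginx image, again via the Kubernetes Python client) adds a liveness probe, which the kubelet uses to restart the container when it stops responding, and a readiness probe, which keeps traffic away until the container is ready.

```python
from kubernetes import client, config

config.load_kube_config()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "self-heal-demo"},
    "spec": {
        "containers": [{
            "name": "web",
            "image": "nginx:1.21",
            "livenessProbe": {   # the kubelet restarts the container when this fails
                "httpGet": {"path": "/", "port": 80},
                "initialDelaySeconds": 5,
                "periodSeconds": 10,
            },
            "readinessProbe": {  # the Pod only receives traffic once this passes
                "httpGet": {"path": "/", "port": 80},
            },
        }],
    },
}
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```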

2. Automated Rollbacks and Rollouts

You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources to the new container.
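
In practice, declaring a new desired state can be as small as patching the image in an existing Deployment; the Deployment controller then rolls the change out at a controlled rate. A sketch, reusing the hypothetical `hello-web` Deployment from earlier:

```python
from kubernetes import client, config

config.load_kube_config()

# New desired state: bump the container image. Kubernetes performs a rolling
# update, replacing old Pods with new ones at a controlled rate.
patch = {"spec": {"template": {"spec": {"containers": [
    {"name": "web", "image": "nginx:1.22"},
]}}}}
client.AppsV1Api().patch_namespaced_deployment(
    name="hello-web", namespace="default", body=patch)

# A rollback is just a return to the previous desired state, e.g. patching the
# image back (or `kubectl rollout undo deployment/hello-web` from the CLI).
```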

3. Auto Scaling

Kubernetes has the capacity to automatically scale up (spin up more nodes and/or pods) when demand peaks (e.g. more people watching Netflix on a Friday evening requires a lot more compute resources from Netflix’s servers) and scale down again after the peak. This is a tremendous asset, especially in the modern cloud, where costs are based on the resources consumed.
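
Pod-level autoscaling is usually expressed as a HorizontalPodAutoscaler (node-level scaling is handled separately, e.g. by a cluster autoscaler). A minimal sketch, again targeting the hypothetical `hello-web` Deployment, that scales between 2 and 10 replicas based on average CPU utilization:

```python
from kubernetes import client, config

config.load_kube_config()

hpa = {
    "apiVersion": "autoscaling/v1",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "hello-web-hpa"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "hello-web"},
        "minReplicas": 2,
        "maxReplicas": 10,
        "targetCPUUtilizationPercentage": 70,  # add replicas when average CPU exceeds 70%
    },
}
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```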

4. Load Balancing

Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment stays stable.
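
That service discovery and load balancing is what a Service object provides: a stable DNS name and virtual IP in front of whichever Pods match its label selector. A minimal sketch for the hypothetical `hello-web` Pods from the earlier examples:

```python
from kubernetes import client, config

config.load_kube_config()

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "hello-web"},  # reachable in-cluster as hello-web.default.svc
    "spec": {
        "selector": {"app": "hello-web"},           # traffic is spread across matching Pods
        "ports": [{"port": 80, "targetPort": 80}],
        "type": "ClusterIP",
    },
}
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```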

Why is Kubernetes getting so popular? 🤩

Kubernetes’ increased adoption is showcased by a number of influential companies that have integrated the technology into their services. Let us take a look at how some of the biggest companies of our time are successfully using Kubernetes.

Benefits of Kubernetes for companies

  • Control and automate deployments and updates
  • Save money by optimizing infrastructure resources through more efficient use of hardware
  • Orchestrate containers across multiple hosts
  • Solve many common problems arising from the proliferation of containers by organizing them into “pods” (see the last post!)
  • Scale resources and applications in real time
  • Test and auto-correct applications

Real-World Use Cases

Pokemon Go is one of the most widely publicized use cases showing the power of Kubernetes. Before its release, the online multiplayer game was expected to be reasonably popular. But as soon as it launched, it took off like a rocket, garnering 50 times the expected traffic. By using Kubernetes as the infrastructure overlay on top of Google Cloud, Pokemon Go could scale massively to keep up with the unexpected demand.

CASE STUDY: IBM

Building an Image Trust Service on Kubernetes with Notary and TUF

🔹 Challenge

IBM Cloud offers public, private, and hybrid cloud functionality across a diverse set of runtimes, from its OpenWhisk-based function-as-a-service (FaaS) offering and managed Kubernetes and containers to its Cloud Foundry platform as a service (PaaS). These runtimes are combined with the power of the company’s enterprise technologies, such as MQ and DB2, its modern artificial intelligence (AI) Watson, and data analytics services. Users of IBM Cloud can draw on more than 170 different cloud-native services in its catalog, including IBM’s Weather Company API and data services. In the latter part of 2017, the IBM Cloud Container Registry team wanted to build out an image trust service.

🔹 Solution

The work on this new service culminated with its public availability in the IBM Cloud in February 2018. The image trust service, called Portieris, is fully based on the Cloud Native Computing Foundation (CNCF) open source project Notary, according to Michael Hough, a software developer with the IBM Cloud Container Registry team. Portieris is a Kubernetes admission controller for enforcing content trust. Users can create image security policies for each Kubernetes namespace, or at the cluster level, and enforce different levels of trust for different images. Portieris is a key part of IBM’s trust story, since it makes it possible for users to consume the company’s Notary offering from within their IKS clusters. In this offering, the Notary server runs in IBM’s cloud, and Portieris runs inside the IKS cluster. This enables users to have their IKS cluster verify that the image they’re loading containers from contains exactly what they expect it to, and Portieris is what allows an IKS cluster to apply that verification.

🔹 Impact

IBM’s intention in offering a managed Kubernetes container service and image registry is to provide a fully secure end-to-end platform for its enterprise customers. “Image signing is one key part of that offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem,” Hough says. The company had not been offering image signing before, and Notary is the tool it used to implement that capability. “We had a multi-tenant Docker Registry with private image hosting,” Hough says. “The Docker Registry uses hashes to ensure that image content is correct, and data is encrypted both in-flight and at rest. But it does not provide any guarantees of who pushed an image. We used Notary to enable users to sign images in their private registry namespaces if they so choose.”

Docker had already created the Notary project as an implementation of The Update Framework (TUF), and this implementation of TUF provided the capabilities for Docker Content Trust.

“After contribution to CNCF of both TUF and Notary, we perceived that it was becoming the de facto standard for image signing in the container ecosystem”, says Michael Hough, a software developer with the IBM Cloud Container Registry team.

The key reason for selecting Notary was that it was already compatible with the existing authentication stack IBM’s container registry was using. So was the design of TUF, which does not require the registry team to get into the business of key management. Both of these were “attractive design decisions that confirmed our choice of Notary,” he says.

The introduction of Notary to implement image signing capability in IBM Cloud encourages increased security across IBM’s cloud platform, “where we expect it will include both the signing of official IBM images as well as expected use by security-conscious enterprise customers,” Hough says. “When combined with security policy implementations, we expect an increased use of deployment policies in CI/CD pipelines that allow for fine-grained control of service deployment based on image signers.”

The availability of image signing “is a huge benefit to security-conscious customers who require this level of image provenance and security,” Hough says. “With our IBM Cloud Kubernetes as-a-service offering and the admission controller we have made available, it allows both IBM services as well as customers of the IBM public cloud to use security policies to control service deployment.”

Conclusion

There are many benefits to learning Kubernetes, as it takes a lot of deployment stress off the developer’s shoulders. It helps with auto-scaling and intelligent utilization of available resources, and it enables low downtime for maintenance.

Without Kubernetes, a lot of human effort and money goes into these things; with its help, everything can be automated.
