Kubernetes

What is Kubernetes (K8s)? A Kubernetes Tutorial

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience running production workloads at Google, combined with best-of-breed ideas and practices from the community. In this Kubernetes tutorial, we will discuss the following points:

  • What is Kubernetes? 
  • What is Kubernetes used for?
  • Kubernetes Basic Terms 
  • Kubernetes Architecture 
  • Kubernetes Security
  • Kubernetes vs Docker 
  • Benefits of Kubernetes

What is Kubernetes? 

Kubernetes is described as a “platform for automating deployment, scaling, and management of containerized applications.” It aims to provide a “container-centric” approach to deploying and managing applications inside clusters.

As a user, you can get started with Kubernetes very quickly using one of the many distributions or tutorials available. As an author, its extensibility, hooks, and APIs allow you to plug in your own customizations to meet your unique needs. As a contributor, you can help build a community around a platform that’s already deployed in some of the largest clouds and most demanding production environments.

Kubernetes can run on almost any server infrastructure, though it is most often used on public cloud platforms such as Google Cloud, Amazon Web Services (AWS), Microsoft Azure, and Alibaba Cloud, or on private clouds and bare metal.


What is Kubernetes used for?

The most common use case for Kubernetes is enabling a DevOps workflow. Developers write code in their IDE or text editor and push it to a repository. When changes are ready, an automated build runs and, after the changes are validated by automated tests, the built artifacts are pushed to a package repository such as NPM or Maven Central; typically, a container image is also built and pushed to a registry so that Kubernetes can deploy it.

The aim of Kubernetes is to let developers create and run distributed applications without worrying about details like where the application should be deployed, how many servers are needed, or how they should be configured; the user declares these requirements to Kubernetes instead. Kubernetes answers questions like “How many machines do I need for this task?” and “How can I scale my application up or down?” It schedules workloads based on the hardware available and can heuristically determine whether an application needs more resources or whether a machine is failing.
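For example, here is a minimal sketch using the official Python client (the kubernetes package) that declares a new replica count for a hypothetical Deployment named web in the default namespace. The names are illustrative; Kubernetes takes care of adding or removing pods to match the declared state.

```python
# Scale a Deployment up or down through the Kubernetes API.
# Assumes a local kubeconfig and an existing Deployment named "web"
# in the "default" namespace (hypothetical names).
from kubernetes import client, config

config.load_kube_config()      # read credentials from ~/.kube/config
apps = client.AppsV1Api()

# Declare the desired replica count; Kubernetes reconciles the rest.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```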

Some more uses of Kubernetes are:

  • Kubernetes is used in a cloud computing environment to deploy and manage containerized applications.
  • Kubernetes is used to set up a scalable server pool for hosting services.
  • Kubernetes is used to set up a scalable backend system for web-based services such as content delivery networks (CDNs).
  • Kubernetes is used with an overlay network so that different clusters or workloads can be logically separated from each other. This allows them to be scaled out without requiring separate network addresses.
  • Kubernetes is designed to scale horizontally.
  • Kubernetes offers a high level of redundancy and high availability.
  • A Kubernetes cluster is straightforward to set up and maintain.

Kubernetes Basic Terms 

Kubernetes Dashboard 

The Kubernetes Dashboard is a web-based user interface for monitoring and managing a Kubernetes (k8s) cluster. The Dashboard itself runs as a containerized application inside the cluster and is accessed through your browser, so no extra software needs to be installed on the nodes.

Kubernetes Container

A container is the unit that actually runs application code in a Kubernetes cluster: a lightweight, portable package of an application together with its dependencies. Containers can run on a variety of operating systems and platforms, including bare metal, virtual machines, and cloud providers like AWS, Azure, and Google Cloud.

Kubernetes Cluster 

A Kubernetes cluster is a computing environment comprising a set of interconnected nodes (physical or virtual machines) that run the workloads of an application. It supports dynamic workloads and can be expanded on demand. Kubernetes itself is commonly abbreviated as K8s.

API server

The API server is an HTTP server that exposes the Kubernetes API and serves client requests. It is the front end of the control plane and handles authentication, authorization, and validation on behalf of the other components.
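Every interaction with the cluster ultimately goes through the API server. As a rough sketch, assuming a kubeconfig file is available on your machine, the official Python client can query it directly:

```python
# List all pods in the cluster by querying the API server.
# Assumes a working kubeconfig (e.g. ~/.kube/config) on the local machine.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```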

etcd

etcd is a distributed key-value store that provides a reliable way to keep cluster state and configuration data consistent across a set of machines.

Controller manager

The controller manager runs the controllers that implement Kubernetes control-loop logic, such as the node, replication, and endpoints controllers. Each controller watches cluster state through the API server and works to move the current state toward the desired state.

Scheduler

The scheduler watches for newly created pods that have not yet been assigned a node and selects a node for them to run on based on resource availability, node selectors, affinity rules, and labels. (Restarting containers that exit after a crash or hardware failure is handled by the kubelet and the relevant controllers, not by the scheduler.)
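To see how scheduling constraints are expressed, the sketch below creates a pod that the scheduler may only place on nodes carrying a disktype=ssd label. The pod name, image, and label are hypothetical examples:

```python
# Create a pod whose placement is constrained with a nodeSelector.
# The node label "disktype=ssd" and the pod name are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ssd-demo"),
    spec=client.V1PodSpec(
        node_selector={"disktype": "ssd"},  # scheduler only considers matching nodes
        containers=[client.V1Container(name="app", image="nginx:1.25")],
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)
```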

Replication controller

A replication controller ensures that a specified number of pod replicas matching a given label selector are running at any time; it creates or deletes pods through the API server to maintain that count. (In modern clusters this role is usually filled by ReplicaSets managed by Deployments.)

Pod 

The pod is the smallest deployable unit in Kubernetes. A pod wraps one or more containers that share storage, a network namespace, and a specification for how to run. Pods are not tied to a specific node until the scheduler places them, which keeps workloads portable.
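A pod can be created programmatically as well as from a YAML manifest. The following sketch, with hypothetical names and using the official Python client, creates a single-container pod and reads back its status:

```python
# Create a minimal single-container pod and read back its status.
# The pod name and image are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="hello-pod", labels={"app": "hello"}),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="hello", image="nginx:1.25")],
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)

created = v1.read_namespaced_pod(name="hello-pod", namespace="default")
print(created.metadata.name, created.status.phase)
```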

Service 

A Service is a Kubernetes object that exposes a set of pods behind a single, stable network endpoint and is created and deleted as a single object. Because pods come and go, clients connect to the Service (by name or cluster IP) rather than to individual pods, and the Service balances traffic across the matching pods.
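As a sketch, assuming there are pods labelled app=web listening on port 8080, the Service below gives them one stable endpoint on port 80. All names are hypothetical:

```python
# Expose pods labelled app=web behind a stable ClusterIP Service.
# The label and names are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},                               # which pods to expose
        ports=[client.V1ServicePort(port=80, target_port=8080)],
        type="ClusterIP",                                      # reachable inside the cluster
    ),
)
v1.create_namespaced_service(namespace="default", body=service)
```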

Volume 

A volume is a directory, backed by local disk, memory, or an external storage system, that is mounted into the containers of a pod. Because it can be mounted by several containers at once and outlives individual container restarts within the pod, it is the standard way to share files between containers.
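The sketch below shows the idea: two containers in one pod share an emptyDir volume, so whatever the first container writes under /data is visible to the second. All names and images are hypothetical:

```python
# Two containers in one pod sharing an emptyDir volume (hypothetical names).
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

shared = client.V1VolumeMount(name="shared-data", mount_path="/data")

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="volume-demo"),
    spec=client.V1PodSpec(
        volumes=[client.V1Volume(
            name="shared-data",
            empty_dir=client.V1EmptyDirVolumeSource(),  # scratch space tied to the pod's lifetime
        )],
        containers=[
            client.V1Container(
                name="writer", image="busybox:1.36",
                command=["sh", "-c", "echo hello > /data/msg && sleep 3600"],
                volume_mounts=[shared],
            ),
            client.V1Container(
                name="reader", image="busybox:1.36",
                command=["sh", "-c", "sleep 3600"],
                volume_mounts=[shared],
            ),
        ],
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)
```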

Deployment

A Deployment is a declarative description of the desired state of a set of identical pods. The Deployment controller creates and updates a ReplicaSet to keep the specified number of replicas running, and performs rolling updates and rollbacks when the pod template changes.
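For illustration, this sketch declares a Deployment that keeps three replicas of an nginx pod running; the names and image tag are hypothetical:

```python
# A Deployment that keeps three replicas of an nginx pod running (hypothetical names).
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="nginx", image="nginx:1.25")],
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```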

StatefulSet 

A StatefulSet manages pods that need a stable identity or stable storage, such as databases. Each pod gets a persistent ordinal name (for example web-0, web-1) and, together with a headless Service, a stable DNS name. This is useful when you want to access pods by name within your cluster without needing to know the actual node on which they run.
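A minimal sketch, assuming a headless Service named web-headless already exists to provide the stable DNS names, might look like this (all names are hypothetical):

```python
# A minimal StatefulSet with stable pod names web-0, web-1 (hypothetical names).
# It assumes a headless Service called "web-headless" already exists.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

statefulset = client.V1StatefulSet(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1StatefulSetSpec(
        service_name="web-headless",  # governs the pods' stable DNS names
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="nginx", image="nginx:1.25")],
            ),
        ),
    ),
)
apps.create_namespaced_stateful_set(namespace="default", body=statefulset)
```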

Daemon set 

A DaemonSet is a Kubernetes construct that ensures a copy of a particular pod runs on every node (or on a selected subset of nodes). DaemonSets are useful for node-level services such as monitoring agents, log collectors, and networking components that do not fit the usual application-pod model. When the DaemonSet's configuration changes, Kubernetes rolls the change out to every node automatically, so there is no need to recreate and re-provision the pods on each node by hand.
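As a rough sketch, the DaemonSet below runs one log-collector pod per node; the names and image are hypothetical examples:

```python
# A DaemonSet that runs a log-collector pod on every node (hypothetical names/image).
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

daemonset = client.V1DaemonSet(
    metadata=client.V1ObjectMeta(name="log-collector"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels={"app": "log-collector"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "log-collector"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(
                    name="collector", image="fluent/fluent-bit:2.2",
                )],
            ),
        ),
    ),
)
apps.create_namespaced_daemon_set(namespace="kube-system", body=daemonset)
```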

Job 

A Job creates one or more pods to run a task to completion. The Job controller tracks the pods it creates from the Job's pod template, retries them if they fail, and marks the Job as complete once the requested number of successful completions has been reached. Jobs are therefore suited to batch work rather than long-running services.
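For example, this sketch mirrors the classic "compute pi" batch task: the Job runs a single pod to completion and retries it a few times if it fails (names are illustrative):

```python
# A Job that runs a one-off task to completion (hypothetical names).
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="pi"),
    spec=client.V1JobSpec(
        backoff_limit=3,                 # retry failed pods up to 3 times
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",  # required for Job pods
                containers=[client.V1Container(
                    name="pi",
                    image="perl:5.34",
                    command=["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"],
                )],
            ),
        ),
    ),
)
batch.create_namespaced_job(namespace="default", body=job)
```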

Kubernetes Architecture 

Kubernetes follows a powerful and flexible control-plane/worker-node architecture. It is designed so that a single cluster can serve multiple workloads and platforms.


The Kubernetes architecture consists of the following components:

  1. Cluster manager (control plane) – Kubernetes provides all of the functionality needed to run and manage distributed applications: scheduling, load balancing, service discovery, replication, scaling, and so on.
  2. Master node – The master (control-plane) components are responsible for scheduling and running containers across the cluster. Master nodes can run on large hypervisors or on a cloud platform. An agent component runs on each node; it reports the node's local resources to the master and manages the containers on that node according to the instructions it receives. Kubernetes uses declarative configuration files (typically YAML or JSON manifests) to manage the cluster and application instances. The master node consists of three main sub-components:
    • API Server: This component handles all API requests from users and other components, both internal and external. For example, an API user can create a new pod and then query information about it via this component.
    • Scheduler: This component schedules pods across the cluster based on their labels and resource requirements such as CPU, memory, and storage.
    • Controller Manager: This component runs the controllers that reconcile cluster state, for example managing the ReplicaSets behind Deployments and the pods behind StatefulSets.
  3. Workers – The worker components run on all of your nodes/VMs and manage the state of your application containers. Each worker runs the kubelet, which receives pod specifications from the API server and manages the containers' life cycle on that node.
  4. Pods – Kubernetes pods are expected to be stateless and ephemeral unless they mount a persistent data store (such as NFS). Pods are scheduled onto nodes via the API server and managed by a series of controllers that handle tasks such as starting new pods and making sure all containers in a pod are running before the pod is considered ready.
  5. API endpoints – The Kubernetes API server exposes a RESTful HTTP API (gRPC is used for some internal interfaces, such as the container runtime interface and the etcd v3 API). It provides several endpoints for other systems to communicate with, including resource APIs for retrieving information about clusters, namespaces, deployments, and so on, and the Metrics API for retrieving statistics from the kubelet and cAdvisor. A short sketch of querying the API server for cluster information follows this list.
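As referenced above, here is a hedged sketch of querying the API server for basic cluster information (node readiness and kubelet versions), assuming a local kubeconfig:

```python
# Query the API server for basic cluster information: nodes and their conditions.
# Assumes a working kubeconfig; the output depends on your cluster.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    print(node.metadata.name,
          "Ready:", ready,
          "kubelet:", node.status.node_info.kubelet_version)
```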


Kubernetes Security

Kubernetes security is built around three main constructs. Let’s discuss each of them in detail.

  1. RBAC: Role-Based Access Control is a declarative approach to access control. Roles define sets of permissions, and role bindings grant those roles to users, groups, or service accounts (which pods run as), determining the level of access each has to the API server. A minimal sketch follows this list.
  2. Node-Level Security Policy and Security Context: These are applied at the node and pod level. Security contexts let administrators control privileges, users, and access to host resources for pods and containers, while PodSecurityPolicy (in older releases) constrains what pod specifications are allowed in non-default namespaces, for example preventing pods from accessing certain volumes on a node.
  3. AppArmor: AppArmor is a Mandatory Access Control (MAC) system for Linux that safeguards against exposure and vulnerabilities by restricting the file paths and permissions a program can use. AppArmor policy is set per profile, which allows administrators to create fine-grained policies for different types of workloads.
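As mentioned under RBAC, a minimal sketch of a read-only Role and the RoleBinding that grants it to a service account might look like the following; every name here is a hypothetical example:

```python
# A read-only Role for pods and a RoleBinding granting it to a service account.
# All names are hypothetical examples.
from kubernetes import client, config, utils

config.load_kube_config()
api = client.ApiClient()

role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "default"},
    "rules": [{
        "apiGroups": [""],               # "" means the core API group
        "resources": ["pods"],
        "verbs": ["get", "list", "watch"],
    }],
}

binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "read-pods", "namespace": "default"},
    "subjects": [{"kind": "ServiceAccount", "name": "app-sa", "namespace": "default"}],
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io", "kind": "Role", "name": "pod-reader"},
}

for manifest in (role, binding):
    utils.create_from_dict(api, manifest)
```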

Kubernetes vs Docker 


Kubernetes vs Docker is a comparison between two related but different tools: Docker packages and runs application containers, while Kubernetes orchestrates containers across a cluster. The comparison will help you understand the advantages and disadvantages of each technology and make an informed decision if you are looking for a way to run your application in a distributed environment.

  1. Docker is a software technology that automates the deployment of applications inside software containers. Kubernetes is a container management and orchestration system that provides features like high availability, load balancing, service discovery, and roll-back capabilities in a distributed system.
  2. Docker focuses on building and running individual containers; Kubernetes orchestrates containers across many hosts, whether they use Docker or another container runtime.
  3. Docker has a focused purpose; Kubernetes is more general, offering different tools depending on the need.
  4. But Kubernetes isn’t just a Docker killer. It’s a cloud-native tool that, when considered as a platform, complements the other tools in this space.

Benefits of Kubernetes 

  • Kubernetes can run containers in production in a way that is similar to how we run them locally. Running containers locally means that we can debug applications much faster by running the same environment as the production environment. 
  • We can run our applications on localhost instead of having to SSH into hosts or use a tool like Vagrant for every change.
  • Kubernetes allows us to save time debugging applications by letting us do most of the development using local runtimes like Docker or rkt and then scale up when we are ready to put it into production.
  • Kubernetes provides a new model for data center infrastructure and cloud-based application platforms, raising the bar for reliability, scalability, and maintainability.
  • Kubernetes-based cloud applications have the potential to reach new heights in application quality and customizability: code is migrated and data is replicated with an efficiency unmatched by competing private cloud solutions. 
  • Availability – Unlike many in-house solutions, Kubernetes clusters can be built and provisioned in a matter of minutes. This allows for faster responses to failures and, as a result, more time spent maintaining high availability. 
  • Robust infrastructure – Production-grade applications deployed on Kubernetes have proven highly reliable.



Conclusion

Kubernetes is quickly becoming one of the hottest cloud-computing technologies on the planet. Organizations are increasingly adding it to their arsenal of tools as they automate their applications and environments. While Kubernetes has already established itself as a force within businesses, it still has yet to make its way into the consumer space.
