Architecture of Kubernetes

"In a Kubernetes cluster, containers can run on bare metal servers, virtual machine (VM) instances, cloud instances, or a mix of these. Kubernetes designates one or more of these nodes as the master nodes, while the rest become worker nodes. The master nodes are responsible for running a set of Kubernetes processes known as the control plane, which ensures the smooth functioning of the cluster. This control plane can be replicated for high availability, with multiple master nodes working together to maintain cluster operations."

Components of the control plane (master)

Kube-API server

  1. The API server interacts directly with users.

  2. Users submit .yaml or JSON manifests to the cluster through the API server (see the example manifest after this list).

  3. The Kubernetes API server is designed to scale horizontally, i.e., by running more instances to handle increased load.

  4. It serves as the front end of the control plane.

  5. The Kubernetes API server exposes the Kubernetes API.

  6. It receives and validates requests from clients.

  7. It also validates and configures data for the Kubernetes cluster.
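
For example, a minimal POD manifest like the hedged sketch below (the POD name and image are placeholders) is submitted to the API server with kubectl apply; the server validates the request and stores the resulting object in the cluster:

  # pod.yaml - illustrative example; the POD name and image are placeholders
  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx-pod
  spec:
    containers:
      - name: nginx
        image: nginx:1.25
        ports:
          - containerPort: 80
  # Submitted to the API server with: kubectl apply -f pod.yaml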

ETCD

  1. Purpose: Etcd is a distributed key-value store that serves as the highly available source of truth for the cluster, holding all critical information about its state.

  2. Data Storage: It stores data in key-value pairs and maintains information regarding configuration, application state, and metadata related to the Kubernetes cluster.

  3. Features:

    • Full Replication: Etcd replicates the entire cluster state to every member of the etcd cluster, ensuring high availability and consistency of data.

    • Security: It implements automated Transport Layer Security (TLS) with the option for client certificate authentication, ensuring data security and encryption.

    • Performance: Etcd demonstrates impressive performance, with benchmarks indicating the capability to handle up to 10,000 write operations per second.

Etcd plays a crucial role in Kubernetes by providing a reliable and distributed storage solution for maintaining cluster information, and supporting the orchestration and management of containerized applications.
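
As a rough illustration of the key-value model (the exact key layout is an internal detail of the kube-apiserver and varies between versions, so treat these paths as assumptions), cluster objects are stored under a /registry prefix with the serialized object as the value:

  # Illustrative key-value pairs only; real values are serialized API objects.
  /registry/pods/default/nginx-pod: "<serialized Pod object>"
  /registry/configmaps/default/app-config: "<serialized ConfigMap object>"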

Kube-Scheduler

  1. User Requests: When a user makes a request for the creation and management of PODs within a Kubernetes cluster, the Kubernetes scheduler takes action on these requests.

  2. POD Placement: The scheduler's primary responsibility is deciding which node each POD should run on; it does not create the containers itself (the Kubelet on the chosen node does that).

  3. Matching and Assignment: The scheduler matches and assigns each POD to an appropriate node for its creation and execution.

  4. Assignment Process: Initially, when a newly created POD has no node assigned to it, the scheduler becomes responsible for finding the best node for the POD to run on.

  5. Node Information: To make this decision, the scheduler uses information about the available nodes, such as their capacity and current resource usage, which it reads from the cluster state through the API server.

  6. Scheduling Decision: Based on the resource requirements and constraints declared in the POD spec, the scheduler determines the most suitable node for its placement (see the example spec below).

  7. Continuous Monitoring: The scheduler continually monitors for newly created PODs and, when necessary, assigns them to nodes based on their specific resource needs and any defined constraints.

The Kubernetes scheduler plays a critical role in optimizing the allocation of resources within the cluster, ensuring efficient and reliable execution of containerized workloads.
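
As a hedged sketch of the inputs the scheduler considers (the names, label, and resource figures below are illustrative), the following POD spec requests CPU and memory and restricts placement to nodes carrying a disktype=ssd label; the scheduler only binds it to a node that satisfies both:

  apiVersion: v1
  kind: Pod
  metadata:
    name: web-pod
  spec:
    nodeSelector:
      disktype: ssd          # only nodes labeled disktype=ssd are candidates
    containers:
      - name: web
        image: nginx:1.25
        resources:
          requests:
            cpu: "250m"      # the scheduler looks for a node with this much spare CPU
            memory: "128Mi"  # ...and this much spare memory
          limits:
            cpu: "500m"
            memory: "256Mi"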

Controller Manager

The kube-controller-manager runs the cluster's controller processes. Some of these controllers are:

Node controller, ReplicaSet controller, Endpoints controller, Service Account & Token controllers.

Controllers on the master that interact with your cloud provider:

  1. Node Controller:

    • Responsible for checking with the cloud provider to determine whether a node has been deleted in the cloud after it stops responding.

  2. Route Controller:

    • Responsible for setting up network routes in your cloud infrastructure.

  3. Service Controller:

    • Responsible for creating and managing cloud load balancers for Services of type LoadBalancer (see the example manifest after this list).

  4. Volume Controller:

    • Responsible for tasks related to volumes, including creating, attaching, and mounting volumes.

    • Interacts with the cloud provider to orchestrate volume-related operations.
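
To make the Service Controller's job concrete, here is a hedged example manifest (the name, label, and ports are placeholders): creating a Service of type LoadBalancer prompts the controller to provision a load balancer with your cloud provider:

  apiVersion: v1
  kind: Service
  metadata:
    name: web-lb
  spec:
    type: LoadBalancer     # the service controller provisions a cloud load balancer
    selector:
      app: web             # traffic is forwarded to PODs carrying this label
    ports:
      - port: 80           # port exposed by the load balancer
        targetPort: 8080   # port the container listens on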

Node

Kubelet and Kube Proxy in Kubernetes

  1. Kubelet:

    • The Kubelet is an agent that runs on every worker node and listens to the Kubernetes master for instructions.

    • It handles POD creation requests and reports their success or failure back to the master.

    • Its primary responsibility is to ensure that the containers described in a PodSpec are running and healthy (see the liveness-probe example after this list).

    • The Kubelet also reports the health status of the node to the master and manages node-level operations.

  2. Container Engines (e.g., Docker):

    • Container engines like Docker work in conjunction with Kubelet.

    • They are responsible for pulling container images, starting and stopping containers, and exposing containers on ports specified in the manifest.

  3. Kube Proxy:

    • Kube Proxy is another component running on each node in the cluster.

    • It does not assign POD IP addresses (the cluster's network plugin handles that); instead, it ensures that traffic addressed to a Service reaches the right PODs.

    • Kube Proxy runs on every node and is responsible for implementing the Kubernetes service abstraction.

    • It achieves this by maintaining network rules on the node and forwarding traffic to service IPs and ports, allowing for seamless communication between services and pods in the cluster.
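
To make the Kubelet's health checking concrete, here is a hedged PodSpec with a liveness probe (the image, path, and port are assumptions about the application); the Kubelet on the node runs this probe periodically and restarts the container if it fails:

  apiVersion: v1
  kind: Pod
  metadata:
    name: probed-app
  spec:
    containers:
      - name: app
        image: nginx:1.25
        ports:
          - containerPort: 80
        livenessProbe:           # the Kubelet performs this check periodically
          httpGet:
            path: /              # assumed health endpoint
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10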

POD

The smallest deployable unit in Kubernetes is the POD: a group of one or more tightly coupled, co-located containers deployed together on the same host. A Kubernetes cluster consists of at least one master node and one worker node. The unit Kubernetes controls is the POD, not the container: a POD runs on a node managed by the master, Kubernetes only knows about PODs, and a container cannot run without being part of one. Typically, one POD contains a single container.

Multi-container PODs

Containers within a POD share the same network namespace, so they can reach each other over localhost, and they can mount the same volumes. The containers of a POD are scheduled in an all-or-nothing manner: the entire POD is placed on a single node (the Scheduler decides which one).
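
A minimal sketch of such a multi-container POD (container names, images, and paths are illustrative): both containers share the POD's network namespace and mount the same emptyDir volume, so the sidecar can write content that the web server serves:

  apiVersion: v1
  kind: Pod
  metadata:
    name: web-with-sidecar
  spec:
    volumes:
      - name: shared-data
        emptyDir: {}             # scratch volume shared by both containers
    containers:
      - name: web
        image: nginx:1.25
        volumeMounts:
          - name: shared-data
            mountPath: /usr/share/nginx/html
      - name: content-writer
        image: busybox:1.36
        command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
        volumeMounts:
          - name: shared-data
            mountPath: /data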

POD Limitations

A bare POD has no auto-healing and no auto-scaling: if it crashes or its node fails, nothing recreates it, and nothing adjusts the number of replicas on its own.

Higher-level Kubernetes objects

  • ReplicaSet: auto-healing and scaling (keeps the desired number of POD replicas running).
  • Deployment: versioning, rolling updates, and rollback.
  • Service: a stable (non-ephemeral) IP and network endpoint.
  • Volume: non-ephemeral storage.
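
As a hedged sketch of how a Deployment layers these features on top of bare PODs (the names and image are placeholders), the manifest below keeps three replicas running through its ReplicaSet, and an image change triggers a rolling update that can later be rolled back:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web-deployment
  spec:
    replicas: 3                  # the ReplicaSet it creates keeps 3 PODs running
    selector:
      matchLabels:
        app: web
    template:
      metadata:
        labels:
          app: web
      spec:
        containers:
          - name: web
            image: nginx:1.25    # changing this image starts a rolling update
  # Roll back to the previous revision with: kubectl rollout undo deployment/web-deployment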

Other Important Points:

Kubernetes provides various tools for different use cases:

  • kubectl: the command-line client for working with a single cluster (covered in more detail below).

  • kubeadm: a tool for bootstrapping clusters, typically used for on-premise installations.

  • kubefed: used to manage federated (multi-cluster) Kubernetes setups.

Kubernetes CLI - kubectl:

  • kubectl is a command-line interface (CLI) for executing commands against Kubernetes clusters.

  • It offers a range of useful commands, including:

    • kubectl get: Used to retrieve resources.

    • kubectl describe: Provides detailed information about a resource.

    • kubectl logs: Retrieves logs from a container.

    • kubectl exec: Executes a command within a container.

    • kubectl apply: Applies a configuration to a resource.

    • kubectl delete: Deletes resources.
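
A few illustrative invocations of the commands above (the POD name and file name are placeholders):

  kubectl get pods                          # list PODs in the current namespace
  kubectl describe pod nginx-pod            # detailed information about one POD
  kubectl logs nginx-pod                    # container logs from that POD
  kubectl exec -it nginx-pod -- sh          # open a shell inside the container
  kubectl apply -f pod.yaml                 # create/update resources from a manifest
  kubectl delete -f pod.yaml                # delete the resources defined in the manifest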

These tools play crucial roles in managing and interacting with Kubernetes clusters, making it easier to deploy, manage, and maintain containerized applications.

If you guys liked the content, then like it or share it with others and promote learning in the open. Always smile and take care.