Kubernetes:- ReplicationControllers and ReplicaSets

ReplicaSet

A Kubernetes object that ensures that a specified number of pod replicas are running at any given time.
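
As a minimal sketch (with hypothetical names like example-rs, not part of the lab below), a ReplicaSet manifest has three key parts: the desired replica count, a selector, and a pod template:

kind: ReplicaSet
apiVersion: apps/v1
metadata:
  name: example-rs            # hypothetical name, for illustration only
spec:
  replicas: 3                 # desired number of pod copies
  selector:
    matchLabels:
      app: example            # must match the template labels below
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: web
          image: nginx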

Controller

A Kubernetes object that monitors the state of other objects and takes corrective action as needed.
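
You can watch this corrective action live during the labs below: keep this command running in a second terminal, delete a pod, and the controller will spin up a replacement:

kubectl get pods --watch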

Key Points for Both

  • Both ReplicationControllers and ReplicaSets are essential for ensuring high availability, scaling, and fault tolerance of applications in a Kubernetes cluster.

  • They are key components in achieving the self-healing capabilities of Kubernetes, as they automatically replace failed or terminated pods.

  • They help manage pods and ensure that a specified number of replicas are always maintained.

  • Controllers, including ReplicaSets, can be defined and managed using YAML or JSON manifests in Kubernetes (see the JSON sketch after this list).

  • Kubernetes controllers help abstract the underlying complexity of managing containers and provide a more user-friendly way to manage applications.
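
As a quick sketch of the JSON form mentioned above (not part of the original lab), here is the ReplicationController that the lab below defines in YAML, expressed as JSON instead; kubectl apply -f accepts it just the same:

{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": { "name": "myreplica" },
  "spec": {
    "replicas": 2,
    "selector": { "myname": "sidharth" },
    "template": {
      "metadata": { "labels": { "myname": "sidharth" } },
      "spec": {
        "containers": [
          {
            "name": "c00",
            "image": "ubuntu",
            "command": ["/bin/bash", "-c", "while true; do echo Hello-Bhupinder; sleep 5; done"]
          }
        ]
      }
    }
  }
}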

ReplicationController lab example:-

Create a YAML file named pod1.yml with the following content:

kind: ReplicationController
apiVersion: v1
metadata:
  name: myreplica
spec:
  replicas: 2                     # desired number of pod copies
  selector:                       # must match the labels in the template below
    myname: sidharth
  template:
    metadata:
      name: testpod6
      labels:
        myname: sidharth
    spec:
      containers:
        - name: c00
          image: ubuntu
          command: ["/bin/bash", "-c", "while true; do echo Hello-Bhupinder; sleep 5; done"]

Check that minikube is running:

minikube status

If it is not running, start minikube:

minikube start --driver=docker
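
Once minikube reports Running, you can also confirm that the cluster node is Ready:

kubectl get nodes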

Apply the manifest using the file name you saved it as (pod1.yml):

kubectl apply -f pod1.yml
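
If you want to validate the manifest without changing anything in the cluster, recent kubectl versions support a client-side dry run:

kubectl apply -f pod1.yml --dry-run=client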

List the ReplicationControllers:

kubectl get rc

Get a detailed description of the ReplicationController:

kubectl describe rc myreplica

List the pods:

kubectl get pods

Delete any one pod using its name from the previous command's output:

kubectl delete pod myreplica-99zwp

Then check the RC again and describe it; the deleted pod is automatically replaced:

kubectl get rc
kubectl describe rc myreplica

Also check the pods along with their labels:

kubectl get pods --show-labels
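
Since the RC selects pods by the myname=sidharth label, you can filter to exactly its pods with a label selector:

kubectl get pods -l myname=sidharth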

Scaling the ReplicationController up and down

To scale up:

kubectl scale --replicas=8 rc -l myname=sidharth

Verify:

kubectl get rc

To scale down:

kubectl scale --replicas=1 rc -l myname=sidharth

Verify again:

kubectl get rc
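
As an alternative to the -l label selector, kubectl can also scale the RC by its resource name; both forms have the same effect:

kubectl scale rc myreplica --replicas=8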

Finally, you can delete everything created by the manifest with this command:

kubectl delete -f pod1.yml
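
To confirm that the cleanup worked, you can list both resource types in a single command:

kubectl get rc,pods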

ReplicaSet lab example:-

Create a YAML file named pod2.yml with the following content:


kind: ReplicaSet
apiVersion: apps/v1
metadata:
  name: myrs
spec:
  replicas: 2
  selector:
    matchExpressions:                             # these must match the labels
      - {key: myname, operator: In, values: [Bhupinder, Bupinder, Bhopendra]}
      - {key: env, operator: NotIn, values: [production]}
  template:
    metadata:
      name: testpod7
      labels:
        myname: Bhupinder
    spec:
      containers:
        - name: c00
          image: ubuntu
          command: ["/bin/bash", "-c", "while true; do echo Technical-Guftgu; sleep 5; done"]
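
Note that matchExpressions is the set-based selector form. If you only need exact label matches, a ReplicaSet also accepts the simpler matchLabels form, for example:

  selector:
    matchLabels:
      myname: Bhupinder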

Apply it with this command:

kubectl apply -f pod2.yml

Check the ReplicaSet and the pods:

kubectl get rs
kubectl get pods

Scale up or down (this time by resource name):

kubectl scale --replicas=1 rs/myrs

Check the pods again:

kubectl get pods

Delete a pod using the name from the previous command's output:

kubectl delete pod myrs-k6dpg

Check the pods and the ReplicaSet again; the ReplicaSet self-heals by automatically creating a replacement pod:

kubectl get rs
kubectl get pods

Now delete the ReplicaSet itself:

kubectl delete rs/myrs

Verify that the pods it managed are gone as well:

kubectl get pods

Finally, clean up anything left over from the manifest:

kubectl delete -f pod2.yml

Conclusion

That completes a basic lab on ReplicationControllers and ReplicaSets. If you found it useful, share it with others.