Day 43: Deployment of a Microservices Application on K8s: Assignment-1

MongoDB Deployment

Kubeadm Installation Guide

Previously, we went through the process of installing Minikube.

In this guide, we will go through the steps needed to set up a Kubernetes cluster using "kubeadm".

Pre-requisites

  • Ubuntu OS (Xenial or later)

  • sudo privileges

  • Internet access

  • t2.medium instance type or higher


Setting up 2 EC2 instances

We have gone through these steps multiple times before, so launching the two instances (one master, one worker) should be straightforward.


Install Kubeadm on Master & Worker

Run the following commands on both the master and worker nodes to prepare them for kubeadm.


# Install the container runtime and prerequisites
sudo apt update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo apt install docker.io -y

sudo systemctl enable --now docker # enable and start in a single command

# Add the Kubernetes apt signing key and repository
curl -fsSL "https://packages.cloud.google.com/apt/doc/apt-key.gpg" | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/kubernetes-archive-keyring.gpg

echo 'deb https://packages.cloud.google.com/apt kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install pinned, mutually compatible versions of kubeadm, kubectl, and kubelet
sudo apt update
sudo apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y

Initialize Kubeadm on the Master Node

  1. Initialize the Kubernetes master node.

     sudo kubeadm init
    

     Once this command completes, your Kubernetes control plane is initialized.

  2. Set up local kubeconfig (both for the root user and normal user):

     mkdir -p $HOME/.kube
     sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
     sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

  3. Apply the Weave network plugin:

     kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
    

  4. Generate a token for worker nodes to join:

     sudo kubeadm token create --print-join-command
    

  5. Expose port 6443 in the master node's security group so that the worker node can connect to the master node (see the example command after this list).
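
If you prefer the AWS CLI to the console for this step, a rule like the following opens port 6443; the security group ID and CIDR below are placeholders to replace with your own values:

# Hypothetical values: substitute your master node's security group ID
# and a CIDR that covers the worker node's IP.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 6443 \
  --cidr 172.31.0.0/16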


Preflight checks on Worker Node

  1. Run the following command on the worker node to clear any previous kubeadm state (it also performs kubeadm's own pre-flight checks):

     sudo kubeadm reset
    

  2. Paste the join command you got from the master node and append --v=5 at the end. Make sure you are either logged in as the root user or prefix the command with sudo (a sketch of the join command's shape follows this list).

  3. After a successful join, the worker node becomes part of the cluster.
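
For reference, the join command printed on the master generally has the shape below; the IP, token, and hash are placeholders, not values from a real cluster:

# Run on the worker node as root (or with sudo); --v=5 adds verbose output.
sudo kubeadm join <master-private-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> --v=5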


Verify Cluster Connection

Master Node:

kubectl get nodes

Both nodes should be listed and, once the Weave network pods are up, report a STATUS of Ready.


MongoDB Project

Master Node:

  1. To create the database, we first need a persistent volume. Write a PersistentVolume YAML file, "mongo-pv.yml", that provisions 256 MB of storage (a sample manifest is sketched after this list).

  2. Once the PersistentVolume YAML file is created, apply it with the command below:

     kubectl apply -f mongo-pv.yml

  3. We now have 256 MB available, ready to be claimed. To check this, use the command:

     kubectl get pv mongo-pv

  4. To claim the volume, we need to write another YAML file, "mongo-pvc.yml" (also sketched after this list).

  5. Once the PersistentVolumeClaim YAML file is created, apply it with the command below:

     kubectl apply -f mongo-pvc.yml

  6. Once the command is executed, the claim's status shows that the volume is now bound to mongo-pv.

  7. After claiming the volume, we can assign it to the MongoDB application. To set up MongoDB, we will write a Deployment YAML file, "mongo.yml" (sketched below), and deploy it.
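
The assignment does not reproduce the manifest contents, so here is a minimal sketch of what "mongo-pv.yml" could look like. The hostPath location and access mode are assumptions for a single-node demo, not the assignment's exact file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  capacity:
    storage: 256Mi           # the 256 MB of storage requested above
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/mongo-data    # assumed host path for this demo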
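A matching "mongo-pvc.yml" that claims the 256 MB volume might look like this (again an illustrative sketch, assuming the PV above):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 256Mi         # must fit within the PV created above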
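Finally, a sketch of the "mongo.yml" Deployment that mounts the claim into a MongoDB container; the image tag is an assumption, and /data/db is MongoDB's default data directory:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:latest        # assumed image tag
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-storage
              mountPath: /data/db    # MongoDB's default data directory
      volumes:
        - name: mongo-storage
          persistentVolumeClaim:
            claimName: mongo-pvc     # the claim created above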

Command to deploy:

kubectl apply -f mongo.yml

Worker Node:

  1. To check if the containers are running:

     docker ps

  2. We can access the mongo container using the usual command:

     docker exec -it <container ID> bash

  3. We have successfully deployed MongoDB; from inside the container, open the Mongo shell to confirm access:

     mongosh


Conclusion:

In this project, we successfully installed kubeadm on both EC2 instances, the master node and the worker node. We also created a persistent volume, claimed it, and attached it to the MongoDB container.

This simple example gives you a glimpse of how to define and deploy containers within a Kubernetes cluster.

Hope you like my post. Don't forget to like, comment, and share.