Deploy NGINX using Kubernetes in the easiest way possible.

A hands-on project using Kubernetes

Although the Kubernetes universe, also known as K8s, can feel intimidating and overwhelming at first, it provides real benefits for application deployment.

Alright... When we speak of K8s and app deployment, the first question that comes to mind is:

Why deploy apps using Kubernetes?

Here are a few reasons why:

  • Container Orchestration

Kubernetes provides a robust container orchestration platform. It allows you to automate the deployment, scaling, and management of containerized applications. Containers provide consistency across different environments, making it easier to develop and deploy applications.

  • Scalability

Kubernetes can scale your applications horizontally, adding containers or pods as demand grows. This helps preserve application availability and performance during traffic peaks.

  • High Availability (HA)

Load balancing, automatic pod rescheduling, and health checks are just a few of the capabilities that Kubernetes offers to keep your applications responsive and accessible even in the event of hardware or software failures.

  • Resource Optimization

Kubernetes intelligently distributes containers across the available infrastructure to maximize resource utilization. This guarantees that resources are used efficiently and affordably, which is crucial in cloud environments where you pay for the resources you use.

  • Rolling Updates

Kubernetes lets you update and roll back applications without interruption. You can upgrade your application with no downtime while keeping an eye out for problems: old containers are gradually replaced with new ones, and you can quickly revert to an earlier version if issues emerge.

  • Declarative Configuration

Kubernetes uses a declarative approach: you specify the desired state of your applications and infrastructure, and Kubernetes continuously works to realize that state, automatically resolving any inconsistencies.

  • Service Discovery and Load Balancing

Applications can more easily discover and communicate with one another thanks to Kubernetes' built-in service discovery and load balancing. This simplifies networking in a containerized system.

  • Secrets Management

Through Secrets, Kubernetes offers a method for securely managing sensitive data, such as API keys and passwords, which lowers the chance of exposing sensitive data.

  • Ecosystem and Community

A sizable and engaged community supports the growth of Kubernetes and offers a robust ecosystem of plugins and solutions. Helm for package management, Prometheus for monitoring, and Istio for service mesh are just a few of the ecosystem's components.

  • Multi-cloud and Hybrid Support

Kubernetes is cloud-agnostic, so you can run your applications on several cloud providers or on on-premises infrastructure. For organizations using multi-cloud or hybrid cloud strategies, this flexibility is crucial.

  • Cost Management

Kubernetes can help businesses keep cloud infrastructure expenses under control by using resources efficiently and scaling only as necessary. The tools it offers for resource allocation and monitoring make it easier to make educated decisions about resource provisioning.

And definitely topping all the above is...

  • Security

Role-based access control (RBAC), network policies, and pod security policies are just a few of the capabilities that Kubernetes offers to help you secure your containerized applications and safeguard your infrastructure and data.

In conclusion, Kubernetes simplifies the deployment and maintenance of containerized applications while improving their scalability, availability, and resource utilization. It also offers a wide range of capabilities that benefit both development and operations teams. These advantages make it a compelling option for delivering and administering modern apps in a variety of scenarios.

Let's quickly hop into the talk where Ryan wants to deploy an application using K8s, and the application is Nginx.

While the deployment of this project is easy, it comes with some prerequisites:

  • Ubuntu (Xenial or Later)

  • Superuser or Sudo access

  • Internet access

  • t2.medium or higher AWS EC2 instance type.

Constructing the Master and Worker Nodes

1> Login to the AWS console and spin up two EC2 instances with t2.medium as the size and Ubuntu (Xenial or later) as the OS.

2> Once the machines are ready, SSH into each machine and execute the following commands:

  • sudo apt update

  • sudo apt-get install -y apt-transport-https ca-certificates curl

  • sudo apt install docker.io -y

  • sudo systemctl enable --now docker

  • curl -fsSL "https://packages.cloud.google.com/apt/doc/apt-key.gpg" | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/kubernetes-archive-keyring.gpg

  • echo 'deb https://packages.cloud.google.com/apt kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list

  • sudo apt update

  • sudo apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y

Phew. What did we do on both machines here? Let me explain:

We updated the package index in the first statement/bullet point.

We installed the necessary certificates and HTTPS transport packages in bullet point #2.

We installed Docker in bullet point #3, since it is necessary for pulling images from the Dockerhub repository.

We enabled and started the Docker service in a single command in bullet point #4.

We added Google's GPG key in bullet point #5.

Next, we added the Kubernetes repository to the source list in bullet point #6.

Once all this was done, we updated the package index again in bullet point #7.

Finally, we installed kubeadm, kubectl, and kubelet in bullet point #8.

The Master Node has a few more actions

  • sudo kubeadm init

This command initializes the Kubernetes control plane on the master node.

  • mkdir -p $HOME/.kube

    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

    sudo chown $(id -u):$(id -g) $HOME/.kube/config

This set of commands sets up the local kubeconfig (for both the root user and a normal user).

Next, apply a pod network add-on so that pods can communicate across nodes; this setup applies the Weave network.
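
A common way to apply Weave Net is shown below (the release version in the URL is an assumption; use the one matching your cluster):

  • kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/net.yaml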

  • sudo kubeadm token create --print-join-command

This command prints a join command that can be used to join the worker node machine to the master node.

The last but not least task, done outside this master node, is to navigate to the security group of the EC2 instance created for the master node and add port 6443 to the inbound rules so that Kubernetes can function without issues.

All set with Master Node!

The Worker Node has a few more actions

Execute the following commands to set up the worker node:

sudo kubeadm reset

This resets any previous cluster state on the node and runs kubeadm's pre-flight checks.

Next, paste the join command you got from the master node and append --v=5 at the end. Make sure you are either working as the root user or prefix the command with sudo.
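
The join command follows this general shape; the address, token, and hash are placeholders that come from your own master node:

sudo kubeadm join <master-private-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --v=5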

All set!! Now the worker node has successfully joined the Master node... If the join is successful, you should see a message along the lines of "This node has joined the cluster".

Verify Cluster Connection

On the master node: kubectl get nodes

On the worker node: docker ps

Running Nginx Pod on Kubernetes

There are two parts to this next stage of deployment. Let's begin with Part 1.

Creating a directory to host all the manifest files and a namespace for the application.

Using the mkdir and cd commands, create the project folder. For my convenience, I have created a folder by the name "nginx". Very creative 😎 I know 😂
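
In commands, that is:

mkdir nginx
cd nginx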

Ryan had this question for me: why create a namespace?

The answer is:

Within a cluster, namespaces are used to establish distinct contexts. They provide segmentation and organization of resources such as deployments, services, and pods. By using namespaces, multiple projects or teams can share the same cluster while avoiding resource conflicts. Namespaces also reduce the chance of naming collisions and improve resource management and access control.

To create a namespace, use the following command:

kubectl create namespace <name of the namespace you want to create>

For my convenience again, I used nginx 😁

Once the namespace is created, we are all set to move into Part 2 of the deployment. The actual application deployment!

Create a file called pod.yml and put the pod definition into it.
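
A minimal sketch of what pod.yml can contain is below; the pod name, label, and image tag here are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80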

The next step is to apply this configuration
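
Assuming the file above, that is:

kubectl apply -f pod.yml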

Once the pod file executes, the next step is to create a deployment manifest
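
A sketch of a deployment manifest, saved here as deployment.yml (the names and replica count are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80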

Now apply the deployment configuration using the kubectl apply command.
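
With the manifest above:

kubectl apply -f deployment.yml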

To check whether the deployment succeeded and the pods were created, use:

kubectl get pods -n nginx

This command lists all the pods within the namespace nginx

To see a more descriptive state, use the kubectl describe deployment command as shown below:
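
Using the deployment name assumed in the sketch above:

kubectl describe deployment nginx-deployment -n nginx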

While I was trying to execute this, I also wondered if rolling update works! So... I tried this 😎
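
A command consistent with the result below, with the deployment name and replica count assumed, is to bump the replica count and let Kubernetes roll out the change:

kubectl scale deployment nginx-deployment --replicas=3 -n nginx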

and boom!! the deployment scaled!!!
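
Exposing the app on port 30007 implies a NodePort Service. A minimal sketch (the service name is an assumption; the nodePort matches the port used below):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30007

Save it as service.yml and apply it with kubectl apply -f service.yml.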

Once this is completed, all you have to do is add port 30007 to the inbound rules of the security group attached to the worker node. This exposes the deployment you have created to the outside world.

All that is left to do is check the cluster IP:
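
Assuming the namespace and service from earlier:

kubectl get svc -n nginx

Then open http://<worker-node-public-ip>:30007 in a browser, substituting your worker node's public IP, and the Nginx welcome page should appear.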

🎯🎯Target Hit 🎯🎯 Nginx is now set and running!!!

Pro-tip

Once all this is completed, make sure to terminate the instances you created, or the bill in your account will skyrocket.

Happy Learning!!
