
Kubernetes Project: Deploy Cloud Native Voting Application on EKS

This Blog is inspired by Cloudchamp's YouTube video

APP REPO https://github.com/Aj7Ay/K8s-voting-app.git

Navigate to your AWS console

Click the “Search” field and search for EKS, or select Elastic Kubernetes Service directly from the Recently visited tab

Click “Add cluster”

Click “Create”

Click the “Name” field and enter a unique name for the cluster (anything you want). For example, I used Cloud with Kubernetes version 1.27

Click on the Amazon EKS User Guide link for new IAM role creation.

You will get the below webpage. Click “console.aws.amazon.com/iam”

You will be redirected to the IAM dashboard

Click “Roles”

Click “Create role”

Click “Allow AWS services like EC2, Lambda, or others to perform actions in this account.”

Click “Choose a service or use case”

Type “EKS”

Select the radio button for EKS - Cluster

Click “Next”. You will be redirected to the permissions policy page; click “Next” again (there is only one policy for EKS, AmazonEKSClusterPolicy, and it is selected by default)

Click the “Role name” field and provide the name (myAmazonEKSClusterRole)

Click “Create role”

Under Cluster service role, select the “myAmazonEKSClusterRole” you just created.

Click “Next”

Click “Select security groups” and use an existing security group or create a new one

Click “Next”

Click “Next”

No changes needed; click “Next” (keep the defaults)

No changes needed; click “Next” again (keep the defaults)

Click “Create”

In your cluster, click “Add-ons”

Click “Get more add-ons”

Select the checkbox for Amazon EBS CSI Driver

No changes needed; click “Next” (keep the defaults)

No changes needed; click “Next” again (keep the defaults)

Click “Create”

Once your cluster reaches the Active status:

Click “Compute”

Click on “Add node group”

Click the “Name” field.

Write any Name you want (NodeGroup)

Click “Select role” and click on the IAM console

Click “Create role”

Click “Allow AWS services like EC2, Lambda, or others to perform actions in this account.”

Click “Choose a service or use case”

Click “EC2”

Click “Next”

Click the “Search” field.

Search for these policy names and check each one:

AmazonEC2ContainerRegistryReadOnly

AmazonEKS_CNI_Policy

AmazonEBSCSIDriverPolicy

AmazonEKSWorkerNodePolicy

Click “Next”

Click the “Role name” field.

Add Role name as myAmazonNodeGroupPolicy

Click “Create role”

Back in the node group setup, select the role you just created, “myAmazonNodeGroupPolicy”

Click “Next”

On the next page, remove t3.medium and select t2.medium as the instance type.

Click “Next”

Click “Next”

Click “Create”

The node group will take some time to create. Meanwhile, click “EC2” or search for EC2

Click “Launch instance”

Add a name and select Amazon Linux as the AMI.

Choose t2.micro as the instance type, select a key pair, and use the default security group.

Click “Advanced details”

Click on the IAM instance profile dropdown and create a new IAM profile.

Click “Create role”

Click “Choose a service or use case”

Click “EC2”

Click the “Search” field.

Type “EBS”

Select the checkbox for the AmazonEBSCSIDriverPolicy policy.

Click “Next”

Click the “Role name” field and provide the name as EKSaccess.

Click “Create role”

Click on the newly created role “EKSaccess”

Click “Add permissions”

Click “Create inline policy”

Click “JSON”

Remove everything from the policy editor and add this:

```json
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "eks:DescribeCluster",
            "eks:ListClusters",
            "eks:DescribeNodegroup",
            "eks:ListNodegroups",
            "eks:ListUpdates",
            "eks:AccessKubernetesApi"
        ],
        "Resource": "*"
    }]
}
```

Click “Next”

Click the “Policy name” field and add the name eksaccesspolicy.

Click “Create policy”.

Add That Role to your instance and launch the instance

Once the instance comes up, copy the SSH command from the SSH client tab and connect (via PuTTY or any terminal).

```bash
ssh -i "m.pem" ec2-user@<instance-public-ip>
```



![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694620420382/62cb4429-cfd1-4e01-9065-751ef5bb88ed.png?auto=compress,format&format=webp)

Install git on the instance

```bash
sudo yum install git -y
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694620629480/a2ed3786-e21b-4292-bed9-cdad5be318c6.png?auto=compress,format&format=webp)

Once Git is installed, install kubectl on the instance:

```bash
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.24.11/2023-03-17/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo cp ./kubectl /usr/local/bin
export PATH=/usr/local/bin:$PATH
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694620774080/55d0b82f-d293-4cdd-97a6-586d649c6cfa.png?auto=compress,format&format=webp)

To check the version:

```bash
kubectl version --client
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694621014305/cfd16d46-2213-4b6b-9c08-64eac25018dd.png?auto=compress,format&format=webp)

Install AWS CLI on the instance:

```bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694621118054/6625a6bc-34c5-4152-8a5c-d1f74ee45f4a.png?auto=compress,format&format=webp)

Check the AWS CLI version:

```bash
aws --version
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694621280498/e913db97-0e86-41fe-98e3-ff6ff06c1192.png?auto=compress,format&format=webp)

Now check whether nodes are up or not

```bash
kubectl get nodes
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694621477569/7adbbd7b-cbec-4a79-ab09-02160aa28ac6.png?auto=compress,format&format=webp)

You will get a connection refused error because we haven't set up the context yet. Let's set the context:

```bash
# syntax
aws eks update-kubeconfig --name EKS_CLUSTER_NAME --region CLUSTER_REGION
# example
aws eks update-kubeconfig --name Cloud --region ap-south-1
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694621751839/4e482d2d-aaa3-4dbc-bf07-4a137277311b.png?auto=compress,format&format=webp)

The `aws eks update-kubeconfig` command updates the Kubernetes configuration file with details for connecting to an Amazon EKS cluster.

Specifically:

- `--name EKS_CLUSTER_NAME` provides the name of the EKS cluster to configure access for. This should be replaced with your actual cluster name.
- `--region ap-south-1` specifies the AWS region where the EKS cluster is located. You should update this if your cluster is in a different region.
- This will update your local kubeconfig file (usually located at ~/.kube/config) with the endpoint and certificate authority data to allow kubectl to communicate with your EKS cluster.
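For orientation, the entry that `update-kubeconfig` writes looks roughly like the trimmed sketch below; the endpoint, certificate data, and account ID are placeholders, and the exact layout may differ between AWS CLI versions:

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://<cluster-endpoint>.eks.amazonaws.com   # placeholder
    certificate-authority-data: <base64-ca-data>           # placeholder
  name: arn:aws:eks:ap-south-1:<account-id>:cluster/Cloud
contexts:
- context:
    cluster: arn:aws:eks:ap-south-1:<account-id>:cluster/Cloud
    user: arn:aws:eks:ap-south-1:<account-id>:cluster/Cloud
  name: arn:aws:eks:ap-south-1:<account-id>:cluster/Cloud
current-context: arn:aws:eks:ap-south-1:<account-id>:cluster/Cloud
users:
- name: arn:aws:eks:ap-south-1:<account-id>:cluster/Cloud
  user:
    exec:
      # kubectl asks the AWS CLI for a short-lived token on every call
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args: ["eks", "get-token", "--cluster-name", "Cloud"]
```

The important point is the `exec` section: kubectl never stores a static credential, it shells out to `aws eks get-token` each time.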

Let's check again whether nodes are up or not from instance.

```bash
kubectl get nodes
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694621945122/d3dd605b-be6a-4259-a540-7eb766567530.png?auto=compress,format&format=webp)

We will get an error that `You must be logged in to the server (unauthorized)`

The error message "You must be logged in to the server (Unauthorized)" in Kubernetes indicates that the user or service account trying to access the cluster does not have the necessary permissions. This error typically occurs when the authentication and authorization mechanisms in Kubernetes deny access.

Let's Resolve the issue

Go to the AWS console.

Click on the AWS CloudShell icon at the top right.

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694622738138/393fc02b-d445-4cee-ab87-f1b42233e227.png?auto=compress,format&format=webp)

Click on connect.

First, set the context by running the following command:

```bash
# syntax
aws eks update-kubeconfig --name EKS_CLUSTER_NAME --region CLUSTER_REGION
# example
aws eks update-kubeconfig --name Cloud --region ap-south-1
```

Edit the config map for access

```bash
kubectl edit configmap aws-auth --namespace kube-system
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694623125064/6933e0ab-6901-4a73-8660-980cb411b63d.png?auto=compress,format&format=webp)

Go to your IAM roles and copy the ARN of the IAM role attached to the EC2 instance

Add your Role arn to the config map

`EKSaccess role is added to the Instance (While creating the instance)`

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694623997954/5fbc9c4c-c3da-4adf-8171-2fe60d27e998.png?auto=compress,format&format=webp)

```yaml
mapRoles: |
  - rolearn: arn:aws:iam::XXXXXXXXXXXX:role/testrole   # change to your instance role's ARN
    username: testrole                                 # role name
    groups:
      - system:masters
```

example

```yaml
mapRoles: |
  - rolearn: arn:aws:iam::672618677785:role/EKSaccess   # change ARN
    username: EKSaccess                                 # IAM role name
    groups:
      - system:masters
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694623307422/034b35c0-e72c-4b41-bcf4-fd36e6d94870.png?auto=compress,format&format=webp)

save and exit

Press Esc, then type `:wq!` and press Enter.

You will get this output (aws-auth edited)

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694623549114/bdca2a20-699c-4096-98b7-5ee0a5d8e240.png?auto=compress,format&format=webp)

Check now whether nodes are up or not

`(You will get this output not only in CloudShell but also in PuTTY.)`

```bash
kubectl get nodes
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694624309970/d08bb5cc-022a-462d-9d14-d39da654982a.png?auto=compress,format&format=webp)

Let's clone our Project Repository

```bash
git clone https://github.com/Aj7Ay/K8s-voting-app.git
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694661982932/77d66a31-3697-42f5-bf4a-1bd6016e46a4.png?auto=compress,format&format=webp)

Go inside the K8s-voting-app directory once it is cloned.

```bash
cd K8s-voting-app
cd manifests
ls
cat api-deployment.yaml
```

In the API deployment, the namespace cloudchamp is used. By default we get only 4 namespaces, so we have to create the cloudchamp namespace.

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694662321899/3dcedcd8-ec08-417e-9201-8d63919c308d.png?auto=compress,format&format=webp)

List the default namespaces:

```bash
kubectl get ns
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694662425231/75f379e1-9412-4cb6-aa31-ac1c388d345e.png?auto=compress,format&format=webp)

Let's create our cloudchamp namespace.

```bash
kubectl create ns cloudchamp
kubectl get ns
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694662614303/73e501db-d570-45d7-8d40-a6b71575904f.png?auto=compress,format&format=webp)

When you want to work within a specific namespace for your Kubernetes operations, set it as the current namespace:

```bash
kubectl config set-context --current --namespace cloudchamp
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694663067981/16463c4c-76fb-447e-9349-5bd4331d0cf5.png?auto=compress,format&format=webp)

- `kubectl config set-context`: This is the main command for configuring your kubectl context.
- `--current`: This flag indicates that you want to modify the current context.
- `--namespace cloudchamp`: This specifies the namespace you want to set as the current namespace for your kubectl context. In this case, it sets the namespace to "cloudchamp."

After running this command, any subsequent kubectl commands you execute will be scoped to the "cloudchamp" namespace, unless you specify a different namespace explicitly in your commands. This can be particularly useful when you have multiple namespaces in your Kubernetes cluster, and you want to ensure that your operations are isolated to a specific namespace.

### MONGO Database Setup

To create a Mongo stateful set with Persistent volumes, run the command in the manifests folder:

To apply the manifest file:

```bash
kubectl apply -f mongo-statefulset.yaml
```
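For orientation, a Mongo StatefulSet with persistent volume claims typically contains pieces like the sketch below. The names, image tag, and sizes here are illustrative; the repo's mongo-statefulset.yaml is the authoritative version:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: cloudchamp
spec:
  serviceName: mongo              # headless service giving each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:4.2          # illustrative tag
        command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db
  volumeClaimTemplates:           # one EBS-backed PVC per replica (via the EBS CSI driver)
  - metadata:
      name: mongo-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi            # matches the 1 GB volumes seen in the console
```

The `volumeClaimTemplates` section is what makes each replica get its own persistent EBS volume, which is why new storage appears on the nodes after applying it.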

To check the pods:

```bash
kubectl get pods
# or, to watch them in the cloudchamp namespace
kubectl get pods -n cloudchamp -w
kubectl get all
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694667388392/a53b9bf0-f79f-4469-b628-65e850ca553d.png?auto=compress,format&format=webp)

Go to the AWS console and check your nodes' storage. You can see that new 1 GB volumes have been added to both nodes.

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694667171593/78ea6de0-d5b0-437a-8c23-22e2838c44bd.png?auto=compress,format&format=webp)

Check whether persistent volumes are created or not

```bash
kubectl get pv -n cloudchamp
kubectl get pvc -n cloudchamp
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694667607095/7b9a6e31-84cd-481e-88c1-c273cfb91c07.png?auto=compress,format&format=webp)

Create the Mongo service:

```bash
kubectl apply -f mongo-service.yaml
kubectl get svc
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694667778936/14ab962a-1694-48cc-8a76-96cade48f274.png?auto=compress,format&format=webp)

Now let's go inside the mongo-0 pod to initialise the Mongo database replica set.

```bash
kubectl get pods
kubectl exec -it mongo-0 -- mongo
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694668026894/aaf45c78-f584-4bf1-8527-cc9868689697.png?auto=compress,format&format=webp)

In the terminal run the following command:

```js
rs.initiate();
sleep(2000);
rs.add("mongo-1.mongo:27017");
sleep(2000);
rs.add("mongo-2.mongo:27017");
sleep(2000);
cfg = rs.conf();
cfg.members[0].host = "mongo-0.mongo:27017";
rs.reconfig(cfg, {force: true});
sleep(5000);
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694668271131/1a318767-e4dd-4c73-b254-ef5d30abc10c.png?auto=compress,format&format=webp)

Note: Wait until this command completes successfully, it typically takes 10-15 seconds to finish and completes with the message: bye

Load the Data in the database by running this command:

```js
use langdb
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694668584077/af5ba483-e36c-458f-a91a-aa216e3c8b0f.png?auto=compress,format&format=webp)

```js
db.languages.insert({"name" : "csharp", "codedetail" : { "usecase" : "system, web, server-side", "rank" : 5, "compiled" : false, "homepage" : "https://dotnet.microsoft.com/learn/csharp", "download" : "https://dotnet.microsoft.com/download/", "votes" : 0}});
db.languages.insert({"name" : "python", "codedetail" : { "usecase" : "system, web, server-side", "rank" : 3, "script" : false, "homepage" : "https://www.python.org/", "download" : "https://www.python.org/downloads/", "votes" : 0}});
db.languages.insert({"name" : "javascript", "codedetail" : { "usecase" : "web, client-side", "rank" : 7, "script" : false, "homepage" : "https://en.wikipedia.org/wiki/JavaScript", "download" : "n/a", "votes" : 0}});
db.languages.insert({"name" : "go", "codedetail" : { "usecase" : "system, web, server-side", "rank" : 12, "compiled" : true, "homepage" : "https://golang.org", "download" : "https://golang.org/dl/", "votes" : 0}});
db.languages.insert({"name" : "java", "codedetail" : { "usecase" : "system, web, server-side", "rank" : 1, "compiled" : true, "homepage" : "https://www.java.com/en/", "download" : "https://www.java.com/en/download/", "votes" : 0}});
db.languages.insert({"name" : "nodejs", "codedetail" : { "usecase" : "system, web, server-side", "rank" : 20, "script" : false, "homepage" : "https://nodejs.org/en/", "download" : "https://nodejs.org/en/download/", "votes" : 0}});
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694668699929/c8171765-2fcd-4274-bd14-90b9496f3498.png?auto=compress,format&format=webp)

```js
db.languages.find().pretty();
exit  // exit from the container
```

To confirm run this in the terminal:

```bash
kubectl exec -it mongo-0 -- mongo --eval "rs.status()" | grep -E "PRIMARY|SECONDARY"
```

Create Mongo secret:

```bash
kubectl apply -f mongo-secret.yaml
```
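For reference, a Kubernetes Secret manifest generally has the shape below. The name, keys, and values here are purely illustrative; the repo's mongo-secret.yaml defines the actual ones:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret      # illustrative name
  namespace: cloudchamp
type: Opaque
stringData:                 # stringData lets you write plain text; the API stores it base64-encoded
  username: admin           # illustrative key/value
  password: password        # illustrative key/value
```

Deployments then reference these keys via `secretKeyRef` entries in their container env, so credentials stay out of the deployment manifests themselves.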

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694668813091/e8e9dcc2-cbd0-4790-9563-903960216414.png?auto=compress,format&format=webp)

### API Setup

Create GO API deployment by running the following command:

```bash
kubectl apply -f api-deployment.yaml
kubectl get all
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694668971230/9f317f20-74e8-4c3c-a7bc-536560c23360.png?auto=compress,format&format=webp)

Expose API deployment through service using the following command:

```bash
kubectl expose deploy api \
  --name=api \
  --type=LoadBalancer \
  --port=80 \
  --target-port=8080
kubectl get svc
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694669159694/b707061c-c215-4ba8-8c23-8f9cb36fdf3c.png?auto=compress,format&format=webp)
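The kubectl expose command above is shorthand for applying a Service manifest roughly like this sketch; the selector label is an assumption and must match whatever labels the API deployment's pods actually carry:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: cloudchamp
spec:
  type: LoadBalancer      # tells EKS to provision an AWS ELB
  selector:
    app: api              # assumed label; must match the API pods' labels
  ports:
  - port: 80              # port the ELB listens on
    targetPort: 8080      # port the API container serves on
```

Because the type is LoadBalancer, the cloud controller creates the ELB and writes its hostname back into the Service's status, which is what the later `jsonpath` lookup reads.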

One load balancer will be created in your AWS account

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694669268847/c598c7af-42e2-4bf7-879c-38669f5012df.png?auto=compress,format&format=webp)

Next, set the environment variable:

```bash
{
API_ELB_PUBLIC_FQDN=$(kubectl get svc api -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
until nslookup $API_ELB_PUBLIC_FQDN >/dev/null 2>&1; do sleep 2 && echo waiting for DNS to propagate...; done
curl $API_ELB_PUBLIC_FQDN/ok
echo
}
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694669339153/f175d2e9-5518-43f0-be07-35e647977e5d.png?auto=compress,format&format=webp)

Test and confirm that the API's /languages and /languages/{name} endpoints can be called successfully. In the browser, use the API's external IP (or hostname) to see this output:

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694669618401/e184bcf3-0958-404c-8b9a-f7fa2d057c73.png?auto=compress,format&format=webp)

If everything works fine, go ahead with the Frontend setup.

### Frontend setup

```bash
kubectl apply -f frontend-deployment.yaml
kubectl get pods
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694669819531/39c110ab-dfc5-48b1-bc9f-72108ffe4556.png?auto=compress,format&format=webp)

```bash
kubectl get all
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694669866275/bda15dfc-ae91-4966-a08f-29a367ef624d.png?auto=compress,format&format=webp)

Expose the frontend deployment through a service using the following command:

```bash
kubectl expose deploy frontend \
  --name=frontend \
  --type=LoadBalancer \
  --port=80 \
  --target-port=8080
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694669947292/cc975f2e-faa7-413b-9331-24536f605d20.png?auto=compress,format&format=webp)

```bash
kubectl get svc
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694669973671/2030992b-d3f1-4195-afd4-99364618fb39.png?auto=compress,format&format=webp)

Next, set the environment variable:

```bash
{
FRONTEND_ELB_PUBLIC_FQDN=$(kubectl get svc frontend -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
until nslookup $FRONTEND_ELB_PUBLIC_FQDN >/dev/null 2>&1; do sleep 2 && echo waiting for DNS to propagate...; done
curl -I $FRONTEND_ELB_PUBLIC_FQDN
}
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694670033862/86a4443c-4df1-4508-b8c7-d8b1f692aab5.png?auto=compress,format&format=webp)

Generate the Frontend URL for browsing. In the terminal run the following command:

```bash
echo http://$FRONTEND_ELB_PUBLIC_FQDN
```

Test the full end-to-end cloud-native application by opening the frontend's external IP in the browser.

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694677942312/84e4a813-88d7-4b16-9e95-4e094699af2b.png?auto=compress,format&format=webp)

If you get output like this, delete the frontend service and deployment:

```bash
kubectl delete -f frontend-service.yaml
kubectl delete -f frontend-deployment.yaml
```

Now copy your API service's external IP:

```bash
kubectl get svc
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694678162955/c78b9141-b4a9-4e87-a280-bb110018558c.png?auto=compress,format&format=webp)

Now open your frontend-deployment.yaml file:

```bash
sudo vi frontend-deployment.yaml
```

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694678302571/17ca0fb6-e1a6-4cc3-9272-33a97eae959a.png?auto=compress,format&format=webp)

Update the frontend-deployment.yaml file with your api-ip

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694678461444/cb053519-8a4d-4569-ba5b-2c731d17bdcb.png?auto=compress,format&format=webp)
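The change is typically an environment variable on the frontend container pointing at the API. The variable name below is illustrative; keep whichever name frontend-deployment.yaml already uses and only swap in your API IP:

```yaml
# excerpt from the frontend container spec
env:
- name: REACT_APP_APIHOSTPORT    # illustrative name; keep the repo's actual variable
  value: "<api-external-ip>"     # replace with the EXTERNAL-IP from `kubectl get svc`
```

Because the env value is baked into the pods at startup, the deployment has to be deleted and re-applied (as done here) for the new API address to take effect.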

Press Esc, then type `:wq!` and press Enter.

Now again deploy the frontend

```bash
kubectl apply -f frontend-deployment.yaml
```

and now expose the frontend-service

```bash
kubectl expose deploy frontend \
  --name=frontend \
  --type=LoadBalancer \
  --port=80 \
  --target-port=8080
```

Copy the external IP of the frontend service and paste it into the browser. You will get an application like this:

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1694678733771/10a4d746-1fa9-4b43-80ab-7e38887520aa.png?auto=compress,format&format=webp)

Using your local workstation's browser - browse to the URL created in the previous output.

After the voting application has loaded successfully, vote by clicking several of the **+1** buttons. This will generate AJAX traffic, which is sent back to the API via the API's assigned ELB.

Query the MongoDB database directly to observe the updated vote data. In the terminal execute the following command:

```bash
kubectl exec -it mongo-0 -- mongo langdb --eval "db.languages.find().pretty()"
```

Thanks for Reading my Blog.
