Automated Node.js Application Deployment on EKS with Jenkins, Terraform, Docker, and Kubernetes

Atin Mondal
17 min read, Sep 28, 2023

Project Description:

In this endeavor, we will present a thorough and automated process for deploying a Node.js application onto an Amazon Elastic Kubernetes Service (EKS) cluster using Jenkins. Our methodology encompasses the creation of Jenkins pipeline stages, the establishment of the EKS cluster using Terraform, the containerization of the Node.js application through a Dockerfile, and the coordination of its deployment within the Kubernetes cluster via deployment code.
Notable Project Features:

  1. Infrastructure as Code with Terraform: We will leverage Terraform to define and provision the Amazon EKS cluster infrastructure, ensuring consistency, scalability, and ease of management.
  2. Containerization with Docker: The Node.js application will be containerized using Docker, enabling portability, isolation, and efficient resource utilization.
  3. Continuous Integration and Deployment (CI/CD) with Jenkins: A Jenkins pipeline will be set up to automate the building, testing, and deployment of the Node.js application. This CI/CD pipeline will ensure a streamlined and reliable deployment process.
  4. Kubernetes Orchestration: The containerized application will be deployed onto the EKS cluster using Kubernetes, providing robust orchestration capabilities for scaling, load balancing, and self-healing.
  5. Scalability and Resilience: The project will demonstrate how to scale the application up or down based on demand, ensuring high availability and resilience.

Upon completing this project, you will gain a comprehensive grasp of deploying Node.js applications in a Kubernetes environment, employing a seamlessly automated and highly efficient deployment pipeline. This journey will equip you with industry-standard tools and optimal methodologies. I trust this information proves valuable.

We’ll be utilizing a Node.js application for this project. Below, I’ve provided a link to my GitHub repository, which contains the YAML file for Kubernetes (K8s) deployment, Terraform modules for the EKS cluster, and a Dockerfile. The repository also includes a Jenkinsfile for automating the following tasks:

▹ Git Checkout
▹ Build Docker image using Dockerfile
▹ Scan Docker image using Trivy
▹ Push Docker image to ECR
▹ Clean Docker image from local system
▹ Create EKS cluster using Terraform module
▹ Connect to EKS cluster
▹ Deploy Node.js application into EKS cluster

GitHub: Jenkins_pipeline_nodejsApplication & jenkins_shared_lib

Overview:

Initially, we will launch an EC2 instance where we will install our DevOps tools: Jenkins, Docker, kubectl, and Java.

1. Setup Jenkins

Go to the AWS EC2 console and launch an EC2 instance. Here, I’m selecting Ubuntu to run the Jenkins server and choosing the t2.medium instance type for a smoother experience. Don’t forget to add an inbound rule to the security group allowing traffic on port 8080 for the Jenkins server.

Jenkins Server

Process to install Java and Jenkins on our EC2 instance
Java Installation:
step 1: sudo apt-get update
step 2: sudo apt-get install openjdk-11-jdk
Alternatively, we can install just the runtime:
sudo apt-get install openjdk-11-jre
step 3: javac -version (or java -version, if you installed only the JRE)

Since Jenkins depends on Java, we can now install Jenkins on our EC2 instance.
Jenkins Installation:
Perform update first :
sudo apt update

step 1:
An LTS (Long-Term Support) release is chosen every 12 weeks from the stream of regular releases as the stable release for that time period. It can be installed from the debian-stable apt repository.

curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee \
/usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins

You can enable the Jenkins service to start at boot with the command:
step 2: sudo systemctl enable jenkins
You can start the Jenkins service with the command:
step 3: sudo systemctl start jenkins
You can check the status of the Jenkins service using the command:
step 4: sudo systemctl status jenkins
If everything has been set up correctly, you should see an output like this:

Jenkins server status

Now that we have successfully set up our Jenkins server, let’s access the server from a web browser.

http://<EC2 instance public IP>:8080 (for me, 54.211.1.198:8080)

After that, we have to provide the Jenkins admin password to access the Jenkins server.
Get the password to unlock Jenkins:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Copy the password and paste it into the browser (in the administrator password box).
Then click on ‘Install suggested plugins’ to install all the suggested plugins.

Before accessing the Jenkins dashboard, we need to set a Jenkins username and password. For simplicity, I’m using ‘admin’ as the username and password. You can choose different credentials. After completing all the setup steps, you will see a screen like the one below:

2. Docker installation

Now log in to the Jenkins EC2 instance and execute the commands below:
Perform update first :
sudo apt update

step 1: docker installation
sudo apt install docker.io -y

step 2: Add Ubuntu user to Docker group
sudo usermod -aG docker $USER

step 3: Now you need to log out and log back in (run exit on the command line), or apply the new group membership in the current shell with:
newgrp docker

step 4: The Docker service needs to be set up to run at startup. To do so, execute the following commands:
sudo systemctl start docker
sudo systemctl enable docker

step 5: Docker should now be installed, the daemon started, and the service enabled to start on boot. Check that it’s running:
sudo systemctl status docker
The output should be similar to the following, showing that the service is active and running:

If you see the above screen, then your Docker daemon is up and running.

3. Jenkins plug-ins

Before building our pipeline, we need to install some necessary plug-ins, such as Docker and Amazon ECR.

4. Install aws-cli , eksctl and kubectl

Install aws-cli: The AWS CLI provides direct access to the public APIs of AWS services.
Execute the commands below to install the AWS CLI on Linux/Ubuntu:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

If you face any difficulty with the unzip tool, run “sudo apt install unzip” first, then execute the above commands. Verify the installation:
aws --version

Below screenshot will confirm that AWS CLI is successfully installed on your system.

Install eksctl: a command-line tool that automates many individual EKS tasks, which is why we need it for working with EKS clusters.
Perform update first :
sudo apt update

Step 1: Unlike the AWS CLI, eksctl is not available in Ubuntu’s default repositories, so we need to download it from its GitHub releases. Here is the command to do so:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

Step 2: Move the extracted binary to the /usr/local/bin directory using the following command:
sudo mv /tmp/eksctl /usr/local/bin

Step 3: After completing the installation, to confirm the eksctl tool is on our system, let’s use its command to check the version.
eksctl version

Above screenshot will confirm that eksctl is successfully installed on your system.
Install kubectl: the command-line tool for working with Kubernetes clusters. (I’m also installing minikube here; it is only useful for testing manifests on a local cluster and is not strictly required for EKS.)
Step 1: Install minikube:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

Step 2: At this stage we have successfully installed minikube on our instance. Now we are ready to install kubectl:
minikube start --driver=docker --force
sudo snap install kubectl --classic
Below screenshot will confirm that kubectl is successfully installed on your system

5. Create ECR

Before deploying the Docker image to the EKS cluster, we need to create an ECR repository to store our image.
Step 1:
Go to the AWS console, search for ECR, and create a repository with a suitable name. I went with jenkins-pipeline.
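The repository URI that the later pipeline stages push to follows a fixed pattern, so it is worth knowing how it is composed. A small sketch (the account ID below is a placeholder, not my real one):

```shell
# Compose an ECR repository URI from its parts.
# The account ID here is a placeholder for illustration only.
aws_account_id="123456789012"
region="us-east-1"
repo_name="jenkins-pipeline"

ecr_uri="${aws_account_id}.dkr.ecr.${region}.amazonaws.com/${repo_name}"
echo "$ecr_uri"
```

Tagging a local image with this URI (e.g. `docker tag node-app:latest $ecr_uri:latest`) is what lets `docker push` route it to the right repository.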

To grant permission to our Jenkins instance to push Docker images to ECR, we need to attach an IAM role to our EC2 instance. Here’s how to do it:

  1. From the AWS console, navigate to the IAM Dashboard and click Roles > Create role.
  2. Select the trusted entity type ‘AWS service’ and the use case ‘EC2’.
  3. Attach the ‘AmazonEC2ContainerRegistryFullAccess’ policy to the role, and finally click Create role.

Let’s add the newly created IAM role to our EC2 instance.
First, select the EC2 instance, then click on ‘Actions’ > ‘Security’ > ‘Modify IAM Role’

Next, from the drop-down menu, select the IAM role and click ‘Update IAM role’.

At this stage, all installation and setup are done, and we are ready to write our Jenkins pipeline stage script and Terraform script. Let’s get started!

6. Jenkins Groovy file

We will write our Jenkins groovy file for defining and customizing Jenkins pipelines. Groovy files in Jenkins serve as the foundation for defining automation pipelines and tailoring Jenkins to specific project requirements, offering flexibility and extensibility in the continuous integration and continuous delivery (CI/CD) process.
GitHub Repository for groovy files: https://github.com/atinmondal/jenkins_shared_lib.git

For our pipeline we need 5 Groovy files, named:

i. gitCheckout.groovy: used to check out a GitHub repo
ii. dockerBuild.groovy: used to build the Docker image
iii. dockerImageScan.groovy: used to scan the Docker image before pushing it to Amazon ECR
iv. dockerImagePush.groovy: used to push the Docker image to Amazon ECR
v. dockerImageCleanup.groovy: used to clean up the Docker image from the local system
To access these Groovy files, provide the GitHub repo URL to Jenkins so it can pull the Groovy code directly.
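As an illustration of what such a shared-library step can look like, here is a hypothetical sketch of dockerImagePush.groovy. The actual file in my repo may differ, but the shape (a `call` method in the `vars/` directory, shelling out to the AWS CLI and Docker) is the standard pattern:

```groovy
// vars/dockerImagePush.groovy -- hypothetical sketch, not the exact repo contents.
// Shared-library steps expose a call() method that pipeline stages invoke by file name.
def call(String awsAccountId, String region, String ecrRepoName) {
    def registry = "${awsAccountId}.dkr.ecr.${region}.amazonaws.com"
    sh """
        # Authenticate the Docker daemon against ECR, then push the image
        aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${registry}
        docker push ${registry}/${ecrRepoName}:latest
    """
}
```

The stage in the Jenkinsfile then simply calls `dockerImagePush(accountId, region, repoName)`, keeping the pipeline itself short.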
Step 1:
From the Jenkins dashboard, go to Manage Jenkins > System and search for “Global Pipeline Libraries”.
Step 2:
i. Add a library name; I’m using “my-shared-library”
ii. Default version will be “main”
iii. Check the two boxes below:
Allow the default version to be overridden.
Include @Library changes in job recent changes.

Step 3:
i. Select “Modern SCM” from the Retrieval Method drop-down menu.
ii. Select “Git” from the Source Code Management drop-down menu.
iii. In the Project Repository field, enter the URL of your GitHub repository where you have pushed all the Groovy files. For example, the GitHub repo URL might be: https://github.com/atinmondal/jenkins_shared_lib.git.
iv. In the Library Path field enter “/”
Apply and Save

Note: In my case, my GitHub repo is public, so there’s no need to provide any credentials. However, if your repo is private, you’ll need to provide your username and credentials. To set up credentials for GitHub, follow the instructions below.

Step 1: First, click the Add button under Credentials.

Step 2: Add your GitHub username and password, and set the description as ‘git-cred’ then save the changes.

Furthermore, we need to securely store our AWS Access Key ID and Secret Access Key within our Jenkins server to enable AWS to authorize Jenkins for executing such operations. Let’s proceed with this task.
Step 1:
From the Jenkins Dashboard, navigate to ‘Manage Jenkins’ > ‘Credentials’ and click on the ‘(global)’ domain.

Enter your AWS account’s Access Key ID in the Secret box, and for ID and Description provide a suitable name; for example, I’m using ‘Access_key_ID’.

Finally, click the ‘Save’ button to complete the process.

Repeat the same procedure for storing the Secret Access Key.

7. Automating the Creation of EKS Cluster Using Terraform Modules.

In this stage, we will not launch our EKS cluster manually. Instead, we will create it with Terraform, so the provisioning is automated and repeatable.

Code Setup for Terraform

Step 1:
Please follow the directory structure below to create all the necessary directories and files for running the Terraform code.

Step 2:

i. Under the config directory, terraform.tfvars contains the variable values that we want to parameterize in our Terraform configurations.

aws_eks_cluster_config = {
  "demo-eks-cluster" = {
    eks_cluster_name = "eks-cluster1"
    eks_subnet_ids   = ["subnet-0e2c4a20a01a174f1", "subnet-053bbccbfc781613a", "subnet-011a1f48320609f3d", "subnet-02e952061c020475f"]
    tags = {
      "Name" = "demo-eks-cluster"
    }
    eks_iam_role = "eks_iam_role"
  }
}

eks_node_group_config = {
  "node1" = {
    eks_cluster_name = "demo-eks-cluster"
    node_group_name  = "myEksNode"
    nodes_iam_role   = "eks-node-group-general1"
    node_subnet_ids  = ["subnet-0e2c4a20a01a174f1", "subnet-053bbccbfc781613a", "subnet-011a1f48320609f3d", "subnet-02e952061c020475f"]
    tags = {
      "Name" = "node1"
    }
  }
}

ii. In the EKS directory:
main.tf: contains resources for aws_eks_cluster, aws_iam_role, and two aws_iam_role_policy_attachment resources (AmazonEKSClusterPolicy and AmazonEKSVPCResourceController).
output.tf: contains the aws_eks_cluster name, which exposes the name of the EKS cluster for external use.
variable.tf: contains the variable declarations we can use within our Terraform configuration.

# main.tf

resource "aws_eks_cluster" "eks" {
  # Name of the EKS cluster
  name = var.eks_cluster_name

  # The Amazon Resource Name (ARN) of the IAM role that provides permissions for
  # the Kubernetes control plane to make calls to AWS API operations on your behalf
  role_arn = aws_iam_role.example.arn

  # Must be in at least two different availability zones
  vpc_config {
    subnet_ids = var.subnet_ids
  }

  # Ensure that IAM role permissions are created before and deleted after EKS cluster handling.
  # Otherwise, EKS will not be able to properly delete EKS-managed EC2 infrastructure such as security groups.
  depends_on = [
    aws_iam_role_policy_attachment.example-AmazonEKSClusterPolicy,
    aws_iam_role_policy_attachment.example-AmazonEKSVPCResourceController,
  ]
}

data "aws_iam_policy_document" "assume_role" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["eks.amazonaws.com"]
    }

    actions = ["sts:AssumeRole"]
  }
}

resource "aws_iam_role" "example" {
  # The name of the role
  name               = var.eks_role_name
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

resource "aws_iam_role_policy_attachment" "example-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.example.name
}

resource "aws_iam_role_policy_attachment" "example-AmazonEKSVPCResourceController" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
  role       = aws_iam_role.example.name
}

# output.tf

output "eks_cluster_name" {
  value = aws_eks_cluster.eks.name
}

# variable.tf

variable "eks_cluster_name" {}
variable "subnet_ids" {}
variable "tags" {}
variable "eks_role_name" {}

iii. In the EKS_NodeGroup directory:
★ main.tf: contains resources for aws_eks_node_group, aws_iam_role, and three aws_iam_role_policy_attachment resources (AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly).
★ variable.tf: contains the variable declarations we can use within our Terraform configuration.

# main.tf

resource "aws_eks_node_group" "node_group" {
  # Name of the EKS cluster.
  cluster_name = var.eks_cluster_name
  # Name of the EKS node group.
  node_group_name = var.node_group_name
  node_role_arn   = aws_iam_role.example.arn
  subnet_ids      = var.subnet_ids

  # Configuration block with scaling settings
  scaling_config {
    # Desired number of worker nodes.
    desired_size = 1
    # Maximum number of worker nodes.
    max_size = 1
    # Minimum number of worker nodes.
    min_size = 1
  }

  update_config {
    max_unavailable = 1
  }

  # Type of Amazon Machine Image (AMI) associated with the EKS node group.
  # Valid values: AL2_x86_64, AL2_x86_64_GPU, AL2_ARM_64
  ami_type = "AL2_x86_64"

  # Type of capacity associated with the EKS node group.
  # Valid values: ON_DEMAND, SPOT
  capacity_type = "ON_DEMAND"

  # Disk size in GiB for worker nodes
  disk_size = 20

  # Force version update if existing pods are unable to be drained due to a pod disruption budget issue.
  force_update_version = false

  # List of instance types associated with the EKS node group
  instance_types = ["t2.medium"]

  # Kubernetes version
  # version = "1.24"

  # Ensure that IAM role permissions are created before and deleted after EKS node group handling.
  # Otherwise, EKS will not be able to properly delete EC2 instances and elastic network interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.example-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.example-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.example-AmazonEC2ContainerRegistryReadOnly,
  ]
}

resource "aws_iam_role" "example" {
  name = var.node_group_role_name

  assume_role_policy = jsonencode({
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
    Version = "2012-10-17"
  })
}

resource "aws_iam_role_policy_attachment" "example-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.example.name
}

resource "aws_iam_role_policy_attachment" "example-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.example.name
}

resource "aws_iam_role_policy_attachment" "example-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.example.name
}

# variable.tf

variable "node_group_name" {}
variable "eks_cluster_name" {}
variable "subnet_ids" {}
variable "tags" {}
variable "node_group_role_name" {}

At this stage we have created two modules:
1. EKS cluster
2. Node group
Now it’s time to set up the Terraform project directory.
★ main.tf: contains two module blocks for the two resources, aws_eks_cluster and aws_eks_node_group. This main.tf file serves as the main entry point for our Terraform configuration.

module "aws_eks_cluster" {
  source = "../modules/EKS"

  for_each         = var.aws_eks_cluster_config
  eks_cluster_name = each.value.eks_cluster_name
  subnet_ids       = each.value.eks_subnet_ids
  tags             = each.value.tags
  eks_role_name    = each.value.eks_iam_role
}

module "aws_eks_node_group" {
  source = "../modules/EKS_NodeGroup"

  for_each             = var.eks_node_group_config
  node_group_name      = each.value.node_group_name
  eks_cluster_name     = module.aws_eks_cluster[each.value.eks_cluster_name].eks_cluster_name
  subnet_ids           = each.value.node_subnet_ids
  tags                 = each.value.tags
  node_group_role_name = each.value.nodes_iam_role
}

★ provider.tf: Contains the configuration settings for the provider(s) used in our Terraform project. A provider in Terraform is responsible for managing resources in a specific cloud or infrastructure platform, such as AWS, Azure, Google Cloud, or others. In this project, we are using AWS as the provider.

provider "aws" {
  region     = var.region
  access_key = var.access_key
  secret_key = var.secret_key
}

★ version.tf: pins the required version of the AWS provider for Terraform.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# variable.tf file for the project directory

variable "access_key" {
  default = ""
}
variable "secret_key" {
  default = ""
}

variable "region" {
  default = "us-east-1"
}

variable "aws_eks_cluster_config" {}

variable "eks_node_group_config" {}
After creating those files, we are ready to automate the EKS cluster creation using Terraform.

Amazon EKS Cluster Architecture

8. Dockerfile for building docker image

Let’s create our Dockerfile for building the image of a Node.js application
Dockerfile contents:

FROM node:alpine
# Create the app directory and make it the working directory
WORKDIR /app
# Copy the application source into the image
COPY node-js /app
# Install dependencies (npm i is just an alias of npm install, so once is enough)
RUN npm install
# nodemon restarts the app automatically on file changes
RUN npm install -g nodemon
CMD ["npm", "start"]

9. Create Jenkins pipeline project

Step 1:
To create a pipeline job, click the ‘+ New Item’ button on the Jenkins dashboard. Then enter a project name (here I’m using Devops-pipeline-nodejs) and choose ‘Pipeline’ as the job type.

Step 2:
In the Pipeline section, choose ‘Pipeline script from SCM’ as the Definition.

Choose SCM as Git. Then enter the GitHub repository URL. For my scenario, I will enter my GitHub repo URL: https://github.com/atinmondal/Jenkins_pipeline_nodejsApplication.git

Under ‘Branches to build,’ enter your branch name, and set the ‘Script Path’ to ‘Jenkinsfile’.
At the end, click Apply and Save.

As we have specified the ‘Script Path’ to the Jenkinsfile, Jenkins will automatically fetch all the stages written in the Jenkinsfile from the GitHub repo above.

10. Jenkins Pipeline Stages

In the Jenkinsfile, there are several stages that will run in the pipeline:
▹ Git Checkout
▹ Build Docker image using Dockerfile
▹ Scan Docker image using Trivy
▹ Push Docker image to ECR
▹ Clean Docker image from local system
▹ Create EKS cluster using Terraform module
▹ Connect to EKS cluster
▹ Deploy Node.js image to EKS cluster

Jenkinsfile Contents:

@Library('my-shared-library') _
pipeline {
    agent any

    parameters {
        choice(name: 'action', choices: 'create\ndelete', description: 'Choose Create/Destroy')
        string(name: 'aws_account_id', description: 'AWS account Id', defaultValue: '243731798106')
        string(name: 'region', description: 'Region of the ECR', defaultValue: 'us-east-1')
        string(name: 'ecrRepositoryName', description: 'Name of the ECR', defaultValue: 'jenkins-pipeline')
        string(name: 'eksClusterName', description: 'Name of the EKS cluster', defaultValue: 'eks-cluster1')
    }

    environment {
        ACCESS_KEY = credentials('Access_key_ID')
        SECRET_KEY = credentials('Secret_access_key')
    }

    stages {
        stage('Git Checkout') {
            when { expression { params.action == 'create' } }
            steps {
                gitCheckout(
                    branch: "main",
                    url: "https://github.com/atinmondal/Jenkins_pipeline_nodejsApplication.git"
                )
            }
        }

        stage('Docker Image Build') {
            when { expression { params.action == 'create' } }
            steps {
                script {
                    dockerBuild("${params.aws_account_id}", "${params.region}", "${params.ecrRepositoryName}")
                }
            }
        }

        stage('Docker Image Scan Using Trivy') {
            when { expression { params.action == 'create' } }
            steps {
                script {
                    dockerImageScan("${params.aws_account_id}", "${params.region}", "${params.ecrRepositoryName}")
                }
            }
        }

        stage('Docker Image Push To ECR') {
            when { expression { params.action == 'create' } }
            steps {
                script {
                    dockerImagePush("${params.aws_account_id}", "${params.region}", "${params.ecrRepositoryName}")
                }
            }
        }

        stage('Docker Image CleanUp From Local System') {
            when { expression { params.action == 'create' } }
            steps {
                script {
                    dockerImageCleanup("${params.aws_account_id}", "${params.region}", "${params.ecrRepositoryName}")
                }
            }
        }

        stage('Create EKS Cluster: Terraform') {
            when { expression { params.action == 'create' } }
            steps {
                script {
                    // Use a forward-slash path: these shell steps run on a Linux agent
                    dir('EKS_module/project') {
                        sh """
                            terraform init
                            terraform plan -var 'access_key=${ACCESS_KEY}' -var 'secret_key=${SECRET_KEY}' -var 'region=${params.region}' --var-file=../config/terraform.tfvars
                            terraform apply -var 'access_key=${ACCESS_KEY}' -var 'secret_key=${SECRET_KEY}' -var 'region=${params.region}' --var-file=../config/terraform.tfvars -auto-approve
                        """
                    }
                }
            }
        }

        stage('Connect To EKS') {
            when { expression { params.action == 'create' } }
            steps {
                script {
                    sh """
                        aws eks --region ${params.region} update-kubeconfig --name ${params.eksClusterName}
                    """
                }
            }
        }

        stage('Deployment of Node.js Image on EKS Cluster') {
            when { expression { params.action == 'create' } }
            steps {
                script {
                    def apply = false
                    try {
                        input message: 'Please confirm to deploy on EKS', ok: 'Ready to apply the config ?'
                        apply = true
                    } catch (err) {
                        apply = false
                        currentBuild.result = 'UNSTABLE'
                    }
                    if (apply) {
                        sh """
                            kubectl apply -f .
                        """
                    }
                }
            }
        }
    }
}

Now that all the setup is complete, click the ‘Build with Parameters’ button to initiate the pipeline build. Jenkins will fetch the latest code from our main GitHub repo and build the architecture stage by stage.
★ Below are images of the Jenkins pipeline stages.

Trigger Jenkins pipeline

★At this stage, Jenkins has triggered our Terraform module stage to create the EKS cluster.

To deploy the Node.js image to EKS, I have added a manual verification stage, which is a best practice for production deployments.
In this stage, manual intervention is required: click the ‘Ready to apply the config ?’ button. If this step is not confirmed and ‘Abort’ is clicked instead, Jenkins will skip the deployment process.
★ Below is the image of the manual intervention stage.

★Here’s the ultimate overview of the Jenkins pipeline stages, where we’ve accomplished the successful deployment of our Node.js application into the EKS cluster.

Jenkins Pipeline Stage View
Console Output

From the deployment.yaml file, Jenkins has created a deployment, a service, and three pods in the Kubernetes cluster. The service type is set to LoadBalancer.
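For reference, a deployment.yaml producing this result could look roughly like the sketch below. The image URI, names, and container port are placeholders, so adjust them to your own ECR repository; the actual file is in my GitHub repo:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app
spec:
  replicas: 3                      # three pods, as seen in the cluster
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
        - name: nodejs-app
          # Placeholder image URI; point this at your own ECR repository
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/jenkins-pipeline:latest
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-app-svc
spec:
  type: LoadBalancer               # this is what provisions the AWS load balancer
  selector:
    app: nodejs-app
  ports:
    - port: 80
      targetPort: 3000
```

The Service’s selector must match the pod labels in the Deployment template, and targetPort must match the port your Node.js app listens on.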
★ Below is the image of AWS Load Balancer

AWS Load Balancer

Once all the instances are InService, copy the load balancer’s DNS name and open it in your browser to view the application.

Bravo! 🥳 We’ve ultimately achieved the successful deployment of our Node.js application into EKS, and it’s functioning as expected.

Node.Js Application

It’s quite an extensive post, but I trust it has provided you with a wealth of insights and information.

Recapitulation

Ensure you clone my repository at https://github.com/atinmondal/Jenkins_pipeline_nodejsApplication.git
and modify deployment.yaml to fetch the Docker image from your own AWS ECR repository. Also, don’t forget to add Access_key_ID and Secret_access_key under Jenkins credentials.
Feel free to customize the Node.js application as desired; I initially developed it for experimentation.

_____

Have any reflections to share? Leave a comment!
Found the article valuable? Give it a round of applause below and share your level of enjoyment! :-)
Don’t hesitate to contact me if you have any questions or concerns.

Appreciate your engagement.
