A full-stack DevOps implementation showcasing how to build, deploy, and monitor a containerized application using Docker, Kubernetes, Terraform, Ansible, Jenkins, ArgoCD, Prometheus, and Grafana.
- Local Kubernetes Cluster using kubeadm on VMware
- AWS EC2 Jenkins deployment with Terraform
- EC2 provisioning with Ansible and Dynamic Inventory
- CI/CD pipelines with Jenkins, integrated with GitHub
- GitOps deployment using ArgoCD
- VPN connection via Tailscale between cloud and local
- Monitoring via Prometheus and Grafana
Project Architecture Diagram
Project Tree Overview
CloudDevOpsProject
├── ansible
│   ├── ansible.cfg
│   ├── aws_ec2.yml
│   ├── group_vars
│   │   └── Jenkins_Slave.yml
│   ├── mykey.pem
│   ├── playbooks
│   │   ├── jenkins-full-setup.yml
│   │   └── slave.yml
│   ├── roles
│   │   ├── common
│   │   │   └── tasks
│   │   │       └── main.yml
│   │   ├── jenkins_master
│   │   │   └── tasks
│   │   │       └── main.yml
│   │   └── jenkins_slave
│   │       └── tasks
│   │           └── main.yml
│   └── vars
│       └── main.yml
├── argocd
│   └── argocd-app.yaml
├── docker
│   └── Dockerfile
├── Jenkinsfile
├── k8s
│   ├── deployment.yaml
│   ├── generate-cm.py
│   ├── grafana-dashboard.yaml
│   ├── grafana-deployment.yaml
│   ├── grafana-service.yaml
│   ├── namespace.yaml
│   ├── new
│   │   ├── cluster-role-binding.yaml
│   │   ├── cluster-role.yaml
│   │   ├── deployment.yaml
│   │   ├── service-account.yaml
│   │   └── service.yaml
│   ├── node-exporter.json
│   ├── node-exporter.yaml
│   ├── prometheus-config.yaml
│   ├── prometheus-deployment.yaml
│   ├── prometheus-rbac.yaml
│   ├── prometheus-service.yaml
│   ├── README.md
│   └── service.yaml
├── my_app
│   ├── app.py
│   ├── README.md
│   ├── requirements.txt
│   ├── static
│   │   ├── logos
│   │   │   ├── ivolve-logo.png
│   │   │   └── nti-logo.png
│   │   └── style.css
│   └── templates
│       └── index.html
├── README.md
├── terraform
│   ├── backend.tf
│   ├── CloudDevOpsProject-keypair.pem
│   ├── images
│   │   ├── ec2 in aws.png
│   │   ├── jenkins.png
│   │   ├── terraform.png
│   │   └── tst jenkins.png
│   ├── jenkins_password.txt
│   ├── main.tf
│   ├── modules
│   │   ├── network
│   │   │   ├── main.tf
│   │   │   ├── outputs.tf
│   │   │   └── variables.tf
│   │   └── server
│   │       ├── main.tf
│   │       ├── outputs.tf
│   │       └── variables.tf
│   ├── outputs.tf
│   ├── providers.tf
│   ├── README.md
│   └── variables.tf
└── vars
    ├── buildDockerImage.groovy
    ├── deleteLocalDockerImage.groovy
    ├── pushDockerImage.groovy
    ├── scanDockerImage.groovy
    └── updateK8sManifests.groovy
The project started by setting up a Kubernetes cluster locally on VMware using kubeadm.
- 1x Master node
- 2x Worker nodes
- Installing Docker and kubeadm
- Initializing the master node
- Joining worker nodes
- Creating the ivolve namespace
sudo apt update
# Install a prerequisite package that allows apt to utilize HTTPS:
sudo apt-get install apt-transport-https ca-certificates curl gpg
sudo install -m 0755 -d /etc/apt/keyrings
# Add GPG key for the official Docker repo to the Ubuntu system:
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the Docker repo to APT sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Update the database with the Docker packages from the added repo:
sudo apt-get update
# Install Docker software:
sudo apt install -y containerd.io docker-ce docker-ce-cli
# Docker should now be installed, the daemon started, and the process enabled to start on boot. To verify:
sudo systemctl status docker
# Enable Docker and containerd to start automatically when the machine reboots:
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
# Add the current user to the docker group:
sudo usermod -aG docker ${USER}
# Install the CNI plugins used by the container runtime:
wget https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.4.0.tgz
sudo systemctl restart containerd
sudo systemctl enable containerd
# Disable swap (required by the kubelet):
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
# Install SELinux utilities (on all machines):
sudo apt install selinux-utils
# Disable SELinux enforcement (on all machines):
sudo setenforce 0
# Enable IP forwarding for the current session:
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
# To enable IP forwarding permanently, set net.ipv4.ip_forward = 1 in /etc/sysctl.conf, then apply the changes:
sudo sysctl -p
# Validate containerd:
sudo crictl info
# Validate IP forwarding:
cat /proc/sys/net/ipv4/ip_forward
# Give each machine a unique hostname:
sudo hostnamectl set-hostname master   # on the master
sudo hostnamectl set-hostname node1    # on worker 1
sudo hostnamectl set-hostname node2    # on worker 2
Map each hostname to its IP on all machines:
sudo vim /etc/hosts
# Add:
<ip-master> <hostname-master>
<ip-node1> <hostname-node1>
<ip-node2> <hostname-node2>
Check swap config, ensure swap is 0
free -m
Update your existing packages:
sudo apt-get update
# Install packages needed to use the Kubernetes apt repository:
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# Download the public signing key for the Kubernetes package repositories.
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Add Kubernetes Repository:
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
# Update your existing packages:
sudo apt-get update -y
# Install kubelet, kubeadm, and kubectl, then pin their versions:
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
# Enable the kubelet service:
sudo systemctl enable --now kubelet
# Enable kernel modules
sudo modprobe overlay
sudo modprobe br_netfilter
# Update Iptables Settings
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Configure persistent loading of modules
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
# Reload sysctl
sudo sysctl --system
# Start and enable Services
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
sudo systemctl enable kubelet
Step 7: Initialize Kubernetes on the Master Node
# Run the following command as sudo on the master node:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# To create a new join token (run as root):
sudo kubeadm token create --print-join-command
# Deploy a Pod Network through the master node:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Copy the kubeadm join command printed at the end of the init output; you will run it on every worker node you want to join to this cluster. The command will look similar to this:
sudo kubeadm join 172.31.6.233:6443 --token 9lspjd.t93etsdpwm9gyfib --discovery-token-ca-cert-hash sha256:37e35d7ea83599356de1fc5c80c282285cc3c749443a1dafd8e73f40
# To reset a node and start over if a join goes wrong:
sudo kubeadm reset -f
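With the workers joined, a couple of standard kubectl checks (run from the master) confirm the cluster is healthy:
# All three nodes should register and eventually report Ready:
kubectl get nodes -o wide
# The flannel and kube-system pods should all be Running:
kubectl get pods -A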
Cluster Namespace:
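For reference, the ivolve namespace can be created by applying the namespace manifest from the repo (k8s/namespace.yaml) or directly with kubectl; a minimal sketch:
kubectl create namespace ivolve
kubectl get namespaces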
I wrote and executed Terraform code to provision the following on AWS:
- A VPC (10.0.0.0/16)
- Public subnet for Jenkins Master (10.0.1.0/24)
- Private subnet for Jenkins Slave (10.0.10.0/24)
- Internet Gateway + Route Tables
- Security Groups for Jenkins
- EC2 instances (Amazon Linux):
- Jenkins Master in public subnet
- Jenkins Slave in private subnet
- Remote state storage using S3 and DynamoDB
- CloudWatch for EC2 monitoring
Modules Used:
- network/
- server/
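The provisioning itself follows the standard Terraform workflow; a sketch of the commands, assuming AWS credentials are exported and the S3 bucket/DynamoDB table referenced in backend.tf already exist:
cd terraform
# Initialize providers and the remote state backend:
terraform init
# Review the planned VPC, subnets, security groups, and EC2 instances:
terraform plan -out=tfplan
# Provision the infrastructure and print the defined outputs (e.g. the Jenkins master public IP):
terraform apply tfplan
terraform output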
Terraform Output & Screenshots:
Once EC2 instances were up, I used Ansible to configure the Jenkins Master:
- Used Dynamic Inventory
- Installed:
- Docker
- Python
- Git
- Java
- Jenkins
- Configured Jenkins to:
- Start on boot
- Print the Jenkins admin password in the playbook output for easy access
- Everything structured via Ansible Roles
Structure:
ansible
├── ansible.cfg
├── aws_ec2.yml
├── mykey.pem
├── playbooks
│   ├── jenkins-full-setup.yml
│   └── slave.yml
├── roles
│   ├── common
│   │   └── tasks
│   │       └── main.yml
│   ├── jenkins_master
│   │   └── tasks
│   │       └── main.yml
│   └── jenkins_slave
│       └── tasks
│           └── main.yml
└── vars
    └── main.yml
Ansible playbook
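A hedged sketch of the commands behind that run, assuming AWS credentials are exported for the aws_ec2 dynamic inventory plugin configured in aws_ec2.yml:
cd ansible
# Confirm the dynamic inventory discovers the Jenkins master and slave instances:
ansible-inventory -i aws_ec2.yml --graph
# Configure the Jenkins master (Docker, Python, Git, Java, Jenkins) through the roles:
ansible-playbook -i aws_ec2.yml playbooks/jenkins-full-setup.yml --private-key mykey.pem
# Configure the Jenkins slave in the private subnet:
ansible-playbook -i aws_ec2.yml playbooks/slave.yml --private-key mykey.pem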
Since the Jenkins server is in AWS and the Kubernetes cluster is local, they can't communicate directly.
To solve this:
- Installed Tailscale on both:
- Jenkins EC2 instance (cloud)
- Kubernetes master node (local)
- Connected both nodes via VPN tunnel
- Ensured Jenkins could connect to the local Kubernetes API securely
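The Tailscale setup on each side is roughly the standard install flow (a sketch; the exact authentication step depends on the Tailscale account):
# On both the Jenkins EC2 instance and the local Kubernetes master node:
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
# Confirm both machines see each other and note their Tailscale IPs:
tailscale status
tailscale ip -4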
Note: The Kubernetes kubeconfig was also transferred to the Jenkins Master EC2 instance to allow deployments to the cluster.
(Screenshot: from EC2)
On the Jenkins web dashboard:
- Added a Jenkins Agent (Slave) to run builds in private subnet.
- Created a multibranch pipeline project.
- Connected GitHub repo with Webhook triggers.
π§ͺ Jenkins Pipeline Stages:
- Build Docker image
- Scan image with Trivy
- Push image to registry
- Delete local image
- Update build number in K8s manifest
- Commit updated YAML to GitHub
Uses a Shared Library (vars/) to manage the pipeline stages in a modular way.
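Roughly, the shared-library stages wrap commands like the following on the agent (a sketch with placeholder registry/app names; the real logic lives in the vars/*.groovy steps):
# Build, scan, push, and clean up the image:
docker build -f docker/Dockerfile -t <registry>/<app>:${BUILD_NUMBER} .
trivy image <registry>/<app>:${BUILD_NUMBER}
docker push <registry>/<app>:${BUILD_NUMBER}
docker rmi <registry>/<app>:${BUILD_NUMBER}
# updateK8sManifests.groovy presumably bumps the image tag in k8s/deployment.yaml and commits it back to GitHub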
Deployed ArgoCD inside the Kubernetes cluster:
- Installed using official Helm/Manifests
- Accessed ArgoCD UI and connected it to the GitHub repo
- Created an ArgoCD Application manifest to deploy the app from the repo to the ivolve namespace
- ArgoCD continuously watches the repo for changes and syncs them to the cluster
argocd/argocd-app.yaml contains the config.
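A minimal sketch of the install and registration steps, assuming the stock ArgoCD manifests were used:
# Install ArgoCD from the official manifests:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Retrieve the initial admin password for the UI:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
# Register the Application that syncs the repo into the ivolve namespace:
kubectl apply -f argocd/argocd-app.yaml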

To monitor the cluster:
- Installed Prometheus for metrics collection
- Installed Grafana for visualization
- Exposed Grafana on NodePort and accessed dashboards
- Created dashboards for:
- Node usage
- Pod CPU/Memory
- Jenkins Job metrics (via exporters)
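The monitoring manifests can be applied straight from the k8s/ directory; a hedged sketch (the exact namespace and NodePort come from the manifests themselves):
# Prometheus: RBAC, scrape config, deployment, and service:
kubectl apply -f k8s/prometheus-rbac.yaml -f k8s/prometheus-config.yaml -f k8s/prometheus-deployment.yaml -f k8s/prometheus-service.yaml
# Node exporter for node-level metrics:
kubectl apply -f k8s/node-exporter.yaml
# Grafana deployment, dashboard, and NodePort service:
kubectl apply -f k8s/grafana-deployment.yaml -f k8s/grafana-dashboard.yaml -f k8s/grafana-service.yaml
# Find the NodePort to reach the Grafana UI:
kubectl get svc --all-namespaces | grep grafana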
Mahmoud Yassen
DevOps Trainee at iVolve
LinkedIn: www.linkedin.com/in/myassenn01

