Automated Kubernetes cluster deployment on DigitalOcean using Terraform.
## Overview

This repository automates the provisioning and configuration of a managed Kubernetes cluster on DigitalOcean. It applies Infrastructure as Code (IaC) principles to create a reproducible, scalable Kubernetes environment with integrated database and dashboard components.

The goal is to build an application with an event-driven architecture that can be deployed to this cluster.
## Infrastructure Components

- **DigitalOcean Kubernetes Service (DOKS)**
  - Version: 1.33.1-do.5
  - Worker pool: s-1vcpu-2gb nodes
  - Auto-scaling: 1-3 nodes
  - Managed control plane
- **PostgreSQL Database Cluster**
  - Version: 18
  - Size: db-s-1vcpu-1gb (single node)
  - Managed by DigitalOcean
- **Kubernetes Dashboard**
  - Deployed via Helm
  - Admin user configured with RBAC
## Technology Stack

- **DigitalOcean**: Cloud platform
- **Terraform**: Infrastructure provisioning (DigitalOcean provider v2.68.0)
- **Kubernetes**: Container orchestration (v1.33.1-do.5)
- **Helm**: Package manager for Kubernetes
- **PostgreSQL**: Database (v18)
- **Jenkins**: CI/CD automation
## Project Structure

```
.
├── infra/
│   ├── terraform/                     # Infrastructure provisioning
│   │   ├── do.tf                      # Main Terraform configuration
│   │   ├── do.tfvars                  # Variable values (not in git)
│   │   ├── variables.tf               # Variable definitions
│   │   ├── dashboard-adminuser.yaml   # K8s ServiceAccount config
│   │   ├── dashboard-adminrole.yaml   # K8s ClusterRoleBinding config
│   │   ├── cluster/                   # Kubernetes cluster module
│   │   │   ├── main.tf                # DOKS cluster resource
│   │   │   ├── variables.tf           # Module variables
│   │   │   └── outputs.tf             # Module outputs
│   │   ├── db/                        # Database module
│   │   │   ├── main.tf                # PostgreSQL resource
│   │   │   ├── variables.tf           # Module variables
│   │   │   └── outputs.tf             # Module outputs
│   │   └── dashboard/                 # Dashboard module
│   │       ├── main.tf                # Helm deployment
│   │       └── variables.tf           # Module variables
│   └── Jenkinsfile                    # CI/CD pipeline definition
└── README.md
```
## Prerequisites

- DigitalOcean account with an API token
- PostgreSQL database for the Terraform state backend
  - Database: `glaedr`
  - Schema: `terraform_backend`
- Jenkins server with the `infra` label
- Jenkins credentials configured:
  - `do-api-token`: DigitalOcean API token
  - `postgres_user`: PostgreSQL username
  - `postgres_password`: PostgreSQL password
## Configuration

Required variables (set via the `do.tfvars` file):

- `do_token`: DigitalOcean personal access token
- `do_region`: DigitalOcean region for resources (default: `sfo3`)
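In `variables.tf`, these would be declared roughly as follows (a sketch; the types and descriptions are assumptions, not copied from the repo's file):

```hcl
# Sketch of the variable declarations described above.
variable "do_token" {
  description = "DigitalOcean personal access token"
  type        = string
  sensitive   = true # keep the token out of plan output
}

variable "do_region" {
  description = "DigitalOcean region for resources"
  type        = string
  default     = "sfo3"
}
```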
### Kubernetes Cluster

- Kubernetes version: 1.33.1-do.5
- Node pool:
  - Name: `worker-pool`
  - Size: s-1vcpu-2gb
  - Auto-scaling: 1-3 nodes

### Database

- PostgreSQL version: 18
- Cluster size: db-s-1vcpu-1gb (single node)
- Region: configured via the `do_region` variable
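The cluster and database settings above correspond roughly to the following DigitalOcean provider resources (a sketch, not the repo's actual module code; resource labels and names are assumptions):

```hcl
resource "digitalocean_kubernetes_cluster" "main" {
  name    = "doks-cluster" # assumed name
  region  = var.do_region
  version = "1.33.1-do.5"

  node_pool {
    name       = "worker-pool"
    size       = "s-1vcpu-2gb"
    auto_scale = true
    min_nodes  = 1
    max_nodes  = 3
  }
}

resource "digitalocean_database_cluster" "postgres" {
  name       = "pg-cluster" # assumed name
  engine     = "pg"
  version    = "18"
  size       = "db-s-1vcpu-1gb"
  region     = var.do_region
  node_count = 1
}
```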
## State Management

Terraform state is stored in PostgreSQL:

- Database: `glaedr`
- Schema: `terraform_backend`
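Terraform's `pg` backend supports exactly this database/schema split; a sketch of the backend block (the connection string is a placeholder, not a value from this repo):

```hcl
terraform {
  backend "pg" {
    # Placeholder connection string; real credentials come from the
    # postgres_user / postgres_password Jenkins credentials.
    conn_str    = "postgres://user:password@db-host:5432/glaedr"
    schema_name = "terraform_backend"
  }
}
```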
## Deployment

1. Create a `do.tfvars` file with your configuration:

   ```hcl
   do_token  = "your-digitalocean-api-token"
   do_region = "sfo3"
   ```

2. Deploy the infrastructure with Terraform:

   ```bash
   cd infra/terraform
   terraform init
   terraform plan
   terraform apply
   ```
## CI/CD Pipeline

The Jenkins pipeline automates the entire deployment:

1. Checks out code from SCM
2. Provisions all infrastructure using Terraform
3. Cleans up the workspace

Pipeline stages:

1. **Checkout**: retrieves code from version control
2. **Deploy Infrastructure**: runs `terraform plan` and `terraform apply` to provision:
   - The DigitalOcean Kubernetes cluster
   - The PostgreSQL database cluster
   - The Kubernetes Dashboard via Helm
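The stages described above could be expressed as a declarative Jenkinsfile roughly like this (a sketch, not the repo's actual pipeline; it assumes the `infra` agent label and credential IDs listed in the prerequisites):

```groovy
pipeline {
  agent { label 'infra' }  // agent label from the prerequisites

  environment {
    // Credential ID from the prerequisites section
    DO_TOKEN = credentials('do-api-token')
  }

  stages {
    stage('Checkout') {
      steps { checkout scm }
    }
    stage('Deploy Infrastructure') {
      steps {
        dir('infra/terraform') {
          sh 'terraform init'
          sh 'terraform plan -var="do_token=$DO_TOKEN"'
          sh 'terraform apply -auto-approve -var="do_token=$DO_TOKEN"'
        }
      }
    }
  }

  post {
    always { cleanWs() }  // workspace cleanup
  }
}
```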
## Deployed Resources

The Terraform configuration provisions the following components:

1. **DigitalOcean Project**
   - Creates a project to organize all resources
   - Groups the cluster, database, and associated resources
2. **Kubernetes Cluster** (via the `cluster` module)
   - Provisions a managed DOKS cluster (v1.33.1-do.5)
   - Creates an auto-scaling worker pool (1-3 nodes, s-1vcpu-2gb)
   - Outputs the cluster endpoint, CA certificate, and token
   - Control plane fully managed by DigitalOcean
3. **PostgreSQL Database** (via the `db` module)
   - Provisions a managed PostgreSQL 18 cluster
   - Single-node configuration (db-s-1vcpu-1gb)
   - Outputs database connection details
4. **Kubernetes Dashboard** (via the `dashboard` module)
   - Creates the `kubernetes-dashboard` namespace
   - Deploys the dashboard via its Helm chart
   - Configures an admin ServiceAccount with a ClusterRoleBinding
   - Grants the admin user full cluster access
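The dashboard deployment maps onto the `helm_release` resource from the Terraform Helm provider; a sketch (the chart repository is the dashboard's public chart repo; everything else is an assumption, not the repo's module code):

```hcl
resource "helm_release" "dashboard" {
  name             = "kubernetes-dashboard"
  repository       = "https://kubernetes.github.io/dashboard/"
  chart            = "kubernetes-dashboard"
  namespace        = "kubernetes-dashboard"
  create_namespace = true # one way to create the namespace; the module may do it separately
}
```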
## Accessing the Cluster

1. Download the cluster configuration:

   ```bash
   doctl kubernetes cluster kubeconfig save <cluster-name>
   ```

   Or get the cluster credentials from the Terraform outputs and configure `kubectl` manually.

2. Verify access to the cluster:

   ```bash
   kubectl get nodes
   kubectl get pods --all-namespaces
   ```
## Accessing the Dashboard

1. Get the admin token:

   ```bash
   kubectl -n kubernetes-dashboard create token admin-user
   ```

2. Start the proxy:

   ```bash
   kubectl proxy
   ```

3. Open the dashboard at:

   http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

   Use the token from step 1 to authenticate.
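The `admin-user` ServiceAccount used here is typically defined by manifests like the repo's `dashboard-adminuser.yaml` and `dashboard-adminrole.yaml`; a sketch following the standard dashboard RBAC setup (the repo's actual files may differ):

```yaml
# dashboard-adminuser.yaml (sketch)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
# dashboard-adminrole.yaml (sketch): binds admin-user to cluster-admin,
# which is what grants the full cluster access noted below
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
```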
## Database Connection

Database connection details are available as Terraform outputs:

```bash
cd infra/terraform
terraform output
```

## Security Notes

- Sensitive files (`do.tfvars`, `.env`) are excluded from version control
- Terraform state contains sensitive data and is secured in the PostgreSQL backend
- The DigitalOcean API uses token-based authentication
- Jenkins credentials are used for automation (stored securely in Jenkins)
- The dashboard admin user has full cluster access; use it with caution
- Database credentials are available via Terraform outputs; protect access to them
## Notes

- All resources are created in the specified DigitalOcean region (default: `sfo3`)
- The Kubernetes cluster uses DigitalOcean's managed control plane
- Worker nodes auto-scale between 1 and 3 nodes based on demand
- The database is a single-node PostgreSQL cluster (suitable for development)
- The dashboard is deployed in the `kubernetes-dashboard` namespace
- The cluster version is 1.33.1-do.5 (managed by DigitalOcean)
## Troubleshooting

- Verify the DigitalOcean API token is valid and has sufficient permissions
- Ensure the PostgreSQL backend is accessible for Terraform state storage
- Check that the Jenkins credentials are properly configured (`do-api-token`, `postgres_user`, `postgres_password`)
- Verify the DigitalOcean account has sufficient quota for the requested resources
- Check that the specified region supports DOKS clusters
- If the Helm deployment fails, verify the cluster credentials are correct
- For dashboard access issues, ensure the `admin-user` ServiceAccount exists
## Documentation

See `docs/README.md` for full documentation, including:

- Project requirements and context
- System architecture and component dependencies
- Component-specific requirements
Stay tuned...