G9T - Kubernetes Cluster Automation

Automated Kubernetes cluster deployment on DigitalOcean using Terraform.

Overview

Present Contents

This repository automates the provisioning and configuration of a managed Kubernetes cluster on DigitalOcean. It uses Infrastructure as Code (IaC) principles to create a reproducible, scalable Kubernetes environment with integrated database and dashboard components.

Long-Term Objective

The goal is to build an application with an event-driven architecture that can be deployed to this cluster.

Architecture

Infrastructure Components

  • DigitalOcean Kubernetes Service (DOKS)

    • Version: 1.33.1-do.5
    • Worker Pool: s-1vcpu-2gb nodes
    • Auto-scaling: 1-3 nodes
    • Managed control plane
  • PostgreSQL Database Cluster

    • Version: 18
    • Size: db-s-1vcpu-1gb (single node)
    • Managed by DigitalOcean
  • Kubernetes Dashboard

    • Deployed via Helm
    • Admin user configured with RBAC

Technology Stack

  • DigitalOcean: Cloud platform
  • Terraform: Infrastructure provisioning (DigitalOcean provider v2.68.0)
  • Kubernetes: Container orchestration (v1.33.1-do.5)
  • Helm: Package manager for Kubernetes
  • PostgreSQL: Database (v18)
  • Jenkins: CI/CD automation
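
The provider versions above are normally pinned in the root Terraform configuration. A minimal, hedged sketch of what that pinning might look like in do.tf (the required_version constraint is an assumption, not taken from the repo):

    terraform {
      required_version = ">= 1.5.0"  # assumption; match the Terraform version actually used

      required_providers {
        digitalocean = {
          source  = "digitalocean/digitalocean"
          version = "2.68.0"
        }
        helm = {
          source = "hashicorp/helm"
        }
        kubernetes = {
          source = "hashicorp/kubernetes"
        }
      }
    }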

Project Structure

.
├── infra/
│   ├── terraform/                    # Infrastructure provisioning
│   │   ├── do.tf                     # Main Terraform configuration
│   │   ├── do.tfvars                 # Variable values (not in git)
│   │   ├── variables.tf              # Variable definitions
│   │   ├── dashboard-adminuser.yaml  # K8s ServiceAccount config
│   │   ├── dashboard-adminrole.yaml  # K8s ClusterRoleBinding config
│   │   ├── cluster/                  # Kubernetes cluster module
│   │   │   ├── main.tf               # DOKS cluster resource
│   │   │   ├── variables.tf          # Module variables
│   │   │   └── outputs.tf            # Module outputs
│   │   ├── db/                       # Database module
│   │   │   ├── main.tf               # PostgreSQL resource
│   │   │   ├── variables.tf          # Module variables
│   │   │   └── outputs.tf            # Module outputs
│   │   └── dashboard/                # Dashboard module
│   │       ├── main.tf               # Helm deployment
│   │       └── variables.tf          # Module variables
│   └── Jenkinsfile                   # CI/CD pipeline definition
└── README.md

Prerequisites

  • DigitalOcean account with API token
  • PostgreSQL database for Terraform state backend
    • Database: glaedr
    • Schema: terraform_backend
  • Jenkins server with infra label
  • Jenkins credentials configured:
    • do-api-token: DigitalOcean API token
    • postgres_user: PostgreSQL username
    • postgres_password: PostgreSQL password
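
The pg backend needs the glaedr database to exist before the first terraform init; it can create the terraform_backend schema itself if the configured role has permission. A hedged preparation step (host and role are placeholders):

    # one-time setup of the state database; the schema is created by the backend on first init
    psql "host=<postgres-host> user=<postgres-user> dbname=postgres" -c 'CREATE DATABASE glaedr;'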

Configuration

Terraform Variables

Required variables (set via do.tfvars file):

  • do_token: DigitalOcean personal access token
  • do_region: DigitalOcean region for resources (default: sfo3)
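
A minimal variables.tf matching these inputs could look like the sketch below; the actual file may declare additional variables:

    variable "do_token" {
      description = "DigitalOcean personal access token"
      type        = string
      sensitive   = true
    }

    variable "do_region" {
      description = "DigitalOcean region for all resources"
      type        = string
      default     = "sfo3"
    }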

Cluster Configuration

  • Kubernetes Version: 1.33.1-do.5
  • Node Pool:
    • Size: s-1vcpu-2gb
    • Auto-scaling: 1-3 nodes
    • Name: worker-pool
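
For reference, the cluster module's main resource is presumably a digitalocean_kubernetes_cluster along these lines (resource and cluster names are illustrative, not taken from the repo):

    resource "digitalocean_kubernetes_cluster" "this" {
      name    = "g9t-cluster"  # illustrative name
      region  = var.do_region
      version = "1.33.1-do.5"

      node_pool {
        name       = "worker-pool"
        size       = "s-1vcpu-2gb"
        auto_scale = true
        min_nodes  = 1
        max_nodes  = 3
      }
    }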

Database Configuration

  • PostgreSQL Version: 18
  • Cluster Size: db-s-1vcpu-1gb (single node)
  • Region: Configured via do_region variable
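
The db module's core resource is likely a digitalocean_database_cluster resembling this sketch (names are illustrative):

    resource "digitalocean_database_cluster" "postgres" {
      name       = "g9t-postgres"  # illustrative name
      engine     = "pg"
      version    = "18"
      size       = "db-s-1vcpu-1gb"
      region     = var.do_region
      node_count = 1
    }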

Terraform Backend

State is stored in PostgreSQL:

  • Database: glaedr
  • Schema: terraform_backend
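
Terraform's built-in pg backend handles this. A hedged sketch of the backend block; in practice the connection string is best supplied at init time (terraform init -backend-config=... or PG* environment variables) rather than committed:

    terraform {
      backend "pg" {
        # placeholders; prefer -backend-config or environment variables over hard-coding
        conn_str    = "postgres://<user>:<password>@<host>/glaedr"
        schema_name = "terraform_backend"
      }
    }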

Deployment

Manual Deployment

  1. Create do.tfvars file with your configuration:

    do_token = "your-digitalocean-api-token"
    do_region = "sfo3"
  2. Deploy infrastructure with Terraform:

    cd infra/terraform
    terraform init
    terraform plan -var-file=do.tfvars
    terraform apply -var-file=do.tfvars

Automated Deployment (Jenkins)

The Jenkins pipeline automates the entire deployment:

  1. Checks out code from SCM
  2. Provisions all infrastructure using Terraform
  3. Cleans up workspace

Pipeline stages:

  • Checkout: Retrieves code from version control
  • Deploy Infrastructure: Runs Terraform plan and apply to provision:
    • DigitalOcean Kubernetes cluster
    • PostgreSQL database cluster
    • Kubernetes Dashboard via Helm
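
A declarative Jenkinsfile implementing those stages could look roughly like the sketch below; the actual infra/Jenkinsfile may differ, and the backend host is a placeholder:

    pipeline {
      agent { label 'infra' }

      environment {
        DO_TOKEN          = credentials('do-api-token')
        POSTGRES_USER     = credentials('postgres_user')
        POSTGRES_PASSWORD = credentials('postgres_password')
      }

      stages {
        stage('Checkout') {
          steps { checkout scm }
        }
        stage('Deploy Infrastructure') {
          steps {
            dir('infra/terraform') {
              // <postgres-host> is a placeholder for the state backend host
              sh 'terraform init -backend-config="conn_str=postgres://$POSTGRES_USER:$POSTGRES_PASSWORD@<postgres-host>/glaedr"'
              sh 'terraform plan -var "do_token=$DO_TOKEN" -out=tfplan'
              sh 'terraform apply -auto-approve tfplan'
            }
          }
        }
      }

      post {
        always { cleanWs() }  // matches the "Cleans up workspace" step
      }
    }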

Infrastructure Setup

The Terraform configuration provisions the following components:

  1. DigitalOcean Project:

    • Creates a project to organize all resources
    • Groups cluster, database, and associated resources
  2. Kubernetes Cluster (via cluster module):

    • Provisions managed DOKS cluster (v1.33.1-do.5)
    • Creates auto-scaling worker pool (1-3 nodes, s-1vcpu-2gb)
    • Outputs cluster endpoint, CA certificate, and token
    • Control plane fully managed by DigitalOcean
  3. PostgreSQL Database (via db module):

    • Provisions managed PostgreSQL 18 cluster
    • Single-node configuration (db-s-1vcpu-1gb)
    • Outputs database connection details
  4. Kubernetes Dashboard (via dashboard module):

    • Creates kubernetes-dashboard namespace
    • Deploys dashboard via Helm chart
    • Configures admin ServiceAccount with ClusterRoleBinding
    • Enables full cluster access for admin user
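
As an illustration of item 4, a dashboard module built on the helm and kubernetes providers might look like the sketch below. The actual module may instead apply dashboard-adminuser.yaml and dashboard-adminrole.yaml; every name here is an assumption:

    resource "kubernetes_namespace" "dashboard" {
      metadata {
        name = "kubernetes-dashboard"
      }
    }

    resource "helm_release" "dashboard" {
      name       = "kubernetes-dashboard"
      repository = "https://kubernetes.github.io/dashboard/"
      chart      = "kubernetes-dashboard"
      namespace  = kubernetes_namespace.dashboard.metadata[0].name
    }

    resource "kubernetes_service_account" "admin_user" {
      metadata {
        name      = "admin-user"
        namespace = kubernetes_namespace.dashboard.metadata[0].name
      }
    }

    resource "kubernetes_cluster_role_binding" "admin_user" {
      metadata {
        name = "admin-user"
      }
      role_ref {
        api_group = "rbac.authorization.k8s.io"
        kind      = "ClusterRole"
        name      = "cluster-admin"
      }
      subject {
        kind      = "ServiceAccount"
        name      = "admin-user"
        namespace = kubernetes_namespace.dashboard.metadata[0].name
      }
    }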

Access

Kubernetes Cluster Access

  1. Download cluster configuration:

    doctl kubernetes cluster kubeconfig save <cluster-name>

    Or get the cluster credentials from Terraform outputs and configure manually.

  2. Access the cluster:

    kubectl get nodes
    kubectl get pods --all-namespaces
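
If you take the manual route from step 1, the cluster outputs can be wired into kubectl directly. The output names below (cluster_endpoint, cluster_ca_certificate, cluster_token) are assumptions about the cluster module's outputs, and the CA value is assumed to be base64-encoded as in the DOKS kube_config:

    cd infra/terraform
    terraform output -raw cluster_ca_certificate | base64 -d > /tmp/g9t-ca.crt
    kubectl config set-cluster g9t \
      --server="$(terraform output -raw cluster_endpoint)" \
      --certificate-authority=/tmp/g9t-ca.crt --embed-certs=true
    kubectl config set-credentials g9t-admin --token="$(terraform output -raw cluster_token)"
    kubectl config set-context g9t --cluster=g9t --user=g9t-admin
    kubectl config use-context g9t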

Kubernetes Dashboard Access

  1. Get the admin token:

    kubectl -n kubernetes-dashboard create token admin-user
  2. Start proxy:

    kubectl proxy
  3. Access dashboard: Open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

    Use the token from step 1 to authenticate.

Database Access

Database connection details are available as Terraform outputs:

cd infra/terraform
terraform output
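
Assuming the db module exposes a connection URI output (the name db_connection_uri is an assumption), you can connect directly with psql; DigitalOcean connection URIs typically already include sslmode=require:

    cd infra/terraform
    psql "$(terraform output -raw db_connection_uri)"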

Security Considerations

  • Sensitive files (do.tfvars, .env) are excluded from version control
  • Terraform state contains sensitive data - secured in PostgreSQL backend
  • DigitalOcean API uses token-based authentication
  • Jenkins credentials used for automation (stored securely in Jenkins)
  • Dashboard admin user has full cluster access - use with caution
  • Database credentials available via Terraform outputs - protect access

Notes

  • All resources are created in the specified DigitalOcean region (default: sfo3)
  • Kubernetes cluster uses DigitalOcean's managed control plane
  • Worker nodes auto-scale between 1 and 3 based on demand
  • Database is a single-node PostgreSQL cluster (suitable for development)
  • Dashboard is deployed in kubernetes-dashboard namespace
  • Cluster version is 1.33.1-do.5 (managed by DigitalOcean)

Troubleshooting

  • Verify DigitalOcean API token is valid and has sufficient permissions
  • Ensure PostgreSQL backend is accessible for Terraform state storage
  • Check Jenkins credentials are properly configured (do-api-token, postgres_user, postgres_password)
  • Verify DigitalOcean account has sufficient quota for resources
  • Check that the specified region supports DOKS clusters
  • If Helm deployment fails, verify cluster credentials are correct
  • For dashboard access issues, ensure the admin-user ServiceAccount exists
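
A few standard commands that help narrow down the issues above:

    doctl account get                                  # verifies the DigitalOcean API token
    doctl kubernetes options versions                  # lists DOKS versions currently offered
    kubectl -n kubernetes-dashboard get serviceaccount admin-user
    terraform -chdir=infra/terraform validate          # catches configuration errors before plan/apply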

Documentation

See docs/README.md for full documentation including:

  • Project requirements and context
  • System architecture and component dependencies
  • Component-specific requirements

What does G9T stand for?

Stay tuned...
