Just as ships are built in dry docks, platforms are crafted in DoKa Seca
⚠️ **Note**: DoKa Seca is still in relatively early development. At this time, do not use DoKa Seca for critical production systems.
Welcome to DoKa Seca - Distributed Orchestration Kubernetes Automation with Scalable Edge Computing Applications - a comprehensive framework for bootstrapping cloud-native platforms using Kubernetes in Docker (Kind)!
The name "DoKa Seca" is a playful Portuguese phrase where "DoKa" incorporates the "K" from Kubernetes (representing the containerized orchestration at the heart of this project), and "Seca" means "dry" - drawing inspiration from the concept of a dry dock. Just as ships are built, repaired, and maintained in dry docks - controlled, isolated environments where all the necessary infrastructure and tooling are readily available - DoKa Seca provides a "dry dock" for Kubernetes platforms. It creates an isolated, controlled environment where entire cloud-native platforms can be rapidly assembled, configured, and tested before being deployed to production waters.
DoKa Seca provides an opinionated, production-ready framework that automates the entire platform bootstrap process using Kind clusters. Rather than just being a collection of configurations, it's a complete platform engineering solution that provisions infrastructure, installs essential tooling, configures GitOps workflows, and sets up observability - all with a single command, in your local "dry dock" environment.
This project serves as both a personal learning journey into modern DevOps practices and a comprehensive resource for platform engineers and developers interested in rapidly spinning up production-grade Kubernetes environments. Here you'll find real-world implementations of GitOps workflows, infrastructure as code, observability stacks, and cloud-native security practices - all designed to run efficiently in local development or homelab environments while following enterprise-grade patterns and best practices.
DoKa Seca consists of three GitHub repositories:
| Repository | Description |
|---|---|
| dokaseca-control-plane | Control plane infrastructure and cluster management |
| dokaseca-addons | Platform addons and Kubernetes extensions |
| dokaseca-workloads | Application workloads and deployments |
This repository contains ArgoCD ApplicationSets for managing workloads across multiple teams and environments. It serves as the centralized GitOps configuration repository for deploying applications to Kubernetes clusters using ArgoCD.
```text
dokaseca-workloads/
├── README.md
└── argocd/
    └── workloads/
        ├── exclude/                  # Excluded configurations
        ├── team-a/                   # Team A workloads
        │   ├── project.yaml          # ArgoCD AppProject for team-a
        │   ├── project-a/
        │   │   ├── applicationset.yaml
        │   │   └── kargo/
        │   │       ├── project.yaml
        │   │       └── warehouse.yaml
        │   └── project-b/
        │       └── applicationset.yaml
        ├── team-b/                   # Team B workloads
        │   ├── project.yaml          # ArgoCD AppProject for team-b
        │   ├── project-a/
        │   └── project-b/
        └── team-c/                   # Team C workloads
            ├── project.yaml          # ArgoCD AppProject for team-c
            ├── project-a/
            └── project-b/
```
This repository implements a multi-tenant GitOps architecture with:
- **Three teams**: `team-a`, `team-b`, and `team-c`
- **Multiple projects** per team (`project-a`, `project-b`, etc.)
- **Multi-cluster deployments** with environment-specific configurations
- **ArgoCD ApplicationSets** for automated application deployment
- **Kargo integration** for progressive delivery (currently `team-a` only)
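As a sketch of the Kargo integration, `team-a`'s `kargo/warehouse.yaml` could subscribe a Warehouse to a container image repository so that new tags produce Freight for promotion. The repository URL, namespace, and version constraint below are illustrative assumptions, not the actual configuration:

```yaml
# Hypothetical sketch of a Kargo Warehouse for team-a's project-a.
# repoURL and namespace are placeholders, not values from this repository.
apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: project-a
  namespace: kargo-project-a              # assumed Kargo Project namespace
spec:
  subscriptions:
    # Produce new Freight whenever a matching image tag is pushed
    - image:
        repoURL: ghcr.io/example-org/project-a   # placeholder
        semverConstraint: ">=0.1.0"
```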
Each team has its own isolated ArgoCD AppProject (project.yaml) that defines:
- Resource access policies
- Allowed destinations (clusters/namespaces)
- Source repository permissions
- Team-specific RBAC
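For illustration, a team's `project.yaml` might look like the following sketch. The source repository URL, destination pattern, and role names are assumptions, not the actual configuration:

```yaml
# Hypothetical sketch of the team-a AppProject.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  description: Workloads owned by team-a
  # Source repository permissions
  sourceRepos:
    - https://github.com/example-org/dokaseca-workloads.git   # placeholder
  # Allowed destinations (clusters/namespaces)
  destinations:
    - server: "*"
      namespace: team-a-*
  # Resource access policies: namespaced resources only
  clusterResourceWhitelist: []
  namespaceResourceWhitelist:
    - group: "*"
      kind: "*"
  # Team-specific RBAC
  roles:
    - name: developers
      policies:
        - p, proj:team-a:developers, applications, get, team-a/*, allow
        - p, proj:team-a:developers, applications, sync, team-a/*, allow
```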
Each project contains ApplicationSets that:
- Deploy applications across multiple clusters (dev, staging, prod)
- Use cluster generators with selectors for automatic cluster discovery
- Apply environment-specific configurations via Helm values
- Support automated sync policies
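A minimal sketch of such an ApplicationSet, assuming a hypothetical source repository and chart path (the ArgoCD cluster generator exposes `name`, `server`, and `metadata.labels.*` as template parameters):

```yaml
# Hypothetical sketch; repoURL, chart path, and namespace are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: team-a-project-a
  namespace: argocd
spec:
  generators:
    # Cluster generator: one Application per registered workload cluster
    - clusters:
        selector:
          matchLabels:
            type: workload
  template:
    metadata:
      name: "project-a-{{name}}"
    spec:
      project: team-a
      source:
        repoURL: https://github.com/example-org/project-a.git   # placeholder
        targetRevision: main
        path: charts/project-a
        helm:
          valueFiles:
            # Environment-specific configuration via the cluster's env label
            - "values-{{metadata.labels.env}}.yaml"
      destination:
        server: "{{server}}"
        namespace: team-a-project-a
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```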
The ApplicationSets are configured to deploy to multiple clusters based on cluster labels:
- **Development clusters**: `env=dev`
- **Staging clusters**: `env=stg`
- **Production clusters**: `env=prod`
Clusters should be registered with appropriate labels:
```shell
# Register development cluster
argocd cluster add dev-cluster --label env=dev --label "type=workload"

# Register staging cluster
argocd cluster add stg-cluster --label env=stg --label "type=workload"

# Register production cluster
argocd cluster add prod-cluster --label env=prod --label "type=workload"
```

To add a new application:

- Navigate to the appropriate team directory
- Create or update the ApplicationSet in the project folder
- Configure cluster selectors and Helm values
- Commit and push changes
To onboard a new team:

- Create a new team directory under `argocd/workloads/`
- Add a `project.yaml` with the ArgoCD AppProject configuration
- Create project subdirectories with ApplicationSets
- Update this README
Applications use Helm values files based on cluster environment labels:
- `values-dev.yaml` for development
- `values-stg.yaml` for staging
- `values-prod.yaml` for production
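For example, a hypothetical `values-dev.yaml` might scale resources down for development clusters. All keys below are placeholders, not the actual chart values:

```yaml
# Hypothetical values-dev.yaml: placeholder keys for illustration only.
replicaCount: 1
resources:
  requests:
    cpu: 100m
    memory: 128Mi
ingress:
  enabled: false   # no external exposure on dev clusters
```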
- Each team operates within its own ArgoCD AppProject
- Resource access is restricted by team boundaries
- Cluster access is controlled via destination policies
- Source repository access is limited per team
- Code Changes: Developers push application code to source repositories
- Configuration Updates: Infrastructure/deployment changes are made in this repository
- ArgoCD Sync: ApplicationSets automatically detect changes and deploy to target clusters
- Progressive Delivery: Kargo manages promotion between environments
- ArgoCD installed and configured
- Multiple Kubernetes clusters registered with ArgoCD
- Proper cluster labeling for environment identification
- Helm charts available in source repositories
- Kargo installed for progressive delivery features
- Follow the established directory structure
- Use meaningful commit messages
- Test ApplicationSets in development clusters first
- Update documentation when adding new teams or projects
- Follow team-specific naming conventions