Virtual Kubernetes Control Planes
it's a cluster f*ck.
Run thousands of isolated Kubernetes control planes on a single API server. ~3 MB overhead each. Native semantics — no syncing, no translation layers.
Cluster topology

[Diagram: one management plane (API server, controllers, scheduler) hosting many vCPs across us-west, us-central, and us-east.]
Getting started
Three commands.
01 Install the CLI
$ brew tap kplane-dev/tap
$ brew install kplane

02 Create management plane
$ kplane up

03 Create a virtual cluster
$ kp create cluster example

04 Full Kubernetes API, ready immediately
$ kubectl get namespaces --context=example
How it works
kplane virtualizes control planes at the API server level. Instead of separate infrastructure per tenant, one process serves all clusters with path-based isolation.
- Each control plane gets its own Kubernetes API endpoint
- Full RBAC, CRD, and namespace isolation per control plane
- Controllers and informers share caches — adding a cluster adds ~3 MB, not another cache tree
- Workloads schedule directly on shared or dedicated nodes — no syncer copying resources between layers
- kubectl, Helm, ArgoCD, Terraform, and every Kubernetes tool works unchanged
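The path-based isolation described above can be sketched as a toy router: the first path segment names the tenant's control plane, and the remainder is an ordinary Kubernetes API path. This is an illustrative sketch only, not kplane's implementation; `route` is a hypothetical helper.

```python
# Toy sketch of path-based isolation (NOT kplane's actual code).
# One process receives every request; the first path segment selects
# the virtual control plane, and the rest is a normal Kubernetes API path.

def route(path: str) -> tuple[str, str]:
    plane, _, rest = path.lstrip("/").partition("/")
    return plane, ("/" + rest if rest else "/")

print(route("/cp-1/api/v1/namespaces"))  # ('cp-1', '/api/v1/namespaces')
print(route("/cp-2"))                    # ('cp-2', '/')
```

Because each plane sees an unmodified Kubernetes API path after routing, stock tooling needs no changes — it just points its kubeconfig server URL at the tenant's prefix.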
[Diagram: Shared API Server — a single process with path-isolated storage (/cp-1, /cp-2, /cp-3, /cp-4, …) — scheduling onto a Shared Node Pool.]
A better multi-tenancy primitive
Namespace Isolation
- Namespaces + RBAC + NetworkPolicy on a shared cluster
- No CRD isolation, no cluster-scoped resource separation
- Fine for trusted teams, breaks when tenants need cluster-admin or custom CRDs
Synced Virtual Clusters
- Separate API server + syncer per tenant
- Syncer copies Pods and Secrets to a host cluster for scheduling
- Higher-level resources stay in the virtual cluster and don't sync
- ~128 MB+ overhead per cluster, 30-60 s provisioning
- Behavioral gaps for resource types the syncer doesn't handle
Virtual Control Planes
- One shared API server, path-isolated per tenant
- No syncer, no per-cluster process, no per-cluster datastore
- ~3 MB overhead per control plane, ~2 s provisioning
- Full Kubernetes API — nothing is translated or approximated
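At the per-cluster figures quoted above, the density difference compounds quickly. A back-of-envelope comparison at an assumed 1,000 tenants:

```python
# Rough arithmetic using the overhead figures quoted above;
# tenant count is an illustrative assumption.
tenants = 1_000
synced_mb = tenants * 128   # synced virtual clusters: ~128 MB+ each
vcp_mb = tenants * 3        # virtual control planes: ~3 MB each
print(f"synced: ~{synced_mb / 1024:.0f} GB, vCP: ~{vcp_mb / 1024:.1f} GB")
```

Roughly 125 GB of control-plane overhead versus about 3 GB, before counting the per-cluster syncer processes and datastores that the shared model avoids entirely.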
Use Cases
Who runs kplane
Density
- ~3 MB per control plane
- ~2 s provisioning
- N concurrent planes