Felix Wolf c7bfd4953c feat: Wire ArgoCD to Forgejo for GitOps management
Configure myks with global repoURL pointing to Forgejo, in-cluster
destination, and disabled placeholder cluster Secret. Implement App of
Apps pattern with a root Application that syncs all child apps.

Add argocd-deploy-key-init Job that generates an ed25519 SSH keypair,
registers it as a deploy key via Forgejo API, and creates the ArgoCD
repository secret with insecure host key verification (avoids
chicken-and-egg with ArgoCD managing its own known hosts ConfigMap).

Additional changes:
- Ignore /status field diffs globally (K8s 1.32 compat)
- Add Replace=true sync option on Jobs (immutable resource compat)
- Switch job images from bitnami/kubectl to alpine/k8s
- Update CLAUDE.md with ArgoCD status and no-bitnami rule

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 23:09:50 +02:00

k8s-and-chill

Private Kubernetes cluster running on 3x Hetzner CAX11 (ARM64) instances with Talos Linux, managed by myks.

Cluster Setup

Prerequisites

Enter the dev shell (via direnv or nix develop), which provides:

  • talosctl
  • kubectl
  • helm
  • myks
  • hcloud

Infrastructure

Node               Public IP        Private IP  Location
ubuntu-4gb-nbg1-1  195.201.219.17   10.0.0.3    nbg1
ubuntu-4gb-nbg1-2  195.201.140.75   10.0.0.4    nbg1
ubuntu-4gb-nbg1-3  195.201.219.111  10.0.0.2    nbg1

All nodes are control plane nodes (3-node HA etcd). The Kubernetes API endpoint is https://195.201.219.111:6443.

The nodes are connected via a Hetzner private network (thalos-k8s), which is used for inter-node communication.

Installing Talos on Hetzner Cloud

The servers were originally provisioned with Ubuntu. Talos was installed by booting each server into Hetzner's rescue system and writing the Talos disk image directly to the system disk.

1. Get the Talos image URL

Talos images for Hetzner Cloud are generated via the Talos Image Factory. For vanilla Talos (no extensions), get the schematic ID:

curl -sX POST https://factory.talos.dev/schematics \
  -H 'Content-Type: application/json' \
  -d '{"customization":{"systemExtensions":{"officialExtensions":[]}}}'
# Returns: {"id":"376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba"}

The image URL follows this pattern:

https://factory.talos.dev/image/<schematic-id>/<talos-version>/hcloud-arm64.raw.xz
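Putting the two together, the full URL can be assembled like this. The schematic ID is the one returned above; the Talos version is an example only, so pin it to whatever release you actually deploy:

```shell
# Assemble the Image Factory URL from the schematic ID returned above.
# TALOS_VERSION is an example value -- substitute the release you deploy.
SCHEMATIC_ID="376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba"
TALOS_VERSION="v1.9.0"

IMAGE_URL="https://factory.talos.dev/image/${SCHEMATIC_ID}/${TALOS_VERSION}/hcloud-arm64.raw.xz"
echo "$IMAGE_URL"
```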

2. Enable rescue mode and reboot

For each server:

hcloud server enable-rescue <server-name> --ssh-key "<ssh-key-name>"
hcloud server reboot <server-name>
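Since the same two commands run for all three servers, they can be batched. A sketch, using the server names from the table above (the SSH key name is an assumption; use whatever key is uploaded to your Hetzner Cloud project). The function only prints the commands so they can be reviewed before piping them to a shell:

```shell
# Batch rescue-mode entry for all three nodes.
# SSH_KEY is an assumption -- replace with the name of an SSH key
# registered in your Hetzner Cloud project.
SERVERS="ubuntu-4gb-nbg1-1 ubuntu-4gb-nbg1-2 ubuntu-4gb-nbg1-3"
SSH_KEY="my-hcloud-key"

rescue_cmds() {
  for srv in $SERVERS; do
    printf 'hcloud server enable-rescue %s --ssh-key %s\n' "$srv" "$SSH_KEY"
    printf 'hcloud server reboot %s\n' "$srv"
  done
}

# Review the generated commands, then pipe them to sh to execute:
rescue_cmds
# rescue_cmds | sh
```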

3. Write Talos to disk

SSH into each server's rescue system and write the image:

ssh root@<server-ip> "curl -fsSL '<image-url>' | xz -d | dd of=/dev/sda bs=4M status=progress && sync"

4. Reboot into Talos

hcloud server reboot <server-name>

Bootstrapping the Cluster

1. Generate machine configs

mkdir -p talos
talosctl gen config k8s-and-chill https://195.201.219.111:6443 --output talos/

This creates controlplane.yaml, worker.yaml, and talosconfig.

2. Configure talosctl

export TALOSCONFIG=talos/talosconfig
talosctl config endpoint 195.201.219.111 195.201.140.75 195.201.219.17
talosctl config node 195.201.219.111 195.201.140.75 195.201.219.17

3. Apply configs

Apply the controlplane config to each node. The first apply must use --insecure because the freshly imaged nodes are still in Talos maintenance mode and have no client certificates yet:

talosctl apply-config --insecure --nodes 195.201.219.111 --file talos/controlplane.yaml
talosctl apply-config --insecure --nodes 195.201.140.75  --file talos/controlplane.yaml
talosctl apply-config --insecure --nodes 195.201.219.17  --file talos/controlplane.yaml
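The three applies differ only in the node IP, so they can be collapsed into a loop over the IPs from the infrastructure table. As a reviewable sketch (print first, pipe to a shell to execute):

```shell
# Apply the same controlplane config to every node, as a loop.
# Node IPs and the talos/controlplane.yaml path match the repo layout.
NODES="195.201.219.111 195.201.140.75 195.201.219.17"

apply_cmds() {
  for node in $NODES; do
    printf 'talosctl apply-config --insecure --nodes %s --file talos/controlplane.yaml\n' "$node"
  done
}

# Review the generated commands, then pipe them to sh to execute:
apply_cmds
# apply_cmds | sh
```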

4. Bootstrap etcd

Run this once, against exactly one node; it initializes the etcd cluster, which the other control plane nodes then join automatically:

talosctl bootstrap --nodes 195.201.219.111

5. Get kubeconfig

talosctl kubeconfig talos/kubeconfig --nodes 195.201.219.111

6. Verify

export KUBECONFIG=talos/kubeconfig
kubectl get nodes -o wide
kubectl get pods -A