# cluster-timeline

A read-only Kubernetes controller for Rancher that generates lifecycle views for managed downstream clusters.
cluster-timeline runs in the Rancher management cluster and consolidates cluster lifecycle information from multiple Rancher CRDs (provisioning.cattle.io/v1.Cluster and management.cattle.io/v3.Cluster) into a single, easy-to-read ClusterTimeline custom resource.
## Features
- Read-only: Never modifies Rancher cluster resources
- Lifecycle phases: Tracks clusters through Provisioning, WaitingForClusterAgent, Active, Updating, Error, and Unknown states
- Historical tracking: Maintains phase transition history
- Operator guidance: Provides recommended actions based on cluster state
- Simple UX: View status via standard kubectl commands
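To make the UX above concrete, here is a sketch of what a ClusterTimeline object might look like. The actual schema lives in `pkg/apis/lifecycle.cattle.io/v1`; the field names below are illustrative assumptions, not the real types:

```yaml
apiVersion: lifecycle.cattle.io/v1
kind: ClusterTimeline
metadata:
  name: my-downstream-cluster        # hypothetical cluster name
status:
  phase: WaitingForClusterAgent      # current lifecycle phase
  recommendedAction: "Check that cattle-cluster-agent is running in the downstream cluster"
  history:                           # phase transition history
    - phase: Provisioning
      lastTransitionTime: "2025-12-03T10:12:00Z"
    - phase: WaitingForClusterAgent
      lastTransitionTime: "2025-12-03T10:20:00Z"
```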
## Prerequisites

- Go 1.24.0+
- Kubernetes cluster with Rancher installed (for testing)
## Development

```bash
# Generate clientsets and deepcopy code
make generate

# Build binary
make build

# Run tests
make test
```
## Project structure

```
├── cmd/cluster-timeline/              # Main controller binary
├── pkg/
│   ├── apis/
│   │   └── lifecycle.cattle.io/v1/    # ClusterTimeline CRD types
│   ├── generated/                     # Wrangler-generated clients (auto-generated)
│   ├── controllers/                   # Controller reconciliation logic
│   ├── mapper/                        # Phase computation logic
│   └── codegen/                       # Code generation tool
├── crds/                              # CRD manifests
├── manifests/                         # Deployment manifests
└── hack/                              # Build scripts
```
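The phase computation in `pkg/mapper/` can be sketched as below. The condition names (`Provisioned`, `Ready`, etc.) and the precedence order (errors and in-progress states before steady state) are assumptions for illustration, not the actual implementation:

```go
package main

import "fmt"

// Phase mirrors the lifecycle phases tracked by ClusterTimeline.
type Phase string

const (
	PhaseProvisioning           Phase = "Provisioning"
	PhaseWaitingForClusterAgent Phase = "WaitingForClusterAgent"
	PhaseActive                 Phase = "Active"
	PhaseUpdating               Phase = "Updating"
	PhaseError                  Phase = "Error"
	PhaseUnknown                Phase = "Unknown"
)

// Condition is a simplified stand-in for a Rancher cluster condition.
type Condition struct {
	Type   string
	Status string // "True", "False", or "Unknown"
}

// computePhase maps a cluster's condition set to a single phase.
// Missing conditions are treated as "not yet provisioned".
func computePhase(conds []Condition) Phase {
	get := func(t string) string {
		for _, c := range conds {
			if c.Type == t {
				return c.Status
			}
		}
		return ""
	}
	switch {
	case get("Provisioned") != "True":
		return PhaseProvisioning
	case get("AgentDeployed") == "False" || get("Connected") == "False":
		return PhaseWaitingForClusterAgent
	case get("Updating") == "True":
		return PhaseUpdating
	case get("Ready") == "True":
		return PhaseActive
	case get("Ready") == "False":
		return PhaseError
	default:
		return PhaseUnknown
	}
}

func main() {
	conds := []Condition{
		{Type: "Provisioned", Status: "True"},
		{Type: "Ready", Status: "True"},
	}
	fmt.Println(computePhase(conds)) // Active
}
```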
## Installation

```bash
# Install CRD
kubectl apply -f crds/lifecycle.cattle.io_clustertimelines.yaml

# Deploy the controller with kustomize
kubectl apply -k manifests/
```

## Configuring recommendations
- The controller reads `manifests/recommendations.yaml` at startup. The ConfigMap `cluster-timeline-recommendations` (defined in `manifests/configmap-recommendations.yaml`) is mounted into the pod at `/manifests/recommendations.yaml`.
- To edit rules, update the ConfigMap and re-apply it, then restart the controller (commands below).
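The rule schema is not documented in this README; as an illustrative assumption, a rule in `recommendations.yaml` might look like:

```yaml
rules:
  - name: agent-not-connected          # hypothetical rule name
    match:
      phase: WaitingForClusterAgent    # fire when the computed phase matches
    recommendation: >-
      Verify that cattle-cluster-agent is deployed and can reach the
      Rancher server URL from the downstream cluster.
```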
```bash
# Edit the ConfigMap in place
kubectl edit configmap cluster-timeline-recommendations -n cattle-system

# ...or re-apply the manifest
kubectl apply -f manifests/configmap-recommendations.yaml

# Restart controller pods to pick up changes (or redeploy)
kubectl -n cattle-system rollout restart deployment/cluster-timeline-controller
```

## Kustomize-based deployment (recommended)
- We use `manifests/kustomization.yaml` to compose RBAC, the ConfigMap, and the Deployment without duplicating resources.
- To install using kustomize (preferred):
```bash
# Apply all resources using kustomize
kubectl apply -k manifests/
```

## Updating the image (avoid `latest`)
- The manifest uses a placeholder tag; to deploy a specific, immutable image tag, use `kustomize edit set image` or edit `manifests/kustomization.yaml` and replace `REPLACE_WITH_TAG`.
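For reference, the image override in `manifests/kustomization.yaml` presumably uses the standard kustomize `images` stanza, roughly:

```yaml
images:
  - name: jferrazbr/cluster-timeline   # image name as referenced in the Deployment
    newTag: REPLACE_WITH_TAG           # replaced by kustomize edit set image / make pin-image
```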
```bash
# Set the image tag in the kustomization, then apply
# (kustomize edit operates on the kustomization.yaml in the current directory)
(cd manifests && kustomize edit set image jferrazbr/cluster-timeline=jferrazbr/cluster-timeline:v20251204-141353)
kubectl apply -k manifests/

# Or edit manifests/kustomization.yaml directly, then apply
kubectl apply -k manifests/
```

## Building and publishing images
- Use `make release` to build a timestamped image, push it to Docker Hub, and print the resulting tag. That tag can then be used with `make pin-image IMAGE_TAG=<tag>` and `make apply-pinned` to deploy an immutable image.
```bash
# Build, tag, and push; the output prints IMAGE_TAG=<tag>
make release

# Pin the image and apply
make pin-image IMAGE_TAG=v20251204-141353
make apply-pinned
```

Note: `manifests/install.yaml` was removed; use the `manifests/` kustomize directory instead.

## Usage
```bash
# List all cluster timelines
kubectl get clustertimelines -A

# Get a detailed view of one cluster
kubectl describe clustertimeline <cluster-name>
```

## HackWeek progress

Day 1: Complete ✅
- Project scaffolding with Wrangler
- ClusterTimeline CRD defined
- Code generation working
- Build system functional
Day 2 (2025-12-03): ✅
- Controller wiring and reconciliation loop
- In-cluster deployment flow
- Status persistence with phase tracking
Day 3 (2025-12-04): ✅
- Added observations tracking to ClusterTimeline status and history
- Implemented rule-driven recommendation engine with YAML-based rules
- Added ConfigMap-based rule configuration for in-cluster customization
- Migrated to kustomize for manifest composition and image management
- Added `make release` workflow for immutable image tagging and deployment
- Comprehensive test coverage with mocks for controller logic
- Documentation updates including AI assistant guidance
Next: Day 4 - Additional lifecycle phases, enhanced observability, and operational polish
## License

Apache 2.0. See the LICENSE file.