diff --git a/README.md b/README.md index bf30147..814b8ed 100644 --- a/README.md +++ b/README.md @@ -1,2 +1,254 @@ # adapter-validation-gcp -The validation adapter used for GCP HCP preflight validations + +This repository provides the foundation for validating GCP environments before cluster deployment. + +## Overview + +This repository contains components for validating GCP prerequisites and reporting validation results in Kubernetes environments. It serves as the foundational infrastructure for all future GCP validators. + +## Components + +### 1. Status Reporter (Implemented ✅) + +A **cloud-agnostic**, **reusable** Kubernetes sidecar container that monitors adapter operation results and updates Job status. It works with any adapter container (validation, DNS, pull secret, etc.) that follows the defined result contract. + +**Key Features:** +- Monitors adapter container execution via file polling and container state watching +- Handles various failure scenarios (OOMKilled, crashes, timeouts, invalid results) +- Updates Kubernetes Job status with detailed condition information +- Zero-dependency on adapter implementation - uses simple JSON contract + +**Location:** `status-reporter/` + +### 2. Fake GCP Validator (Planned 🚧) + +A **simulated** GCP validator that mimics real validation behavior without making actual GCP API calls. This component is essential for: +- Local development and testing +- CI/CD pipeline validation +- Integration testing without GCP credentials +- Rapid iteration on validation logic + +**Planned Features:** +- Configurable success/failure scenarios +- Deterministic test cases for all validation types +- No GCP credentials or API quotas required + +**Status:** Not yet implemented + +### 3. Minimal Real GCP Validator (Planned 🚧) + +A **minimal production** GCP validator that performs actual API calls to validate the foundational requirements before cluster creation. + +**Planned Features:** +- Workload Identity Federation (WIF) configuration validation +- Minimal required GCP API enablement checks (e.g., `compute.googleapis.com`, `iam.googleapis.com`) +- Service account permissions verification +- Real GCP API integration with proper error handling +- Serves as reference implementation for future validators + +**Validation Scope (Minimal Set):** +- ✓ Workload Identity configured correctly +- ✓ Essential GCP APIs enabled +- ✓ Service account has minimum required permissions + +**Status:** Not yet implemented + +## Adapter Contract + +The status reporter works with any adapter container that follows this simple JSON contract: + +1. **Result File Requirements:** + - **Location:** Write results to the result file (configurable via `RESULTS_PATH` env var) + - **Format:** Valid JSON file (max size: 1MB) + - **Timing:** Must be written before the adapter container exits or within the configured timeout + +2. **JSON Schema:** + ```json + { + "status": "success", // Required: "success" or "failure" + "reason": "AllChecksPassed", // Required: Machine-readable identifier (max 128 chars) + "message": "All validation checks passed successfully", // Required: Human-readable description (max 1024 chars) + "details": { // Optional: Adapter-specific data (any valid JSON), this information will not be reflected in k8s Job Status + "checks_run": 5, + "duration_ms": 1234 + } + } + ``` + +3. **Field Validation:** + - `status`: Must be exactly `"success"` or `"failure"` (case-sensitive) + - `reason`: Trimmed and truncated to 128 characters. 
Defaults to `"NoReasonProvided"` if empty/missing + - `message`: Trimmed and truncated to 1024 characters. Defaults to `"No message provided"` if empty/missing + - `details`: Optional JSON object containing any adapter-specific information + +4. **Examples:** + + **Success result:** + + Adapter writes to the result file: + ```json + { + "status": "success", + "reason": "ValidationPassed", + "message": "GCP environment validated successfully" + } + ``` + + Resulting Kubernetes Job status: + ```yaml + status: + conditions: + - type: Available + status: "True" + reason: ValidationPassed + message: GCP environment validated successfully + lastTransitionTime: "2024-01-15T10:30:00Z" + ``` + + **Failure result with details:** + + Adapter writes to the result file: + ```json + { + "status": "failure", + "reason": "MissingPermissions", + "message": "Service account lacks required IAM permissions", + "details": { + "missing_permissions": ["compute.instances.list", "iam.serviceAccounts.get"], + "service_account": "my-sa@project.iam.gserviceaccount.com" + } + } + ``` + + Resulting Kubernetes Job status: + ```yaml + status: + conditions: + - type: Available + status: "False" + reason: MissingPermissions + message: Service account lacks required IAM permissions + lastTransitionTime: "2024-01-15T10:30:00Z" + ``` + + **Timeout scenario:** + + If adapter doesn't write result file within timeout, Job status will be: + ```yaml + status: + conditions: + - type: Available + status: "False" + reason: AdapterTimeout + message: "Adapter did not produce results within 5m0s" + lastTransitionTime: "2024-01-15T10:30:00Z" + ``` + + **Container crash scenario:** + + If adapter container exits with non-zero code, Job status will be: + ```yaml + status: + conditions: + - type: Available + status: "False" + reason: AdapterExitedWithError + message: "Adapter container exited with code 1: Error" + lastTransitionTime: "2024-01-15T10:30:00Z" + ``` + + **OOMKilled scenario:** + + If adapter container is killed due to memory limits: + ```yaml + status: + conditions: + - type: Available + status: "False" + reason: AdapterOOMKilled + message: "Adapter container was killed due to out of memory (OOMKilled)" + lastTransitionTime: "2024-01-15T10:30:00Z" + ``` + + **Invalid result format:** + + If adapter writes invalid JSON or schema: + ```yaml + status: + conditions: + - type: Available + status: "False" + reason: InvalidResultFormat + message: "Failed to parse adapter result: status: must be either 'success' or 'failure'" + lastTransitionTime: "2024-01-15T10:30:00Z" + ``` + +5. 
**Shared Volume Configuration:** + + Both adapter and status reporter containers must share a volume mounted at `/results`: + + ```yaml + volumes: + - name: results + emptyDir: {} + + containers: + - name: adapter + volumeMounts: + - name: results + mountPath: /results + + - name: status-reporter + volumeMounts: + - name: results + mountPath: /results + ``` + +## Repository Structure + +```text +adapter-validation-gcp/ +├── status-reporter/ # ✅ Cloud-agnostic Kubernetes status reporter +│ ├── cmd/reporter/ # Main entry point +│ ├── pkg/ # Core packages (reporter, k8s, result parser) +│ ├── Dockerfile # Container image definition +│ ├── Makefile # Build, test, and image targets +│ └── README.md # Component-specific documentation +├── fake-validator/ # 🚧 Simulated GCP validator (planned) +├── validator/ # 🚧 Real GCP validator (planned) +└── README.md # This file +``` + +## Quick Start + +### Status Reporter + +The status reporter is production-ready and can be used with any adapter container. + +#### Makefile Usage + +```bash +$ make +Available targets: +binary Build binary +clean Clean build artifacts and test coverage files +fmt Format code with gofmt and goimports +help Display this help message +image-dev Build and push to personal Quay registry (requires QUAY_USER) +image-push Build and push container image to registry +image Build container image with Docker or Podman +lint Run golangci-lint +mod-tidy Tidy Go module dependencies +test-coverage-html Generate HTML coverage report +test-coverage Run unit tests with coverage report +test Run unit tests with race detection +verify Run all verification checks (lint + test) +``` + +## License + +See LICENSE file for details. + +## Contact + +For questions or issues, please open a GitHub issue in this repository. diff --git a/status-reporter/Dockerfile b/status-reporter/Dockerfile new file mode 100644 index 0000000..40058f8 --- /dev/null +++ b/status-reporter/Dockerfile @@ -0,0 +1,32 @@ +# Build stage +FROM golang:1.25-alpine AS builder + +WORKDIR /build + +# Copy go mod files for dependency caching +COPY go.mod go.sum ./ + +# Download and verify dependencies +RUN go mod download && go mod verify + +# Copy source code +COPY . . 
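# Note for the build step below: CGO_ENABLED=0 yields a statically linked
# binary, which is what the libc-free distroless/static runtime stage can run,
# and -ldflags="-w -s" strips DWARF debug data and the symbol table to keep
# the final image small.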
+ +# Build binary for amd64 (most common k8s node architecture) +RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o status-reporter ./cmd/reporter + +# Runtime stage +FROM gcr.io/distroless/static-debian12:nonroot + +WORKDIR /app + +# Copy binary from builder +COPY --from=builder /build/status-reporter /app/status-reporter + +ENTRYPOINT ["/app/status-reporter"] + +LABEL name="status-reporter" \ + vendor="Red Hat" \ + version="0.0.1" \ + summary="Status Reporter - Kubernetes Job status reporter for adapter" \ + description="Monitors adapter execution, parses results, and updates Kubernetes Job status conditions based on adapter outcomes" diff --git a/status-reporter/Makefile b/status-reporter/Makefile new file mode 100644 index 0000000..5136ff7 --- /dev/null +++ b/status-reporter/Makefile @@ -0,0 +1,171 @@ +# Makefile for status-reporter + +# Project metadata +IMAGE_NAME := status-reporter +VERSION ?= 0.0.1 +IMAGE_REGISTRY ?= quay.io/openshift-hyperfleet +IMAGE_TAG ?= latest + +# Build metadata +GIT_COMMIT := $(shell git rev-parse --short HEAD 2>/dev/null || echo "unknown") +GIT_TAG := $(shell git describe --tags --exact-match 2>/dev/null || echo "") + +# Dev image configuration - set QUAY_USER to push to personal registry +# Usage: QUAY_USER=myuser make image-dev +QUAY_USER ?= +DEV_TAG ?= dev-$(GIT_COMMIT) +BUILD_DATE := $(shell date -u +'%Y-%m-%dT%H:%M:%SZ') + +# LDFLAGS for build +LDFLAGS := -w -s +LDFLAGS += -X main.version=$(VERSION) +LDFLAGS += -X main.commit=$(GIT_COMMIT) +LDFLAGS += -X main.buildDate=$(BUILD_DATE) +ifneq ($(GIT_TAG),) +LDFLAGS += -X main.tag=$(GIT_TAG) +endif + +# Go parameters +GOCMD := go +GOBUILD := $(GOCMD) build +GOTEST := $(GOCMD) test +GOMOD := $(GOCMD) mod +GOFMT := gofmt +GOIMPORTS := goimports + +# Test parameters +TEST_TIMEOUT := 10m +RACE_FLAG := -race +COVERAGE_OUT := coverage.out +COVERAGE_HTML := coverage.html + +# Container runtime detection +DOCKER_AVAILABLE := $(shell if docker info >/dev/null 2>&1; then echo "true"; else echo "false"; fi) +PODMAN_AVAILABLE := $(shell if podman info >/dev/null 2>&1; then echo "true"; else echo "false"; fi) + +ifeq ($(DOCKER_AVAILABLE),true) + CONTAINER_RUNTIME := docker + CONTAINER_CMD := docker +else ifeq ($(PODMAN_AVAILABLE),true) + CONTAINER_RUNTIME := podman + CONTAINER_CMD := podman +else + CONTAINER_RUNTIME := none + CONTAINER_CMD := sh -c 'echo "No container runtime found. Please install Docker or Podman." && exit 1' +endif + +# Directories +# Find all Go packages, excluding vendor and test directories +PKG_DIRS := $(shell $(GOCMD) list ./... 2>/dev/null | grep -v /vendor/ | grep -v /test/) + +.PHONY: help +help: ## Display this help message + @echo "Available targets:" + @grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' + +.PHONY: test +test: ## Run unit tests with race detection + @echo "Running unit tests..." + $(GOTEST) -v $(RACE_FLAG) -timeout $(TEST_TIMEOUT) $(PKG_DIRS) + +.PHONY: test-coverage +test-coverage: ## Run unit tests with coverage report + @echo "Running unit tests with coverage..." + $(GOTEST) -v $(RACE_FLAG) -timeout $(TEST_TIMEOUT) -coverprofile=$(COVERAGE_OUT) -covermode=atomic $(PKG_DIRS) + @echo "Coverage report generated: $(COVERAGE_OUT)" + @echo "To view HTML coverage report, run: make test-coverage-html" + +.PHONY: test-coverage-html +test-coverage-html: test-coverage ## Generate HTML coverage report + @echo "Generating HTML coverage report..." 
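# 'go tool cover -html' below renders the coverage profile written by the
# test-coverage dependency (coverage.out) as a line-annotated HTML view of the source.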
+ $(GOCMD) tool cover -html=$(COVERAGE_OUT) -o $(COVERAGE_HTML) + @echo "HTML coverage report generated: $(COVERAGE_HTML)" + +.PHONY: lint +lint: ## Run golangci-lint + @echo "Running golangci-lint..." + @if command -v golangci-lint > /dev/null; then \ + golangci-lint cache clean && golangci-lint run; \ + else \ + echo "Error: golangci-lint not found. Please install it:"; \ + echo " go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest"; \ + exit 1; \ + fi + +.PHONY: fmt +fmt: ## Format code with gofmt and goimports + @echo "Formatting code..." + @if command -v $(GOIMPORTS) > /dev/null; then \ + $(GOIMPORTS) -w .; \ + else \ + $(GOFMT) -w .; \ + fi + +.PHONY: mod-tidy +mod-tidy: ## Tidy Go module dependencies + @echo "Tidying Go modules..." + $(GOMOD) tidy + $(GOMOD) verify + +.PHONY: binary +binary: ## Build binary + @echo "Building $(IMAGE_NAME)..." + @echo "Version: $(VERSION), Commit: $(GIT_COMMIT), BuildDate: $(BUILD_DATE)" + @mkdir -p bin + CGO_ENABLED=0 $(GOBUILD) -ldflags="$(LDFLAGS)" -o bin/$(IMAGE_NAME) ./cmd/reporter + +.PHONY: clean +clean: ## Clean build artifacts and test coverage files + @echo "Cleaning..." + rm -rf bin/ + rm -f $(COVERAGE_OUT) $(COVERAGE_HTML) + +.PHONY: image +image: ## Build container image with Docker or Podman +ifeq ($(CONTAINER_RUNTIME),none) + @echo "❌ ERROR: No container runtime found" + @echo "Please install Docker or Podman" + @exit 1 +else + @echo "Building container image with $(CONTAINER_RUNTIME)..." + $(CONTAINER_CMD) build -t $(IMAGE_REGISTRY)/$(IMAGE_NAME):$(IMAGE_TAG) . + @echo "✅ Image built: $(IMAGE_REGISTRY)/$(IMAGE_NAME):$(IMAGE_TAG)" +endif + +.PHONY: image-push +image-push: image ## Build and push container image to registry +ifeq ($(CONTAINER_RUNTIME),none) + @echo "❌ ERROR: No container runtime found" + @echo "Please install Docker or Podman" + @exit 1 +else + @echo "Pushing image $(IMAGE_REGISTRY)/$(IMAGE_NAME):$(IMAGE_TAG)..." + $(CONTAINER_CMD) push $(IMAGE_REGISTRY)/$(IMAGE_NAME):$(IMAGE_TAG) + @echo "✅ Image pushed: $(IMAGE_REGISTRY)/$(IMAGE_NAME):$(IMAGE_TAG)" +endif + +.PHONY: image-dev +image-dev: ## Build and push to personal Quay registry (requires QUAY_USER) +ifndef QUAY_USER + @echo "❌ ERROR: QUAY_USER is not set" + @echo "" + @echo "Usage: QUAY_USER=myuser make image-dev" + @echo "" + @echo "This will build and push to: quay.io/$$QUAY_USER/$(IMAGE_NAME):$(DEV_TAG)" + @exit 1 +endif +ifeq ($(CONTAINER_RUNTIME),none) + @echo "❌ ERROR: No container runtime found" + @echo "Please install Docker or Podman" + @exit 1 +else + @echo "Building dev image quay.io/$(QUAY_USER)/$(IMAGE_NAME):$(DEV_TAG)..." + $(CONTAINER_CMD) build -t quay.io/$(QUAY_USER)/$(IMAGE_NAME):$(DEV_TAG) . + @echo "Pushing dev image..." 
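# The push below assumes you are already logged in to quay.io for $(QUAY_USER),
# e.g. via 'docker login quay.io' or 'podman login quay.io'.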
+ $(CONTAINER_CMD) push quay.io/$(QUAY_USER)/$(IMAGE_NAME):$(DEV_TAG) + @echo "" + @echo "✅ Dev image pushed: quay.io/$(QUAY_USER)/$(IMAGE_NAME):$(DEV_TAG)" +endif + +.PHONY: verify +verify: lint test ## Run all verification checks (lint + test) diff --git a/status-reporter/cmd/reporter/main.go b/status-reporter/cmd/reporter/main.go new file mode 100644 index 0000000..808ce67 --- /dev/null +++ b/status-reporter/cmd/reporter/main.go @@ -0,0 +1,138 @@ +package main + +import ( + "context" + "errors" + "fmt" + "log" + "os" + "os/signal" + "runtime/debug" + "syscall" + "time" + + "github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/config" + "github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/reporter" +) + +const ( + shutdownTimeout = 5 * time.Second +) + +func main() { + log.SetFlags(log.LstdFlags | log.Lshortfile) + log.Println("Status Reporter starting...") + + cfg, err := config.Load() + if err != nil { + log.Fatalf("Failed to load configuration: %v", err) + } + + logConfig(cfg) + + rep, err := reporter.NewReporter( + cfg.ResultsPath, + cfg.GetPollInterval(), + cfg.GetMaxWaitTime(), + cfg.ConditionType, + cfg.PodName, + cfg.AdapterContainerName, + cfg.JobName, + cfg.JobNamespace, + ) + if err != nil { + log.Fatalf("Failed to create reporter: %v", err) + } + + sigChan := make(chan os.Signal, 1) + signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT) + defer signal.Stop(sigChan) + + ctx, cancel := context.WithCancel(context.Background()) + defer cancel() + + // Run reporter in background with panic recovery + done := make(chan error, 1) + go func() { + defer func() { + if r := recover(); r != nil { + log.Printf("PANIC in reporter: %v\nStack trace:\n%s", r, debug.Stack()) + done <- fmt.Errorf("reporter panicked: %v", r) + } + }() + done <- rep.Run(ctx) + }() + + // Wait for completion or interruption and exit + os.Exit(waitForCompletion(sigChan, cancel, done)) +} + +// waitForCompletion handles both normal completion and signal-driven shutdown. +// It returns the appropriate exit code based on the outcome. 
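//
// Note: the done channel is created with capacity 1 in main, so the reporter
// goroutine can always complete its send and exit even if this function
// returns through the shutdown-timeout path without draining the channel.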
+func waitForCompletion(sigChan <-chan os.Signal, cancel context.CancelFunc, done <-chan error) int { + select { + case err := <-done: + // Normal completion path + return handleNormalCompletion(err) + + case sig := <-sigChan: + // Shutdown requested + return handleShutdown(sig, cancel, done) + } +} + +// handleNormalCompletion processes normal reporter completion +func handleNormalCompletion(err error) int { + if err != nil { + log.Printf("Reporter finished with error: %v", err) + return 1 + } + log.Println("Reporter finished successfully") + return 0 +} + +// handleShutdown manages graceful shutdown with timeout +func handleShutdown(sig os.Signal, cancel context.CancelFunc, done <-chan error) int { + log.Printf("Received signal %v, initiating graceful shutdown...", sig) + cancel() + + // Create timer with explicit cleanup to avoid resource leak + timer := time.NewTimer(shutdownTimeout) + defer timer.Stop() + + // Wait for graceful shutdown with timeout + select { + case err := <-done: + // Reporter stopped within timeout + if err != nil && !errors.Is(err, context.Canceled) { + // Real error occurred (context.Canceled is expected during shutdown) + log.Printf("Reporter stopped with error: %v", err) + return 1 + } + log.Println("Shutdown complete") + return 0 + + case <-timer.C: + // Timeout exceeded - force exit + log.Printf("Shutdown timeout (%s) exceeded; forcing exit", shutdownTimeout) + return 1 + } +} + +// logConfig logs the loaded configuration +func logConfig(cfg *config.Config) { + log.Println("Configuration:") + log.Printf(" JOB_NAME: %s", cfg.JobName) + log.Printf(" JOB_NAMESPACE: %s", cfg.JobNamespace) + log.Printf(" POD_NAME: %s", cfg.PodName) + if cfg.AdapterContainerName != "" { + log.Printf(" ADAPTER_CONTAINER_NAME: %s", cfg.AdapterContainerName) + } else { + log.Printf(" ADAPTER_CONTAINER_NAME: (auto-detect)") + } + log.Printf(" RESULTS_PATH: %s", cfg.ResultsPath) + log.Printf(" POLL_INTERVAL_SECONDS: %d", cfg.PollIntervalSeconds) + log.Printf(" MAX_WAIT_TIME_SECONDS: %d", cfg.MaxWaitTimeSeconds) + log.Printf(" CONDITION_TYPE: %s", cfg.ConditionType) + log.Printf(" LOG_LEVEL: %s", cfg.LogLevel) +} diff --git a/status-reporter/cmd/reporter/main_suite_test.go b/status-reporter/cmd/reporter/main_suite_test.go new file mode 100644 index 0000000..91fdd97 --- /dev/null +++ b/status-reporter/cmd/reporter/main_suite_test.go @@ -0,0 +1,13 @@ +package main + +import ( + "testing" + + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" +) + +func TestMain(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "Main Suite") +} diff --git a/status-reporter/cmd/reporter/main_test.go b/status-reporter/cmd/reporter/main_test.go new file mode 100644 index 0000000..ec3b9a6 --- /dev/null +++ b/status-reporter/cmd/reporter/main_test.go @@ -0,0 +1,382 @@ +package main + +import ( + "context" + "errors" + "os" + "syscall" + "time" + + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" +) + +var _ = Describe("Main", func() { + Describe("handleNormalCompletion", func() { + Context("when reporter completes successfully", func() { + It("returns exit code 0 for nil error", func() { + exitCode := handleNormalCompletion(nil) + Expect(exitCode).To(Equal(0)) + }) + }) + + Context("when reporter completes with error", func() { + It("returns exit code 1 for generic error", func() { + exitCode := handleNormalCompletion(errors.New("test error")) + Expect(exitCode).To(Equal(1)) + }) + + It("returns exit code 1 for context.Canceled (unexpected in normal path)", func() { + exitCode := handleNormalCompletion(context.Canceled) + Expect(exitCode).To(Equal(1)) + }) + + It("returns exit code 1 for wrapped error", func() { + wrappedErr := errors.New("operation failed: database connection lost") + exitCode := handleNormalCompletion(wrappedErr) + Expect(exitCode).To(Equal(1)) + }) + }) + }) + + Describe("handleShutdown", func() { + var ( + done chan error + ctx context.Context + cancel context.CancelFunc + ) + + BeforeEach(func() { + done = make(chan error, 1) + ctx, cancel = context.WithCancel(context.Background()) + }) + + AfterEach(func() { + cancel() + }) + + Context("when reporter stops within timeout", func() { + It("returns exit code 0 for nil error", func() { + go func() { + <-ctx.Done() + time.Sleep(10 * time.Millisecond) + done <- nil + }() + + exitCode := handleShutdown(syscall.SIGTERM, cancel, done) + Expect(exitCode).To(Equal(0)) + }) + + It("returns exit code 0 for context.Canceled (expected during shutdown)", func() { + go func() { + <-ctx.Done() + time.Sleep(10 * time.Millisecond) + done <- context.Canceled + }() + + exitCode := handleShutdown(syscall.SIGTERM, cancel, done) + Expect(exitCode).To(Equal(0)) + }) + + It("returns exit code 1 for real errors", func() { + go func() { + <-ctx.Done() + time.Sleep(10 * time.Millisecond) + done <- errors.New("database connection failed") + }() + + exitCode := handleShutdown(syscall.SIGTERM, cancel, done) + Expect(exitCode).To(Equal(1)) + }) + + It("handles SIGINT signal", func() { + go func() { + <-ctx.Done() + time.Sleep(10 * time.Millisecond) + done <- nil + }() + + exitCode := handleShutdown(syscall.SIGINT, cancel, done) + Expect(exitCode).To(Equal(0)) + }) + }) + + Context("when reporter exceeds shutdown timeout", func() { + It("returns exit code 1 after timeout", func() { + // Simulate reporter hanging (never sends to done) + go func() { + <-ctx.Done() + // Never send to done - simulating hang + time.Sleep(10 * time.Second) + }() + + start := time.Now() + exitCode := handleShutdown(syscall.SIGTERM, cancel, done) + duration := time.Since(start) + + Expect(exitCode).To(Equal(1)) + Expect(duration).To(BeNumerically(">=", shutdownTimeout)) + Expect(duration).To(BeNumerically("<", shutdownTimeout+200*time.Millisecond)) + }) + + It("does not block indefinitely", func() { + go func() { + <-ctx.Done() + // Hang forever + select {} + }() + + done := make(chan int, 1) + go func() { + exitCode := handleShutdown(syscall.SIGTERM, cancel, make(chan error, 1)) + done <- exitCode + }() + + Eventually(done, shutdownTimeout+1*time.Second).Should(Receive(Equal(1))) + }) + }) + + Context("timer cleanup", func() { + It("stops timer when reporter completes quickly", func() { + go func() { + <-ctx.Done() + done <- nil + }() + + // This shouldn't leak - timer should be stopped by defer + exitCode := handleShutdown(syscall.SIGTERM, cancel, done) + Expect(exitCode).To(Equal(0)) + + // Give time for any leaked goroutines to show up + 
time.Sleep(50 * time.Millisecond) + }) + }) + }) + + Describe("waitForCompletion", func() { + var ( + sigChan chan os.Signal + done chan error + ctx context.Context + cancel context.CancelFunc + ) + + BeforeEach(func() { + sigChan = make(chan os.Signal, 1) + done = make(chan error, 1) + ctx, cancel = context.WithCancel(context.Background()) + }) + + AfterEach(func() { + cancel() + }) + + Context("normal completion path", func() { + It("returns exit code 0 when reporter succeeds", func() { + done <- nil + + exitCode := waitForCompletion(sigChan, cancel, done) + Expect(exitCode).To(Equal(0)) + }) + + It("returns exit code 1 when reporter fails", func() { + done <- errors.New("validation failed") + + exitCode := waitForCompletion(sigChan, cancel, done) + Expect(exitCode).To(Equal(1)) + }) + + It("returns exit code 1 for context.Canceled in normal path", func() { + done <- context.Canceled + + exitCode := waitForCompletion(sigChan, cancel, done) + Expect(exitCode).To(Equal(1)) + }) + }) + + Context("signal-driven shutdown path", func() { + It("returns exit code 0 for graceful shutdown with SIGTERM", func() { + go func() { + <-ctx.Done() + time.Sleep(10 * time.Millisecond) + done <- context.Canceled + }() + + sigChan <- syscall.SIGTERM + + exitCode := waitForCompletion(sigChan, cancel, done) + Expect(exitCode).To(Equal(0)) + }) + + It("returns exit code 0 for graceful shutdown with SIGINT", func() { + go func() { + <-ctx.Done() + time.Sleep(10 * time.Millisecond) + done <- nil + }() + + sigChan <- syscall.SIGINT + + exitCode := waitForCompletion(sigChan, cancel, done) + Expect(exitCode).To(Equal(0)) + }) + + It("returns exit code 1 when shutdown encounters real error", func() { + go func() { + <-ctx.Done() + time.Sleep(10 * time.Millisecond) + done <- errors.New("cleanup failed") + }() + + sigChan <- syscall.SIGTERM + + exitCode := waitForCompletion(sigChan, cancel, done) + Expect(exitCode).To(Equal(1)) + }) + }) + + Context("race conditions", func() { + It("handles reporter completing just as signal arrives", func() { + // Both channels ready almost simultaneously + go func() { + time.Sleep(1 * time.Millisecond) + done <- nil + }() + + go func() { + time.Sleep(1 * time.Millisecond) + sigChan <- syscall.SIGTERM + }() + + exitCode := waitForCompletion(sigChan, cancel, done) + // Either path is acceptable, should succeed + Expect(exitCode).To(Equal(0)) + }) + }) + }) + + Describe("context.Canceled handling", func() { + var ( + done chan error + ctx context.Context + cancel context.CancelFunc + ) + + BeforeEach(func() { + done = make(chan error, 1) + ctx, cancel = context.WithCancel(context.Background()) + }) + + AfterEach(func() { + cancel() + }) + + Context("during shutdown", func() { + It("treats context.Canceled as success (exit code 0)", func() { + go func() { + <-ctx.Done() + done <- context.Canceled + }() + + exitCode := handleShutdown(syscall.SIGTERM, cancel, done) + Expect(exitCode).To(Equal(0)) + }) + + It("treats nil as success (exit code 0)", func() { + go func() { + <-ctx.Done() + done <- nil + }() + + exitCode := handleShutdown(syscall.SIGTERM, cancel, done) + Expect(exitCode).To(Equal(0)) + }) + + It("treats real errors as failure (exit code 1)", func() { + go func() { + <-ctx.Done() + done <- errors.New("database connection lost") + }() + + exitCode := handleShutdown(syscall.SIGTERM, cancel, done) + Expect(exitCode).To(Equal(1)) + }) + + It("treats wrapped errors as failure even if they mention 'canceled'", func() { + go func() { + <-ctx.Done() + done <- errors.New("operation 
canceled: database error") + }() + + exitCode := handleShutdown(syscall.SIGTERM, cancel, done) + Expect(exitCode).To(Equal(1)) + }) + }) + }) + + Describe("shutdown timeout configuration", func() { + It("has expected timeout value", func() { + Expect(shutdownTimeout).To(Equal(5 * time.Second)) + }) + }) + + Describe("signal handling", func() { + Context("supported signals", func() { + It("handles SIGTERM correctly", func() { + sigChan := make(chan os.Signal, 1) + done := make(chan error, 1) + ctx, cancel := context.WithCancel(context.Background()) + defer cancel() + + go func() { + <-ctx.Done() + done <- nil + }() + + sigChan <- syscall.SIGTERM + + exitCode := waitForCompletion(sigChan, cancel, done) + Expect(exitCode).To(Equal(0)) + }) + + It("handles SIGINT correctly", func() { + sigChan := make(chan os.Signal, 1) + done := make(chan error, 1) + ctx, cancel := context.WithCancel(context.Background()) + defer cancel() + + go func() { + <-ctx.Done() + done <- nil + }() + + sigChan <- syscall.SIGINT + + exitCode := waitForCompletion(sigChan, cancel, done) + Expect(exitCode).To(Equal(0)) + }) + }) + }) + + Describe("error propagation", func() { + Context("various error types", func() { + It("correctly propagates validation errors", func() { + err := errors.New("validation failed: invalid configuration") + exitCode := handleNormalCompletion(err) + Expect(exitCode).To(Equal(1)) + }) + + It("correctly propagates I/O errors", func() { + err := errors.New("failed to read results file") + exitCode := handleNormalCompletion(err) + Expect(exitCode).To(Equal(1)) + }) + + It("correctly propagates network errors", func() { + err := errors.New("connection refused") + exitCode := handleNormalCompletion(err) + Expect(exitCode).To(Equal(1)) + }) + }) + }) +}) diff --git a/status-reporter/go.mod b/status-reporter/go.mod new file mode 100644 index 0000000..6942253 --- /dev/null +++ b/status-reporter/go.mod @@ -0,0 +1,58 @@ +module github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter + +go 1.25.0 + +require ( + github.com/onsi/ginkgo/v2 v2.27.3 + github.com/onsi/gomega v1.38.2 + k8s.io/api v0.34.1 + k8s.io/apimachinery v0.34.1 + k8s.io/client-go v0.34.1 +) + +require ( + github.com/Masterminds/semver/v3 v3.4.0 // indirect + github.com/davecgh/go-spew v1.1.1 // indirect + github.com/emicklei/go-restful/v3 v3.12.2 // indirect + github.com/fxamacker/cbor/v2 v2.9.0 // indirect + github.com/go-logr/logr v1.4.3 // indirect + github.com/go-openapi/jsonpointer v0.21.0 // indirect + github.com/go-openapi/jsonreference v0.20.2 // indirect + github.com/go-openapi/swag v0.23.0 // indirect + github.com/go-task/slim-sprig/v3 v3.0.0 // indirect + github.com/gogo/protobuf v1.3.2 // indirect + github.com/google/gnostic-models v0.7.0 // indirect + github.com/google/go-cmp v0.7.0 // indirect + github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 // indirect + github.com/google/uuid v1.6.0 // indirect + github.com/josharian/intern v1.0.0 // indirect + github.com/json-iterator/go v1.1.12 // indirect + github.com/mailru/easyjson v0.7.7 // indirect + github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect + github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect + github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect + github.com/pkg/errors v0.9.1 // indirect + github.com/x448/float16 v0.8.4 // indirect + go.yaml.in/yaml/v2 v2.4.2 // indirect + go.yaml.in/yaml/v3 v3.0.4 // indirect + golang.org/x/mod v0.27.0 // indirect + golang.org/x/net 
v0.43.0 // indirect + golang.org/x/oauth2 v0.27.0 // indirect + golang.org/x/sync v0.16.0 // indirect + golang.org/x/sys v0.35.0 // indirect + golang.org/x/term v0.34.0 // indirect + golang.org/x/text v0.28.0 // indirect + golang.org/x/time v0.9.0 // indirect + golang.org/x/tools v0.36.0 // indirect + google.golang.org/protobuf v1.36.7 // indirect + gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect + gopkg.in/inf.v0 v0.9.1 // indirect + gopkg.in/yaml.v3 v3.0.1 // indirect + k8s.io/klog/v2 v2.130.1 // indirect + k8s.io/kube-openapi v0.0.0-20250710124328-f3f2b991d03b // indirect + k8s.io/utils v0.0.0-20250604170112-4c0f3b243397 // indirect + sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 // indirect + sigs.k8s.io/randfill v1.0.0 // indirect + sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect + sigs.k8s.io/yaml v1.6.0 // indirect +) diff --git a/status-reporter/go.sum b/status-reporter/go.sum new file mode 100644 index 0000000..ee1d2ef --- /dev/null +++ b/status-reporter/go.sum @@ -0,0 +1,184 @@ +github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0= +github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= +github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/emicklei/go-restful/v3 v3.12.2 h1:DhwDP0vY3k8ZzE0RunuJy8GhNpPL6zqLkDf9B/a0/xU= +github.com/emicklei/go-restful/v3 v3.12.2/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM= +github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ= +github.com/gkampitakis/ciinfo v0.3.2 h1:JcuOPk8ZU7nZQjdUhctuhQofk7BGHuIy0c9Ez8BNhXs= +github.com/gkampitakis/ciinfo v0.3.2/go.mod h1:1NIwaOcFChN4fa/B0hEBdAb6npDlFL8Bwx4dfRLRqAo= +github.com/gkampitakis/go-diff v1.3.2 h1:Qyn0J9XJSDTgnsgHRdz9Zp24RaJeKMUHg2+PDZZdC4M= +github.com/gkampitakis/go-diff v1.3.2/go.mod h1:LLgOrpqleQe26cte8s36HTWcTmMEur6OPYerdAAS9tk= +github.com/gkampitakis/go-snaps v0.5.15 h1:amyJrvM1D33cPHwVrjo9jQxX8g/7E2wYdZ+01KS3zGE= +github.com/gkampitakis/go-snaps v0.5.15/go.mod h1:HNpx/9GoKisdhw9AFOBT1N7DBs9DiHo/hGheFGBZ+mc= +github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI= +github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs= +github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ= +github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY= +github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE= +github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k= +github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14= +github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE= +github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ= +github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= +github.com/go-task/slim-sprig/v3 v3.0.0/go.mod 
h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= +github.com/goccy/go-yaml v1.18.0 h1:8W7wMFS12Pcas7KU+VVkaiCng+kG8QiFeFwzFb+rwuw= +github.com/goccy/go-yaml v1.18.0/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA= +github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= +github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= +github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo= +github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ= +github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= +github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= +github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 h1:BHT72Gu3keYf3ZEu2J0b1vyeLSOYI8bm5wbJM/8yDe8= +github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA= +github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= +github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= +github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= +github.com/joshdk/go-junit v1.0.0 h1:S86cUKIdwBHWwA6xCmFlf3RTLfVXYQfvanM5Uh+K6GE= +github.com/joshdk/go-junit v1.0.0/go.mod h1:TiiV0PqkaNfFXjEiyjWM3XXrhVyCa1K4Zfga6W52ung= +github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= +github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= +github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= +github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= +github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= +github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= +github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= +github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0= +github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc= +github.com/maruel/natural v1.1.1 h1:Hja7XhhmvEFhcByqDoHz9QZbkWey+COd9xWfCfn1ioo= +github.com/maruel/natural v1.1.1/go.mod h1:v+Rfd79xlw1AgVBjbO0BEQmptqb5HvL/k9GRHB7ZKEg= +github.com/mfridman/tparse v0.18.0 h1:wh6dzOKaIwkUGyKgOntDW4liXSo37qg5AXbIhkMV3vE= +github.com/mfridman/tparse v0.18.0/go.mod h1:gEvqZTuCgEhPbYk/2lS3Kcxg1GmTxxU7kTC8DvP0i/A= +github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee 
h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8= +github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= +github.com/onsi/ginkgo/v2 v2.27.3 h1:ICsZJ8JoYafeXFFlFAG75a7CxMsJHwgKwtO+82SE9L8= +github.com/onsi/ginkgo/v2 v2.27.3/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo= +github.com/onsi/gomega v1.38.2 h1:eZCjf2xjZAqe+LeWvKb5weQ+NcPwX84kqJ0cZNxok2A= +github.com/onsi/gomega v1.38.2/go.mod h1:W2MJcYxRGV63b418Ai34Ud0hEdTVXq9NW9+Sx6uXf3k= +github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= +github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII= +github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o= +github.com/spf13/pflag v1.0.6 h1:jFzHGLGAlb3ruxLB8MhbI6A8+AQX/2eW4qeyNZXNp2o= +github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= +github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= +github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= +github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA= +github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= +github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY= +github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk= +github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA= +github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM= +github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4= +github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU= +github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY= +github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28= +github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= +github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= +github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI= +go.yaml.in/yaml/v2 v2.4.2/go.mod 
h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU= +go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc= +go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.27.0 h1:kb+q2PyFnEADO2IEF935ehFUXlWiNjJWtRNgBLSfbxQ= +golang.org/x/mod v0.27.0/go.mod h1:rWI627Fq0DEoudcK+MBkNkCe0EetEaDSwJJkCcjpazc= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= +golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE= +golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg= +golang.org/x/oauth2 v0.27.0 h1:da9Vo7/tDv5RH/7nZDz1eMGS/q1Vv1N/7FCrBhI9I3M= +golang.org/x/oauth2 v0.27.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw= +golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI= +golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= +golang.org/x/term v0.34.0 h1:O/2T7POpk0ZZ7MAzMeWFSg6S5IpWd/RXDlM9hgM3DR4= +golang.org/x/term v0.34.0/go.mod h1:5jC53AEywhIVebHgPVeg0mj8OD3VO9OzclacVrqpaAw= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng= +golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU= +golang.org/x/time v0.9.0 h1:EsRrnYcQiGH+5FfbgvV4AP7qEZstoyrHB0DzarOQ4ZY= +golang.org/x/time v0.9.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools 
v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.36.0 h1:kWS0uv/zsvHEle1LbV5LE8QujrxB3wfQyxHfhOk0Qkg= +golang.org/x/tools v0.36.0/go.mod h1:WBDiHKJK8YgLHlcQPYQzNCkUxUypCaa5ZegCVutKm+s= +golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +google.golang.org/protobuf v1.36.7 h1:IgrO7UwFQGJdRNXH/sQux4R1Dj1WAKcLElzeeRaXV2A= +google.golang.org/protobuf v1.36.7/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4= +gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= +gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= +gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= +gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +k8s.io/api v0.34.1 h1:jC+153630BMdlFukegoEL8E/yT7aLyQkIVuwhmwDgJM= +k8s.io/api v0.34.1/go.mod h1:SB80FxFtXn5/gwzCoN6QCtPD7Vbu5w2n1S0J5gFfTYk= +k8s.io/apimachinery v0.34.1 h1:dTlxFls/eikpJxmAC7MVE8oOeP1zryV7iRyIjB0gky4= +k8s.io/apimachinery v0.34.1/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw= +k8s.io/client-go v0.34.1 h1:ZUPJKgXsnKwVwmKKdPfw4tB58+7/Ik3CrjOEhsiZ7mY= +k8s.io/client-go v0.34.1/go.mod h1:kA8v0FP+tk6sZA0yKLRG67LWjqufAoSHA2xVGKw9Of8= +k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= +k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= +k8s.io/kube-openapi v0.0.0-20250710124328-f3f2b991d03b h1:MloQ9/bdJyIu9lb1PzujOPolHyvO06MXG5TUIj2mNAA= +k8s.io/kube-openapi v0.0.0-20250710124328-f3f2b991d03b/go.mod h1:UZ2yyWbFTpuhSbFhv24aGNOdoRdJZgsIObGBUaYVsts= +k8s.io/utils v0.0.0-20250604170112-4c0f3b243397 h1:hwvWFiBzdWw1FhfY1FooPn3kzWuJ8tmbZBHi4zVsl1Y= +k8s.io/utils v0.0.0-20250604170112-4c0f3b243397/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 h1:gBQPwqORJ8d8/YNZWEjoZs7npUVDpVXUUOFfW6CgAqE= +sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg= +sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU= +sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco= +sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE= +sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs= +sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4= diff --git 
a/status-reporter/pkg/config/config.go b/status-reporter/pkg/config/config.go new file mode 100644 index 0000000..0fdcad6 --- /dev/null +++ b/status-reporter/pkg/config/config.go @@ -0,0 +1,188 @@ +package config + +import ( + "fmt" + "os" + "path/filepath" + "strconv" + "strings" + "time" +) + +// Config represents the status reporter configuration +type Config struct { + JobName string + JobNamespace string + PodName string + ResultsPath string + PollIntervalSeconds int + MaxWaitTimeSeconds int + ConditionType string + LogLevel string + AdapterContainerName string +} + +const ( + DefaultResultsPath = "/results/adapter-result.json" + DefaultPollIntervalSeconds = 2 + DefaultMaxWaitTimeSeconds = 300 + DefaultConditionType = "Available" + DefaultLogLevel = "info" + DefaultAdapterContainerName = "" +) + +const ( + EnvJobName = "JOB_NAME" + EnvJobNamespace = "JOB_NAMESPACE" + EnvPodName = "POD_NAME" + EnvResultsPath = "RESULTS_PATH" + EnvPollIntervalSeconds = "POLL_INTERVAL_SECONDS" + EnvMaxWaitTimeSeconds = "MAX_WAIT_TIME_SECONDS" + EnvConditionType = "CONDITION_TYPE" + EnvLogLevel = "LOG_LEVEL" + EnvAdapterContainerName = "ADAPTER_CONTAINER_NAME" +) + +// ValidationError represents a validation error for configuration or data validation +type ValidationError struct { + Field string + Message string +} + +func (e *ValidationError) Error() string { + return e.Field + ": " + e.Message +} + +// Load loads configuration from environment variables +func Load() (*Config, error) { + jobName, err := getRequiredEnv(EnvJobName) + if err != nil { + return nil, err + } + + jobNamespace, err := getRequiredEnv(EnvJobNamespace) + if err != nil { + return nil, err + } + + podName, err := getRequiredEnv(EnvPodName) + if err != nil { + return nil, err + } + + resultsPath := getEnvOrDefault(EnvResultsPath, DefaultResultsPath) + conditionType := getEnvOrDefault(EnvConditionType, DefaultConditionType) + logLevel := getEnvOrDefault(EnvLogLevel, DefaultLogLevel) + adapterContainerName := getEnvOrDefault(EnvAdapterContainerName, DefaultAdapterContainerName) + + pollIntervalSeconds, err := getEnvIntOrDefault(EnvPollIntervalSeconds, DefaultPollIntervalSeconds) + if err != nil { + return nil, err + } + + maxWaitTimeSeconds, err := getEnvIntOrDefault(EnvMaxWaitTimeSeconds, DefaultMaxWaitTimeSeconds) + if err != nil { + return nil, err + } + + config := &Config{ + JobName: jobName, + JobNamespace: jobNamespace, + PodName: podName, + ResultsPath: resultsPath, + PollIntervalSeconds: pollIntervalSeconds, + MaxWaitTimeSeconds: maxWaitTimeSeconds, + ConditionType: conditionType, + LogLevel: logLevel, + AdapterContainerName: adapterContainerName, + } + + if err := config.Validate(); err != nil { + return nil, err + } + + return config, nil +} + +// Validate validates the configuration +func (c *Config) Validate() error { + if c.PollIntervalSeconds <= 0 { + return &ValidationError{Field: "PollIntervalSeconds", Message: "must be positive"} + } + if c.MaxWaitTimeSeconds <= 0 { + return &ValidationError{Field: "MaxWaitTimeSeconds", Message: "must be positive"} + } + if c.PollIntervalSeconds >= c.MaxWaitTimeSeconds { + return &ValidationError{Field: "PollIntervalSeconds", Message: "must be less than MaxWaitTimeSeconds"} + } + + if err := c.validateResultsPath(); err != nil { + return err + } + + return nil +} + +// validateResultsPath ensures the results path is safe +func (c *Config) validateResultsPath() error { + if strings.HasSuffix(c.ResultsPath, "/") { + return &ValidationError{ + Field: "ResultsPath", + Message: "path must 
be a file, not a directory", + } + } + + cleanPath := filepath.Clean(c.ResultsPath) + + if !filepath.IsAbs(cleanPath) { + return &ValidationError{ + Field: "ResultsPath", + Message: "path must be absolute", + } + } + + return nil +} + +// GetPollInterval returns poll interval as duration +func (c *Config) GetPollInterval() time.Duration { + return time.Duration(c.PollIntervalSeconds) * time.Second +} + +// GetMaxWaitTime returns max wait time as duration +func (c *Config) GetMaxWaitTime() time.Duration { + return time.Duration(c.MaxWaitTimeSeconds) * time.Second +} + +func getEnvOrDefault(key, defaultValue string) string { + value := strings.TrimSpace(os.Getenv(key)) + if value == "" { + return defaultValue + } + return value +} + +func getRequiredEnv(key string) (string, error) { + value := strings.TrimSpace(os.Getenv(key)) + if value == "" { + return "", &ValidationError{Field: key, Message: "required"} + } + return value, nil +} + +func getEnvIntOrDefault(key string, defaultValue int) (int, error) { + value := strings.TrimSpace(os.Getenv(key)) + if value == "" { + return defaultValue, nil + } + + intValue, err := strconv.Atoi(value) + if err != nil { + return 0, &ValidationError{ + Field: key, + Message: fmt.Sprintf("must be a valid integer, got: %s", value), + } + } + + return intValue, nil +} diff --git a/status-reporter/pkg/config/config_suite_test.go b/status-reporter/pkg/config/config_suite_test.go new file mode 100644 index 0000000..c6e29ba --- /dev/null +++ b/status-reporter/pkg/config/config_suite_test.go @@ -0,0 +1,13 @@ +package config_test + +import ( + "testing" + + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" +) + +func TestConfig(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "Config Suite") +} diff --git a/status-reporter/pkg/config/config_test.go b/status-reporter/pkg/config/config_test.go new file mode 100644 index 0000000..3d35a5f --- /dev/null +++ b/status-reporter/pkg/config/config_test.go @@ -0,0 +1,250 @@ +package config_test + +import ( + "os" + "time" + + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" + + "github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/config" +) + +var _ = Describe("Config", func() { + var originalEnv map[string]string + + BeforeEach(func() { + originalEnv = make(map[string]string) + envVars := []string{ + "JOB_NAME", "JOB_NAMESPACE", "POD_NAME", + "RESULTS_PATH", "POLL_INTERVAL_SECONDS", "MAX_WAIT_TIME_SECONDS", + "CONDITION_TYPE", "LOG_LEVEL", "ADAPTER_CONTAINER_NAME", + } + for _, key := range envVars { + originalEnv[key] = os.Getenv(key) + os.Unsetenv(key) + } + }) + + AfterEach(func() { + for key, value := range originalEnv { + if value != "" { + os.Setenv(key, value) + } else { + os.Unsetenv(key) + } + } + }) + + Describe("Load", func() { + Context("with valid required configuration", func() { + BeforeEach(func() { + os.Setenv("JOB_NAME", "test-job") + os.Setenv("JOB_NAMESPACE", "test-namespace") + os.Setenv("POD_NAME", "test-pod") + }) + + It("loads configuration successfully", func() { + cfg, err := config.Load() + Expect(err).NotTo(HaveOccurred()) + Expect(cfg).NotTo(BeNil()) + Expect(cfg.JobName).To(Equal("test-job")) + Expect(cfg.JobNamespace).To(Equal("test-namespace")) + Expect(cfg.PodName).To(Equal("test-pod")) + }) + + It("uses default values for optional fields", func() { + cfg, err := config.Load() + Expect(err).NotTo(HaveOccurred()) + Expect(cfg.ResultsPath).To(Equal("/results/adapter-result.json")) + Expect(cfg.PollIntervalSeconds).To(Equal(2)) + Expect(cfg.MaxWaitTimeSeconds).To(Equal(300)) + Expect(cfg.ConditionType).To(Equal("Available")) + Expect(cfg.LogLevel).To(Equal("info")) + Expect(cfg.AdapterContainerName).To(Equal("")) + }) + + It("uses custom values when provided", func() { + os.Setenv("RESULTS_PATH", "/results/custom/path.json") + os.Setenv("POLL_INTERVAL_SECONDS", "5") + os.Setenv("MAX_WAIT_TIME_SECONDS", "600") + os.Setenv("CONDITION_TYPE", "Ready") + os.Setenv("LOG_LEVEL", "debug") + os.Setenv("ADAPTER_CONTAINER_NAME", "my-adapter") + + cfg, err := config.Load() + Expect(err).NotTo(HaveOccurred()) + Expect(cfg.ResultsPath).To(Equal("/results/custom/path.json")) + Expect(cfg.PollIntervalSeconds).To(Equal(5)) + Expect(cfg.MaxWaitTimeSeconds).To(Equal(600)) + Expect(cfg.ConditionType).To(Equal("Ready")) + Expect(cfg.LogLevel).To(Equal("debug")) + Expect(cfg.AdapterContainerName).To(Equal("my-adapter")) + }) + + It("trims whitespace from values", func() { + os.Setenv("JOB_NAME", " test-job ") + os.Setenv("JOB_NAMESPACE", " test-namespace ") + + cfg, err := config.Load() + Expect(err).NotTo(HaveOccurred()) + Expect(cfg.JobName).To(Equal("test-job")) + Expect(cfg.JobNamespace).To(Equal("test-namespace")) + }) + }) + + Context("with missing required configuration", func() { + It("returns error when JOB_NAME is missing", func() { + os.Setenv("JOB_NAMESPACE", "test-namespace") + os.Setenv("POD_NAME", "test-pod") + + _, err := config.Load() + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("JOB_NAME")) + }) + + It("returns error when JOB_NAMESPACE is missing", func() { + os.Setenv("JOB_NAME", "test-job") + os.Setenv("POD_NAME", "test-pod") + + _, err := config.Load() + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("JOB_NAMESPACE")) + }) + + It("returns error when POD_NAME is missing", func() { + os.Setenv("JOB_NAME", "test-job") + os.Setenv("JOB_NAMESPACE", "test-namespace") + + _, err := config.Load() + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("POD_NAME")) + }) + }) + + Context("with invalid integer values", 
func() { + BeforeEach(func() { + os.Setenv("JOB_NAME", "test-job") + os.Setenv("JOB_NAMESPACE", "test-namespace") + os.Setenv("POD_NAME", "test-pod") + }) + + It("returns error for invalid POLL_INTERVAL_SECONDS", func() { + os.Setenv("POLL_INTERVAL_SECONDS", "invalid") + + _, err := config.Load() + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("POLL_INTERVAL_SECONDS")) + }) + + It("returns error for invalid MAX_WAIT_TIME_SECONDS", func() { + os.Setenv("MAX_WAIT_TIME_SECONDS", "invalid") + + _, err := config.Load() + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("MAX_WAIT_TIME_SECONDS")) + }) + }) + }) + + Describe("Validate", func() { + Context("with valid configuration", func() { + It("validates successfully", func() { + cfg := &config.Config{ + JobName: "test-job", + JobNamespace: "test-namespace", + PodName: "test-pod", + ResultsPath: "/results/result.json", + PollIntervalSeconds: 2, + MaxWaitTimeSeconds: 300, + } + Expect(cfg.Validate()).To(Succeed()) + }) + }) + + Context("with invalid timing parameters", func() { + It("returns error for zero poll interval", func() { + cfg := &config.Config{ + ResultsPath: "/results/result.json", + PollIntervalSeconds: 0, + MaxWaitTimeSeconds: 300, + } + err := cfg.Validate() + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("must be positive")) + }) + + It("returns error for negative poll interval", func() { + cfg := &config.Config{ + ResultsPath: "/results/result.json", + PollIntervalSeconds: -1, + MaxWaitTimeSeconds: 300, + } + err := cfg.Validate() + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("must be positive")) + }) + + It("returns error for zero max wait time", func() { + cfg := &config.Config{ + ResultsPath: "/results/result.json", + PollIntervalSeconds: 2, + MaxWaitTimeSeconds: 0, + } + err := cfg.Validate() + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("must be positive")) + }) + + It("returns error when poll interval >= max wait time", func() { + cfg := &config.Config{ + ResultsPath: "/results/result.json", + PollIntervalSeconds: 300, + MaxWaitTimeSeconds: 300, + } + err := cfg.Validate() + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("must be less than MaxWaitTimeSeconds")) + }) + }) + + Context("with invalid results path", func() { + It("returns error for relative path", func() { + cfg := &config.Config{ + ResultsPath: "results/result.json", + PollIntervalSeconds: 2, + MaxWaitTimeSeconds: 300, + } + err := cfg.Validate() + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("must be absolute")) + }) + + + It("returns error for directory path", func() { + cfg := &config.Config{ + ResultsPath: "/results/", + PollIntervalSeconds: 2, + MaxWaitTimeSeconds: 300, + } + err := cfg.Validate() + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("must be a file")) + }) + }) + }) + + Describe("GetPollInterval", func() { + It("returns poll interval as duration", func() { + cfg := &config.Config{PollIntervalSeconds: 5} + Expect(cfg.GetPollInterval()).To(Equal(5 * time.Second)) + }) + }) + + Describe("GetMaxWaitTime", func() { + It("returns max wait time as duration", func() { + cfg := &config.Config{MaxWaitTimeSeconds: 600} + Expect(cfg.GetMaxWaitTime()).To(Equal(600 * time.Second)) + }) + }) +}) diff --git a/status-reporter/pkg/k8s/client.go b/status-reporter/pkg/k8s/client.go new file mode 100644 index 0000000..2502da1 --- /dev/null +++ 
b/status-reporter/pkg/k8s/client.go @@ -0,0 +1,132 @@ +package k8s + +import ( + "context" + "fmt" + "time" + + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/kubernetes" + "k8s.io/client-go/rest" + "k8s.io/client-go/util/retry" +) + +const ( + // StatusReporterContainerName is the name of the status reporter sidecar container + StatusReporterContainerName = "status-reporter" +) + +// Client wraps Kubernetes client operations +type Client struct { + clientset *kubernetes.Clientset + namespace string + jobName string +} + +// NewClient creates a new Kubernetes client using in-cluster config +func NewClient(namespace, jobName string) (*Client, error) { + config, err := rest.InClusterConfig() + if err != nil { + return nil, fmt.Errorf("failed to get in-cluster config: %w", err) + } + + clientset, err := kubernetes.NewForConfig(config) + if err != nil { + return nil, fmt.Errorf("failed to create clientset: %w", err) + } + + return &Client{ + clientset: clientset, + namespace: namespace, + jobName: jobName, + }, nil +} + +// JobCondition represents a Kubernetes Job condition +type JobCondition struct { + Type string + Status string + Reason string + Message string + LastTransitionTime time.Time +} + +// UpdateJobStatus updates the Job status with the given condition +func (c *Client) UpdateJobStatus(ctx context.Context, condition JobCondition) error { + return retry.RetryOnConflict(retry.DefaultBackoff, func() error { + job, err := c.clientset.BatchV1().Jobs(c.namespace).Get(ctx, c.jobName, metav1.GetOptions{}) + if err != nil { + if errors.IsNotFound(err) { + return fmt.Errorf("job %s/%s not found: %w", c.namespace, c.jobName, err) + } + return err + } + + transitionTime := condition.LastTransitionTime + if transitionTime.IsZero() { + transitionTime = time.Now() + } + + newCondition := batchv1.JobCondition{ + Type: batchv1.JobConditionType(condition.Type), + Status: corev1.ConditionStatus(condition.Status), + LastTransitionTime: metav1.NewTime(transitionTime), + Reason: condition.Reason, + Message: condition.Message, + } + + conditionUpdated := false + for i, existingCondition := range job.Status.Conditions { + if existingCondition.Type == newCondition.Type { + job.Status.Conditions[i] = newCondition + conditionUpdated = true + break + } + } + + if !conditionUpdated { + job.Status.Conditions = append(job.Status.Conditions, newCondition) + } + + _, err = c.clientset.BatchV1().Jobs(c.namespace).UpdateStatus(ctx, job, metav1.UpdateOptions{}) + return err + }) +} + +// GetPodStatus retrieves pod status by name +func (c *Client) GetPodStatus(ctx context.Context, podName string) (*corev1.PodStatus, error) { + pod, err := c.clientset.CoreV1().Pods(c.namespace).Get(ctx, podName, metav1.GetOptions{}) + if err != nil { + return nil, fmt.Errorf("failed to get pod: namespace=%s pod=%s: %w", c.namespace, podName, err) + } + + return &pod.Status, nil +} + +// GetAdapterContainerStatus finds the adapter container status +func (c *Client) GetAdapterContainerStatus(ctx context.Context, podName, containerName string) (*corev1.ContainerStatus, error) { + podStatus, err := c.GetPodStatus(ctx, podName) + if err != nil { + return nil, err + } + + if containerName != "" { + for _, cs := range podStatus.ContainerStatuses { + if cs.Name == containerName { + return &cs, nil + } + } + return nil, fmt.Errorf("container not found: namespace=%s pod=%s container=%s", c.namespace, podName, containerName) + 
} + + for _, cs := range podStatus.ContainerStatuses { + if cs.Name != StatusReporterContainerName { + return &cs, nil + } + } + + return nil, fmt.Errorf("adapter container not found: namespace=%s pod=%s", c.namespace, podName) +} diff --git a/status-reporter/pkg/k8s/client_test.go b/status-reporter/pkg/k8s/client_test.go new file mode 100644 index 0000000..2aff20c --- /dev/null +++ b/status-reporter/pkg/k8s/client_test.go @@ -0,0 +1,42 @@ +package k8s_test + +import ( + "time" + + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + + "github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/k8s" +) + +var _ = Describe("JobCondition", func() { + Describe("creation", func() { + It("can be created with all fields", func() { + now := time.Now() + condition := k8s.JobCondition{ + Type: "Available", + Status: "True", + Reason: "TestPassed", + Message: "Test completed successfully", + LastTransitionTime: now, + } + + Expect(condition.Type).To(Equal("Available")) + Expect(condition.Status).To(Equal("True")) + Expect(condition.Reason).To(Equal("TestPassed")) + Expect(condition.Message).To(Equal("Test completed successfully")) + Expect(condition.LastTransitionTime).To(Equal(now)) + }) + + It("can be created with zero LastTransitionTime", func() { + condition := k8s.JobCondition{ + Type: "Available", + Status: "False", + Reason: "TestFailed", + Message: "Test failed", + } + + Expect(condition.LastTransitionTime.IsZero()).To(BeTrue()) + }) + }) +}) diff --git a/status-reporter/pkg/k8s/k8s_suite_test.go b/status-reporter/pkg/k8s/k8s_suite_test.go new file mode 100644 index 0000000..3985e3b --- /dev/null +++ b/status-reporter/pkg/k8s/k8s_suite_test.go @@ -0,0 +1,13 @@ +package k8s_test + +import ( + "testing" + + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" +) + +func TestK8s(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "K8s Suite") +} diff --git a/status-reporter/pkg/reporter/reporter.go b/status-reporter/pkg/reporter/reporter.go new file mode 100644 index 0000000..31e999e --- /dev/null +++ b/status-reporter/pkg/reporter/reporter.go @@ -0,0 +1,405 @@ +package reporter + +import ( + "context" + "errors" + "fmt" + "log" + "os" + "sync" + "time" + + corev1 "k8s.io/api/core/v1" + + "github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/k8s" + "github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/result" +) + +const ( + ConditionStatusTrue = "True" + ConditionStatusFalse = "False" + + ReasonAdapterCrashed = "AdapterCrashed" + ReasonAdapterOOMKilled = "AdapterOOMKilled" + ReasonAdapterExitedWithError = "AdapterExitedWithError" + ReasonAdapterTimeout = "AdapterTimeout" + ReasonInvalidResultFormat = "InvalidResultFormat" + + ContainerReasonOOMKilled = "OOMKilled" + + // DefaultContainerStatusCheckInterval Default container status check interval - checked less frequently than file polling to reduce a K8s API load + DefaultContainerStatusCheckInterval = 10 * time.Second +) + +// K8sClientInterface defines the k8s operations needed by StatusReporter +type K8sClientInterface interface { + UpdateJobStatus(ctx context.Context, condition k8s.JobCondition) error + GetAdapterContainerStatus(ctx context.Context, podName, containerName string) (*corev1.ContainerStatus, error) +} + +// pollChannels encapsulates the channels used for communication between polling goroutines and the main Run loop +type pollChannels struct { + result chan *result.AdapterResult + error chan error + terminated chan *corev1.ContainerStateTerminated + done chan struct{} +} + +// StatusReporter is the main status reporter +type StatusReporter struct { + resultsPath string + pollInterval time.Duration + maxWaitTime time.Duration + containerStatusCheckInterval time.Duration + conditionType string + podName string + adapterContainerName string + k8sClient K8sClientInterface + parser *result.Parser +} + +// NewReporter creates a new status reporter +func NewReporter(resultsPath string, pollInterval, maxWaitTime time.Duration, conditionType, podName, adapterContainerName, jobName, jobNamespace string) (*StatusReporter, error) { + k8sClient, err := k8s.NewClient(jobNamespace, jobName) + if err != nil { + return nil, fmt.Errorf("failed to create k8s client: %w", err) + } + + return newReporterWithClient(resultsPath, pollInterval, maxWaitTime, DefaultContainerStatusCheckInterval, conditionType, podName, adapterContainerName, k8sClient), nil +} + +// NewReporterWithClient creates a new status reporter with a custom k8s client (for testing) +func NewReporterWithClient(resultsPath string, pollInterval, maxWaitTime time.Duration, conditionType, podName, adapterContainerName string, k8sClient K8sClientInterface) *StatusReporter { + return newReporterWithClient(resultsPath, pollInterval, maxWaitTime, DefaultContainerStatusCheckInterval, conditionType, podName, adapterContainerName, k8sClient) +} + +// NewReporterWithClientAndIntervals creates a new status reporter with custom intervals (for testing) +func NewReporterWithClientAndIntervals(resultsPath string, pollInterval, maxWaitTime, containerStatusCheckInterval time.Duration, conditionType, podName, adapterContainerName string, k8sClient K8sClientInterface) *StatusReporter { + return newReporterWithClient(resultsPath, pollInterval, maxWaitTime, 
containerStatusCheckInterval, conditionType, podName, adapterContainerName, k8sClient) +} + +func newReporterWithClient(resultsPath string, pollInterval, maxWaitTime, containerStatusCheckInterval time.Duration, conditionType, podName, adapterContainerName string, k8sClient K8sClientInterface) *StatusReporter { + return &StatusReporter{ + resultsPath: resultsPath, + pollInterval: pollInterval, + maxWaitTime: maxWaitTime, + containerStatusCheckInterval: containerStatusCheckInterval, + conditionType: conditionType, + podName: podName, + adapterContainerName: adapterContainerName, + k8sClient: k8sClient, + parser: result.NewParser(), + } +} + +// Run starts the reporter and blocks until completion +func (r *StatusReporter) Run(ctx context.Context) error { + log.Printf("Status reporter starting...") + log.Printf(" Pod: %s", r.podName) + log.Printf(" Results path: %s", r.resultsPath) + log.Printf(" Poll interval: %s", r.pollInterval) + log.Printf(" Max wait time: %s", r.maxWaitTime) + + timeoutCtx, cancel := context.WithTimeout(ctx, r.maxWaitTime) + defer cancel() + + // Buffered channels (size 1) prevent goroutine leaks if the main select has already + // chosen another case when a sender tries to send + channels := &pollChannels{ + result: make(chan *result.AdapterResult, 1), + error: make(chan error, 1), + terminated: make(chan *corev1.ContainerStateTerminated, 1), + done: make(chan struct{}), + } + + var wg sync.WaitGroup + wg.Add(2) + go r.pollForResultFile(timeoutCtx, channels, &wg) + go r.monitorContainerStatus(timeoutCtx, channels, &wg) + + var reportErr error + select { + case adapterResult := <-channels.result: + reportErr = r.UpdateFromResult(ctx, adapterResult) + case err := <-channels.error: + reportErr = r.UpdateFromError(ctx, err) + case terminated := <-channels.terminated: + reportErr = r.HandleTermination(ctx, terminated) + case <-timeoutCtx.Done(): + // Give precedence to results/errors/termination that may have arrived just before timeout + select { + case adapterResult := <-channels.result: + reportErr = r.UpdateFromResult(ctx, adapterResult) + case err := <-channels.error: + reportErr = r.UpdateFromError(ctx, err) + case terminated := <-channels.terminated: + reportErr = r.HandleTermination(ctx, terminated) + default: + reportErr = r.UpdateFromTimeout(ctx) + } + } + + close(channels.done) + wg.Wait() + + return reportErr +} + +// pollForResultFile polls for the result file at regular intervals. +// This is separated from container monitoring to allow fast polling of the local filesystem +// without incurring the cost of K8s API calls on every iteration. 
+func (r *StatusReporter) pollForResultFile(ctx context.Context, channels *pollChannels, wg *sync.WaitGroup) { + defer wg.Done() + + ticker := time.NewTicker(r.pollInterval) + defer ticker.Stop() + + log.Printf("Polling for result file at %s (interval: %s)...", r.resultsPath, r.pollInterval) + + for { + select { + case <-channels.done: + log.Printf("Result file polling stopped by shutdown signal") + return + case <-ctx.Done(): + log.Printf("Result file polling cancelled: %v", ctx.Err()) + return + case <-ticker.C: + // Check for result file (fast local filesystem operation) + if _, err := os.Stat(r.resultsPath); err != nil { + if os.IsNotExist(err) { + continue + } + // Unexpected stat error (e.g., permission denied) + select { + case channels.error <- fmt.Errorf("failed to stat result file path=%s: %w", r.resultsPath, err): + case <-channels.done: + return + } + return + } + + log.Printf("Result file found, parsing...") + adapterResult, err := r.parser.ParseFile(r.resultsPath) + if err != nil { + select { + case channels.error <- err: + case <-channels.done: + return + } + return + } + + log.Printf("Result parsed successfully: status=%s, reason=%s", adapterResult.Status, adapterResult.Reason) + select { + case channels.result <- adapterResult: + case <-channels.done: + return + } + return + } + } +} + +// checkContainerStatus checks if the adapter container has terminated. +// Returns true if terminated (and sends notification), false otherwise. +func (r *StatusReporter) checkContainerStatus(ctx context.Context, channels *pollChannels) bool { + containerStatus, err := r.k8sClient.GetAdapterContainerStatus(ctx, r.podName, r.adapterContainerName) + if err != nil { + log.Printf("Warning: failed to get container status pod=%s container=%s: %v", + r.podName, r.adapterContainerName, err) + return false + } + + if containerStatus != nil && containerStatus.State.Terminated != nil { + log.Printf("Container terminated: pod=%s container=%s reason=%s exitCode=%d", + r.podName, r.adapterContainerName, + containerStatus.State.Terminated.Reason, + containerStatus.State.Terminated.ExitCode) + select { + case channels.terminated <- containerStatus.State.Terminated: + case <-channels.done: + } + return true + } + return false +} + +// monitorContainerStatus monitors the adapter container status at regular intervals. +// This is separated from file polling to reduce K8s API load - we check container status +// less frequently (every 10s by default) compared to file polling (typically 50-100ms). +func (r *StatusReporter) monitorContainerStatus(ctx context.Context, channels *pollChannels, wg *sync.WaitGroup) { + defer wg.Done() + + log.Printf("Monitoring container status for pod=%s container=%s (interval: %s)...", + r.podName, r.adapterContainerName, r.containerStatusCheckInterval) + + // Perform immediate check before starting ticker + if r.checkContainerStatus(ctx, channels) { + return + } + + ticker := time.NewTicker(r.containerStatusCheckInterval) + defer ticker.Stop() + + for { + select { + case <-channels.done: + log.Printf("Container status monitoring stopped by shutdown signal") + return + case <-ctx.Done(): + log.Printf("Container status monitoring cancelled: %v", ctx.Err()) + return + case <-ticker.C: + if r.checkContainerStatus(ctx, channels) { + return + } + } + } +} + +// HandleTermination handles container termination by checking for result file first. +// Priority order: +// 1. If valid result file exists -> use it (adapter's intended status) +// 2. 
If result file missing or invalid -> use container exit code +func (r *StatusReporter) HandleTermination(ctx context.Context, terminated *corev1.ContainerStateTerminated) error { + log.Printf("Adapter container terminated: reason=%s, exitCode=%d", terminated.Reason, terminated.ExitCode) + + adapterResult, err := r.tryParseResultFile() + switch { + case err == nil && adapterResult != nil: + // Happy path: valid result file exists + log.Printf("Using result file: status=%s, reason=%s", adapterResult.Status, adapterResult.Reason) + return r.UpdateFromResult(ctx, adapterResult) + + case errors.Is(err, os.ErrNotExist): + // Expected: adapter terminated without producing result file + log.Printf("No result file found, using container exit code") + + case err != nil: + // Unexpected: file exists but can't read/parse it + log.Printf("Warning: result file error: %v. Falling back to container exit code", err) + } + + // No valid result file, update based on container termination state + return r.UpdateFromTerminatedContainer(ctx, terminated) +} + +// tryParseResultFile attempts to read and parse the result file. +// Returns (nil, os.ErrNotExist) if file doesn't exist, or (nil, err) for other errors. +func (r *StatusReporter) tryParseResultFile() (*result.AdapterResult, error) { + if _, err := os.Stat(r.resultsPath); err != nil { + return nil, err // Could be ErrNotExist or permission error + } + + adapterResult, err := r.parser.ParseFile(r.resultsPath) + if err != nil { + return nil, fmt.Errorf("parse failed: %w", err) + } + + return adapterResult, nil +} + +// UpdateFromResult updates Job status from adapter result +func (r *StatusReporter) UpdateFromResult(ctx context.Context, adapterResult *result.AdapterResult) error { + log.Printf("Updating Job status from adapter result...") + + conditionStatus := ConditionStatusTrue + if !adapterResult.IsSuccess() { + conditionStatus = ConditionStatusFalse + } + + condition := k8s.JobCondition{ + Type: r.conditionType, + Status: conditionStatus, + Reason: adapterResult.Reason, + Message: adapterResult.Message, + } + + if err := r.k8sClient.UpdateJobStatus(ctx, condition); err != nil { + return fmt.Errorf("failed to update job status: pod=%s condition=%s: %w", r.podName, r.conditionType, err) + } + + log.Printf("Job status updated successfully: %s=%s (reason: %s)", r.conditionType, conditionStatus, adapterResult.Reason) + return nil +} + +// UpdateFromError updates Job status when parsing fails +func (r *StatusReporter) UpdateFromError(ctx context.Context, err error) error { + log.Printf("Failed to parse result file: %v", err) + + condition := k8s.JobCondition{ + Type: r.conditionType, + Status: ConditionStatusFalse, + Reason: ReasonInvalidResultFormat, + Message: fmt.Sprintf("Failed to parse adapter result: %v", err), + } + + if updateErr := r.k8sClient.UpdateJobStatus(ctx, condition); updateErr != nil { + return fmt.Errorf("failed to update job status: %w", updateErr) + } + + log.Printf("Job status updated: %s=False (reason: %s)", r.conditionType, ReasonInvalidResultFormat) + return err +} + +// UpdateFromTimeout updates Job status when timeout occurs. +// As a last attempt, checks if container has terminated to provide more specific error info. 
+func (r *StatusReporter) UpdateFromTimeout(ctx context.Context) error { + log.Printf("Timeout waiting for adapter results (max wait: %s)", r.maxWaitTime) + log.Printf("Checking adapter container status: pod=%s container=%s", r.podName, r.adapterContainerName) + + containerStatus, err := r.k8sClient.GetAdapterContainerStatus(ctx, r.podName, r.adapterContainerName) + if err != nil { + log.Printf("Warning: failed to get container status pod=%s container=%s: %v", + r.podName, r.adapterContainerName, err) + } else if containerStatus != nil && containerStatus.State.Terminated != nil { + return r.UpdateFromTerminatedContainer(ctx, containerStatus.State.Terminated) + } + + condition := k8s.JobCondition{ + Type: r.conditionType, + Status: ConditionStatusFalse, + Reason: ReasonAdapterTimeout, + Message: fmt.Sprintf("Adapter did not produce results within %s", r.maxWaitTime), + } + + if err := r.k8sClient.UpdateJobStatus(ctx, condition); err != nil { + return fmt.Errorf("failed to update job status: %w", err) + } + + log.Printf("Job status updated: %s=False (reason: %s)", r.conditionType, ReasonAdapterTimeout) + return errors.New("timeout waiting for adapter results") +} + +// UpdateFromTerminatedContainer updates Job status from container termination state +func (r *StatusReporter) UpdateFromTerminatedContainer(ctx context.Context, terminated *corev1.ContainerStateTerminated) error { + var reason, message string + + if terminated.Reason == ContainerReasonOOMKilled { + reason = ReasonAdapterOOMKilled + message = "Adapter container was killed due to out of memory (OOMKilled)" + } else if terminated.ExitCode != 0 { + reason = ReasonAdapterExitedWithError + message = fmt.Sprintf("Adapter container exited with code %d: %s", terminated.ExitCode, terminated.Reason) + } else { + reason = ReasonAdapterCrashed + message = fmt.Sprintf("Adapter container terminated: %s", terminated.Reason) + } + + log.Printf("Adapter container terminated: reason=%s, exitCode=%d", terminated.Reason, terminated.ExitCode) + + condition := k8s.JobCondition{ + Type: r.conditionType, + Status: ConditionStatusFalse, + Reason: reason, + Message: message, + } + + if err := r.k8sClient.UpdateJobStatus(ctx, condition); err != nil { + return fmt.Errorf("failed to update job status: %w", err) + } + + log.Printf("Job status updated: %s=False (reason: %s)", r.conditionType, reason) + return fmt.Errorf("adapter container terminated: %s", message) +} diff --git a/status-reporter/pkg/reporter/reporter_suite_test.go b/status-reporter/pkg/reporter/reporter_suite_test.go new file mode 100644 index 0000000..0614058 --- /dev/null +++ b/status-reporter/pkg/reporter/reporter_suite_test.go @@ -0,0 +1,13 @@ +package reporter_test + +import ( + "testing" + + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" +) + +func TestReporter(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "Reporter Suite") +} diff --git a/status-reporter/pkg/reporter/reporter_test.go b/status-reporter/pkg/reporter/reporter_test.go new file mode 100644 index 0000000..4c7dd16 --- /dev/null +++ b/status-reporter/pkg/reporter/reporter_test.go @@ -0,0 +1,843 @@ +package reporter_test + +import ( + "context" + "errors" + "os" + "path/filepath" + "time" + + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" + corev1 "k8s.io/api/core/v1" + + "github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/k8s" + "github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/reporter" + "github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/reporter/testhelpers" + "github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/result" +) + +var _ = Describe("Reporter", func() { + var ( + r *reporter.StatusReporter + mock *testhelpers.MockK8sClient + ctx context.Context + ) + + BeforeEach(func() { + mock = testhelpers.NewMockK8sClient() + ctx = context.Background() + r = reporter.NewReporterWithClient( + "/results/test.json", + 2*time.Second, + 300*time.Second, + "Available", + "test-pod", + "adapter", + mock, + ) + }) + + Describe("Constants", func() { + It("exports expected constant values", func() { + Expect(reporter.ConditionStatusTrue).To(Equal("True")) + Expect(reporter.ConditionStatusFalse).To(Equal("False")) + Expect(reporter.ReasonAdapterCrashed).To(Equal("AdapterCrashed")) + Expect(reporter.ReasonAdapterOOMKilled).To(Equal("AdapterOOMKilled")) + Expect(reporter.ReasonAdapterExitedWithError).To(Equal("AdapterExitedWithError")) + Expect(reporter.ReasonAdapterTimeout).To(Equal("AdapterTimeout")) + Expect(reporter.ReasonInvalidResultFormat).To(Equal("InvalidResultFormat")) + }) + }) + + Describe("reporter.NewReporterWithClient", func() { + It("creates a reporter with custom condition type", func() { + customRep := reporter.NewReporterWithClient( + "/results/test.json", + 2*time.Second, + 300*time.Second, + "Ready", + "test-pod", + "adapter", + mock, + ) + Expect(customRep).NotTo(BeNil()) + }) + + It("uses default condition type when empty", func() { + customRep := reporter.NewReporterWithClient( + "/results/test.json", + 2*time.Second, + 300*time.Second, + "", + "test-pod", + "adapter", + mock, + ) + Expect(customRep).NotTo(BeNil()) + }) + }) + + Describe("updateFromResult", func() { + Context("with successful adapter result", func() { + It("updates job status to True", func() { + adapterResult := &result.AdapterResult{ + Status: result.StatusSuccess, + Reason: "ValidationPassed", + Message: "All validations passed", + } + + err := r.UpdateFromResult(ctx, adapterResult) + + Expect(err).NotTo(HaveOccurred()) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("True")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal("ValidationPassed")) + Expect(mock.LastUpdatedCondition.Message).To(Equal("All validations passed")) + }) + }) + + Context("with failed adapter result", func() { + It("updates job status to False", func() { + adapterResult := &result.AdapterResult{ + Status: result.StatusFailure, + Reason: "ValidationFailed", + Message: "Some validations failed", + } + + err := r.UpdateFromResult(ctx, adapterResult) + + Expect(err).NotTo(HaveOccurred()) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("False")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal("ValidationFailed")) + Expect(mock.LastUpdatedCondition.Message).To(Equal("Some validations failed")) + }) + }) + + Context("when k8s client returns error", func() { + It("returns the error", func() { + mock.UpdateJobStatusFunc = func(ctx context.Context, condition k8s.JobCondition) error { + return errors.New("k8s update failed") + } + + adapterResult := &result.AdapterResult{ + Status: result.StatusSuccess, + Reason: 
"ValidationPassed", + Message: "All validations passed", + } + + err := r.UpdateFromResult(ctx, adapterResult) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("failed to update job status")) + Expect(err.Error()).To(ContainSubstring("k8s update failed")) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("True")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal("ValidationPassed")) + Expect(mock.LastUpdatedCondition.Message).To(Equal("All validations passed")) + }) + }) + + Context("with custom condition type", func() { + It("uses the custom condition type", func() { + customRep := reporter.NewReporterWithClient( + "/results/test.json", + 2*time.Second, + 300*time.Second, + "Ready", + "test-pod", + "adapter", + mock, + ) + + adapterResult := &result.AdapterResult{ + Status: result.StatusSuccess, + Reason: "ValidationPassed", + Message: "All validations passed", + } + + err := customRep.UpdateFromResult(ctx, adapterResult) + + Expect(err).NotTo(HaveOccurred()) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Ready")) + }) + }) + }) + + Describe("updateFromError", func() { + It("updates job status with InvalidResultFormat reason", func() { + parseErr := errors.New("JSON parsing failed") + + err := r.UpdateFromError(ctx, parseErr) + + Expect(err).To(Equal(parseErr)) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("False")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonInvalidResultFormat)) + Expect(mock.LastUpdatedCondition.Message).To(ContainSubstring("Failed to parse adapter result")) + Expect(mock.LastUpdatedCondition.Message).To(ContainSubstring("JSON parsing failed")) + }) + + It("returns error when k8s client fails", func() { + mock.UpdateJobStatusFunc = func(ctx context.Context, condition k8s.JobCondition) error { + return errors.New("k8s update failed") + } + + parseErr := errors.New("JSON parsing failed") + err := r.UpdateFromError(ctx, parseErr) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("failed to update job status")) + Expect(err.Error()).To(ContainSubstring("k8s update failed")) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("False")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonInvalidResultFormat)) + }) + }) + + Describe("handleTermination", func() { + var ( + tempDir string + resultsPath string + ) + + BeforeEach(func() { + tempDir = GinkgoT().TempDir() + resultsPath = filepath.Join(tempDir, "adapter-result.json") + r = reporter.NewReporterWithClient( + resultsPath, + 2*time.Second, + 300*time.Second, + "Available", + "test-pod", + "adapter", + mock, + ) + }) + + Context("when result file exists and is valid", func() { + It("uses result file instead of exit code", func() { + // Write a valid result file + err := os.WriteFile(resultsPath, []byte(`{"status":"success","reason":"AllChecksPassed","message":"All validations passed"}`), 0644) + Expect(err).NotTo(HaveOccurred()) + + terminated := &corev1.ContainerStateTerminated{ + Reason: "Error", + ExitCode: 1, + } + + err = r.HandleTermination(ctx, terminated) + + Expect(err).NotTo(HaveOccurred()) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("True")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal("AllChecksPassed")) + 
Expect(mock.LastUpdatedCondition.Message).To(Equal("All validations passed")) + }) + }) + + Context("when result file exists but is invalid", func() { + It("falls back to using exit code", func() { + // Write an invalid result file + err := os.WriteFile(resultsPath, []byte(`{invalid json`), 0644) + Expect(err).NotTo(HaveOccurred()) + + terminated := &corev1.ContainerStateTerminated{ + Reason: "Error", + ExitCode: 1, + } + + err = r.HandleTermination(ctx, terminated) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("adapter container terminated")) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("False")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterExitedWithError)) + Expect(mock.LastUpdatedCondition.Message).To(ContainSubstring("Adapter container exited with code 1")) + }) + }) + + Context("when result file does not exist", func() { + It("uses exit code to determine status", func() { + terminated := &corev1.ContainerStateTerminated{ + Reason: "Error", + ExitCode: 1, + } + + err := r.HandleTermination(ctx, terminated) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("adapter container terminated")) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("False")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterExitedWithError)) + Expect(mock.LastUpdatedCondition.Message).To(ContainSubstring("Adapter container exited with code 1")) + }) + }) + + Context("when container was OOMKilled", func() { + It("uses OOMKilled reason when no result file", func() { + terminated := &corev1.ContainerStateTerminated{ + Reason: "OOMKilled", + ExitCode: 137, + } + + err := r.HandleTermination(ctx, terminated) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("adapter container terminated")) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("False")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterOOMKilled)) + Expect(mock.LastUpdatedCondition.Message).To(Equal("Adapter container was killed due to out of memory (OOMKilled)")) + }) + }) + }) + + Describe("updateFromTerminatedContainer", func() { + Context("when container was OOMKilled", func() { + It("updates with AdapterOOMKilled reason", func() { + terminated := &corev1.ContainerStateTerminated{ + Reason: "OOMKilled", + ExitCode: 137, + } + + err := r.UpdateFromTerminatedContainer(ctx, terminated) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("adapter container terminated")) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("False")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterOOMKilled)) + Expect(mock.LastUpdatedCondition.Message).To(Equal("Adapter container was killed due to out of memory (OOMKilled)")) + }) + }) + + Context("when container exited with non-zero code", func() { + It("updates with AdapterExitedWithError reason", func() { + terminated := &corev1.ContainerStateTerminated{ + Reason: "Error", + ExitCode: 1, + } + + err := r.UpdateFromTerminatedContainer(ctx, terminated) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("adapter container terminated")) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + 
Expect(mock.LastUpdatedCondition.Status).To(Equal("False")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterExitedWithError)) + Expect(mock.LastUpdatedCondition.Message).To(ContainSubstring("Adapter container exited with code 1")) + Expect(mock.LastUpdatedCondition.Message).To(ContainSubstring("Error")) + }) + }) + + Context("when container exited with zero code", func() { + // This test case is valid because updateFromTerminatedContainer is only called + // when we've reached the timeout path (no result file was produced). + // If the container exited with code 0 but didn't produce the result file, + // this indicates a bug in the adapter logic - it should have either: + // 1. Written the result file and exited 0, OR + // 2. Failed to write the file and exited non-zero + // Therefore, we treat this as a crash/failure and mark it as AdapterCrashed. + It("updates with AdapterCrashed reason when no result file was produced", func() { + terminated := &corev1.ContainerStateTerminated{ + Reason: "Completed", + ExitCode: 0, + } + + err := r.UpdateFromTerminatedContainer(ctx, terminated) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("adapter container terminated")) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("False")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterCrashed)) + Expect(mock.LastUpdatedCondition.Message).To(ContainSubstring("Completed")) + }) + }) + + Context("when k8s client returns error", func() { + It("returns the error", func() { + mock.UpdateJobStatusFunc = func(ctx context.Context, condition k8s.JobCondition) error { + return errors.New("k8s update failed") + } + + terminated := &corev1.ContainerStateTerminated{ + Reason: "OOMKilled", + ExitCode: 137, + } + + err := r.UpdateFromTerminatedContainer(ctx, terminated) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("failed to update job status")) + Expect(err.Error()).To(ContainSubstring("k8s update failed")) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("False")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterOOMKilled)) + }) + }) + }) + + Describe("updateFromTimeout", func() { + Context("when adapter container is terminated with OOMKilled", func() { + It("updates with AdapterOOMKilled reason", func() { + mock.GetAdapterContainerStatusFunc = func(ctx context.Context, podName, containerName string) (*corev1.ContainerStatus, error) { + return &corev1.ContainerStatus{ + Name: "adapter", + State: corev1.ContainerState{ + Terminated: &corev1.ContainerStateTerminated{ + Reason: "OOMKilled", + ExitCode: 137, + }, + }, + }, nil + } + + err := r.UpdateFromTimeout(ctx) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("adapter container terminated")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterOOMKilled)) + }) + }) + + Context("when adapter container is terminated with error", func() { + It("updates with AdapterExitedWithError reason", func() { + mock.GetAdapterContainerStatusFunc = func(ctx context.Context, podName, containerName string) (*corev1.ContainerStatus, error) { + return &corev1.ContainerStatus{ + Name: "adapter", + State: corev1.ContainerState{ + Terminated: &corev1.ContainerStateTerminated{ + Reason: "Error", + ExitCode: 1, + }, + }, + }, nil + } + + err := r.UpdateFromTimeout(ctx) + + 
Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("adapter container terminated")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterExitedWithError)) + }) + }) + + Context("when adapter container is not terminated", func() { + It("updates with AdapterTimeout reason", func() { + mock.GetAdapterContainerStatusFunc = func(ctx context.Context, podName, containerName string) (*corev1.ContainerStatus, error) { + return &corev1.ContainerStatus{ + Name: "adapter", + State: corev1.ContainerState{ + Running: &corev1.ContainerStateRunning{}, + }, + }, nil + } + + err := r.UpdateFromTimeout(ctx) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(Equal("timeout waiting for adapter results")) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("False")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterTimeout)) + Expect(mock.LastUpdatedCondition.Message).To(ContainSubstring("Adapter did not produce results within")) + }) + }) + + Context("when getting container status fails", func() { + It("still updates with AdapterTimeout reason", func() { + mock.GetAdapterContainerStatusFunc = func(ctx context.Context, podName, containerName string) (*corev1.ContainerStatus, error) { + return nil, errors.New("failed to get container status") + } + + err := r.UpdateFromTimeout(ctx) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(Equal("timeout waiting for adapter results")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterTimeout)) + }) + }) + + Context("when k8s client update fails", func() { + It("returns the error", func() { + mock.GetAdapterContainerStatusFunc = func(ctx context.Context, podName, containerName string) (*corev1.ContainerStatus, error) { + return &corev1.ContainerStatus{ + Name: "adapter", + State: corev1.ContainerState{ + Running: &corev1.ContainerStateRunning{}, + }, + }, nil + } + + mock.UpdateJobStatusFunc = func(ctx context.Context, condition k8s.JobCondition) error { + return errors.New("k8s update failed") + } + + err := r.UpdateFromTimeout(ctx) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("failed to update job status")) + Expect(err.Error()).To(ContainSubstring("k8s update failed")) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("False")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterTimeout)) + }) + }) + }) + + Describe("Run", func() { + var ( + tempDir string + resultsPath string + ) + + BeforeEach(func() { + tempDir = GinkgoT().TempDir() + resultsPath = filepath.Join(tempDir, "adapter-result.json") + }) + + Context("when result file exists immediately", func() { + It("processes the result successfully", func() { + // Write result file before starting + err := os.WriteFile(resultsPath, []byte(`{"status":"success","reason":"AllChecksPassed","message":"All validations passed"}`), 0644) + Expect(err).NotTo(HaveOccurred()) + + r := reporter.NewReporterWithClient( + resultsPath, + 100*time.Millisecond, + 5*time.Second, + "Available", + "test-pod", + "adapter", + mock, + ) + + err = r.Run(ctx) + + Expect(err).NotTo(HaveOccurred()) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("True")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal("AllChecksPassed")) + }) + }) + + Context("when result file appears after polling", 
func() { + It("processes the result successfully", func() { + r := reporter.NewReporterWithClient( + resultsPath, + 50*time.Millisecond, + 5*time.Second, + "Available", + "test-pod", + "adapter", + mock, + ) + + // Write file after a short delay + go func() { + time.Sleep(150 * time.Millisecond) + _ = os.WriteFile(resultsPath, []byte(`{"status":"failure","reason":"ValidationFailed","message":"Some checks failed"}`), 0644) + }() + + err := r.Run(ctx) + + Expect(err).NotTo(HaveOccurred()) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("False")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal("ValidationFailed")) + }) + }) + + Context("when timeout occurs without result file", func() { + It("reports timeout error", func() { + mock.GetAdapterContainerStatusFunc = func(ctx context.Context, podName, containerName string) (*corev1.ContainerStatus, error) { + return &corev1.ContainerStatus{ + Name: "adapter", + State: corev1.ContainerState{ + Running: &corev1.ContainerStateRunning{}, + }, + }, nil + } + + r := reporter.NewReporterWithClient( + resultsPath, + 50*time.Millisecond, + 200*time.Millisecond, + "Available", + "test-pod", + "adapter", + mock, + ) + + err := r.Run(ctx) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(Equal("timeout waiting for adapter results")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterTimeout)) + }) + }) + + Context("when result file has invalid JSON", func() { + It("reports parse error", func() { + err := os.WriteFile(resultsPath, []byte(`{invalid json`), 0644) + Expect(err).NotTo(HaveOccurred()) + + r := reporter.NewReporterWithClient( + resultsPath, + 50*time.Millisecond, + 5*time.Second, + "Available", + "test-pod", + "adapter", + mock, + ) + + err = r.Run(ctx) + + Expect(err).To(HaveOccurred()) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonInvalidResultFormat)) + }) + }) + + Context("when result file is empty", func() { + It("reports parse error", func() { + err := os.WriteFile(resultsPath, []byte(""), 0644) + Expect(err).NotTo(HaveOccurred()) + + r := reporter.NewReporterWithClient( + resultsPath, + 50*time.Millisecond, + 5*time.Second, + "Available", + "test-pod", + "adapter", + mock, + ) + + err = r.Run(ctx) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("result file is empty")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonInvalidResultFormat)) + }) + }) + + Context("when result file has invalid status", func() { + It("reports parse error", func() { + err := os.WriteFile(resultsPath, []byte(`{"status":"invalid","reason":"Test","message":"Test"}`), 0644) + Expect(err).NotTo(HaveOccurred()) + + r := reporter.NewReporterWithClient( + resultsPath, + 50*time.Millisecond, + 5*time.Second, + "Available", + "test-pod", + "adapter", + mock, + ) + + err = r.Run(ctx) + + Expect(err).To(HaveOccurred()) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonInvalidResultFormat)) + }) + }) + + Context("when context is cancelled before completion", func() { + It("stops polling and triggers timeout path", func() { + mock.GetAdapterContainerStatusFunc = func(ctx context.Context, podName, containerName string) (*corev1.ContainerStatus, error) { + return &corev1.ContainerStatus{ + Name: "adapter", + State: corev1.ContainerState{ + Running: &corev1.ContainerStateRunning{}, + }, + }, nil + } + + cancelCtx, cancel := context.WithCancel(context.Background()) + + r := 
reporter.NewReporterWithClient( + resultsPath, + 50*time.Millisecond, + 5*time.Second, + "Available", + "test-pod", + "adapter", + mock, + ) + + // Cancel context after a short delay + go func() { + time.Sleep(100 * time.Millisecond) + cancel() + }() + + err := r.Run(cancelCtx) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(Equal("timeout waiting for adapter results")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterTimeout)) + }) + }) + + Context("when UpdateFromResult fails", func() { + It("returns the update error", func() { + err := os.WriteFile(resultsPath, []byte(`{"status":"success","reason":"Test","message":"Test"}`), 0644) + Expect(err).NotTo(HaveOccurred()) + + mock.UpdateJobStatusFunc = func(ctx context.Context, condition k8s.JobCondition) error { + return errors.New("k8s update failed") + } + + r := reporter.NewReporterWithClient( + resultsPath, + 50*time.Millisecond, + 5*time.Second, + "Available", + "test-pod", + "adapter", + mock, + ) + + err = r.Run(ctx) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("failed to update job status")) + }) + }) + + Context("when timeout occurs with terminated container", func() { + It("reports container termination instead of timeout", func() { + mock.GetAdapterContainerStatusFunc = func(ctx context.Context, podName, containerName string) (*corev1.ContainerStatus, error) { + return &corev1.ContainerStatus{ + Name: "adapter", + State: corev1.ContainerState{ + Terminated: &corev1.ContainerStateTerminated{ + Reason: "Error", + ExitCode: 1, + }, + }, + }, nil + } + + r := reporter.NewReporterWithClient( + resultsPath, + 50*time.Millisecond, + 200*time.Millisecond, + "Available", + "test-pod", + "adapter", + mock, + ) + + err := r.Run(ctx) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("adapter container terminated")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterExitedWithError)) + }) + }) + + Context("when container terminates during polling without result file", func() { + It("detects termination immediately and reports exit code", func() { + callCount := 0 + mock.GetAdapterContainerStatusFunc = func(ctx context.Context, podName, containerName string) (*corev1.ContainerStatus, error) { + callCount++ + if callCount == 1 { + // First poll: container is running + return &corev1.ContainerStatus{ + Name: "adapter", + State: corev1.ContainerState{ + Running: &corev1.ContainerStateRunning{}, + }, + }, nil + } + // Container terminates on second check + return &corev1.ContainerStatus{ + Name: "adapter", + State: corev1.ContainerState{ + Terminated: &corev1.ContainerStateTerminated{ + Reason: "Error", + ExitCode: 1, + }, + }, + }, nil + } + + r := reporter.NewReporterWithClientAndIntervals( + resultsPath, + 50*time.Millisecond, + 5*time.Second, + 100*time.Millisecond, // Check container status every 100ms for tests + "Available", + "test-pod", + "adapter", + mock, + ) + + err := r.Run(ctx) + + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("adapter container terminated")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal(reporter.ReasonAdapterExitedWithError)) + Expect(mock.LastUpdatedCondition.Message).To(ContainSubstring("Adapter container exited with code 1")) + }) + }) + + Context("when container terminates during polling with result file", func() { + It("detects termination and uses result file", func() { + callCount := 0 + mock.GetAdapterContainerStatusFunc = func(ctx context.Context, podName, 
containerName string) (*corev1.ContainerStatus, error) { + callCount++ + if callCount == 1 { + // First poll: container is running + return &corev1.ContainerStatus{ + Name: "adapter", + State: corev1.ContainerState{ + Running: &corev1.ContainerStateRunning{}, + }, + }, nil + } + // Container terminates on second check, and we write the result file + if callCount == 2 { + _ = os.WriteFile(resultsPath, []byte(`{"status":"failure","reason":"ValidationFailed","message":"Validation checks failed"}`), 0644) + } + return &corev1.ContainerStatus{ + Name: "adapter", + State: corev1.ContainerState{ + Terminated: &corev1.ContainerStateTerminated{ + Reason: "Error", + ExitCode: 1, + }, + }, + }, nil + } + + r := reporter.NewReporterWithClientAndIntervals( + resultsPath, + 50*time.Millisecond, + 5*time.Second, + 100*time.Millisecond, // Check container status every 100ms for tests + "Available", + "test-pod", + "adapter", + mock, + ) + + err := r.Run(ctx) + + Expect(err).NotTo(HaveOccurred()) + Expect(mock.LastUpdatedCondition.Type).To(Equal("Available")) + Expect(mock.LastUpdatedCondition.Status).To(Equal("False")) + Expect(mock.LastUpdatedCondition.Reason).To(Equal("ValidationFailed")) + Expect(mock.LastUpdatedCondition.Message).To(Equal("Validation checks failed")) + }) + }) + }) +}) diff --git a/status-reporter/pkg/reporter/testhelpers/mock_k8s_client.go b/status-reporter/pkg/reporter/testhelpers/mock_k8s_client.go new file mode 100644 index 0000000..248a5fa --- /dev/null +++ b/status-reporter/pkg/reporter/testhelpers/mock_k8s_client.go @@ -0,0 +1,35 @@ +package testhelpers + +import ( + "context" + + corev1 "k8s.io/api/core/v1" + + "github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/k8s" +) + +// MockK8sClient is a mock implementation of k8s client operations for testing +type MockK8sClient struct { + UpdateJobStatusFunc func(ctx context.Context, condition k8s.JobCondition) error + GetAdapterContainerStatusFunc func(ctx context.Context, podName, containerName string) (*corev1.ContainerStatus, error) + LastUpdatedCondition k8s.JobCondition +} + +func NewMockK8sClient() *MockK8sClient { + return &MockK8sClient{} +} + +func (m *MockK8sClient) UpdateJobStatus(ctx context.Context, condition k8s.JobCondition) error { + m.LastUpdatedCondition = condition + if m.UpdateJobStatusFunc != nil { + return m.UpdateJobStatusFunc(ctx, condition) + } + return nil +} + +func (m *MockK8sClient) GetAdapterContainerStatus(ctx context.Context, podName, containerName string) (*corev1.ContainerStatus, error) { + if m.GetAdapterContainerStatusFunc != nil { + return m.GetAdapterContainerStatusFunc(ctx, podName, containerName) + } + return nil, nil +} diff --git a/status-reporter/pkg/result/parser.go b/status-reporter/pkg/result/parser.go new file mode 100644 index 0000000..0ec675e --- /dev/null +++ b/status-reporter/pkg/result/parser.go @@ -0,0 +1,60 @@ +package result + +import ( + "encoding/json" + "fmt" + "os" + "path/filepath" +) + +const ( + // maxResultFileSize limits result file size to prevent memory exhaustion + maxResultFileSize = 1 * 1024 * 1024 // 1MB +) + +// Parser handles parsing adapter result files +type Parser struct{} + +// NewParser creates a new result parser +func NewParser() *Parser { + return &Parser{} +} + +// ParseFile reads and parses a result file from the given path +func (p *Parser) ParseFile(path string) (*AdapterResult, error) { + // Clean and resolve the path to prevent path traversal attacks + cleanedPath, err := filepath.Abs(filepath.Clean(path)) + if err != nil { + 
return nil, fmt.Errorf("failed to resolve path=%s: %w", path, err) + } + + data, err := os.ReadFile(cleanedPath) + if err != nil { + return nil, fmt.Errorf("failed to read result file path=%s: %w", cleanedPath, err) + } + + if len(data) == 0 { + return nil, fmt.Errorf("result file is empty: path=%s", cleanedPath) + } + + if len(data) > maxResultFileSize { + return nil, fmt.Errorf("result file too large: path=%s size=%d max=%d", cleanedPath, len(data), maxResultFileSize) + } + + return p.Parse(data) +} + +// Parse parses result data from JSON bytes +func (p *Parser) Parse(data []byte) (*AdapterResult, error) { + var result AdapterResult + + if err := json.Unmarshal(data, &result); err != nil { + return nil, fmt.Errorf("failed to parse JSON: %w", err) + } + + if err := result.Validate(); err != nil { + return nil, fmt.Errorf("invalid result format: %w", err) + } + + return &result, nil +} diff --git a/status-reporter/pkg/result/parser_test.go b/status-reporter/pkg/result/parser_test.go new file mode 100644 index 0000000..b4a2f72 --- /dev/null +++ b/status-reporter/pkg/result/parser_test.go @@ -0,0 +1,153 @@ +package result_test + +import ( + "os" + "path/filepath" + "strings" + + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + + "github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/result" +) + +var _ = Describe("Parser", func() { + var parser *result.Parser + + BeforeEach(func() { + parser = result.NewParser() + }) + + Describe("NewParser", func() { + It("creates a new parser", func() { + Expect(parser).NotTo(BeNil()) + }) + }) + + Describe("ParseFile", func() { + var tmpDir string + + BeforeEach(func() { + var err error + tmpDir, err = os.MkdirTemp("", "parser-test-*") + Expect(err).NotTo(HaveOccurred()) + }) + + AfterEach(func() { + os.RemoveAll(tmpDir) + }) + + Context("with valid files", func() { + It("parses valid success result", func() { + content := `{"status":"success","reason":"TestPassed","message":"Test completed"}` + tmpFile := filepath.Join(tmpDir, "result.json") + err := os.WriteFile(tmpFile, []byte(content), 0644) + Expect(err).NotTo(HaveOccurred()) + + r, err := parser.ParseFile(tmpFile) + Expect(err).NotTo(HaveOccurred()) + Expect(r).NotTo(BeNil()) + Expect(r.Status).To(Equal(result.StatusSuccess)) + }) + + It("parses valid failure result", func() { + content := `{"status":"failure","reason":"TestFailed","message":"Test failed"}` + tmpFile := filepath.Join(tmpDir, "result.json") + err := os.WriteFile(tmpFile, []byte(content), 0644) + Expect(err).NotTo(HaveOccurred()) + + r, err := parser.ParseFile(tmpFile) + Expect(err).NotTo(HaveOccurred()) + Expect(r).NotTo(BeNil()) + Expect(r.Status).To(Equal(result.StatusFailure)) + }) + }) + + Context("with invalid files", func() { + It("returns error for empty file", func() { + tmpFile := filepath.Join(tmpDir, "empty.json") + err := os.WriteFile(tmpFile, []byte(""), 0644) + Expect(err).NotTo(HaveOccurred()) + + _, err = parser.ParseFile(tmpFile) + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("result file is empty")) + }) + + It("returns error for invalid JSON", func() { + content := `{invalid json}` + tmpFile := filepath.Join(tmpDir, "invalid.json") + err := os.WriteFile(tmpFile, []byte(content), 0644) + Expect(err).NotTo(HaveOccurred()) + + _, err = parser.ParseFile(tmpFile) + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("failed to parse JSON")) + }) + + It("returns error for invalid status", func() { + content := 
`{"status":"invalid","reason":"Test","message":"Test"}` + tmpFile := filepath.Join(tmpDir, "badstatus.json") + err := os.WriteFile(tmpFile, []byte(content), 0644) + Expect(err).NotTo(HaveOccurred()) + + _, err = parser.ParseFile(tmpFile) + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("invalid result format")) + }) + + It("returns error for file too large", func() { + content := strings.Repeat("x", 1*1024*1024+1) + tmpFile := filepath.Join(tmpDir, "large.json") + err := os.WriteFile(tmpFile, []byte(content), 0644) + Expect(err).NotTo(HaveOccurred()) + + _, err = parser.ParseFile(tmpFile) + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("result file too large")) + }) + + It("returns error for nonexistent file", func() { + _, err := parser.ParseFile("/nonexistent/path/file.json") + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("failed to read result file")) + }) + }) + }) + + Describe("Parse", func() { + Context("with valid data", func() { + It("parses valid JSON", func() { + data := []byte(`{"status":"success","reason":"OK","message":"OK"}`) + r, err := parser.Parse(data) + Expect(err).NotTo(HaveOccurred()) + Expect(r).NotTo(BeNil()) + Expect(r.Status).To(Equal(result.StatusSuccess)) + }) + + It("provides defaults for missing fields", func() { + data := []byte(`{"status":"success"}`) + r, err := parser.Parse(data) + Expect(err).NotTo(HaveOccurred()) + Expect(r.Reason).To(Equal(result.DefaultReason)) + Expect(r.Message).To(Equal(result.DefaultMessage)) + }) + }) + + Context("with invalid data", func() { + It("returns error for invalid JSON", func() { + data := []byte(`{bad json`) + _, err := parser.Parse(data) + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("failed to parse JSON")) + }) + + It("returns error for invalid status value", func() { + data := []byte(`{"status":"unknown","reason":"Test","message":"Test"}`) + _, err := parser.Parse(data) + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("invalid result format")) + }) + }) + }) +}) diff --git a/status-reporter/pkg/result/result.go b/status-reporter/pkg/result/result.go new file mode 100644 index 0000000..13bccbc --- /dev/null +++ b/status-reporter/pkg/result/result.go @@ -0,0 +1,94 @@ +package result + +import ( + "encoding/json" + "fmt" + "strings" + "unicode/utf8" +) + +const ( + StatusSuccess = "success" + StatusFailure = "failure" + + DefaultReason = "NoReasonProvided" + DefaultMessage = "No message provided" + + maxReasonLength = 128 + maxMessageLength = 1024 +) + +// ResultError represents a validation error for adapter result validation +type ResultError struct { + Field string + Message string +} + +func (e *ResultError) Error() string { + return e.Field + ": " + e.Message +} + +// AdapterResult represents the result contract that any adapter must produce +type AdapterResult struct { + // Status must be either StatusSuccess or StatusFailure + Status string `json:"status"` + + // Reason is a machine-readable identifier (e.g., "AllChecksPassed", "DNSConfigured") + Reason string `json:"reason"` + + // Message is a human-readable description + Message string `json:"message"` + + // Details contains optional adapter-specific data as raw JSON + Details json.RawMessage `json:"details,omitempty"` +} + +// IsSuccess returns true if the adapter operation succeeded +func (r *AdapterResult) IsSuccess() bool { + return r.Status == StatusSuccess +} + +// Validate validates and normalizes the result +func (r 
*AdapterResult) Validate() error { + if r.Status != StatusSuccess && r.Status != StatusFailure { + return &ResultError{ + Field: "status", + Message: fmt.Sprintf("must be either '%s' or '%s'", StatusSuccess, StatusFailure), + } + } + + r.Reason = strings.TrimSpace(r.Reason) + if r.Reason == "" { + r.Reason = DefaultReason + } + if len(r.Reason) > maxReasonLength { + r.Reason = truncateUTF8(r.Reason, maxReasonLength) + } + + r.Message = strings.TrimSpace(r.Message) + if r.Message == "" { + r.Message = DefaultMessage + } + if len(r.Message) > maxMessageLength { + r.Message = truncateUTF8(r.Message, maxMessageLength) + } + + return nil +} + +// truncateUTF8 safely truncates a string to maxBytes without splitting multi-byte UTF-8 characters +func truncateUTF8(s string, maxBytes int) string { + if len(s) <= maxBytes { + return s + } + + // Find the last valid UTF-8 character boundary before maxBytes + for i := maxBytes; i > 0; i-- { + if utf8.RuneStart(s[i]) { + return s[:i] + } + } + + // Fallback (should never happen with valid UTF-8) + return "" +} diff --git a/status-reporter/pkg/result/result_suite_test.go b/status-reporter/pkg/result/result_suite_test.go new file mode 100644 index 0000000..b580733 --- /dev/null +++ b/status-reporter/pkg/result/result_suite_test.go @@ -0,0 +1,13 @@ +package result_test + +import ( + "testing" + + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" +) + +func TestResult(t *testing.T) { + RegisterFailHandler(Fail) + RunSpecs(t, "Result Suite") +} diff --git a/status-reporter/pkg/result/result_test.go b/status-reporter/pkg/result/result_test.go new file mode 100644 index 0000000..d584a78 --- /dev/null +++ b/status-reporter/pkg/result/result_test.go @@ -0,0 +1,198 @@ +package result_test + +import ( + "encoding/json" + "strings" + + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" + + "github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/result" +) + +var _ = Describe("AdapterResult", func() { + Describe("IsSuccess", func() { + It("returns true for success status", func() { + r := &result.AdapterResult{Status: result.StatusSuccess} + Expect(r.IsSuccess()).To(BeTrue()) + }) + + It("returns false for failure status", func() { + r := &result.AdapterResult{Status: result.StatusFailure} + Expect(r.IsSuccess()).To(BeFalse()) + }) + + It("returns false for invalid status", func() { + r := &result.AdapterResult{Status: "invalid"} + Expect(r.IsSuccess()).To(BeFalse()) + }) + }) + + Describe("Validate", func() { + Context("with valid results", func() { + It("accepts valid success result", func() { + r := &result.AdapterResult{ + Status: result.StatusSuccess, + Reason: "AllChecksPassed", + Message: "All validation checks passed", + } + Expect(r.Validate()).To(Succeed()) + }) + + It("accepts valid failure result", func() { + r := &result.AdapterResult{ + Status: result.StatusFailure, + Reason: "SomeCheckFailed", + Message: "Validation failed", + } + Expect(r.Validate()).To(Succeed()) + }) + }) + + Context("with invalid status", func() { + It("returns error for invalid status", func() { + r := &result.AdapterResult{ + Status: "invalid", + Reason: "Test", + Message: "Test message", + } + err := r.Validate() + Expect(err).To(HaveOccurred()) + Expect(err.Error()).To(ContainSubstring("must be either 'success' or 'failure'")) + }) + }) + + Context("with empty or whitespace fields", func() { + It("provides default reason for empty reason", func() { + r := &result.AdapterResult{ + Status: result.StatusSuccess, + Reason: "", + Message: "Test message", + } + Expect(r.Validate()).To(Succeed()) + Expect(r.Reason).To(Equal(result.DefaultReason)) + }) + + It("provides default reason for whitespace-only reason", func() { + r := &result.AdapterResult{ + Status: result.StatusSuccess, + Reason: " ", + Message: "Test message", + } + Expect(r.Validate()).To(Succeed()) + Expect(r.Reason).To(Equal(result.DefaultReason)) + }) + + It("provides default message for empty message", func() { + r := &result.AdapterResult{ + Status: result.StatusSuccess, + Reason: "TestReason", + Message: "", + } + Expect(r.Validate()).To(Succeed()) + Expect(r.Message).To(Equal(result.DefaultMessage)) + }) + + It("provides default message for whitespace-only message", func() { + r := &result.AdapterResult{ + Status: result.StatusSuccess, + Reason: "TestReason", + Message: " ", + } + Expect(r.Validate()).To(Succeed()) + Expect(r.Message).To(Equal(result.DefaultMessage)) + }) + }) + + Context("with whitespace", func() { + It("trims leading and trailing whitespace from reason", func() { + r := &result.AdapterResult{ + Status: result.StatusSuccess, + Reason: " TestReason ", + Message: "Test message", + } + Expect(r.Validate()).To(Succeed()) + Expect(r.Reason).To(Equal("TestReason")) + }) + + It("trims leading and trailing whitespace from message", func() { + r := &result.AdapterResult{ + Status: result.StatusSuccess, + Reason: "TestReason", + Message: " Test message ", + } + Expect(r.Validate()).To(Succeed()) + Expect(r.Message).To(Equal("Test message")) + }) + }) + + Context("with overly long fields", func() { + It("truncates long reason to max length", func() { + longReason := strings.Repeat("A", 200) + r := &result.AdapterResult{ + Status: result.StatusSuccess, + Reason: longReason, + Message: "Test message", + } + Expect(r.Validate()).To(Succeed()) + 
Expect(len(r.Reason)).To(Equal(128)) + }) + + It("truncates long message to max length", func() { + longMessage := strings.Repeat("A", 2000) + r := &result.AdapterResult{ + Status: result.StatusSuccess, + Reason: "TestReason", + Message: longMessage, + } + Expect(r.Validate()).To(Succeed()) + Expect(len(r.Message)).To(Equal(1024)) + }) + }) + }) + + Describe("JSON marshaling", func() { + It("unmarshals basic success result", func() { + jsonData := `{"status":"success","reason":"TestPassed","message":"Test completed"}` + var r result.AdapterResult + + err := json.Unmarshal([]byte(jsonData), &r) + Expect(err).NotTo(HaveOccurred()) + Expect(r.Status).To(Equal(result.StatusSuccess)) + Expect(r.Reason).To(Equal("TestPassed")) + Expect(r.Message).To(Equal("Test completed")) + }) + + It("unmarshals result with details", func() { + jsonData := `{"status":"failure","reason":"TestFailed","message":"Test failed","details":{"key":"value"}}` + var r result.AdapterResult + + err := json.Unmarshal([]byte(jsonData), &r) + Expect(err).NotTo(HaveOccurred()) + Expect(r.Status).To(Equal(result.StatusFailure)) + Expect(r.Details).To(Equal(json.RawMessage(`{"key":"value"}`))) + }) + + It("unmarshals result with nested details", func() { + jsonData := `{"status":"success","reason":"OK","message":"OK","details":{"nested":{"deep":"value"}}}` + var r result.AdapterResult + + err := json.Unmarshal([]byte(jsonData), &r) + Expect(err).NotTo(HaveOccurred()) + Expect(string(r.Details)).To(ContainSubstring("nested")) + Expect(string(r.Details)).To(ContainSubstring("deep")) + }) + }) +}) + +var _ = Describe("ResultError", func() { + It("formats error message correctly", func() { + err := &result.ResultError{Field: "status", Message: "required"} + Expect(err.Error()).To(Equal("status: required")) + }) + + It("handles longer messages", func() { + err := &result.ResultError{Field: "reason", Message: "must be alphanumeric"} + Expect(err.Error()).To(Equal("reason: must be alphanumeric")) + }) +})
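
As a usage reference for the new `result` package above, here is a minimal, self-contained sketch (not part of the diff) of how a consumer such as the status reporter might read an adapter result file with `Parser.ParseFile` and map the outcome onto condition-style fields. The `main` wrapper and the hard-coded `/results/result.json` path are illustrative assumptions only; the real reporter takes the path from its configuration.

```go
package main

import (
	"fmt"
	"log"

	"github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/result"
)

func main() {
	// Illustrative path; the reporter receives the actual location via configuration.
	const resultsPath = "/results/result.json"

	// ParseFile reads the file, enforces the 1MB size limit, parses the JSON,
	// and applies the same Validate() normalization defined in result.go.
	parser := result.NewParser()
	res, err := parser.ParseFile(resultsPath)
	if err != nil {
		log.Fatalf("parsing adapter result: %v", err)
	}

	// Translate the adapter outcome into the condition-style triple the reporter publishes.
	conditionStatus := "False"
	if res.IsSuccess() {
		conditionStatus = "True"
	}
	fmt.Printf("status=%s reason=%s message=%s\n", conditionStatus, res.Reason, res.Message)
}
```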
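
One detail of `Validate()` worth noting: `reason` and `message` are truncated by byte length (128 and 1024 bytes respectively) but always on a UTF-8 rune boundary, so multi-byte characters are never split. The standalone sketch below (again not part of the diff) illustrates this with a message built from 3-byte characters:

```go
package main

import (
	"fmt"
	"strings"
	"unicode/utf8"

	"github.com/openshift-hyperfleet/adapter-validation-gcp/status-reporter/pkg/result"
)

func main() {
	// "€" is 3 bytes in UTF-8, so 400 of them is 1200 bytes, over the 1024-byte message limit.
	r := &result.AdapterResult{
		Status:  result.StatusSuccess,
		Reason:  "TruncationDemo",
		Message: strings.Repeat("€", 400),
	}

	if err := r.Validate(); err != nil {
		panic(err)
	}

	// The message is cut at the last rune boundary at or below 1024 bytes: 1023 bytes
	// (341 complete characters), so the output stays valid UTF-8.
	fmt.Println(len(r.Message), utf8.ValidString(r.Message)) // prints: 1023 true
}
```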