diff --git a/skills/design-cicd-gcp/SKILL.md b/skills/design-cicd-gcp/SKILL.md new file mode 100644 index 0000000..e24fb57 --- /dev/null +++ b/skills/design-cicd-gcp/SKILL.md @@ -0,0 +1,63 @@ +--- +name: design-cicd-gcp +description: Design and implement a Google Cloud-based CI/CD pipeline. Use when the user wants to build a new pipeline or design a CI/CD architecture on GCP. +--- + +# Google Cloud DevOps Assistant + +You are a comprehensive Google Cloud DevOps Assistant. Your primary function is to help users design, build, and manage CI/CD pipelines on Google Cloud. You operate by first analyzing the user's intent and then following the appropriate workflow. + +## Core Operational Logic: Intent Analysis + +First, analyze the user's request to determine the primary intent. + +* If the intent is a high-level goal like **"build a pipeline," "design an architecture,"** or **"migrate my Jenkins pipeline,"** you must follow the two-stage **Workflow: Design & Implement**. + +## Workflow: Design & Implement + +This workflow is for high-level, architectural tasks. It consists of a design phase followed by an implementation phase. + +### Stage 1: Architectural Design + +Your purpose in this stage is to operate as a collaborative consultant, guiding the user to a complete, concrete, and expert-designed pipeline plan. + +1. **Autonomous Context Gathering**: Before asking any questions, perform an autonomous scan of the local repository to gather initial context (Environment *e.g., target cloud, existing infrastructure*, Application Archetype, Migration Intent *e.g., from Jenkins, from on-prem*). +2. **Guided Strategic Consultation**: Present your initial findings to the user. Then, ask key strategic questions to clarify their release strategy (e.g., trigger type, deployment target, environment needs). +3. 
**Identify Pattern and Propose First Draft**: Based on the gathered context and user's release strategy, search the `references/` directory for files prefixed with `pattern_` (e.g., `pattern_trunk_based_push_to_deploy.txt`). Select the best-matching pattern *(e.g., by prioritizing patterns that align with the user's specified deployment style or keywords)* and propose "Draft 1". +4. **Collaborative Design with Adaptive Re-planning**: Solicit feedback on the draft. + * **For minor changes** (e.g., "add a linter"), update the plan and present a new draft. + * **For major architectural changes** (e.g., "make the cluster secure"), re-evaluate the patterns in the `references/` directory (prefixed with `pattern_`) against the new requirements. Propose switching to a better-fitting pattern if one exists, or integrate the major changes into the current plan. +5. **Plan Finalization & Handoff**: Continue the refinement loop until the user gives final approval. Once approved, your only output for this stage is the final action plan in **YAML format**. After generating the YAML, you will automatically proceed to Stage 2. + +### Stage 2: Plan Implementation + +Once the user has approved the YAML plan, your sole purpose is to execute it by calling a suite of specialized tools. + +1. **Process Sequentially**: Execute the plan by processing the `stages` object in order. +2. **Announce the Step**: For each component in the plan, tell the user which component you are starting (e.g., "Starting step: 'Build and Test'"). +3. **Execute the Recommended Tool**: Call the specific tool recommended by the knowledge base (e.g., `create_cloud_build_trigger`), passing it the component's `details` block from the plan. +4. **Await and Report Success**: Wait for the tool to return a success message, report the completion to the user, and then proceed to the next component. + +## Universal Protocols & Constraints + +### Error Handling Protocol + +1. 
**STOP EXECUTION**: If any tool returns an error, immediately halt the plan. +2. **REPORT THE ERROR**: Present the exact error message to the user. +3. **DIAGNOSE AND SUGGEST**: If possible, identify a likely cause and suggest a single, corrective tool call (e.g., using `enable_api`). +4. **AWAIT PERMISSION**: You **MUST NOT** attempt any fix without the user's explicit permission. + +### Core Constraints + +* **Follow Instructions**: Your primary directive is to follow the plan or the user's direct command without deviation. +* **Use Only Your Tools**: You can only call the specialized tools provided to you. + +### Execution Mandate + +* **Immediately begin executing the very first step of the selected workflow.** +* **DO NOT** start by introducing yourself, summarizing your abilities, or asking the user what they want to do. Their query *is* what they want to do. Proceed directly to the first action and summarize what you are going to do. + +### Defaults + +* **Google Cloud**: If gcloud is installed, use `gcloud config list` to get the default *project* and *region*. + +* **Git URL**: If git is installed, use `git remote get-url origin` to get the Git URL for Developer Connect tools. diff --git a/skills/design-cicd-gcp/references/how_to_build_cloudbuild_yaml.md b/skills/design-cicd-gcp/references/how_to_build_cloudbuild_yaml.md new file mode 100644 index 0000000..4a99e36 --- /dev/null +++ b/skills/design-cicd-gcp/references/how_to_build_cloudbuild_yaml.md @@ -0,0 +1,92 @@ +# How to Create a `cloudbuild.yaml` File + +This document outlines the standard procedure for automatically generating a `cloudbuild.yaml` configuration file. The primary goal is to create a best-practice, archetype-specific CI pipeline when one does not already exist in the user's repository. + +--- + +## When to Generate a `cloudbuild.yaml` + +The core principle is to be non-destructive and idempotent. 
The generation process should only be triggered under one specific condition: + +* **A `cloudbuild.yaml` file does not exist at the root of the source repository.** + +If a `cloudbuild.yaml` file is already present, it should be treated as the source of truth and used as-is without modification, unless the user explicitly requests a change. + +--- + +## Step 1: Discovering the Application Archetype + +Before a `cloudbuild.yaml` can be generated, the application's archetype must be identified. This is done by inspecting the local filesystem for common project files. This discovery step is crucial for tailoring the build steps (e.g., linting, testing) to the specific language or framework. + +The following mapping should be used: + +* `pom.xml` → **Java (Maven)** +* `build.gradle` → **Java (Gradle)** +* `package.json` → **Node.js** +* `requirements.txt` or `pyproject.toml` → **Python** +* `go.mod` → **Go** + +--- + +## Step 2: Generating the Default CI Pipeline + +If no `cloudbuild.yaml` exists, a new one should be generated with a standard, four-step CI sequence. These steps must be tailored to the discovered application archetype. + +1. **Lint**: Run a static code analysis tool to check for stylistic or programmatic errors. The specific linter should match the application archetype (e.g., `pylint` for Python, `eslint` for Node.js). +2. **Test**: Execute the project's unit tests. The test runner should match the archetype (e.g., `pytest` for Python, `go test` for Go, `mvn test` for Maven). +3. **Build Container**: Use the native `gcr.io/cloud-builders/docker` builder to build the container image from the `Dockerfile` in the repository. +4. **Push Container**: Push the newly built container image to the verified Artifact Registry repository. + +--- + +## Key Best Practices and Configuration + +When generating the `cloudbuild.yaml`, several best practices must be included to ensure security and efficiency. 
* **Image Tagging**: The container image must be tagged with the `$SHORT_SHA` substitution variable. This ensures a unique, traceable image for every single commit. +* **Use the `images` Attribute**: The final image URI should be explicitly listed under the top-level `images` attribute in the `cloudbuild.yaml`. Cloud Build pushes the images listed here after all build steps complete, records them in the build results, and generates provenance for them. +* **Enable Provenance**: To enhance supply chain security, build provenance should always be enabled. This is done by adding an `options` block and setting `requestedVerifyOption: VERIFIED`. + +--- + +## Example: Python Application + +Here is a complete, best-practice `cloudbuild.yaml` generated for a Python application that uses `pytest` for testing. + +```yaml +# Auto-generated cloudbuild.yaml for a Python application + +steps: + # Step 1: Install dependencies + - name: 'python:3.11' + entrypoint: 'pip' + args: ['install', '-r', 'requirements.txt', '--user'] + + # Step 2: Run unit tests with pytest + - name: 'python:3.11' + entrypoint: 'python' + args: ['-m', 'pytest'] + + # Step 3: Build the container image + # The image is tagged with the short commit SHA for traceability. + - name: 'gcr.io/cloud-builders/docker' + args: + - 'build' + - '-t' + - '${_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${_REPO_NAME}/${_IMAGE_NAME}:$SHORT_SHA' + - '.' + + # Step 4: Push the container image to Artifact Registry + # Pushing here makes the image available to any later steps or pipeline stages. + - name: 'gcr.io/cloud-builders/docker' + args: + - 'push' + - '${_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${_REPO_NAME}/${_IMAGE_NAME}:$SHORT_SHA' + +# Explicitly list the final image so it is recorded in the build results. +images: + - '${_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${_REPO_NAME}/${_IMAGE_NAME}:$SHORT_SHA' + +# Enable SLSA Level 3 provenance for enhanced supply chain security. 
+options: +  requestedVerifyOption: VERIFIED +``` \ No newline at end of file diff --git a/skills/design-cicd-gcp/references/how_to_create_cloudbuild_trigger.md b/skills/design-cicd-gcp/references/how_to_create_cloudbuild_trigger.md new file mode 100644 index 0000000..9b8a76e --- /dev/null +++ b/skills/design-cicd-gcp/references/how_to_create_cloudbuild_trigger.md @@ -0,0 +1,47 @@ +# End-to-End Guide: Creating a Cloud Build Trigger + +This document outlines the standard, idempotent procedure for creating a Google Cloud Build trigger. Creating the trigger is the **final** action in a sequence of prerequisite checks and resource provisioning steps. The agent must ensure all dependencies are met before attempting to create the trigger itself. + +--- + +## Core Principle: Idempotency + +Every step in this process must be **idempotent**. This means the agent must **always check if a resource already exists** before attempting to create it. This prevents errors and ensures the process can be run multiple times safely. + +--- + +## Prerequisite Checklist + +The following dependencies must be satisfied in order before creating the trigger. + +### 1. Ensure `cloudbuild.yaml` Exists + +The trigger needs a build configuration file to execute. + +* **Action**: Check for a `cloudbuild.yaml` file at the root of the source repository. +* **If it does not exist**: Generate one by translating the user-approved plan. The steps in the generated YAML must be a direct translation of the components defined in the plan's `stages` object. The specifics of the steps (e.g., using `pytest` vs. `mvn test`) should be informed by discovering the application archetype (e.g., by finding a `pyproject.toml` or `pom.xml`). + +### 2. Ensure Artifact Registry Repository Exists + +The `cloudbuild.yaml` file will reference an Artifact Registry repository to push container images. This repository must exist before a build can succeed. 
* **Action**: Parse the `cloudbuild.yaml` file to identify the Artifact Registry image path (e.g., `us-central1-docker.pkg.dev/my-project/my-app-repo/my-image`). +* **Extract** the repository portion (`us-central1-docker.pkg.dev/my-project/my-app-repo`). +* **Check** if this repository already exists in the target GCP project. +* **If it does not exist**: Create it using the available tools. + +### 3. Ensure Developer Connect and Repository Link Exist + +Cloud Build triggers connect to source code via Developer Connect. The entire connection and repository link must be in place. + +1. **Check for Connection**: First, check if a Developer Connect connection already exists for the relevant Git provider (e.g., GitHub) in the target project and region. +2. **Create Connection (if needed)**: If no suitable connection exists, create one. This may require prompting the user to complete the authorization flow in the GCP console. +3. **Obtain Source URI**: The agent must know the exact URI of the source code repository (e.g., `https://github.com/my-org/my-app`). This should be obtained from the user-approved plan or by asking the user directly. +4. **Check for Repository Link**: Check if a repository link for that specific URI already exists within the Developer Connect connection. +5. **Create Repository Link (if needed)**: If the link does not exist, create it. This link is the resource that the Cloud Build trigger will formally point to. + +--- + +## Final Step: Creating the Trigger + +Once all prerequisites are met, the agent can create the trigger itself using the available tools. 
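The parse-then-check flow in step 2 can be sketched as a small shell helper. This is a hedged sketch, not part of the skill's toolset: the image path, project, and repository names are illustrative, and the `gcloud artifacts repositories` commands are only attempted when the CLI is available.

```shell
# Sketch: idempotent Artifact Registry check derived from a cloudbuild.yaml
# image path. Assumes the standard LOCATION-docker.pkg.dev/PROJECT/REPO/IMAGE
# layout; all names below are illustrative.
image="us-central1-docker.pkg.dev/my-project/my-app-repo/my-image"

repo_path="${image%/*}"                  # strip the image name -> repository portion
location="${image%%-docker.pkg.dev*}"    # us-central1
project=$(echo "$image" | cut -d/ -f2)   # my-project
repo_id=$(echo "$image" | cut -d/ -f3)   # my-app-repo

# Only create the repository if it does not already exist (idempotent).
if command -v gcloud >/dev/null 2>&1; then
  gcloud artifacts repositories describe "$repo_id" \
    --project="$project" --location="$location" >/dev/null 2>&1 ||
  gcloud artifacts repositories create "$repo_id" \
    --project="$project" --location="$location" \
    --repository-format=docker 2>/dev/null || true
fi
echo "$repo_path"
```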
\ No newline at end of file diff --git a/skills/design-cicd-gcp/references/how_to_write_dockerfile.md b/skills/design-cicd-gcp/references/how_to_write_dockerfile.md new file mode 100644 index 0000000..66ef134 --- /dev/null +++ b/skills/design-cicd-gcp/references/how_to_write_dockerfile.md @@ -0,0 +1,224 @@ +# Comprehensive Guide to Writing Dockerfiles + +This document is a detailed guideline for writing professional-grade Dockerfiles. + + +## 1. Core Principles: Thinking in Layers + +A Docker image is a stack of read-only layers. Each instruction (`RUN`, `COPY`, `ADD`) creates a new layer. + +* **Optimize the Cache:** Order instructions from **least frequent to most frequent change**. +* *Bad:* Copying all source code, then running `npm install`. (Cache busts on every code change). +* *Good:* Copy `package.json`, run `install`, *then* copy source code. +* *Install dependencies first:* For all languages, install dependencies before copying source code. This leverages Docker's layer caching. + +* **Combine RUN commands:** Use `&&` and line breaks (`\`) to group related commands (e.g., `apt-get update && apt-get install -y ... && rm -rf /var/lib/apt/lists/*`) to keep the layer count and image size down. + +## 2. General Best Practices + +* **Sort Multi-line arguments:** When writing RUN commands with multiple arguments, sort them alphabetically to improve readability and maintainability. +* **COPY vs ADD:** Always use `COPY`. Use `ADD` only if you need to auto-extract a local `.tar` file into the image. +* **Use `.dockerignore`:** Create this file to exclude `node_modules`, `.git`, logs, and secrets from being sent to the Docker daemon. This speeds up builds and increases security. +* **Run as Non-Root:** By default, containers run as root. For production, create a user with a UID of 10,000 or above to prevent host-escalation attacks. +```dockerfile +RUN groupadd -g 10001 appgroup && useradd -u 10000 -g appgroup appuser +USER appuser + +``` + + + +--- + +## 3. 
Multi-Stage Builds (The Gold Standard) + +Multi-stage builds allow you to use a heavy image (with compilers and build tools) for the build process and a tiny image (just the runtime) for the final production container. + +### Why use Multi-Stage? + +1. **Drastic Size Reduction:** Your final image won't contain compilers, header files, or build caches. +2. **Security:** Fewer binaries in the final image mean a smaller attack surface. +3. **Simplicity:** You don't need separate build scripts; the entire build process is documented in one Dockerfile. + +### 💡 Example 1: Node.js (Standard Production Pattern) + +This pattern separates the `npm install` (with dev dependencies for building) from the production runtime. + +```dockerfile +# --- Stage 1: Build & Test --- +FROM node:22-bookworm AS builder +WORKDIR /app +COPY package*.json ./ +RUN npm ci # Clean install including dev dependencies +COPY . . +RUN npm run build + +# --- Stage 2: Production Runtime --- +FROM node:22-bookworm-slim AS runtime +WORKDIR /app +# Only copy production dependencies and the built dist folder +COPY --from=builder /app/package*.json ./ +RUN npm ci --omit=dev +COPY --from=builder /app/dist ./dist + +USER node +CMD ["node", "dist/index.js"] + +``` + +### 💡 Example 2: Go (Compiled to Distroless/Scratch) + +For compiled languages like Go, Rust, or C++, the final image can be effectively "empty" except for your binary. + +```dockerfile +# --- Stage 1: Build --- +FROM golang:1.23-alpine AS builder +WORKDIR /src +COPY go.mod go.sum ./ +RUN go mod download +COPY . . +# Build a statically linked binary +RUN CGO_ENABLED=0 GOOS=linux go build -o /bin/app ./main.go + +# --- Stage 2: Runtime --- +# Using Google's "distroless" for maximum security (no shell, no package manager) +FROM gcr.io/distroless/static-debian12 +COPY --from=builder /bin/app /app +USER 1000:1000 +ENTRYPOINT ["/app"] + +``` + +--- + +## 4. 
Google Cloud Run Specifics + +If you are deploying to **Google Cloud Run**, you can leverage advanced features for security and maintenance. + +### Base Image for Cloud Run Automatic Updates + +Google Cloud can automatically patch security vulnerabilities in your base image if you follow the **"Scratch Pattern"**. + +1. In your Dockerfile, copy your app onto a `FROM scratch` image. +2. Deploy to Cloud Run using the `--base-image` flag. + +**Dockerfile for Cloud Run Automatic Updates:** + +```dockerfile +FROM node:24-slim AS builder +WORKDIR /usr/src/app +COPY package*.json ./ +RUN npm install --omit=dev +COPY . . + +# Final stage: Start from scratch (Cloud Run will inject the OS/Runtime layers) +FROM scratch +WORKDIR /workspace +# Note: chown 33:33 is a common pattern for the 'www-data' user in Google's stacks +COPY --from=builder --chown=33:33 /usr/src/app/ ./ +USER 33:33 +CMD [ "node", "index.js" ] + +``` + +### Recommended Base Images (Stacks) + +A base image serves as the starting foundation for container-based development. You build your application by layering necessary libraries, binaries, and configuration files on top of this image. Google Cloud's buildpacks publish these images with various configurations for system packages and languages. + +#### Key Guidelines + +* **Hosting**: Base images are hosted in every region where Artifact Registry is available. +* **Updates**: Security and maintenance updates are released routinely. Depending on your environment (e.g., Cloud Run functions) and configuration, these updates can be applied automatically or manually. 
* **URI Format**: To use a base image, you reference it using a specific URI structure, which you can customize based on your location and requirements: `REGION-docker.pkg.dev/serverless-runtimes/STACK/runtimes/RUNTIME_ID` + +**Customization Steps**: Replace the specific portions of the URI with your own values: + +* **REGION**: Replace with your preferred region (e.g., us-central1). +* **STACK**: Replace with your preferred operating system stack (e.g., google-24). +* **RUNTIME_ID**: Replace with the specific ID for your language runtime (e.g., python313 or nodejs24). A detailed list of runtime IDs is below: + +
Runtime IDs and environment details (mostly Ubuntu 22.04 or 18.04, with some newer Ubuntu 24.04 options) for the following languages:
| Language | Runtime | Generation | Environment | Runtime ID |
| :---------- | :------------ | :-------------- | :---------- | :---------- |
| Node.js | Node.js 24 | 2nd gen | Ubuntu 24.04| nodejs24 |
| Node.js | Node.js 22 | 1st gen, 2nd gen| Ubuntu 22.04| nodejs22 |
| Node.js | Node.js 20 | 1st gen, 2nd gen| Ubuntu 22.04| nodejs20 |
| Node.js | Node.js 18 | 1st gen, 2nd gen| Ubuntu 22.04| nodejs18 |
| Node.js | Node.js 16 | 1st gen, 2nd gen| Ubuntu 18.04| nodejs16 |
| Node.js | Node.js 14 | 1st gen, 2nd gen| Ubuntu 18.04| nodejs14 |
| Node.js | Node.js 12 | 1st gen, 2nd gen| Ubuntu 18.04| nodejs12 |
| Node.js | Node.js 10 | 1st gen, 2nd gen| Ubuntu 18.04| nodejs10 |
| Node.js | Node.js 8 | 1st gen, 2nd gen| Ubuntu 18.04| nodejs8 |
| Node.js | Node.js 6 | 1st gen, 2nd gen| Ubuntu 18.04| nodejs6 |
| Python | Python 3.14 | 2nd gen | Ubuntu 24.04| python314 |
| Python | Python 3.13 | 2nd gen | Ubuntu 22.04| python313 |
| Python | Python 3.12 | 1st gen, 2nd gen| Ubuntu 22.04| python312 |
| Python | Python 3.11 | 1st gen, 2nd gen| Ubuntu 22.04| python311 |
| Python | Python 3.10 | 1st gen, 2nd gen| Ubuntu 22.04| python310 |
| Python | Python 3.9 | 1st gen, 2nd gen| Ubuntu 18.04| python39 |
| Python | 
Python 3.8 | 1st gen, 2nd gen| Ubuntu 18.04| python38 | +| Python | Python 3.7 | 1st gen | Ubuntu 18.04| python37 | +| Go | Go 1.25 | 2nd gen | Ubuntu 22.04| go125 | +| Go | Go 1.24 | 2nd gen | Ubuntu 22.04| go124 | +| Go | Go 1.23 | 2nd gen | Ubuntu 22.04| go123 | +| Go | Go 1.22 | 2nd gen | Ubuntu 22.04| go122 | +| Go | Go 1.21 | 1st gen, 2nd gen| Ubuntu 22.04| go121 | +| Go | Go 1.20 | 1st gen, 2nd gen| Ubuntu 22.04| go120 | +| Go | Go 1.19 | 1st gen, 2nd gen| Ubuntu 22.04| go119 | +| Go | Go 1.18 | 1st gen, 2nd gen| Ubuntu 22.04| go118 | +| Go | Go 1.16 | 1st gen, 2nd gen| Ubuntu 18.04| go116 | +| Go | Go 1.13 | 1st gen, 2nd gen| Ubuntu 18.04| go113 | +| Go | Go 1.11 | 1st gen, 2nd gen| Ubuntu 18.04| go111 | +| Java | Java 25 | 2nd gen | Ubuntu 24.04| java25 | +| Java | Java 21 | 2nd gen | Ubuntu 22.04| java21 | +| Java | Java 17 | 1st gen, 2nd gen| Ubuntu 22.04| java17 | +| Java | Java 11 | 1st gen, 2nd gen| Ubuntu 18.04| java11 | +| Ruby | Ruby 3.4 | 2nd gen | Ubuntu 22.04| ruby34 | +| Ruby | Ruby 3.3 | 1st gen, 2nd gen| Ubuntu 22.04| ruby33 | +| Ruby | Ruby 3.2 | 1st gen, 2nd gen| Ubuntu 22.04| ruby32 | +| Ruby | Ruby 3.0 | 1st gen, 2nd gen| Ubuntu 18.04| ruby30 | +| Ruby | Ruby 2.7 | 1st gen, 2nd gen| Ubuntu 18.04| ruby27 | +| Ruby | Ruby 2.6 | 1st gen, 2nd gen| Ubuntu 18.04| ruby26 | +| PHP | PHP 8.4 | 2nd gen | Ubuntu 22.04| php84 | +| PHP | PHP 8.3 | 2nd gen | Ubuntu 22.04| php83 | +| PHP | PHP 8.2 | 1st gen, 2nd gen| Ubuntu 22.04| php82 | +| PHP | PHP 8.1 | 1st gen, 2nd gen| Ubuntu 18.04| php81 | +| PHP | PHP 7.4 | 1st gen, 2nd gen| Ubuntu 18.04| php74 | +| .NET Core | .NET Core 8 | 2nd gen | Ubuntu 22.04| dotnet8 | +| .NET Core | .NET Core 6 | 1st gen, 2nd gen| Ubuntu 22.04| dotnet6 | +| .NET Core | .NET Core 3 | 1st gen, 2nd gen| Ubuntu 18.04| dotnet3 | + +--- + +## 5. Summary Checklist + +| Feature | Best Practice | +| --- | --- | +| **Base Image** | Use official, versioned, slim, or distroless images. 
| **Layers** | Combine `RUN` commands; copy dependencies before source code. | +| **Security** | Prefer not to run as `root`; never include secrets/ENV keys in Dockerfile. | +| **Size** | Use **Multi-Stage builds** to strip out build-time bloat. | +| **Cloud Run** | Use Google-provided runtime base images with automatic base image updates. | +| **Metadata** | Use `LABEL` to provide contact and versioning info. | + +### Sources: + +https://cloud.google.com/run/docs/configuring/services/automatic-base-image-updates +https://cloud.google.com/docs/buildpacks/base-images +https://docs.cloud.google.com/run/docs/configuring/services/runtime-base-images +https://codelabs.developers.google.com/developing-containers-with-dockerfiles + +https://docs.cloud.google.com/run/docs/building/containers + +https://docs.docker.com/get-started/docker-concepts/building-images/writing-a-dockerfile/ +https://github.com/dnaprawa/dockerfile-best-practices +https://www.sysdig.com/learn-cloud-native/dockerfile-best-practices +https://docs.docker.com/get-started/docker-concepts/building-images/multi-stage-builds/ +https://dev.to/pavanbelagatti/what-are-multi-stage-docker-builds-1mi9 +https://g3doc.corp.google.com/cloud/containers/g3doc/pro-tips-for-writing-dockerfiles.md?cl=head + + diff --git a/skills/design-cicd-gcp/references/pattern_git_tag_triggered_release.txt b/skills/design-cicd-gcp/references/pattern_git_tag_triggered_release.txt new file mode 100644 index 0000000..6e1a13f --- /dev/null +++ b/skills/design-cicd-gcp/references/pattern_git_tag_triggered_release.txt @@ -0,0 +1,52 @@ +name: "Git Tag-Triggered Release" +description: "A pattern where every commit runs CI (lint, test, build), but a deployment is only initiated when a 
formal Git tag is pushed. The tag triggers a final build and creates a release in Cloud Deploy." + +applicability: + triggers: ["git_commit", "git_tag"] + deployment_style: "continuous_delivery" + use_case_keywords: ["versioned releases", "production pipeline", "staging environment", "manual promotion"] + +stages: + # The CI stage runs on every commit to verify code quality but does not produce a deployable artifact. + ci: + id: "ci_build_and_test" + type: "cloud-build" + name: "CI Build and Test" + details: "Listens for commits on the main branch. Runs lint, test, and builds the container to ensure it's valid, but does not push it." + steps: + - id: "lint_step" + type: "linter" + name: "Run Linter" + details: "Runs a static code analysis tool appropriate for the project." + - id: "test_step" + type: "test" + name: "Run Unit Tests" + details: "Executes the project's unit test suite." + - id: "dry_run_build_step" + type: "docker" + name: "Dry-Run Build" + details: "Builds the container image to validate the Dockerfile but does not push it." + + # The CD stage is a distinct workflow triggered only by a Git tag. + cd: + trigger: + type: "git_tag" + details: "This stage is initiated by pushing a git tag (e.g., v1.2.0)." + steps: + - id: "build_and_push_release_artifact" + type: "cloud-build" + name: "Build and Push Release Artifact" + details: "Builds the final container image, signs it, and pushes it to Artifact Registry, tagged with the specific Git tag." + - id: "create_cloud_deploy_release" + type: "cloud-deploy" + name: "Create Cloud Deploy Release" + details: "Creates a new formal release in Cloud Deploy using the tagged container image, making it available for promotion to various environments." + +tradeoffs: + pros: + - "Strong separation between integration and deployment, preventing every commit from going to production." + - "Creates a clear audit trail; every release is tied to a specific version tag in Git." 
+ - "Enables controlled promotions (dev -> staging -> prod) via Cloud Deploy." + cons: + - "Slower feedback loop for deployment compared to push-to-deploy." + - "Requires disciplined Git tagging practices from the development team." \ No newline at end of file diff --git a/skills/design-cicd-gcp/references/pattern_trunk_based_push_to_deploy.txt b/skills/design-cicd-gcp/references/pattern_trunk_based_push_to_deploy.txt new file mode 100644 index 0000000..7fd88d3 --- /dev/null +++ b/skills/design-cicd-gcp/references/pattern_trunk_based_push_to_deploy.txt @@ -0,0 +1,51 @@ +name: "Trunk-Based Push-to-Deploy" +description: "A simple and fast pattern where every commit to the main or develop branch is instantly built, tested, and deployed to a shared development environment on Cloud Run or GKE." + +applicability: + triggers: ["git_commit", "push"] + deployment_style: "continuous_deployment" + use_case_keywords: ["rapid iteration", "vibe coding", "development environment", "trunk-based"] + +stages: + # In this pattern, CI and CD are a single, unified stage executed by Cloud Build. + ci_cd: + id: "trunk_based_flow" + type: "cloud-build" + name: "Trunk-Based CI and CD Flow" + details: "A single process triggered by a commit. It handles linting, testing, building, pushing, and deploying the application." + state: "create" + # The following steps will be translated into a single cloudbuild.yaml file. + steps: + - id: "lint_step" + type: "linter" + name: "Run Linter" + details: "Runs a static code analysis tool appropriate for the project." + state: "create" + + - id: "test_step" + type: "test" + name: "Run Unit Tests" + details: "Executes the project's unit test suite." + state: "create" + + - id: "build_and_push_step" + type: "docker" + name: "Build and Push Container" + details: "Builds a container image and pushes it to Artifact Registry, tagged with the commit SHA." 
+ state: "create" + + - id: "deploy_step" + type: "gcloud-deploy" + name: "Deploy to Cloud Run or GKE" + details: "Deploys the newly pushed container image to a shared development environment on Cloud Run or GKE." + state: "create" + +tradeoffs: + pros: + - "Extremely fast feedback loop from commit to deployment." + - "Simplifies the release process, removing overhead." + - "Ideal for development and testing environments where speed is critical." + cons: + - "Can lead to an unstable development environment if tests are not robust." + - "Not suitable for production without adding quality gates and approvals." +
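Per the how_to_build_cloudbuild_yaml reference, the steps above would be translated into a single `cloudbuild.yaml`. A minimal sketch for a Python archetype, assuming the same substitution variables as the reference example plus an illustrative `_SERVICE_NAME` for the Cloud Run target:

```yaml
# Sketch only: translates the trunk-based stages above into one cloudbuild.yaml.
# _LOCATION, _REPO_NAME, _IMAGE_NAME, and _SERVICE_NAME are illustrative
# substitution variables supplied by the trigger.
steps:
  # Lint and test (Python archetype)
  - name: 'python:3.11'
    entrypoint: 'bash'
    args: ['-c', 'pip install --user -r requirements.txt pylint pytest && python -m pylint *.py && python -m pytest']

  # Build and push the container, tagged with the commit SHA
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', '${_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${_REPO_NAME}/${_IMAGE_NAME}:$SHORT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', '${_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${_REPO_NAME}/${_IMAGE_NAME}:$SHORT_SHA']

  # Deploy the freshly pushed image to the shared development environment
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'gcloud'
    args:
      - 'run'
      - 'deploy'
      - '${_SERVICE_NAME}'
      - '--image=${_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${_REPO_NAME}/${_IMAGE_NAME}:$SHORT_SHA'
      - '--region=${_LOCATION}'
      - '--quiet'

images:
  - '${_LOCATION}-docker.pkg.dev/${PROJECT_ID}/${_REPO_NAME}/${_IMAGE_NAME}:$SHORT_SHA'
```

The deploy step additionally assumes the Cloud Build service account holds the Cloud Run Admin and Service Account User roles; like the pattern itself, this sketch targets development environments, not production.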