
[Feat] AWS→GCP code migration #206

Merged
gaeunee2 merged 10 commits into dev from feat/205-gcp-migration
May 8, 2026

Conversation

@gaeunee2
Collaborator

@gaeunee2 gaeunee2 commented Apr 16, 2026

📌 Related issue

🏷️ PR type

  • ✨ Feature
  • 🐛 Bug Fix
  • ♻️ Refactoring
  • 📝 Documentation
  • 🎨 Style
  • ✅ Test

📝 Work done

Dependency swap

  • Removed software.amazon.awssdk:s3 → added google-cloud-storage (libraries-bom 26.34.0)

Storage layer swap (global/infra/s3 → global/infra/gcs)

  • Deleted S3Config → created GcsConfig
    • Supports ADC (Application Default Credentials) by default; uses a service account key when GCS_CREDENTIALS_JSON is set
  • Deleted S3Service / S3ServiceImpl → created GcsService / GcsServiceImpl
    • File upload, download, delete, existence check, and Signed URL (V4) implemented behind the same interface
    • File URL format changed: s3.amazonaws.com → storage.googleapis.com
  • Deleted S3HealthController → created GcsHealthController (GET /api/health/gcs)
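Since the swap keeps the old interface shape, the contract can be illustrated with a small sketch. The method names and signatures below are assumptions for illustration, not the project's actual GcsService, and the in-memory class is a hypothetical test double rather than the real GcsServiceImpl:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the GcsService contract described above.
// Method names and signatures are assumptions, not the project's actual interface.
interface GcsService {
    void uploadFile(String gcsKey, byte[] content);
    byte[] readFileBytes(String gcsKey);
    boolean doesFileExist(String gcsKey);
    void deleteFile(String gcsKey);
    String getFileUrl(String gcsKey);
}

// In-memory stand-in for GcsServiceImpl, handy for unit tests.
class InMemoryGcsService implements GcsService {
    private final Map<String, byte[]> blobs = new HashMap<>();
    private final String bucket;

    InMemoryGcsService(String bucket) { this.bucket = bucket; }

    @Override public void uploadFile(String gcsKey, byte[] content) { blobs.put(gcsKey, content); }
    @Override public byte[] readFileBytes(String gcsKey) { return blobs.get(gcsKey); }
    @Override public boolean doesFileExist(String gcsKey) { return blobs.containsKey(gcsKey); }
    @Override public void deleteFile(String gcsKey) { blobs.remove(gcsKey); }

    // Mirrors the URL format change noted above (storage.googleapis.com/<bucket>/<key>).
    @Override public String getFileUrl(String gcsKey) {
        return "https://storage.googleapis.com/" + bucket + "/" + gcsKey;
    }
}

public class Main {
    public static void main(String[] args) {
        GcsService gcs = new InMemoryGcsService("proovy-assets");
        gcs.uploadFile("thumbs/a.png", new byte[] {1, 2, 3});
        System.out.println(gcs.doesFileExist("thumbs/a.png"));
        System.out.println(gcs.getFileUrl("thumbs/a.png"));
        gcs.deleteFile("thumbs/a.png");
        System.out.println(gcs.doesFileExist("thumbs/a.png"));
    }
}
```

A fake like this lets callers such as StorageServiceTest exercise upload/delete paths without a network dependency.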

Configuration swap

  • application.yaml: cloud.aws.* block → gcs.project-id / bucket / credentials-json
  • .env.example: AWS_S3_ACCESS_KEY, AWS_S3_SECRET_KEY, AWS_REGION → GCS_PROJECT_ID, GCS_BUCKET, GCS_CREDENTIALS_JSON
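For reference, the replacement configuration might look roughly like this. The property names follow the PR description; the example values and the ADC fallback comment are assumptions, not the project's actual settings:

```yaml
# application.yaml — gcs block replacing the old cloud.aws.* settings
gcs:
  project-id: ${GCS_PROJECT_ID}
  bucket: ${GCS_BUCKET}
  credentials-json: ${GCS_CREDENTIALS_JSON:}   # empty → fall back to ADC

# .env.example — new variables replacing the AWS_* ones
# GCS_PROJECT_ID=my-project
# GCS_BUCKET=my-bucket
# GCS_CREDENTIALS_JSON=
```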

CI/CD switch

  • deploy.yml: EC2 + SSM RunCommand flow → Artifact Registry + gcloud run deploy
  • New Dockerfile (based on eclipse-temurin:21-jre-alpine)
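A minimal Dockerfile of this shape would match the stated base image; the exact contents (including the non-root user, which a reviewer suggests below) are an assumption, not the file as merged:

```dockerfile
FROM eclipse-temurin:21-jre-alpine
# Run unprivileged rather than as root
RUN addgroup -S app && adduser -S -G app app
WORKDIR /app
COPY --chown=app:app build/libs/app.jar app.jar
EXPOSE 8080
USER app
ENTRYPOINT ["java", "-jar", "app.jar"]
```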

Local development environment

  • docker-compose.yaml: replaced localstack (S3 emulator) with fake-gcs-server (GCS emulator)
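A fake-gcs-server service of roughly this shape fits the description (the service and container names are assumptions):

```yaml
services:
  fake-gcs:
    image: fsouza/fake-gcs-server:latest
    container_name: proovy-fake-gcs
    ports:
      - "4443:4443"
    command: -scheme http -port 4443 -public-host localhost:4443
```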

Reference updates

  • Swapped the imports and injected type in the 8 files that use S3Service
    • AssetsServiceImpl, ThumbnailService, UserService, StorageService, NoteServiceImpl, ConversationQueryServiceImpl, ChatServiceImpl, StorageServiceTest

📸 Screenshots

Build and runtime verification to follow once the GCP project setup is complete

✅ Checklist

  • Ready for code review
  • Tests written and passing
  • Documentation updated (where needed)
  • Code style guide followed
  • Self-review completed

📎 Additional notes

  • Signed URL generation requires ServiceAccountCredentials. In the Cloud Run environment, injecting GCS_CREDENTIALS_JSON via Secret Manager is recommended
  • Local GCS emulator: docker compose up fake-gcs (http://localhost:4443)
  • Cloud Run minimum-instances=1 planned to avoid Spring Boot cold starts
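Putting the notes above together, the Cloud Run deploy step might look roughly like this. The service name, secret name, and variable names are hypothetical; the flags are those listed elsewhere in this PR:

```shell
gcloud run deploy proovy-api \
  --image "${REGION}-docker.pkg.dev/${PROJECT_ID}/${REPO}/app:${GITHUB_SHA}" \
  --region "${REGION}" \
  --min-instances 1 \
  --allow-unauthenticated \
  --set-env-vars SPRING_PROFILES_ACTIVE=prod \
  --set-secrets GCS_CREDENTIALS_JSON=gcs-credentials:latest
```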

Summary by CodeRabbit

  • New Features

    • Google Cloud Storage-backed file storage and a GCS health-check endpoint (/api/health/gcs)
  • Chores

    • Storage configuration switched from AWS S3 to GCS (env var examples and application settings updated)
    • Deployment pipeline moved to Cloud Run, with Docker image build support added
    • Local GCS emulator support added; existing S3 configuration and health check removed
  • Tests

    • Storage dependency mocks updated

@coderabbitai

coderabbitai Bot commented Apr 16, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 87059205-c999-44ca-b47d-99a0ba71fecf

📥 Commits

Reviewing files that changed from the base of the PR and between 5a2761a and dc03fbe.

📒 Files selected for processing (2)
  • .github/workflows/deploy.yml
  • Dockerfile
✅ Files skipped from review due to trivial changes (1)
  • Dockerfile
🚧 Files skipped from review as they are similar to previous changes (1)
  • .github/workflows/deploy.yml

📝 Walkthrough

AWS S3-based storage and the deployment pipeline are replaced wholesale with GCS and Cloud Run: environment/config, build/dependencies, Docker/Compose, CI/CD, the S3→GCS service/controller/config/injection points, and the thumbnail read path are all migrated to GCS.

Changes

GCS migration (single DAG)

Layer / File(s) | Summary
Environment / Config
.env.example, src/main/resources/application.yaml
Removed AWS env vars/settings (AWS_REGION, AWS_S3_ACCESS_KEY, AWS_S3_SECRET_KEY, S3_BUCKET, cloud.aws.*); added GCS_PROJECT_ID, GCS_CREDENTIALS_JSON, GCS_BUCKET and the gcs.* settings block.
Build / Dependencies
build.gradle
Removed the AWS SDK/S3 dependency; added google-cloud-storage and the Google BOM; bootJar output renamed to app.jar and jar.enabled = false.
Container / Image
Dockerfile
New Dockerfile based on Java 21 JRE Alpine: creates a non-root app user, copies app.jar, exposes port 8080, runs java -jar app.jar.
CI/CD Deployment
.github/workflows/deploy.yml
Removed the EC2/SSM+S3 deploy flow; switched to GCP Workload Identity auth (id-token: write), google-github-actions/auth, Cloud SDK setup, Docker auth for Artifact Registry, image build/push, and Cloud Run deployment via gcloud run deploy (image tagging, --set-secrets, SPRING_PROFILES_ACTIVE=prod, --allow-unauthenticated, --min-instances 1).
Local Emulator / Compose
docker-compose.yaml
Removed the localstack service; added a fake-gcs service based on fsouza/fake-gcs-server (port/endpoint changed to 4443).
Configuration Classes
src/main/java/.../GcsConfig.java, (removed) src/main/java/.../S3Config.java
S3Config deleted. New GcsConfig provides the Storage bean, reads gcs.project-id, parses the optional gcs.credentials-json (service account JSON), and supports an optional gcs.endpoint; gcsStorage() throws IOException.
Storage Abstraction / Impl
src/main/java/.../gcs/GcsService.java, .../GcsServiceImpl.java, (removed) src/main/java/.../s3/*
S3Service, S3ServiceImpl, S3HealthController deleted. New GcsService interface (upload/delete/existence check/URL/signed URL/read bytes, etc.) and GcsServiceImpl implementation (bucket name, endpoint, exception → BusinessException mapping, V4 signed URL generation, etc.).
Health Endpoints
src/main/java/.../gcs/GcsHealthController.java, (removed) src/main/java/.../s3/S3HealthController.java
/api/health/s3 removed; /api/health/gcs added: accessibility check via storage.get(bucket) with status/error mapping (503/500).
Domain Wiring / Usage
src/main/java/com/proovy/domain/.../*.java
Injected type changed from S3Service → GcsService in AssetsServiceImpl, ChatServiceImpl, ConversationQueryServiceImpl, NoteServiceImpl, StorageService, UserService, ThumbnailService, etc. (field name s3Service kept).
Thumbnail Service changes
src/main/java/.../thumbnail/ThumbnailService.java
Removed the remote HTTP download logic (HTTP client dropped). Sync/async thumbnail generation now reads directly via GcsService.readFileBytes(...) (readFileWithSizeCheck).
Tests
src/test/java/com/proovy/domain/storage/service/StorageServiceTest.java
Mocked type in the test switched from S3Service → GcsService (field type/imports changed).

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

💾Storage

Suggested reviewers

  • haein45
  • Ncs89
  • lucy7noh

Poem

🐰 Hop! Today we swapped our bucket,
past the S3 forest to the GCS field.
Signed URLs freshly minted, images read firsthand,
the container floats gently up onto the cloud.
A little rabbit claps in celebration 🥕✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 54.17%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name | Status | Explanation
Title check | ✅ Passed | The PR title clearly summarizes the main change — the AWS-to-GCP migration — and conveys the core purpose of the change set.
Description check | ✅ Passed | The PR description covers most sections of the requested template and details the work: dependency swap, storage layer changes, configuration swap, CI/CD switch, and more.
Linked Issues check | ✅ Passed | The PR implements all major goals of issue #205: removing the AWS SDK and adding google-cloud-storage, removing S3Config/S3Service in favor of GCS equivalents, creating GcsHealthController, swapping configuration files, converting CI/CD, replacing the local emulator, and changing imports in 8 service files.
Out of Scope Changes check | ✅ Passed | All changes fall within the scope of the AWS-to-GCP migration; no unrelated or unnecessary modifications were observed.


@gaeunee2 gaeunee2 marked this pull request as draft April 16, 2026 08:00

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 9

🧹 Nitpick comments (2)
src/main/java/com/proovy/domain/user/service/UserService.java (1)

58-58: Same naming mismatch as in AssetsServiceImpl.

As in the previous file, the field is typed GcsService but named s3Service. Recommend fixing it as part of the same batch refactor.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/java/com/proovy/domain/user/service/UserService.java` at line 58,
Field naming is inconsistent: the UserService class declares a GcsService-typed
field named s3Service; rename the field to gcsService and update all references
(constructor parameters, assignments, usages, getters/setters) in UserService
(and related methods) to use gcsService so the type and name match the pattern
used in AssetsServiceImpl (replace symbol s3Service with gcsService where
GcsService is the type).
src/main/java/com/proovy/domain/asset/service/AssetsServiceImpl.java (1)

43-43: Field name does not match its type.

The field uses the GcsService type but keeps the name s3Service, which is confusing. Log messages (lines 309, 311, etc.) and variable names (s3Key, s3Keys) also still use S3 terminology.

Current behavior is unaffected, but renaming everything to GCS terms is recommended for future maintainability.

♻️ Suggested naming-consistency fix
-    private final GcsService s3Service;
+    private final GcsService gcsService;

Also consider renaming the method-local variables and log messages:

  • s3Key → gcsKey
  • s3Keys → gcsKeys
  • Log: "S3 원본 파일 삭제" → "GCS 원본 파일 삭제"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/java/com/proovy/domain/asset/service/AssetsServiceImpl.java` at line
43, Rename the misnamed field and related symbols in AssetsServiceImpl: change
the field declaration from s3Service (type GcsService) to gcsService, and update
all usages of that field in methods; also rename local variables s3Key and
s3Keys to gcsKey and gcsKeys respectively, and update any log messages (e.g.,
"S3 원본 파일 삭제") to use "GCS 원본 파일 삭제"; ensure all references to GcsService,
s3Service, s3Key, and s3Keys are consistently replaced so names match the GCS
type.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/deploy.yml:
- Around line 58-67: The deploy step named "Deploy to Cloud Run" only sets
SPRING_PROFILES_ACTIVE and omits required runtime env vars; update the gcloud
run deploy invocation (the step using --set-env-vars and --set-secrets) to
inject all required variables from Secret Manager or env: include DB_HOST,
DB_PORT, DB_NAME, DB_USERNAME, DB_PASSWORD, REDIS_HOST, REDIS_PORT, JWT_SECRET,
KAKAO_CLIENT_ID, KAKAO_CLIENT_SECRET, NAVER_CLIENT_ID, NAVER_CLIENT_SECRET,
GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET, GCS_PROJECT_ID, GCS_BUCKET,
INTERNAL_API_TOKEN, and OPENROUTER_API_KEY by adding them to --set-env-vars
where non-sensitive or mapping secrets via --set-secrets for sensitive values
(e.g., DB_PASSWORD=your-db-secret:latest, JWT_SECRET=jwt-secret:latest,
KAKAO_CLIENT_SECRET=..., etc.) so the application can read the same keys defined
in application.yaml.

In `@docker-compose.yaml`:
- Around line 4-11: Add a configurable GCS endpoint and use it when building the
client and composing public file URLs: add a gcs.endpoint property to
application.yaml (e.g., gcs.endpoint=${GCS_ENDPOINT:}) and in GcsConfig.java
read that property and, when non-empty, configure the Storage/StorageOptions
client to use that endpoint (override host/transport/options so local
fake-gcs-server at http://localhost:4443 is used); then update
GcsServiceImpl.getFileUrl() to derive the base URL from the same configured
endpoint (fall back to https://storage.googleapis.com if endpoint is blank) so
URLs point to the local emulator in dev and to real GCS in prod.

In `@Dockerfile`:
- Around line 1-5: The Dockerfile currently runs as root; update it to create
and switch to a non-root user before running the app: add a non-root user/group
(e.g., appuser), change ownership of /app (the WORKDIR and files copied by COPY
build/libs/*.jar app.jar) to that user, and set USER to that non-root account so
ENTRYPOINT ["java","-jar","app.jar"] runs unprivileged; ensure any required
ports or permissions are adjusted for the new user.
- Line 3: The COPY pattern in the Dockerfile (COPY build/libs/*.jar app.jar) can
match both the executable jar and the -plain jar and produce wrong or failing
image builds; update the Dockerfile to copy only the boot (executable) jar by
referencing a stable filename—either set bootJar.archiveFileName in build.gradle
to a known name and use COPY build/libs/<that-name>.jar app.jar, or narrow the
glob to the executable artifact (for example COPY build/libs/*-SNAPSHOT.jar
app.jar) so the -plain jar is not picked up.

In `@src/main/java/com/proovy/global/infra/gcs/GcsHealthController.java`:
- Around line 51-53: The controller GcsHealthController currently returns error
HTTP statuses (503/500) but wraps the body with ApiResponse.success, causing
mismatch; update the three failure branches (the ResponseEntity calls around the
ApiResponse.success at lines shown) to return a proper failure wrapper (e.g.,
ApiResponse.failure / ApiResponse.error) or a dedicated HealthStatus DTO that
conveys failure (isSuccess=false and appropriate code/message), and ensure the
ResponseEntity body and HTTP status are consistent for the methods handling the
GCS connection test and health checks.

In `@src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java`:
- Around line 153-161: In generatePresignedDownloadUrl, validate fileName the
same way gcsKey is validated: if fileName is null or isBlank(), throw new
BusinessException(ErrorCode.COMMON400) before calling URLEncoder.encode; this
prevents a NullPointerException and keeps input validation consistent for the
public API (refer to symbols: generatePresignedDownloadUrl, fileName, gcsKey,
BusinessException, ErrorCode.COMMON400).
- Around line 61-68: The batch-delete in GcsServiceImpl treats any false from
Storage.delete(blobIds) as a server error; change this to distinguish "not
found" vs real failures: call storage.delete(blobIds) as before, then for each
index where results.get(i) == false, call storage.get(blobIds.get(i)) — if
get(...) returns null log.warn("[GCS] blob not found: {}") and treat as success,
otherwise treat as a real failure (log.error and count it). Only throw
BusinessException(ErrorCode.COMMON500) when there are real failures (existing
blobs that failed to delete); remove throwing on mere-not-found results. Use the
existing variables/methods: storage.delete, results, blobIds, storage.get,
log.warn/log.error, BusinessException and ErrorCode.COMMON500.
- Around line 135-148: The code in GcsServiceImpl currently only catches
StorageException around the storage.signUrl(...) call, so IllegalStateException
thrown when signing credentials are missing is not handled; add an additional
catch block for IllegalStateException (or a multi-catch with StorageException)
around storage.signUrl(...) in both methods that generate signed URLs so it logs
the same error message (e.g. log.error("[GCS] Upload Signed URL 생성 실패: {},
message={}", gcsKey, e.getMessage(), e)) and rethrows new
BusinessException(ErrorCode.COMMON500) to ensure credential-related failures are
normalized to BusinessException; locate uses of storage.signUrl(...) in
GcsServiceImpl to update the try/catch accordingly.

In `@src/main/java/com/proovy/global/infra/thumbnail/ThumbnailService.java`:
- Around line 42-46: ThumbnailService fails for private GCS buckets because it
calls GcsService.getFileUrl() and then downloadFile(), which only works for
public URLs; update ThumbnailService to use
GcsService.generatePresignedDownloadUrl() (or call a new authenticated blob-read
method on GcsService) instead of getFileUrl()/downloadFile() so server-side
thumbnail fetching uses an authenticated URL or direct blob read; modify the
logic around the existing downloadFile() calls in ThumbnailService (references:
GcsService, ThumbnailService, getFileUrl(), downloadFile(),
generatePresignedDownloadUrl()) to obtain and use a presigned URL or
authenticated stream before attempting HTTP download.

---

Nitpick comments:
In `@src/main/java/com/proovy/domain/asset/service/AssetsServiceImpl.java`:
- Line 43: Rename the misnamed field and related symbols in AssetsServiceImpl:
change the field declaration from s3Service (type GcsService) to gcsService, and
update all usages of that field in methods; also rename local variables s3Key
and s3Keys to gcsKey and gcsKeys respectively, and update any log messages
(e.g., "S3 원본 파일 삭제") to use "GCS 원본 파일 삭제"; ensure all references to
GcsService, s3Service, s3Key, and s3Keys are consistently replaced so names
match the GCS type.

In `@src/main/java/com/proovy/domain/user/service/UserService.java`:
- Line 58: Field naming is inconsistent: the UserService class declares a
GcsService-typed field named s3Service; rename the field to gcsService and
update all references (constructor parameters, assignments, usages,
getters/setters) in UserService (and related methods) to use gcsService so the
type and name match the pattern used in AssetsServiceImpl (replace symbol
s3Service with gcsService where GcsService is the type).
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0b745a09-84c0-4ab6-85bb-7fefa1a8088a

📥 Commits

Reviewing files that changed from the base of the PR and between 1da7f3b and 82b8616.

📒 Files selected for processing (22)
  • .env.example
  • .github/workflows/deploy.yml
  • Dockerfile
  • build.gradle
  • docker-compose.yaml
  • src/main/java/com/proovy/domain/asset/service/AssetsServiceImpl.java
  • src/main/java/com/proovy/domain/conversation/service/ChatServiceImpl.java
  • src/main/java/com/proovy/domain/conversation/service/ConversationQueryServiceImpl.java
  • src/main/java/com/proovy/domain/note/service/NoteServiceImpl.java
  • src/main/java/com/proovy/domain/storage/service/StorageService.java
  • src/main/java/com/proovy/domain/user/service/UserService.java
  • src/main/java/com/proovy/global/config/GcsConfig.java
  • src/main/java/com/proovy/global/config/S3Config.java
  • src/main/java/com/proovy/global/infra/gcs/GcsHealthController.java
  • src/main/java/com/proovy/global/infra/gcs/GcsService.java
  • src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java
  • src/main/java/com/proovy/global/infra/s3/S3HealthController.java
  • src/main/java/com/proovy/global/infra/s3/S3Service.java
  • src/main/java/com/proovy/global/infra/s3/S3ServiceImpl.java
  • src/main/java/com/proovy/global/infra/thumbnail/ThumbnailService.java
  • src/main/resources/application.yaml
  • src/test/java/com/proovy/domain/storage/service/StorageServiceTest.java
💤 Files with no reviewable changes (4)
  • src/main/java/com/proovy/global/config/S3Config.java
  • src/main/java/com/proovy/global/infra/s3/S3HealthController.java
  • src/main/java/com/proovy/global/infra/s3/S3Service.java
  • src/main/java/com/proovy/global/infra/s3/S3ServiceImpl.java

Comment thread .github/workflows/deploy.yml
Comment thread docker-compose.yaml
Comment on lines +4 to +11

+  # GCS local emulator (fake-gcs-server)
+  # Access: http://localhost:4443/storage/v1/
+  fake-gcs:
+    image: fsouza/fake-gcs-server:latest
+    container_name: proovy-fake-gcs
+    ports:
-      - "4566:4566"
-    environment:
-      - SERVICES=s3
-      - DEBUG=1
-      - TMPDIR=/var/lib/localstack/tmp
-    volumes:
-      - "./localstack-data:/var/lib/localstack"
-      - "/var/run/docker.sock:/var/run/docker.sock"
+      - "4443:4443"
+    command: -scheme http -port 4443 -public-host localhost:4443


⚠️ Potential issue | 🟠 Major

The app cannot reach fake-gcs-server in local development.

fake-gcs-server runs on localhost:4443, but nothing in GcsConfig.java sets the GCS client's endpoint URL. In addition, getFileUrl() in GcsServiceImpl.java hardcodes https://storage.googleapis.com.

As things stand, the Spring application will attempt to connect to real GCS even locally.

🛠️ Suggested fix for local-environment support
  1. Add an endpoint setting to application.yaml:
gcs:
  project-id: ${GCS_PROJECT_ID}
  bucket: ${GCS_BUCKET:proovy-assets-dev}
  credentials-json: ${GCS_CREDENTIALS_JSON:}
  endpoint: ${GCS_ENDPOINT:}  # local: http://localhost:4443
  2. Honor the endpoint in GcsConfig.java:
+    @Value("${gcs.endpoint:}")
+    private String endpoint;

     @Bean
     public Storage gcsStorage() throws IOException {
         StorageOptions.Builder builder = StorageOptions.newBuilder()
                 .setProjectId(projectId);

+        if (endpoint != null && !endpoint.isBlank()) {
+            builder.setHost(endpoint);
+        }

         if (credentialsJson != null && !credentialsJson.isBlank()) {
             // ...
         }
         return builder.build().getService();
     }
  3. getFileUrl() in GcsServiceImpl.java also needs to build URLs from the configured endpoint
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker-compose.yaml` around lines 4 - 11, Add a configurable GCS endpoint and
use it when building the client and composing public file URLs: add a
gcs.endpoint property to application.yaml (e.g., gcs.endpoint=${GCS_ENDPOINT:})
and in GcsConfig.java read that property and, when non-empty, configure the
Storage/StorageOptions client to use that endpoint (override
host/transport/options so local fake-gcs-server at http://localhost:4443 is
used); then update GcsServiceImpl.getFileUrl() to derive the base URL from the
same configured endpoint (fall back to https://storage.googleapis.com if
endpoint is blank) so URLs point to the local emulator in dev and to real GCS in
prod.

Comment thread Dockerfile
Comment thread Dockerfile Outdated
Comment thread src/main/java/com/proovy/global/infra/gcs/GcsHealthController.java
Comment on lines +61 to +68
        try {
            List<Boolean> results = storage.delete(blobIds);
            long failCount = results.stream().filter(r -> !r).count();

            if (failCount > 0) {
                log.error("[GCS] {} 개 파일 삭제 실패", failCount);
                throw new BusinessException(ErrorCode.COMMON500);
            }


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In the Google Cloud Storage Java client, what does Storage.delete(Iterable) return when a blob does not exist? Does it return false without throwing an exception?

💡 Result:

Yes, Storage.delete(Iterable) returns a List where false is returned for each BlobId that does not exist (or fails to delete), without throwing an exception. This matches the behavior of the single Storage.delete(BlobId) method, which also returns false when the blob does not exist.



Batch-delete failure handling does not match GCS behavior

Storage.delete(blobIds) returns false for blobs that do not exist rather than throwing an exception. The current code treats every false result as a server error, which makes cleanup deletes non-idempotent and turns paths that used to succeed on S3 into 500 errors.

Blobs that do not exist should be logged at warn level and treated as success.

Suggested diff
         try {
             List<Boolean> results = storage.delete(blobIds);
-            long failCount = results.stream().filter(r -> !r).count();
+            long missingCount = results.stream().filter(Boolean.FALSE::equals).count();
 
-            if (failCount > 0) {
-                log.error("[GCS] {} 개 파일 삭제 실패", failCount);
-                throw new BusinessException(ErrorCode.COMMON500);
+            if (missingCount > 0) {
+                log.warn("[GCS] {} 개 파일은 이미 없어 삭제를 건너뛰었습니다.", missingCount);
             }
 
             log.info("[GCS] 파일 일괄 삭제 성공: {} 개", results.size());
         } catch (StorageException e) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java` around lines
61 - 68, The batch-delete in GcsServiceImpl treats any false from
Storage.delete(blobIds) as a server error; change this to distinguish "not
found" vs real failures: call storage.delete(blobIds) as before, then for each
index where results.get(i) == false, call storage.get(blobIds.get(i)) — if
get(...) returns null log.warn("[GCS] blob not found: {}") and treat as success,
otherwise treat as a real failure (log.error and count it). Only throw
BusinessException(ErrorCode.COMMON500) when there are real failures (existing
blobs that failed to delete); remove throwing on mere-not-found results. Use the
existing variables/methods: storage.delete, results, blobIds, storage.get,
log.warn/log.error, BusinessException and ErrorCode.COMMON500.

Comment thread src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java
Comment thread src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java
@gaeunee2 gaeunee2 marked this pull request as ready for review April 16, 2026 14:02

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
Dockerfile (1)

1-5: ⚠️ Potential issue | 🟠 Major

Run the container as a non-root user.

The image still runs as root. Even on Cloud Run this needlessly widens the privilege scope; it is safer to add USER and hand ownership of the copied files to a non-root account.

Suggested fix
 FROM eclipse-temurin:21-jre-alpine
+RUN addgroup -S app && adduser -S -G app app
 WORKDIR /app
-COPY build/libs/app.jar app.jar
+COPY --chown=app:app build/libs/app.jar app.jar
 EXPOSE 8080
+USER app
 ENTRYPOINT ["java", "-jar", "app.jar"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Dockerfile` around lines 1 - 5, The Dockerfile currently runs as root; create
and switch to a non-root user and transfer ownership of the app artifacts and
WORKDIR to that user: add steps after FROM/WORKDIR to create a user (e.g.,
appuser) and group, chown the WORKDIR and the copied artifact (the file
referenced by COPY build/libs/app.jar app.jar) to that user, and add a USER
appuser instruction before ENTRYPOINT so the java -jar app.jar process runs
unprivileged.
🧹 Nitpick comments (3)
src/main/java/com/proovy/global/infra/thumbnail/ThumbnailService.java (2)

257-266: Comments and method name are out of step with the GCS migration.

The generateThumbnailS3Key method name and surrounding comments still reference "S3". For codebase-wide consistency, renaming to "GCS" or a generic "Storage" term is preferable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/java/com/proovy/global/infra/thumbnail/ThumbnailService.java` around
lines 257 - 266, The method generateThumbnailS3Key and its Javadoc/comments
still reference "S3" but the project migrated to GCS/storage; rename the method
to generateThumbnailStorageKey (or generateThumbnailGcsKey) and update all
Javadoc/comments and any error messages inside (e.g., the
IllegalArgumentException text) to use "GCS" or a neutral "storage" term, and
update every callsite referencing generateThumbnailS3Key to the new name to keep
code and docs consistent.

37-42: Unused field and inconsistent naming.

  1. The bucketName field (line 42) is injected but never used; it became unnecessary once the HTTP download logic was removed.
  2. The s3Service variable name does not match its actual type, GcsService, and can confuse readers.
Suggested diff
 @RequiredArgsConstructor
 public class ThumbnailService {

-    private final GcsService s3Service;
+    private final GcsService gcsService;
     private final AssetRepository assetRepository;
     private final ApplicationContext applicationContext;

-    @Value("${gcs.bucket}")
-    private String bucketName;
-
     private static final int THUMBNAIL_WIDTH = 400;

References to s3Service must then be updated to gcsService.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/java/com/proovy/global/infra/thumbnail/ThumbnailService.java` around
lines 37 - 42, Remove the unused bucketName field from ThumbnailService and
rename the incorrectly named s3Service field to gcsService to match its type
GcsService; update the field declaration, the constructor parameter, any
this.s3Service references, and any usages in methods to this.gcsService
(identify occurrences by the symbols s3Service and GcsService in class
ThumbnailService) while leaving assetRepository and applicationContext
untouched.
src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java (1)

120-122: The blob.exists() call issues a duplicate API request.

storage.get(BlobId) already performs an API call against GCS and returns null when the blob does not exist. Calling blob.exists() afterwards adds another RPC, so a simple existence check costs two API calls.

Suggested diff
         try {
             Blob blob = storage.get(BlobId.of(bucketName, gcsKey));
-            return blob != null && blob.exists();
+            return blob != null;
         } catch (StorageException e) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java` around lines
120 - 122, The code in GcsServiceImpl currently calls
storage.get(BlobId.of(bucketName, gcsKey)) and then blob.exists(), causing two
RPCs; change the existence check to rely on the null result from storage.get
instead of calling blob.exists() (i.e., replace the blob.exists() usage with a
null check on the Blob returned by storage.get in the method that performs the
existence check) so only one API call is made.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/deploy.yml:
- Around line 39-42: The workflow currently authenticates to GCP by passing a
long-lived service account key via the google-github-actions/auth@v2 step using
the credentials_json input; replace this with Workload Identity Federation
(OIDC) by switching the auth step to use the workload_identity_provider and
service_account inputs (and remove credentials_json), and add top-level GitHub
Actions permissions including id-token: write (and contents: read) so the runner
can mint OIDC tokens; update the "Authenticate to GCP" step configuration (still
the google-github-actions/auth@v2 step) and ensure the provider and target
service account names match your GCP Identity Provider setup.

---

Duplicate comments:
In `@Dockerfile`:
- Around line 1-5: The Dockerfile currently runs as root; create and switch to a
non-root user and transfer ownership of the app artifacts and WORKDIR to that
user: add steps after FROM/WORKDIR to create a user (e.g., appuser) and group,
chown the WORKDIR and the copied artifact (the file referenced by COPY
build/libs/app.jar app.jar) to that user, and add a USER appuser instruction
before ENTRYPOINT so the java -jar app.jar process runs unprivileged.

---

Nitpick comments:
In `@src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java`:
- Around line 120-122: The code in GcsServiceImpl currently calls
storage.get(BlobId.of(bucketName, gcsKey)) and then blob.exists(), causing two
RPCs; change the existence check to rely on the null result from storage.get
instead of calling blob.exists() (i.e., replace the blob.exists() usage with a
null check on the Blob returned by storage.get in the method that performs the
existence check) so only one API call is made.

In `@src/main/java/com/proovy/global/infra/thumbnail/ThumbnailService.java`:
- Around line 257-266: The method generateThumbnailS3Key and its
Javadoc/comments still reference "S3" but the project migrated to GCS/storage;
rename the method to generateThumbnailStorageKey (or generateThumbnailGcsKey)
and update all Javadoc/comments and any error messages inside (e.g., the
IllegalArgumentException text) to use "GCS" or a neutral "storage" term, and
update every callsite referencing generateThumbnailS3Key to the new name to keep
code and docs consistent.
- Around line 37-42: Remove the unused bucketName field from ThumbnailService
and rename the incorrectly named s3Service field to gcsService to match its type
GcsService; update the field declaration, the constructor parameter, any
this.s3Service references, and any usages in methods to this.gcsService
(identify occurrences by the symbols s3Service and GcsService in class
ThumbnailService) while leaving assetRepository and applicationContext
untouched.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ec40cc39-dfd5-4d6d-a2b9-d50c0780ef66

📥 Commits

Reviewing files that changed from the base of the PR and between 82b8616 and 5a2761a.

📒 Files selected for processing (9)
  • .github/workflows/deploy.yml
  • Dockerfile
  • build.gradle
  • src/main/java/com/proovy/global/config/GcsConfig.java
  • src/main/java/com/proovy/global/infra/gcs/GcsHealthController.java
  • src/main/java/com/proovy/global/infra/gcs/GcsService.java
  • src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java
  • src/main/java/com/proovy/global/infra/thumbnail/ThumbnailService.java
  • src/main/resources/application.yaml
✅ Files skipped from review due to trivial changes (1)
  • src/main/java/com/proovy/global/infra/gcs/GcsService.java
🚧 Files skipped from review as they are similar to previous changes (3)
  • src/main/resources/application.yaml
  • build.gradle
  • src/main/java/com/proovy/global/infra/gcs/GcsHealthController.java

Comment thread .github/workflows/deploy.yml Outdated
@gaeunee2 gaeunee2 requested a review from haein45 May 8, 2026 10:59
Collaborator

@haein45 haein45 left a comment


Reviewed the parts of the code changed over to GCP.

@gaeunee2 gaeunee2 merged commit 0015784 into dev May 8, 2026
2 checks passed

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Feat] AWS → GCP code migration

2 participants