[Feat] AWS->GCP Code Migration #206
Conversation
No actionable comments were generated in the recent review. 🎉
📒 Files selected for processing (2)
✅ Files skipped from review due to trivial changes (1)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough
The AWS S3-based storage and deployment pipeline were replaced wholesale with GCS and Cloud Run: environment/configuration, build dependencies, Docker/Compose, CI/CD, the S3→GCS service/controller/config/injection points, and the thumbnail read path were all migrated to GCS.
Changes: GCS migration (single DAG)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 9
🧹 Nitpick comments (2)
src/main/java/com/proovy/domain/user/service/UserService.java (1)
58-58: The same naming inconsistency as in AssetsServiceImpl. As in the previous file, a field of type GcsService is named s3Service. Fixing it together as part of the batch refactor is recommended.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/com/proovy/domain/user/service/UserService.java` at line 58, Field naming is inconsistent: the UserService class declares a GcsService-typed field named s3Service; rename the field to gcsService and update all references (constructor parameters, assignments, usages, getters/setters) in UserService (and related methods) to use gcsService so the type and name match the pattern used in AssetsServiceImpl (replace symbol s3Service with gcsService where GcsService is the type).
src/main/java/com/proovy/domain/asset/service/AssetsServiceImpl.java (1)
43-43: The field name does not match its type.
The field is of type GcsService but is still named s3Service, which can cause confusion. Log messages (lines 309, 311, etc.) and variable names (s3Key, s3Keys) also still use S3 terminology. This does not break current functionality, but renaming everything to GCS terminology is recommended for future maintainability.
♻️ Suggested naming-consistency cleanup
- private final GcsService s3Service;
+ private final GcsService gcsService;
In addition, consider renaming method-local variables and log messages:
- s3Key → gcsKey
- s3Keys → gcsKeys
- Log message: "S3 원본 파일 삭제" → "GCS 원본 파일 삭제"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/com/proovy/domain/asset/service/AssetsServiceImpl.java` at line 43, Rename the misnamed field and related symbols in AssetsServiceImpl: change the field declaration from s3Service (type GcsService) to gcsService, and update all usages of that field in methods; also rename local variables s3Key and s3Keys to gcsKey and gcsKeys respectively, and update any log messages (e.g., "S3 원본 파일 삭제") to use "GCS 원본 파일 삭제"; ensure all references to GcsService, s3Service, s3Key, and s3Keys are consistently replaced so names match the GCS type.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/deploy.yml:
- Around line 58-67: The deploy step named "Deploy to Cloud Run" only sets
SPRING_PROFILES_ACTIVE and omits required runtime env vars; update the gcloud
run deploy invocation (the step using --set-env-vars and --set-secrets) to
inject all required variables from Secret Manager or env: include DB_HOST,
DB_PORT, DB_NAME, DB_USERNAME, DB_PASSWORD, REDIS_HOST, REDIS_PORT, JWT_SECRET,
KAKAO_CLIENT_ID, KAKAO_CLIENT_SECRET, NAVER_CLIENT_ID, NAVER_CLIENT_SECRET,
GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET, GCS_PROJECT_ID, GCS_BUCKET,
INTERNAL_API_TOKEN, and OPENROUTER_API_KEY by adding them to --set-env-vars
where non-sensitive or mapping secrets via --set-secrets for sensitive values
(e.g., DB_PASSWORD=your-db-secret:latest, JWT_SECRET=jwt-secret:latest,
KAKAO_CLIENT_SECRET=..., etc.) so the application can read the same keys defined
in application.yaml.
In `@docker-compose.yaml`:
- Around line 4-11: Add a configurable GCS endpoint and use it when building the
client and composing public file URLs: add a gcs.endpoint property to
application.yaml (e.g., gcs.endpoint=${GCS_ENDPOINT:}) and in GcsConfig.java
read that property and, when non-empty, configure the Storage/StorageOptions
client to use that endpoint (override host/transport/options so local
fake-gcs-server at http://localhost:4443 is used); then update
GcsServiceImpl.getFileUrl() to derive the base URL from the same configured
endpoint (fall back to https://storage.googleapis.com if endpoint is blank) so
URLs point to the local emulator in dev and to real GCS in prod.
In `@Dockerfile`:
- Around line 1-5: The Dockerfile currently runs as root; update it to create
and switch to a non-root user before running the app: add a non-root user/group
(e.g., appuser), change ownership of /app (the WORKDIR and files copied by COPY
build/libs/*.jar app.jar) to that user, and set USER to that non-root account so
ENTRYPOINT ["java","-jar","app.jar"] runs unprivileged; ensure any required
ports or permissions are adjusted for the new user.
- Line 3: The COPY pattern in the Dockerfile (COPY build/libs/*.jar app.jar) can
match both the executable jar and the -plain jar and produce wrong or failing
image builds; update the Dockerfile to copy only the boot (executable) jar by
referencing a stable filename—either set bootJar.archiveFileName in build.gradle
to a known name and use COPY build/libs/<that-name>.jar app.jar, or narrow the
glob to the executable artifact (for example COPY build/libs/*-SNAPSHOT.jar
app.jar) so the -plain jar is not picked up.
In `@src/main/java/com/proovy/global/infra/gcs/GcsHealthController.java`:
- Around line 51-53: The controller GcsHealthController currently returns error
HTTP statuses (503/500) but wraps the body with ApiResponse.success, causing
mismatch; update the three failure branches (the ResponseEntity calls around the
ApiResponse.success at lines shown) to return a proper failure wrapper (e.g.,
ApiResponse.failure / ApiResponse.error) or a dedicated HealthStatus DTO that
conveys failure (isSuccess=false and appropriate code/message), and ensure the
ResponseEntity body and HTTP status are consistent for the methods handling the
GCS connection test and health checks.
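For illustration, a minimal sketch of the direction this describes. The ApiResponse.error factory, the injected storage/bucketName fields, and the endpoint path are assumptions based on the PR description, not confirmed project API:

@GetMapping("/api/health/gcs")
public ResponseEntity<ApiResponse<String>> checkGcsHealth() {
    try {
        // Storage.get(bucket) returns null when the bucket is not visible to the client
        if (storage.get(bucketName) == null) {
            return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE)
                    .body(ApiResponse.error(ErrorCode.COMMON500, "GCS bucket not reachable"));
        }
        return ResponseEntity.ok(ApiResponse.success("GCS OK"));
    } catch (StorageException e) {
        // Status and body now agree: 503 with a failure payload instead of 503 with ApiResponse.success(...)
        return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE)
                .body(ApiResponse.error(ErrorCode.COMMON500, e.getMessage()));
    }
}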
In `@src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java`:
- Around line 153-161: In generatePresignedDownloadUrl, validate fileName the
same way gcsKey is validated: if fileName is null or isBlank(), throw new
BusinessException(ErrorCode.COMMON400) before calling URLEncoder.encode; this
prevents a NullPointerException and keeps input validation consistent for the
public API (refer to symbols: generatePresignedDownloadUrl, fileName, gcsKey,
BusinessException, ErrorCode.COMMON400).
- Around line 61-68: The batch-delete in GcsServiceImpl treats any false from
Storage.delete(blobIds) as a server error; change this to distinguish "not
found" vs real failures: call storage.delete(blobIds) as before, then for each
index where results.get(i) == false, call storage.get(blobIds.get(i)) — if
get(...) returns null log.warn("[GCS] blob not found: {}") and treat as success,
otherwise treat as a real failure (log.error and count it). Only throw
BusinessException(ErrorCode.COMMON500) when there are real failures (existing
blobs that failed to delete); remove throwing on mere-not-found results. Use the
existing variables/methods: storage.delete, results, blobIds, storage.get,
log.warn/log.error, BusinessException and ErrorCode.COMMON500.
- Around line 135-148: The code in GcsServiceImpl currently only catches
StorageException around the storage.signUrl(...) call, so IllegalStateException
thrown when signing credentials are missing is not handled; add an additional
catch block for IllegalStateException (or a multi-catch with StorageException)
around storage.signUrl(...) in both methods that generate signed URLs so it logs
the same error message (e.g. log.error("[GCS] Upload Signed URL 생성 실패: {},
message={}", gcsKey, e.getMessage(), e)) and rethrows new
BusinessException(ErrorCode.COMMON500) to ensure credential-related failures are
normalized to BusinessException; locate uses of storage.signUrl(...) in
GcsServiceImpl to update the try/catch accordingly.
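For illustration, two small sketches for the GcsServiceImpl items above: the fileName validation (lines 153-161) and the signing-failure handling (lines 135-148). Variable names, the expiry, and the option set are assumptions and may differ from the actual code:

// generatePresignedDownloadUrl: guard fileName the same way gcsKey is guarded
if (fileName == null || fileName.isBlank()) {
    // consistent validation; also avoids the NPE from URLEncoder.encode(null, ...)
    throw new BusinessException(ErrorCode.COMMON400);
}
String encodedFileName = URLEncoder.encode(fileName, StandardCharsets.UTF_8);

// Signed URL generation: also normalize missing-credential failures to BusinessException
try {
    URL signedUrl = storage.signUrl(blobInfo, 10, TimeUnit.MINUTES,
            Storage.SignUrlOption.withV4Signature());
    return signedUrl.toString();
} catch (StorageException | IllegalStateException e) {
    // IllegalStateException is raised when no signing credentials (service account key) are available
    log.error("[GCS] Signed URL 생성 실패: {}, message={}", gcsKey, e.getMessage(), e);
    throw new BusinessException(ErrorCode.COMMON500);
}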
In `@src/main/java/com/proovy/global/infra/thumbnail/ThumbnailService.java`:
- Around line 42-46: ThumbnailService fails for private GCS buckets because it
calls GcsService.getFileUrl() and then downloadFile(), which only works for
public URLs; update ThumbnailService to use
GcsService.generatePresignedDownloadUrl() (or call a new authenticated blob-read
method on GcsService) instead of getFileUrl()/downloadFile() so server-side
thumbnail fetching uses an authenticated URL or direct blob read; modify the
logic around the existing downloadFile() calls in ThumbnailService (references:
GcsService, ThumbnailService, getFileUrl(), downloadFile(),
generatePresignedDownloadUrl()) to obtain and use a presigned URL or
authenticated stream before attempting HTTP download.
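For illustration, a sketch of the authenticated read path, assuming the existing downloadFile(String) helper accepts an arbitrary URL and returns the image bytes, and that the injected GcsService field keeps its current name:

// Use a short-lived signed URL instead of the public object URL, so private buckets work
String signedUrl = s3Service.generatePresignedDownloadUrl(gcsKey, fileName);
byte[] originalImage = downloadFile(signedUrl);

// Alternative without the HTTP round trip: add an authenticated read method to GcsService,
// e.g. wrapping storage.readAllBytes(BlobId.of(bucketName, gcsKey)), and call it here directly.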
---
Nitpick comments:
In `@src/main/java/com/proovy/domain/asset/service/AssetsServiceImpl.java`:
- Line 43: Rename the misnamed field and related symbols in AssetsServiceImpl:
change the field declaration from s3Service (type GcsService) to gcsService, and
update all usages of that field in methods; also rename local variables s3Key
and s3Keys to gcsKey and gcsKeys respectively, and update any log messages
(e.g., "S3 원본 파일 삭제") to use "GCS 원본 파일 삭제"; ensure all references to
GcsService, s3Service, s3Key, and s3Keys are consistently replaced so names
match the GCS type.
In `@src/main/java/com/proovy/domain/user/service/UserService.java`:
- Line 58: Field naming is inconsistent: the UserService class declares a
GcsService-typed field named s3Service; rename the field to gcsService and
update all references (constructor parameters, assignments, usages,
getters/setters) in UserService (and related methods) to use gcsService so the
type and name match the pattern used in AssetsServiceImpl (replace symbol
s3Service with gcsService where GcsService is the type).
📒 Files selected for processing (22)
- .env.example
- .github/workflows/deploy.yml
- Dockerfile
- build.gradle
- docker-compose.yaml
- src/main/java/com/proovy/domain/asset/service/AssetsServiceImpl.java
- src/main/java/com/proovy/domain/conversation/service/ChatServiceImpl.java
- src/main/java/com/proovy/domain/conversation/service/ConversationQueryServiceImpl.java
- src/main/java/com/proovy/domain/note/service/NoteServiceImpl.java
- src/main/java/com/proovy/domain/storage/service/StorageService.java
- src/main/java/com/proovy/domain/user/service/UserService.java
- src/main/java/com/proovy/global/config/GcsConfig.java
- src/main/java/com/proovy/global/config/S3Config.java
- src/main/java/com/proovy/global/infra/gcs/GcsHealthController.java
- src/main/java/com/proovy/global/infra/gcs/GcsService.java
- src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java
- src/main/java/com/proovy/global/infra/s3/S3HealthController.java
- src/main/java/com/proovy/global/infra/s3/S3Service.java
- src/main/java/com/proovy/global/infra/s3/S3ServiceImpl.java
- src/main/java/com/proovy/global/infra/thumbnail/ThumbnailService.java
- src/main/resources/application.yaml
- src/test/java/com/proovy/domain/storage/service/StorageServiceTest.java
💤 Files with no reviewable changes (4)
- src/main/java/com/proovy/global/config/S3Config.java
- src/main/java/com/proovy/global/infra/s3/S3HealthController.java
- src/main/java/com/proovy/global/infra/s3/S3Service.java
- src/main/java/com/proovy/global/infra/s3/S3ServiceImpl.java
+  # GCS 로컬 에뮬레이터 (fake-gcs-server)
+  # 접속: http://localhost:4443/storage/v1/
+  fake-gcs:
+    image: fsouza/fake-gcs-server:latest
+    container_name: proovy-fake-gcs
+    ports:
-      - "4566:4566"
-    environment:
-      - SERVICES=s3
-      - DEBUG=1
-      - TMPDIR=/var/lib/localstack/tmp
-    volumes:
-      - "./localstack-data:/var/lib/localstack"
-      - "/var/run/docker.sock:/var/run/docker.sock"
+      - "4443:4443"
+    command: -scheme http -port 4443 -public-host localhost:4443
The local development environment cannot reach fake-gcs-server.
fake-gcs-server runs on localhost:4443, but GcsConfig.java contains no code that sets the GCS client's endpoint URL. In addition, getFileUrl() in GcsServiceImpl.java hardcodes https://storage.googleapis.com.
As things stand, the Spring application will try to connect to real GCS when run locally.
🛠️ Suggested changes to support the local environment
Add an endpoint setting to application.yaml:
gcs:
  project-id: ${GCS_PROJECT_ID}
  bucket: ${GCS_BUCKET:proovy-assets-dev}
  credentials-json: ${GCS_CREDENTIALS_JSON:}
  endpoint: ${GCS_ENDPOINT:}   # local: http://localhost:4443

Set the endpoint in GcsConfig.java:

+@Value("${gcs.endpoint:}")
+private String endpoint;

 @Bean
 public Storage gcsStorage() throws IOException {
     StorageOptions.Builder builder = StorageOptions.newBuilder()
             .setProjectId(projectId);
+    if (endpoint != null && !endpoint.isBlank()) {
+        builder.setHost(endpoint);
+    }
     if (credentialsJson != null && !credentialsJson.isBlank()) {
         // ...
     }
     return builder.build().getService();
 }

getFileUrl() in GcsServiceImpl.java also needs to derive its base URL from the configured endpoint.
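As a rough sketch of that getFileUrl() change, assuming the same gcs.endpoint property is also injected into GcsServiceImpl (the emulator's public path layout should still be verified against fake-gcs-server):

@Value("${gcs.endpoint:}")
private String endpoint;   // assumed to mirror the new GcsConfig property

public String getFileUrl(String gcsKey) {
    // Fall back to the real GCS public host when no endpoint override is configured
    String base = (endpoint == null || endpoint.isBlank())
            ? "https://storage.googleapis.com"
            : endpoint;                        // e.g. http://localhost:4443 for fake-gcs-server
    return base + "/" + bucketName + "/" + gcsKey;
}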
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docker-compose.yaml` around lines 4 - 11, Add a configurable GCS endpoint and
use it when building the client and composing public file URLs: add a
gcs.endpoint property to application.yaml (e.g., gcs.endpoint=${GCS_ENDPOINT:})
and in GcsConfig.java read that property and, when non-empty, configure the
Storage/StorageOptions client to use that endpoint (override
host/transport/options so local fake-gcs-server at http://localhost:4443 is
used); then update GcsServiceImpl.getFileUrl() to derive the base URL from the
same configured endpoint (fall back to https://storage.googleapis.com if
endpoint is blank) so URLs point to the local emulator in dev and to real GCS in
prod.
try {
    List<Boolean> results = storage.delete(blobIds);
    long failCount = results.stream().filter(r -> !r).count();

    if (failCount > 0) {
        log.error("[GCS] {} 개 파일 삭제 실패", failCount);
        throw new BusinessException(ErrorCode.COMMON500);
    }
🧩 Analysis chain
🌐 Web query:
In the Google Cloud Storage Java client, what does Storage.delete(Iterable) return when a blob does not exist? Does it return false without throwing an exception?
💡 Result:
Yes, Storage.delete(Iterable) returns a List where false is returned for each BlobId that does not exist (or fails to delete), without throwing an exception. This matches the behavior of the single Storage.delete(BlobId) method, which also returns false when the blob does not exist.
Citations:
- 1: https://www.programcreek.com/java-api-examples/?class=com.google.cloud.storage.Storage&method=delete
- 2: https://cloud.google.com/java/docs/reference/google-cloud-storage/2.3.0/com.google.cloud.storage.Storage
- 3: https://cloud.google.com/java/docs/reference/google-cloud-storage/2.13.1/com.google.cloud.storage.Storage
- 4: Storage.delete returns false on NotFound, BatchRequest.delete shows failure googleapis/google-cloud-java#372
The batch-delete failure handling does not match GCS behavior.
Storage.delete(blobIds) returns false for blobs that do not exist rather than throwing an exception. The current code treats every false result as a server error, which makes cleanup-style deletions non-idempotent and can turn paths that used to succeed on S3 into 500 errors.
Blobs that no longer exist should be logged at warn level and treated as success.
Suggested diff
try {
List<Boolean> results = storage.delete(blobIds);
- long failCount = results.stream().filter(r -> !r).count();
+ long missingCount = results.stream().filter(Boolean.FALSE::equals).count();
- if (failCount > 0) {
- log.error("[GCS] {} 개 파일 삭제 실패", failCount);
- throw new BusinessException(ErrorCode.COMMON500);
+ if (missingCount > 0) {
+ log.warn("[GCS] {} 개 파일은 이미 없어 삭제를 건너뛰었습니다.", missingCount);
}
log.info("[GCS] 파일 일괄 삭제 성공: {} 개", results.size());
} catch (StorageException e) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java` around lines
61 - 68, The batch-delete in GcsServiceImpl treats any false from
Storage.delete(blobIds) as a server error; change this to distinguish "not
found" vs real failures: call storage.delete(blobIds) as before, then for each
index where results.get(i) == false, call storage.get(blobIds.get(i)) — if
get(...) returns null log.warn("[GCS] blob not found: {}") and treat as success,
otherwise treat as a real failure (log.error and count it). Only throw
BusinessException(ErrorCode.COMMON500) when there are real failures (existing
blobs that failed to delete); remove throwing on mere-not-found results. Use the
existing variables/methods: storage.delete, results, blobIds, storage.get,
log.warn/log.error, BusinessException and ErrorCode.COMMON500.
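For reference, a minimal sketch of the disambiguation described in this prompt; it keeps the existing names but costs one extra storage.get() call per false result (the simpler warn-and-continue diff above is an equally valid resolution):

List<Boolean> results = storage.delete(blobIds);
long realFailures = 0;
for (int i = 0; i < results.size(); i++) {
    if (!results.get(i)) {
        // delete() returns false for both "not found" and real failures; disambiguate with get()
        if (storage.get(blobIds.get(i)) == null) {
            log.warn("[GCS] blob not found, skipping delete: {}", blobIds.get(i));
        } else {
            log.error("[GCS] failed to delete existing blob: {}", blobIds.get(i));
            realFailures++;
        }
    }
}
if (realFailures > 0) {
    throw new BusinessException(ErrorCode.COMMON500);
}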
Actionable comments posted: 1
♻️ Duplicate comments (1)
Dockerfile (1)
1-5: ⚠️ Potential issue | 🟠 Major — Run the container as a non-root user.
The image still runs as root. Even on Cloud Run this needlessly widens the privilege scope, so it is safer to add a USER instruction and hand ownership of the copied files to a non-root account.
Suggested fix
 FROM eclipse-temurin:21-jre-alpine
+RUN addgroup -S app && adduser -S -G app app
 WORKDIR /app
-COPY build/libs/app.jar app.jar
+COPY --chown=app:app build/libs/app.jar app.jar
 EXPOSE 8080
+USER app
 ENTRYPOINT ["java", "-jar", "app.jar"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile` around lines 1 - 5, The Dockerfile currently runs as root; create and switch to a non-root user and transfer ownership of the app artifacts and WORKDIR to that user: add steps after FROM/WORKDIR to create a user (e.g., appuser) and group, chown the WORKDIR and the copied artifact (the file referenced by COPY build/libs/app.jar app.jar) to that user, and add a USER appuser instruction before ENTRYPOINT so the java -jar app.jar process runs unprivileged.
🧹 Nitpick comments (3)
src/main/java/com/proovy/global/infra/thumbnail/ThumbnailService.java (2)
257-266: Comments and the method name are out of step with the GCS migration.
The generateThumbnailS3Key method name and the surrounding comments still refer to "S3". For consistency across the codebase, renaming to "GCS" or the generic "Storage" is recommended.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/com/proovy/global/infra/thumbnail/ThumbnailService.java` around lines 257 - 266, The method generateThumbnailS3Key and its Javadoc/comments still reference "S3" but the project migrated to GCS/storage; rename the method to generateThumbnailStorageKey (or generateThumbnailGcsKey) and update all Javadoc/comments and any error messages inside (e.g., the IllegalArgumentException text) to use "GCS" or a neutral "storage" term, and update every callsite referencing generateThumbnailS3Key to the new name to keep code and docs consistent.
37-42: Unused field and inconsistent naming.
The bucketName field (line 42) is injected but never used; it became unnecessary once the HTTP download logic was removed. The s3Service variable name also does not match its actual type, GcsService, which can be confusing.
Suggested diff
 @RequiredArgsConstructor
 public class ThumbnailService {
-    private final GcsService s3Service;
+    private final GcsService gcsService;
     private final AssetRepository assetRepository;
     private final ApplicationContext applicationContext;
-    @Value("${gcs.bucket}")
-    private String bucketName;
-
     private static final int THUMBNAIL_WIDTH = 400;
Afterwards, s3Service references must be updated to gcsService.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/com/proovy/global/infra/thumbnail/ThumbnailService.java` around lines 37 - 42, Remove the unused bucketName field from ThumbnailService and rename the incorrectly named s3Service field to gcsService to match its type GcsService; update the field declaration, the constructor parameter, any this.s3Service references, and any usages in methods to this.gcsService (identify occurrences by the symbols s3Service and GcsService in class ThumbnailService) while leaving assetRepository and applicationContext untouched.src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java (1)
120-122: The blob.exists() call causes a duplicate API request.
storage.get(BlobId) already performs an API call to GCS and returns null when the blob does not exist. Calling blob.exists() afterwards issues an additional RPC, so a simple existence check ends up making two API calls.
Suggested diff
 try {
     Blob blob = storage.get(BlobId.of(bucketName, gcsKey));
-    return blob != null && blob.exists();
+    return blob != null;
 } catch (StorageException e) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java` around lines 120 - 122, The code in GcsServiceImpl currently calls storage.get(BlobId.of(bucketName, gcsKey)) and then blob.exists(), causing two RPCs; change the existence check to rely on the null result from storage.get instead of calling blob.exists() (i.e., replace the blob.exists() usage with a null check on the Blob returned by storage.get in the method that performs the existence check) so only one API call is made.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/deploy.yml:
- Around line 39-42: The workflow currently authenticates to GCP by passing a
long-lived service account key via the google-github-actions/auth@v2 step using
the credentials_json input; replace this with Workload Identity Federation
(OIDC) by switching the auth step to use the workload_identity_provider and
service_account inputs (and remove credentials_json), and add top-level GitHub
Actions permissions including id-token: write (and contents: read) so the runner
can mint OIDC tokens; update the "Authenticate to GCP" step configuration (still
the google-github-actions/auth@v2 step) and ensure the provider and target
service account names match your GCP Identity Provider setup.
---
Duplicate comments:
In `@Dockerfile`:
- Around line 1-5: The Dockerfile currently runs as root; create and switch to a
non-root user and transfer ownership of the app artifacts and WORKDIR to that
user: add steps after FROM/WORKDIR to create a user (e.g., appuser) and group,
chown the WORKDIR and the copied artifact (the file referenced by COPY
build/libs/app.jar app.jar) to that user, and add a USER appuser instruction
before ENTRYPOINT so the java -jar app.jar process runs unprivileged.
---
Nitpick comments:
In `@src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java`:
- Around line 120-122: The code in GcsServiceImpl currently calls
storage.get(BlobId.of(bucketName, gcsKey)) and then blob.exists(), causing two
RPCs; change the existence check to rely on the null result from storage.get
instead of calling blob.exists() (i.e., replace the blob.exists() usage with a
null check on the Blob returned by storage.get in the method that performs the
existence check) so only one API call is made.
In `@src/main/java/com/proovy/global/infra/thumbnail/ThumbnailService.java`:
- Around line 257-266: The method generateThumbnailS3Key and its
Javadoc/comments still reference "S3" but the project migrated to GCS/storage;
rename the method to generateThumbnailStorageKey (or generateThumbnailGcsKey)
and update all Javadoc/comments and any error messages inside (e.g., the
IllegalArgumentException text) to use "GCS" or a neutral "storage" term, and
update every callsite referencing generateThumbnailS3Key to the new name to keep
code and docs consistent.
- Around line 37-42: Remove the unused bucketName field from ThumbnailService
and rename the incorrectly named s3Service field to gcsService to match its type
GcsService; update the field declaration, the constructor parameter, any
this.s3Service references, and any usages in methods to this.gcsService
(identify occurrences by the symbols s3Service and GcsService in class
ThumbnailService) while leaving assetRepository and applicationContext
untouched.
📒 Files selected for processing (9)
- .github/workflows/deploy.yml
- Dockerfile
- build.gradle
- src/main/java/com/proovy/global/config/GcsConfig.java
- src/main/java/com/proovy/global/infra/gcs/GcsHealthController.java
- src/main/java/com/proovy/global/infra/gcs/GcsService.java
- src/main/java/com/proovy/global/infra/gcs/GcsServiceImpl.java
- src/main/java/com/proovy/global/infra/thumbnail/ThumbnailService.java
- src/main/resources/application.yaml
✅ Files skipped from review due to trivial changes (1)
- src/main/java/com/proovy/global/infra/gcs/GcsService.java
🚧 Files skipped from review as they are similar to previous changes (3)
- src/main/resources/application.yaml
- build.gradle
- src/main/java/com/proovy/global/infra/gcs/GcsHealthController.java
📌 Related issue
🏷️ PR type
📝 Work done
Dependency replacement
- Removed software.amazon.awssdk:s3 → added google-cloud-storage (libraries-bom 26.34.0)
Storage layer replacement (global/infra/s3 → global/infra/gcs)
- Deleted S3Config → created GcsConfig; uses a service account key when GCS_CREDENTIALS_JSON is set
- Deleted S3Service/S3ServiceImpl → created GcsService/GcsServiceImpl (s3.amazonaws.com → storage.googleapis.com)
- Deleted S3HealthController → created GcsHealthController (GET /api/health/gcs)
Configuration replacement
- application.yaml: cloud.aws.* block → gcs.project-id / bucket / credentials-json
- .env.example: AWS_S3_ACCESS_KEY, AWS_S3_SECRET_KEY, AWS_REGION → GCS_PROJECT_ID, GCS_BUCKET, GCS_CREDENTIALS_JSON
CI/CD switch
- deploy.yml: EC2 + SSM RunCommand approach → Artifact Registry + gcloud run deploy
- New Dockerfile (based on eclipse-temurin:21-jre-alpine)
Local development environment
- docker-compose.yaml: localstack (S3 emulator) → fake-gcs-server (GCS emulator)
Reference updates
- Replaced imports and types in the 8 files that inject S3Service: AssetsServiceImpl, ThumbnailService, UserService, StorageService, NoteServiceImpl, ConversationQueryServiceImpl, ChatServiceImpl, StorageServiceTest
📸 Screenshots
Build and runtime verification will proceed once the GCP project setup is complete.
✅ Checklist
📎 Other notes
- ServiceAccountCredentials is required; in the Cloud Run deployment environment, injecting GCS_CREDENTIALS_JSON via Secret Manager is recommended (see the sketch after this list)
- Local GCS emulator: docker compose up fake-gcs (http://localhost:4443)
- minimum-instances=1 will be configured
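Both the work summary and these notes describe GcsConfig switching between an explicit service-account key (GCS_CREDENTIALS_JSON) and the runtime's default credentials. A minimal sketch of that conditional, assuming field names such as credentialsJson/projectId and an Application Default Credentials fallback (not necessarily the PR's exact code):

@Bean
public Storage gcsStorage() throws IOException {
    StorageOptions.Builder builder = StorageOptions.newBuilder().setProjectId(projectId);
    if (credentialsJson != null && !credentialsJson.isBlank()) {
        // Explicit key: locally this is what enables Signed URL generation (ServiceAccountCredentials)
        builder.setCredentials(ServiceAccountCredentials.fromStream(
                new ByteArrayInputStream(credentialsJson.getBytes(StandardCharsets.UTF_8))));
    } else {
        // No key configured: fall back to ADC, e.g. the Cloud Run runtime service account
        builder.setCredentials(GoogleCredentials.getApplicationDefault());
    }
    return builder.build().getService();
}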
Summary by CodeRabbit
New features
Chores
Tests