
Conversation


@xgopilot xgopilot bot commented Nov 11, 2025

Requested by @zhangzqs

Summary

This PR fixes a data race in the storagev2 package's cacheProvider component, specifically in the FeedbackGood() method of cacheResolver.

Problem

The Go race detector reported concurrent read and write operations on shared fields of the resolverCacheValue struct: multiple goroutines were reading and modifying the same cached value without any synchronization.
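
For context, a minimal sketch of the cached value's shape, inferred from the field names mentioned in this PR; the actual storagev2 definitions may differ:

```go
// Hypothetical shape of the cached value. Only IPs[i].ExpiredAt,
// RefreshAfter, and ExpiredAt are named in this PR; the rest is
// assumed for illustration.
package cachesketch

import "time"

type resolverCacheIP struct {
	IP        string
	ExpiredAt time.Time
}

type resolverCacheValue struct {
	IPs          []resolverCacheIP
	RefreshAfter time.Time
	ExpiredAt    time.Time
}
```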

Root Cause

The issue was in the FeedbackGood() method, which modified the cached resolverCacheValue object in place:

  • rcv.IPs[i].ExpiredAt (line 266)
  • rcv.RefreshAfter (line 271)
  • rcv.ExpiredAt (line 272)

When multiple goroutines called this method concurrently, they read from and wrote to the same memory locations simultaneously.
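
A simplified sketch of the racy pattern, continuing the hypothetical types above (the TTL constants and the function signature are illustrative, not the actual storagev2 code):

```go
// Illustrative TTLs; the real durations live in storagev2.
const (
	recordTTL  = 120 * time.Second
	refreshTTL = 60 * time.Second
)

// Racy version: mutates the shared cached value in place, so any
// goroutine reading or writing the same value concurrently races
// with these three writes.
func feedbackGoodInPlace(rcv *resolverCacheValue, i int, now time.Time) {
	rcv.IPs[i].ExpiredAt = now.Add(recordTTL) // line 266
	rcv.RefreshAfter = now.Add(refreshTTL)    // line 271
	rcv.ExpiredAt = now.Add(recordTTL)        // line 272
}
```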

Solution

Instead of modifying the cached object in place, the fix creates a copy of the resolverCacheValue before making any modifications:

  1. Create a new resolverCacheValue struct
  2. Copy all IPs from the original cache value
  3. Modify the copy instead of the original
  4. Store the modified copy back to the cache

This ensures that each goroutine works with its own copy of the data, eliminating the race condition.
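
A sketch of those four steps, again using the hypothetical types above; the real method also coordinates with the surrounding cache machinery, which is elided here:

```go
// Copy-on-write version: build a private copy, mutate the copy,
// then publish it. Readers holding the old pointer never observe
// in-flight writes.
func feedbackGoodCopy(cache map[string]*resolverCacheValue, key string, i int, now time.Time) {
	old := cache[key]

	// 1. Create a new resolverCacheValue struct.
	fresh := &resolverCacheValue{}

	// 2. Copy all IPs from the original cache value.
	fresh.IPs = make([]resolverCacheIP, len(old.IPs))
	copy(fresh.IPs, old.IPs)

	// 3. Modify the copy instead of the original.
	fresh.IPs[i].ExpiredAt = now.Add(recordTTL)
	fresh.RefreshAfter = now.Add(refreshTTL)
	fresh.ExpiredAt = now.Add(recordTTL)

	// 4. Store the modified copy back to the cache. The map itself
	// still needs whatever synchronization the real cacheProvider
	// uses; the copy guarantees that a value's fields are never
	// written after it has been published.
	cache[key] = fresh
}
```

This is the classic copy-on-write pattern: access to the cache container still has to be synchronized, but a goroutine that has already retrieved a value can read it freely, because published values are never mutated.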

Test Plan

  • Code builds successfully
  • Syntax and logic verification completed
  • No regression in existing functionality
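
For completeness, a hedged sketch of how a fix like this can be exercised under the race detector (go test -race). It reuses the hypothetical names from the sketches above and assumes imports of sync, testing, and time; it is not the actual storagev2 test suite:

```go
// Writers publish modified copies while readers inspect snapshots
// outside the lock. With the in-place version the reader's field
// access would race with the writers; with the copy it does not.
func TestFeedbackGoodNoRace(t *testing.T) {
	cache := map[string]*resolverCacheValue{
		"example.com": {IPs: []resolverCacheIP{{IP: "127.0.0.1"}}},
	}
	var mu sync.Mutex // guards the map; values are copy-on-write
	var wg sync.WaitGroup
	for g := 0; g < 8; g++ {
		wg.Add(2)
		go func() { // writer
			defer wg.Done()
			mu.Lock()
			feedbackGoodCopy(cache, "example.com", 0, time.Now())
			mu.Unlock()
		}()
		go func() { // reader
			defer wg.Done()
			mu.Lock()
			rcv := cache["example.com"]
			mu.Unlock()
			_ = rcv.IPs[0].ExpiredAt // safe: published values are immutable
		}()
	}
	wg.Wait()
}
```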

Fixes #172

Generated with codeagent

Create a copy of the cache value instead of modifying it in place
to avoid race conditions when multiple goroutines access the same
resolverCacheValue concurrently. This fixes the data race detected
by the Go race detector in the storagev2 cacheProvider.

Fixes #172

Generated with [codeagent](https://github.com/qbox/codeagent)
Co-authored-by: zhangzqs <[email protected]>

qiniu-prow bot commented Nov 11, 2025

Hi @xgopilot[bot]. Thanks for your PR.

I'm waiting for a qiniu member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
