
UCT/CUDA_IPC: Use buffer id to detect VA recycling #10405

Merged

Conversation


@Akshay-Venkatesh Akshay-Venkatesh commented Jan 6, 2025

Why/What?

  1. Export handles are used for the VA-recycling check today, but they are not guaranteed to be unique for newer memory allocators such as VMM and CUDA memory pools. We need to switch to the buffer_id attribute for this check.
  2. We call cuMemPoolDestroy when invalidating import entries, which can hurt mempool-caching performance because we would be forced to re-import mempools. This PR frees the suballocation instead of destroying the imported mempool altogether.
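For context, a minimal C sketch (illustrative names, not UCX code) of why the check must key on buffer_id rather than the exported handle: after a free/alloc cycle the same VA can come back with a handle that compares equal, while the CUDA buffer id (obtained via `CU_POINTER_ATTRIBUTE_BUFFER_ID`) is unique per allocation instance:

```c
#include <assert.h>

/* Hypothetical sketch of the VA-recycling check this PR moves to:
 * instead of comparing exported handles (not unique for VMM/mempool
 * allocations), compare the CUDA buffer id, which is unique per
 * allocation instance (CU_POINTER_ATTRIBUTE_BUFFER_ID). The names
 * cache_key_t and is_same_allocation are illustrative, not UCX API. */
typedef struct {
    void               *address;    /* exported VA */
    unsigned long long  buffer_id;  /* unique per allocation instance */
} cache_key_t;

/* The same VA can be reused after free+alloc; only the buffer id
 * distinguishes the old allocation from a new one at the same address. */
static int is_same_allocation(const cache_key_t *a, const cache_key_t *b)
{
    return (a->address == b->address) && (a->buffer_id == b->buffer_id);
}
```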

@Akshay-Venkatesh (Contributor, Author)

@yosefe, @tvegas1: We got internal reports of VA recycling not being detected. This PR aims to address the issue for newer allocators. If the changes look good, is there an option to backport this to 1.18?

arg1 = (const void *)&key->ph;
arg2 = (const void *)&region->key.ph;
#endif
if (memcmp(arg1, arg2, cmp_size) == 0) {
Contributor:

Does it make sense to update the test to cover this behavior?

Comment on lines 496 to 504
#if HAVE_CUDA_FABRIC
cmp_size = sizeof(key->ph.buffer_id);
arg1 = (const void *)&key->ph.buffer_id;
arg2 = (const void *)&region->key.ph.buffer_id;
#else
cmp_size = sizeof(key->ph);
arg1 = (const void *)&key->ph;
arg2 = (const void *)&region->key.ph;
#endif
Contributor:

  1. need to fix indent
  2. any reason to not compare buffer_id also for legacy, regardless of HAVE_CUDA_FABRIC?

Contributor Author:

any reason to not compare buffer_id also for legacy, regardless of HAVE_CUDA_FABRIC?

I had this to begin with, but key->ph is of type CUipcMemHandle when HAVE_CUDA_FABRIC is not defined. So we would need to change the type if we want to also include buffer_id.


tvegas1 commented Jan 7, 2025

Export handles are used for VA recycling check today but this is not guaranteed to be unique for newer memory allocators like VMM, and CUDA memory pools. We need to switch to using buffer_id attribute to perform this check.

This PR solves the case where CUDA reused a pointer value (alloc/free/alloc) and we failed to detect that it's actually not the same allocation instance, right?

@@ -124,7 +124,7 @@ static ucs_status_t uct_cuda_ipc_close_memhandle(uct_cuda_ipc_cache_region_t *re
(CUdeviceptr)region->mapped_addr, region->key.b_len));
}
} else if (region->key.ph.handle_type == UCT_CUDA_IPC_KEY_HANDLE_TYPE_MEMPOOL) {
-        return UCT_CUDADRV_FUNC_LOG_WARN(cuMemPoolDestroy(region->key.ph.pool));
+        return UCT_CUDADRV_FUNC_LOG_WARN(cuMemFree((CUdeviceptr)region->mapped_addr));
Contributor:

i did not understand why this change? previously we were erroneously freeing mem pool?

Contributor Author:

@tvegas1 It's not erroneous but excessive. If a suballocation from the same mempool is sent over in the future, we would benefit from reusing the same imported mempool, but close_memhandle occurs when VA recycling is detected. This change addresses the problem by deferring all imported-mempool releases to the point of UCP context destruction. The drawback of this approach, as with previous IPC handling, is that if the exporter explicitly destroys the mempool, a reference remains on the importer side until context destruction. This needs to be handled separately for legacy memory and newer allocations like VMM and mallocAsync in a different PR.
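An illustrative sketch (plain C flags, not UCX code) of the lifecycle described above: invalidation now releases only the mapped suballocation, while the imported mempool stays cached until context teardown:

```c
#include <assert.h>

/* Illustrative model of the caching change described above. The fields
 * stand in for the real CUDA resources: mapping_active for the mapped
 * suballocation (released via cuMemFree), pool_imported for the
 * imported mempool (released via cuMemPoolDestroy). */
typedef struct {
    int pool_imported;   /* imported mempool still cached */
    int mapping_active;  /* suballocation currently mapped */
} import_entry_t;

/* On VA-recycling invalidation: free only the suballocation. Before
 * this PR the pool was also destroyed here, forcing a costly re-import
 * on the next suballocation from the same exporter mempool. */
static void invalidate_entry(import_entry_t *e)
{
    e->mapping_active = 0;
}

/* All imported-mempool releases are deferred to context destruction. */
static void context_destroy(import_entry_t *e)
{
    e->mapping_active = 0;
    e->pool_imported  = 0;
}
```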

goto found;
} else {
/* VA recycling case. Remove entry. A given pointer should only
* belong to one region so need to look through rest of the
Contributor:

do you mean no need?

@Akshay-Venkatesh (Contributor, Author)

@yosefe @brminich, can you provide a review again if you don't mind?

Comment on lines 494 to 497
if (memcmp((const void*)&key->ph.buffer_id,
(const void*)&region->key.ph.buffer_id,
sizeof(key->ph.buffer_id)) == 0) {
/*cache hit */
Contributor:

can we just compare buffer_id without using memcmp?
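Since buffer_id is a plain integer (unsigned long long in the CUDA driver API), the memcmp can indeed be replaced by a direct comparison; a small sketch of the equivalence (the struct below is illustrative, not the actual UCX key type):

```c
#include <assert.h>
#include <string.h>

/* Minimal stand-in for the key's packed-handle struct; only the field
 * relevant to the comparison is modeled here. */
typedef struct {
    unsigned long long buffer_id;
} ph_t;

/* Comparison as written in the diff above: byte-wise memcmp. */
static int hit_memcmp(const ph_t *key, const ph_t *region)
{
    return memcmp(&key->buffer_id, &region->buffer_id,
                  sizeof(key->buffer_id)) == 0;
}

/* Reviewer's suggestion: a direct integer comparison, which is
 * equivalent for a plain integer field and easier to read. */
static int hit_direct(const ph_t *key, const ph_t *region)
{
    return key->buffer_id == region->buffer_id;
}
```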

-    status = UCT_CUDADRV_FUNC(cuIpcGetMemHandle(legacy_handle, (CUdeviceptr)addr),
-                              UCS_LOG_LEVEL_ERROR);
+    key->ph.handle_type = UCT_CUDA_IPC_KEY_HANDLE_TYPE_LEGACY;
+    status = UCT_CUDADRV_FUNC(cuIpcGetMemHandle(&key->ph.handle.legacy,
Contributor:

you can use UCT_CUDADRV_FUNC_LOG_ERR instead

* items in the linked list. Skip to export phase. */
ucs_trace("VA recycling detected for (%p ... %p) (%llu, %llu)",
address, address + length, buffer_id, key->ph.buffer_id);
ucs_list_del(&key->link);
Contributor:

seems need to use ucs_list_for_each_safe now
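For illustration, the hazard the `_safe` variant avoids: deleting the current entry while iterating requires caching its next pointer before unlinking. A minimal C sketch with a hypothetical singly-linked list (the real code uses UCX's `ucs_list_for_each_safe` on its doubly-linked list type):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical minimal list node, used only to demonstrate the safe
 * iteration pattern; not the UCX ucs_list type. */
typedef struct node {
    struct node *next;
    int          id;
} node_t;

/* Remove all nodes whose id matches, caching `next` before unlinking.
 * This is the essence of a *_safe iterator: the plain iterator would
 * read n->next after n was already unlinked (and possibly freed). */
static node_t *remove_matching(node_t *head, int id)
{
    node_t **prev = &head;
    node_t  *n    = head;

    while (n != NULL) {
        node_t *next = n->next; /* save before potential unlink */
        if (n->id == id) {
            *prev = next;       /* unlink; n could be freed here */
        } else {
            prev = &n->next;
        }
        n = next;
    }
    return head;
}
```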

brminich previously approved these changes Feb 26, 2025

@yosefe (Contributor) left a comment:

add unit test (gtest) to reproduce the issue that same fabric handle represents a different buffer

goto err;
}

key->ph.handle_type = UCT_CUDA_IPC_KEY_HANDLE_TYPE_ERROR;
Contributor:

move after allocation

Comment on lines 141 to 142
ucs_trace("exporting handle for %p: base %p b_len %lu buffer_id %llu", addr,
(void*)key->d_bptr, key->b_len, key->ph.buffer_id);
Contributor:

remove this log and print the buffer id in the log message at lines 233-234 - no need for two log messages

goto found;
ucs_trace("found range (%p ... %p) in local cache", address,
address + length);
if (buffer_id == key->ph.buffer_id) {
Contributor:

can remove this logic since rcache evicts the memh if memory is released (using the memory-hooks invalidate logic)

brminich
brminich previously approved these changes Mar 2, 2025
@Akshay-Venkatesh Akshay-Venkatesh force-pushed the topic/cuda-ipc-buffer-id-va-recycle branch from 9f48a14 to 21b4c78 Compare March 11, 2025 16:19
@yosefe yosefe merged commit 853475e into openucx:master Mar 13, 2025
151 checks passed
6 participants