
Reduce the memory allocation for AE #347


Open
wants to merge 2 commits into unstable


Conversation

sundb (Owner) commented Oct 29, 2024

Currently, the event loop allocates memory for events and fired at startup based on the maximum client limit (maxclients). This can waste a significant amount of memory if maxclients is set high but the actual number of active connections stays low.

This PR introduces a dynamic memory allocation approach.
events and fired are now allocated based on the highest file descriptor actually in use (maxfd).
If maxfd exceeds the currently allocated space, the arrays are grown by doubling their current size.
This strategy minimizes frequent reallocations while avoiding excessive memory usage when only a small number of clients are connected.
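For illustration, a minimal sketch of the doubling growth described above, assuming hypothetical names (aeEnsureCapacity, an allocated field); the actual helpers in ae.c may differ:

```c
/* Illustrative sketch only: grow the events/fired arrays on demand instead of
 * allocating setsize entries up front. Names are hypothetical, not the PR's
 * exact code. */
static void aeEnsureCapacity(aeEventLoop *eventLoop, int fd) {
    if (fd < eventLoop->allocated) return;          /* already large enough */

    int newsize = eventLoop->allocated ? eventLoop->allocated * 2 : 16;
    while (newsize <= fd) newsize *= 2;             /* double until fd fits */
    if (newsize > eventLoop->setsize) newsize = eventLoop->setsize;

    /* zrealloc aborts on OOM in Redis, so no NULL check is done here. */
    eventLoop->events = zrealloc(eventLoop->events, sizeof(aeFileEvent) * newsize);
    eventLoop->fired  = zrealloc(eventLoop->fired,  sizeof(aeFiredEvent) * newsize);

    for (int i = eventLoop->allocated; i < newsize; i++)
        eventLoop->events[i].mask = AE_NONE;        /* mark new slots as unused */
    eventLoop->allocated = newsize;
}
```

Calling such a helper from aeCreateFileEvent would keep the arrays sized to the largest fd seen so far, while setsize remains only the upper bound derived from maxclients.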

@oranagra
Copy link

Please describe what it does.
From a quick look, I see it keeps an array of pointers with lazy allocation, rather than an array of structs (and no free).
This means that the extra indirection can come at a performance cost, and also that if all (or most) of these clients are actually used, memory consumption (and possibly fragmentation) rises.
Can you add some measurements?
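For context, a minimal sketch of the two layouts being compared here; the field names are illustrative and the real aeEventLoop in ae.c differs:

```c
/* Illustrative only: contrasting the two layouts discussed above. */

/* Original layout: one flat array of structs, sized to setsize at startup,
 * so events[fd] is reached with a single indexed access. */
typedef struct {
    struct aeFileEvent *events;
} aeEventLoopFlat;

/* Pointer-array layout: each slot is allocated lazily on first use, so
 * events[fd] costs an extra pointer dereference, and per-slot allocations
 * can fragment memory once most fds are actually in use. */
typedef struct {
    struct aeFileEvent **events;
} aeEventLoopLazy;
```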

sundb (Owner, Author) commented Nov 4, 2024

> Please describe what it does. From a quick look, I see it keeps an array of pointers with lazy allocation, rather than an array of structs (and no free). This means that the extra indirection can come at a performance cost, and also that if all (or most) of these clients are actually used, memory consumption (and possibly fragmentation) rises. Can you add some measurements?

You're right, the extra indirection caused a significant regression.
I've given up on the old solution and now use dynamic allocation for events and fired.

sundb (Owner, Author) commented Nov 4, 2024

  1. Initial start: ./src/redis-server --save --maxclients 10000
     Memory usage:
     unstable: used_memory_human:998.13K
     this PR: used_memory_human:563.05K

  2. Create 1000 clients: memtier_benchmark --hide-histogram --test-time=1 -c 500 -t 2 --command="ping"
     Memory usage:
     unstable: used_memory_human:1.02M
     this PR: used_memory_human:625.83K

  3. Create 10k clients: memtier_benchmark --hide-histogram --test-time=1 -c 5000 -t 2 --command="ping"
     Memory usage:
     unstable: used_memory_human:1.01M
     this PR: used_memory_human:1009.97K

When the number of connected clients reaches maxclients, the two consume almost the same amount of memory.

oranagra commented Nov 4, 2024

OK, so now it's the same as the original version; we just avoid the big allocation at startup and instead grow gradually when needed.
I suppose it's better than before, but I'm not sure it makes it OK to hold many event loops.

Regarding the test, I suppose you should also test with 10k clients, and it'll show the same memory usage.

sundb (Owner, Author) commented Nov 4, 2024

@oranagra Updated the comment in #347 (comment); got the results as expected.

oranagra commented Nov 5, 2024

Still, despite being better than what we have now, I'm not sure it makes it OK to hold many (10?) event loops.

sundb (Owner, Author) commented Nov 5, 2024

> Still, despite being better than what we have now, I'm not sure it makes it OK to hold many (10?) event loops.

I can't think of a more efficient way to do it; there's no hotter path than this, so any indirection overhead is magnified.
I think that if a user can actually use up 10000 connections, then the memory used by their keys must be much larger than the 10*1MB for the event loops.

oranagra commented Nov 5, 2024

Well, maybe coupling that gradual memory increase with a gradual increase of the IO thread count would be an OK mitigation.
I.e. Redis starts with a configuration that allows a high number of connections and a high number of threads, but the memory usage and the number of threads actually created only grow when needed.
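A rough sketch of how that might look, assuming hypothetical names (adjustIOThreads, spawnIOThread, io_threads_active used as a count) and an arbitrary per-thread threshold; nothing here is an existing API:

```c
/* Illustrative sketch only: treat the configured io-threads value as a
 * ceiling and start threads lazily as the connection count grows. */
void adjustIOThreads(unsigned long connected_clients) {
    /* Hypothetical policy: one extra thread per 1000 connections. */
    int target = 1 + (int)(connected_clients / 1000);
    if (target > server.io_threads_num) target = server.io_threads_num;

    while (server.io_threads_active < target) {
        spawnIOThread(server.io_threads_active);   /* hypothetical helper */
        server.io_threads_active++;
    }
}
```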
