Batch requests to flush context to backend #320
Merged
Closes #319
Description
Currently the persistent context flush logic sends requests to the backend for every scope that needs flushing, with no regard for how many requests that generates.
Because the file server has to lock the database to correctly calculate quotas for each request, flooding the backend with many simultaneous requests causes a performance problem.
I have a test flow with 60 copies of a subflow, all wired to one inject node, where each subflow simply writes a timestamp to its flow context.
When the flow is triggered and the cache flushes, all 60 HTTP requests are made to the backend at the same time. The response time for each request increases markedly - the last few approach 2000ms, whereas with a single subflow instance the response time is ~20ms. Response times grow with the number of subflow instances in the flow.
With this fix, the flush requests are made in batches of 5. The response time for each request stays stable below 30ms, and the total time to flush is lower with batching than without.