Users want to speed up task processing by running a long-lived Spark job that processes roughly one billion data points every ten minutes. During the first 24 hours, each cycle finishes within the ten-minute window. Over time, however, a large number of cached Regions become stale, which triggers retries and pushes the job's runtime past ten minutes. We would therefore like an API that lets the application manually clear the Region Cache after each ten-minute cycle completes.
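A minimal sketch of how the requested API might be used from the driver side, assuming the client exposed something like a `clearRegionCache()` method. The `RegionCacheAware` and `SparkBatchJob` interfaces below are illustrative placeholders, not real classes from any library, and the proposed method does not exist yet; this only shows where the call would sit in the 10-minute loop described above.

```java
import java.time.Duration;

public class PeriodicBatchRunner {

    /** Hypothetical stand-in for the client object that would expose the new API. */
    interface RegionCacheAware {
        /**
         * Proposed API: drop all cached Region metadata so the next batch
         * re-fetches fresh Region information instead of retrying on stale entries.
         */
        void clearRegionCache();
    }

    /** Hypothetical stand-in for one 10-minute processing cycle (~1B data points). */
    interface SparkBatchJob {
        void runOneCycle();
    }

    public static void runForever(SparkBatchJob job, RegionCacheAware client)
            throws InterruptedException {
        final Duration cycle = Duration.ofMinutes(10);
        while (true) {
            long start = System.nanoTime();
            job.runOneCycle();
            // After each completed cycle, manually invalidate the Region cache so
            // stale Regions accumulated during the run cannot trigger retries in
            // the next cycle.
            client.clearRegionCache();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // Sleep out the remainder of the 10-minute window, if any.
            Thread.sleep(Math.max(0, cycle.toMillis() - elapsedMs));
        }
    }
}
```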