Clusters autodiscovery #629
base: antalya
Conversation
Resolve the issue of leaking keeper watches.
This is an automated comment for commit 0575462 with a description of existing statuses, updated for the latest CI run.
@@ -110,6 +111,54 @@ class ClusterDiscovery::ConcurrentFlags
    bool stop_flag = false;
};
Please, explain why Flags and ConcurrentFlags coexist.
    /* cluster_secret */ cluster_secret
);

multicluster_discovery_paths->push_back(
To me, we overuse shared_ptr in this code (not only in this particular place).
Maybe it is worth getting rid of this extra layer, or at least introducing a type alias like FlagsPtr? I think that emplace_back should work effortlessly.
for (auto & path : (*multicluster_discovery_paths))
{
    ++zk_root_index;
Please, explain why we start index from 1.
if (info.count(cluster))
{ /// Possible with several root paths, it's a configuration error
    LOG_WARNING(log, "Found dynamic duplicate of cluster '{}' in Keeper, skipped record by path {}:{}",
Shouldn't we be more strict and prohibit this kind of misconfiguration?
{
    LOG_ERROR(log, "Unknown cluster '{}'", cluster_name);
    continue;
if (!info.zk_root_index)
Is it actually possible? In which circumstances?
    continue;
auto p = new_dynamic_clusters_info.find(cluster_name);
if (p != new_dynamic_clusters_info.end())
    new_dynamic_clusters_info.erase(p);
new_dynamic_clusters_info.erase(cluster_name) is enough, though I am not sure whether it is actually less verbose.
if (!need_update.exchange(false))
auto clusters = dynamic_clusters_to_update->wait(5s, finished);
'5s' is probably worth moving to the header.
auto cluster_info_it = clusters_info.find(cluster_name);
if (cluster_info_it == clusters_info.end())
{
    LOG_ERROR(log, "Unknown cluster '{}'", cluster_name);
Maybe a bit more verbose message?
Changelog category (leave one):
Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
Clusters autodiscovery
Cherry-pick of bugfix ClickHouse#74521
Documentation entry for user-facing changes
ClickHouse has a feature to discover nodes in a cluster (see the setting allow_experimental_cluster_discovery). This PR adds the ability to discover new clusters.
Nodes in new clusters register in Keeper in the same way.
Added the ability to discover all clusters under a Keeper node.