fix: Don't change ConfiguredAddr when adding a transport #6804


Merged: 1 commit into main on Jun 18, 2025

Conversation


@Hocuri Hocuri commented Apr 14, 2025

Before this PR, ConfiguredAddr (which will be used to store the primary transport) would have been changed when adding a new transport. This doesn't matter yet because it's not possible yet to have multiple transports, but I wanted to fix this bug already so that I'm not surprised by it later.

@Hocuri Hocuri requested a review from link2xt April 14, 2025 14:15
@Hocuri Hocuri requested a review from iequidoo June 12, 2025 13:43
Review thread on this changed code:

    .sql
        .set_raw_config(Config::ConfiguredAddr.as_ref(), Some(&addr))
        .await?;
    if configured_addr.is_none() {
@iequidoo iequidoo Jun 12, 2025


This can race with itself, e.g. if two set_config_ex() calls are done in parallel for an unconfigured context. While this is unlikely, I'd suggest adding some try_set_raw_config() which does INSERT OR IGNORE. Extra locking is better avoided. An orphaned transports entry shouldn't be a problem, I think.

@Hocuri (Author)


The problem with this approach is that ConfiguredProvider should be set iff there was no ConfiguredAddr. While that would be possible to solve with some extra logic, I don't think the complexity (and by that, possibility of new bugs, and of confusing us in the future) is worth it. Many functions are not supposed to be called concurrently already (e.g. receive_imf()), and I don't see any reason why anyone would call save_to_transports_table() multiple times in parallel.


@iequidoo iequidoo Jun 16, 2025


> The problem with this approach is that ConfiguredProvider should be set iff there was no ConfiguredAddr.

Not a problem, try_set_raw_config() could return bool (whether the insertion happened).

save_to_transports_table() won't be called in parallel directly, of course, because it's an internal function, but set_config() can be. Both calls would then succeed, which doesn't look correct.

While there's no such problem for other config keys, maybe just document that set_config() mustn't be called in parallel for the same key? Then save_to_transports_table() wouldn't violate anything.

EDIT: But overall, doing INSERT OR REPLACE for a config key which isn't supposed to change is bug-prone and will bite sooner or later. But if this code is a temporary solution while multi-transport isn't fully supported, that's fine.
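The try_set_raw_config() proposed above could be sketched roughly as follows. This is a hedged illustration of the SQLite-level idea only, using Python's sqlite3 module; the function name comes from the discussion, while the table and column names are placeholders, not Delta Chat's actual schema:

```python
import sqlite3

def try_set_raw_config(conn: sqlite3.Connection, key: str, value: str) -> bool:
    """Set `key` only if it has no value yet; return True iff a row was inserted."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO config (keyname, value) VALUES (?, ?)",
        (key, value),
    )
    conn.commit()
    # rowcount is 1 if the INSERT happened, 0 if it was ignored
    return cur.rowcount == 1

# Illustrative schema: a key/value config table with a unique key column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE config (keyname TEXT PRIMARY KEY, value TEXT)")

first = try_set_raw_config(conn, "configured_addr", "alice@example.org")   # inserted
second = try_set_raw_config(conn, "configured_addr", "bob@example.org")    # ignored
```

The returned bool is exactly what the comment above asks for: the caller can set ConfiguredProvider only when the insertion actually happened, without extra locking.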


@link2xt link2xt Jun 17, 2025


ConfiguredProvider needs to go away eventually anyway. Then the worst issue from the race condition is that another transport gets set as the default.

If the race condition were a real problem here, I'd start a transaction and do all the checks and the replacement within it, using raw config-table changes. But in this case it is a config value that is only changed in response to UI actions or during initial bot setup.
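The transaction-based alternative described above could look roughly like this. Again a sketch in Python's sqlite3 against a placeholder schema, not Delta Chat's actual code; the point is that BEGIN IMMEDIATE takes the write lock before the check, so two concurrent callers cannot both see "unset" and both write:

```python
import sqlite3

def set_addr_if_unset(conn: sqlite3.Connection, addr: str) -> bool:
    """Check and set configured_addr atomically; return True iff we set it."""
    conn.execute("BEGIN IMMEDIATE")  # acquire the write lock before checking
    try:
        row = conn.execute(
            "SELECT value FROM config WHERE keyname = 'configured_addr'"
        ).fetchone()
        inserted = row is None
        if inserted:
            conn.execute(
                "INSERT INTO config (keyname, value) VALUES ('configured_addr', ?)",
                (addr,),
            )
        conn.execute("COMMIT")
        return inserted
    except Exception:
        conn.execute("ROLLBACK")
        raise

# isolation_level=None puts sqlite3 in autocommit mode, so we control
# BEGIN/COMMIT ourselves instead of relying on implicit transactions.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE config (keyname TEXT PRIMARY KEY, value TEXT)")

won = set_addr_if_unset(conn, "alice@example.org")   # first writer wins
lost = set_addr_if_unset(conn, "bob@example.org")    # addr already set
```

As the comment notes, this machinery was judged unnecessary here because the config value only changes in response to UI actions or during initial bot setup.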

@Hocuri Hocuri merged commit 0568393 into main Jun 18, 2025
29 checks passed
@Hocuri Hocuri deleted the hoc/only-set-configured-addr-when-there-is-none branch June 18, 2025 09:19