
Deflaking tests to remove chance of false positives causing tests to fail #34

Merged
merged 2 commits into valkey-io:unstable on Dec 17, 2024

Conversation

zackcam
Contributor

@zackcam zackcam commented Dec 14, 2024

Updated the unit tests and integration tests to remove the chance of false positives causing the tests to fail. Previously we asserted on each item of the insert, MADD, and MEXISTS replies, but a false positive can occasionally make individual elements unexpected. For these outputs we now assert that each element is 0 or 1 and that the reply length is correct, instead of matching the exact values we expect.
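The assertion pattern described above can be sketched as follows (a minimal illustration; `assert_multi_reply_shape` is a hypothetical helper, not the PR's code):

```python
def assert_multi_reply_shape(reply, expected_len):
    """Validate a BF.MADD / BF.MEXISTS style reply without assuming exact values."""
    # Only the shape is deterministic: the reply length must match the number
    # of items passed, and each element must be 0 or 1.
    assert len(reply) == expected_len, f"expected {expected_len} elements, got {len(reply)}"
    for value in reply:
        assert value in (0, 1), f"unexpected reply element: {value!r}"

# A false positive may flip an expected 1 to 0 (or vice versa);
# the shape check still passes either way.
assert_multi_reply_shape([1, 0, 1], 3)
```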
I also updated the places where we check BF.CARD, since false positives could again cause this number to be lower than expected. To remedy this, I either changed where we test this command or called a method that adds items up to the target count while accounting for false positives.
For the unit tests, an add could hit a false positive where an item appears to have already been added. To remedy this, we now loop until we generate an item the filter does not report as present. This still checks that we get the correct error, without a small chance of failing on a false positive.
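The retry loop described above could look roughly like this (a hedged sketch with hypothetical names; a plain set stands in for the bloom filter's membership check):

```python
import random
import string

def find_fresh_item(exists, max_attempts=1000):
    """Generate items until one is NOT already reported present, so a false
    positive on the membership check can't make the subsequent add flaky."""
    for _ in range(max_attempts):
        item = "".join(random.choices(string.ascii_lowercase, k=16))
        if not exists(item):
            return item
    raise RuntimeError("could not find an item the filter reports as absent")

# Example: a set plays the role of the filter's exists-check.
seen = {"item1", "item2"}
fresh = find_fresh_item(lambda item: item in seen)
assert fresh not in seen
```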

Also removed some object creations and commands that weren't used anywhere.

src/bloom/utils.rs — 3 review threads (outdated, resolved)
@@ -134,6 +128,10 @@ def test_bloom_command_behavior(self):
for test_case in basic_behavior_test_case:
cmd = test_case[0]
expected_result = test_case[1]
Member

@KarthikSubbarao KarthikSubbarao Dec 16, 2024


This is not the expected_result for every scenario any more.

Add a comment above this to explain that what it holds varies based on the command: multi commands have it as the expected length (of the 0s/1s returned), while other commands have it as the literal expected result.

assert(self.client.execute_command('BF.CARD key') == 4)
assert(len(self.client.execute_command('BF.MADD key item3 item4')) == 2)
assert(len(self.client.execute_command('BF.MEXISTS key item3 item5')) == 2)
assert(self.client.execute_command('BF.CARD key') >= 1)
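The convention the reviewer describes could be captured in a small helper like this (a sketch with hypothetical names such as `check_result` and `MULTI_COMMANDS`, not code from the PR):

```python
# Commands whose reply is a list of 0s/1s; for these, expected_result is the
# expected reply length rather than the literal reply.
MULTI_COMMANDS = ("BF.MADD", "BF.MEXISTS")

def check_result(cmd, reply, expected_result):
    if cmd.upper().startswith(MULTI_COMMANDS):
        # Multi commands: expected_result is the length of the 0s/1s reply.
        assert len(reply) == expected_result
        assert all(v in (0, 1) for v in reply)
    else:
        # Other commands: expected_result is the literal expected reply.
        assert reply == expected_result

check_result("BF.MADD key item3 item4", [1, 0], 2)
check_result("BF.EXISTS key item1", 1, 1)
```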
Contributor


Should we also add an upper limit here? like <= 4 ?
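A two-sided bound like the one suggested could be expressed this way (illustrative only; `assert_card_in_range` is a hypothetical helper):

```python
def assert_card_in_range(card, attempted_items):
    # Lower bound: at least one add must have succeeded.
    # Upper bound: false positives can only suppress adds, so the reported
    # cardinality can never exceed the number of items attempted (4 above).
    assert 1 <= card <= attempted_items, (
        f"cardinality {card} outside expected range [1, {attempted_items}]"
    )

assert_card_in_range(3, 4)
```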

@zackcam zackcam force-pushed the unstable branch 2 times, most recently from 3ed0d91 to 68f3cae on December 17, 2024 at 00:31
Comment on lines 621 to 626
if let Some(BloomError::NonScalingFilterFull) = expected_error {
panic!("A non scaling filter that is full shouldn't be able to have new items added successfully");
}
Member


If something like this is possible, it would be more generic:

Suggested change
if let Some(BloomError::NonScalingFilterFull) = expected_error {
panic!("A non scaling filter that is full shouldn't be able to have new items added successfully");
}
if let Some(err) = expected_error {
panic!("Expected error on the bloom object during item add: {:?}", err);
}

Co-authored-by: KarthikSubbarao <[email protected]>
Signed-off-by: zackcam <[email protected]>
@KarthikSubbarao KarthikSubbarao merged commit 98ccdc6 into valkey-io:unstable Dec 17, 2024
6 checks passed
3 participants