
Conversation

@prestonvasquez (Member) commented Nov 8, 2025

GODRIVER-3659

Summary

This PR implements event filtering for awaitMinPoolSizeMS in the unified test runner, as specified in the unified test format specification.

A naive implementation would clear specific event arrays after initialization:

```go
client.pooled = nil
client.serverDescriptionChanged = nil
client.serverHeartbeatStartedEvent = nil
// ... clear each SDAM event type
```

However, if we add a new CMAP or SDAM event type in the future, we must remember to update this clearing block. Forgetting to do so means initialization events leak into test assertions, causing false failures.

Additionally, the clientEntity requires that the following fields not be reset:

```go
	// These should not be changed after the clientEntity is initialized
	observedEvents                      map[monitoringEventType]struct{}
	storedEvents                        map[monitoringEventType][]string
	eventsCount                         map[monitoringEventType]int32
	serverDescriptionChangedEventsCount map[serverDescriptionChangedEventInfo]int32
```

This guarantee would have to be removed with the naive approach, at least for eventsCount.

Instead, this PR introduces an eventSequencer, which assigns a monotonically increasing sequence number to each CMAP and SDAM event as it is recorded. After pool initialization completes, we capture the current sequence as a cutoff. When verifying test expectations, we filter out any events with sequence numbers at or below the cutoff. This approach is future-proof: new event types automatically participate in filtering as long as they call the appropriate recording method in their event processor.

Background & Motivation

PR #2196 added support for awaitMinPoolSizeMS to the unified test runner, but was merged before the specification PR mongodb/specifications#1834 was finalized. As a result, the initial implementation used a simplified approach that doesn't match the final specification requirements.

Per the spec, when awaitMinPoolSizeMS is specified:

> Any CMAP and SDAM event/log listeners configured on the client should ignore any events that occur while the pool is being populated.

Copilot AI review requested due to automatic review settings November 8, 2025 00:08
@prestonvasquez prestonvasquez requested a review from a team as a code owner November 8, 2025 00:08
Copilot AI (Contributor) left a comment


Pull Request Overview

This PR implements event filtering for awaitMinPoolSizeMS in the unified test runner to comply with the specification requirement that CMAP and SDAM events occurring during connection pool initialization should be ignored. The implementation uses a sequence-based filtering approach where events are assigned monotonically increasing sequence numbers, and a cutoff is set after pool initialization completes.

Key Changes:

  • Replaced boolean awaitMinPoolSize field with awaitMinPoolSizeMS integer field to specify timeout duration
  • Introduced eventSequencer to track event ordering via sequence numbers and filter events below a cutoff threshold
  • Modified event processing functions to record sequence numbers for all CMAP and SDAM events

Reviewed Changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.

| File | Description |
| --- | --- |
| internal/integration/unified/entity.go | Updated entityOptions to use awaitMinPoolSizeMS with a timeout duration instead of a boolean flag |
| internal/integration/unified/client_entity.go | Added eventSequencer type and filtering logic; updated awaitMinimumPoolSize to accept a timeout parameter and set the cutoff after pool initialization |
| internal/integration/unified/event_verification.go | Modified event verification functions to use filterEventsBySeq for filtering CMAP and SDAM events |
| internal/integration/unified/client_entity_test.go | Added comprehensive unit tests for eventSequencer functionality |


@mongodb-drivers-pr-bot (Contributor) commented Nov 8, 2025

🧪 Performance Results

Commit SHA: 8619a32

The following benchmark tests for version 69264177bcbd2f0007cdc247 had statistically significant changes (i.e., |z-score| > 1.96):

| Benchmark | Measurement | % Change | Patch Value | Stable Region | H-Score | Z-Score |
| --- | --- | --- | --- | --- | --- | --- |
| BenchmarkMultiInsertSmallDocument | total_mem_allocs | 16.5680 | 2644243.0000 | Avg: 2268411.9868, Med: 2238225.0000, Stdev: 118423.5610 | 0.8425 | 3.1736 |
| BenchmarkMultiInsertSmallDocument | total_bytes_allocated | 11.9290 | 499795272.0000 | Avg: 446528879.1602, Med: 445126496.0000, Stdev: 14519630.6129 | 0.8559 | 3.6686 |
| BenchmarkMultiInsertSmallDocument | total_time_seconds | 10.8978 | 1.1763 | Avg: 1.0607, Med: 1.0466, Stdev: 0.0461 | 0.8078 | 2.5089 |
| BenchmarkBSONFlatDocumentDecoding | total_time_seconds | -4.5920 | 1.1443 | Avg: 1.1994, Med: 1.1992, Stdev: 0.0067 | 0.9368 | -8.2507 |
| BenchmarkBSONFullDocumentEncoding | total_mem_allocs | 3.1576 | 1584140.0000 | Avg: 1535650.0000, Med: 1533146.0000, Stdev: 17366.4133 | 0.8032 | 2.7922 |
| BenchmarkBSONFullDocumentEncoding | total_bytes_allocated | 3.1030 | 273904488.0000 | Avg: 265660997.0000, Med: 265305892.0000, Stdev: 2976611.2768 | 0.8016 | 2.7694 |
| BenchmarkBSONFullDocumentEncoding | ops_per_second_max | 2.6905 | 48685.4917 | Avg: 47409.9136, Med: 47359.7633, Stdev: 452.5302 | 0.8033 | 2.8188 |
| BenchmarkBSONFullDocumentEncoding | ops_per_second_med | 2.4616 | 47158.6890 | Avg: 46025.7070, Med: 46020.5241, Stdev: 498.6836 | 0.7583 | 2.2719 |
| BenchmarkBSONFullDocumentEncoding | allocated_bytes_per_op | -0.0637 | 5194.0000 | Avg: 5197.3125, Med: 5197.0000, Stdev: 1.3525 | 0.7795 | -2.4492 |
| BenchmarkBSONFullDocumentDecoding | allocated_bytes_per_op | 0.0323 | 25330.0000 | Avg: 25321.8125, Med: 25321.0000, Stdev: 3.6188 | 0.8412 | 2.2625 |

For a comprehensive view of all microbenchmark results for this PR's commit, please check out the Evergreen perf task for this patch.

@mongodb-drivers-pr-bot (Contributor)

API Change Report

No changes found!

@prestonvasquez prestonvasquez force-pushed the ci/godriver-3659-await-min-pool-size-in-ust-new branch from acbd1d0 to 4110738 Compare November 10, 2025 23:49
@prestonvasquez prestonvasquez added the review-priority-low Low Priority PR for Review: within 3 business days label Nov 17, 2025
```go
		return fmt.Errorf("timed out waiting for client to reach minPoolSize")
	case <-ticker.C:
		if uint64(entity.getEventCount(connectionReadyEvent)) >= minPoolSize {
			entity.eventSequencer.setCutoff()
```
@prestonvasquez (Member Author) commented Nov 17, 2025

An alternative is to re-initialize the event containers here. While that is a much simpler solution, it could be buggy depending on whether we ever need to re-initialize event logic for some other reason elsewhere. The eventSequencer future-proofs against this issue.

@prestonvasquez (Member Author) commented:

Closing to focus on a simpler solution for now.

```go
		seqSlice = c.eventSequencer.seqByEventType[eventType]
	}

	localSeqs := append([]int64(nil), seqSlice...)
```
Collaborator commented:

Optional: Use slices.Clone.

Suggested change:

```diff
-localSeqs := append([]int64(nil), seqSlice...)
+localSeqs := slices.Clone(seqSlice)
```

Comment on lines +57 to +62:

```go
		cutoffAfter: 2,
		setupEvents: func(c *clientEntity) {
			recordPoolEvent(c)
			recordPoolEvent(c)
			// Cutoff will be set here (after event 2)
			recordPoolEvent(c)
```
@matthewdale (Collaborator) commented Nov 25, 2025

Setting the cutoff point by manipulating the internal state of eventSequencer makes this test dependent on the internal implementation. Instead, call c.eventSequencer.setCutoff() at the expected point in the sequence in the setupEvents func.

E.g.:

```go
setupEvents: func(c *clientEntity) {
	recordPoolEvent(c)
	recordPoolEvent(c)
	c.eventSequencer.setCutoff()
	recordPoolEvent(c)
	recordPoolEvent(c)
	recordPoolEvent(c)
	// ...
```

Comment on lines +71 to +75:

```go
func (es *eventSequencer) recordEvent(eventType monitoringEventType) {
	next := es.counter.Add(1)

	es.mu.Lock()
	es.seqByEventType[eventType] = append(es.seqByEventType[eventType], next)
```
Collaborator commented:

Instead of storing every sequence number per event type, can we only store the index when the cutoff happened?

E.g.:

```go
func (es *eventSequencer) recordEvent(...) {
	es.mu.Lock()
	defer es.mu.Unlock()

	if !es.isCutoff {
		es.cutoffByEventType[eventType]++
	}
}

func (es *eventSequencer) setCutoff() {
	es.mu.Lock()
	defer es.mu.Unlock()
	es.isCutoff = true
}
```

That would also simplify filterEventsBySeq because you just get a subslice index:

```go
func filterEventsBySeq[T any](...) []T {
	// ...
	cutIdx := c.eventSequencer.cutoffByEventType[eventType]
	return localEvents[cutIdx:]
}
```

@prestonvasquez (Member Author) commented Nov 25, 2025

While this approach is simpler, it only supports a single cutoff per test. If the unified spec adds scenarios with multiple cutoffs (e.g., awaitMinPoolSizeMS (implicit cutoff) → run X → cutoff at N (either implicit or explicit) → run Y), we’d have to refactor back to the original design.

Comment on lines 79 to 83:

```go
func (es *eventSequencer) recordPooledEvent() {
	next := es.counter.Add(1)

	es.mu.Lock()
	es.poolSeq = append(es.poolSeq, next)
```
Collaborator commented:
Is there a reason this has to be a separate counter? Can this call es.recordEvent(poolAnyEvent)?

```go
	case <-awaitCtx.Done():
		return fmt.Errorf("timed out waiting for client to reach minPoolSize")
	case <-ticker.C:
		if uint64(entity.getEventCount(connectionReadyEvent)) >= minPoolSize {
```
Collaborator commented:

The spec says that awaitMinPoolSizeMS must wait for each connection pool to reach minPoolSize, but this seems to wait for the total number of connections across all pools. Is there a way to wait for each connection pool to reach minPoolSize?

@prestonvasquez (Member Author) replied:

Good catch!


Labels: enhancement, review-priority-low