
Commit 299e11e

Add Benchmark Framework for ducktape (#2030)

1 parent 0848b88

File tree: 4 files changed, +663 −107 lines

tests/ducktape/README.md

Lines changed: 56 additions & 7 deletions
# Ducktape Producer Tests

Ducktape-based producer tests for the Confluent Kafka Python client with comprehensive performance metrics.

## Prerequisites

- `pip install ducktape confluent-kafka psutil`
- Kafka running on `localhost:9092`

## Running Tests

```bash
# Run all tests with integrated performance metrics
./tests/ducktape/run_ducktape_test.py

# Run specific test with metrics
./tests/ducktape/run_ducktape_test.py SimpleProducerTest.test_basic_produce
```

## Test Cases

- **test_basic_produce**: Basic message production with integrated metrics tracking
- **test_produce_multiple_batches**: Parameterized tests (2s, 5s, 10s durations) with metrics
- **test_produce_with_compression**: Matrix tests (none, gzip, snappy) with compression-aware metrics

## Integrated Performance Metrics Features

Every test automatically includes:

- **Latency Tracking**: P50, P95, P99 percentiles with real-time calculation
- **Per-Topic/Partition Metrics**: Detailed breakdown by topic and partition
- **Memory Monitoring**: Peak memory usage and growth tracking with psutil
- **Batch Efficiency**: Messages per poll and buffer utilization analysis
- **Throughput Validation**: Messages/sec and MB/sec with configurable bounds checking
- **Comprehensive Reporting**: Detailed performance reports with pass/fail validation
- **Automatic Bounds Validation**: Performance assertions against configurable thresholds

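The percentile tracking described above can be sketched in a few lines. This is an illustrative implementation of P50/P95/P99 via linear interpolation over recorded per-message latencies, not the framework's actual code; the `percentile` helper and the sample data are hypothetical.

```python
# Illustrative sketch (not the framework's code): computing the P50/P95/P99
# latency percentiles over a list of recorded per-message latencies,
# using linear interpolation between the two nearest ranked samples.
def percentile(samples, pct):
    """Return the pct-th percentile (0-100) of samples via linear interpolation."""
    ordered = sorted(samples)
    if not ordered:
        raise ValueError("no samples recorded")
    rank = (pct / 100.0) * (len(ordered) - 1)
    lower = int(rank)
    upper = min(lower + 1, len(ordered) - 1)
    fraction = rank - lower
    return ordered[lower] + (ordered[upper] - ordered[lower]) * fraction

# Hypothetical delivery latencies in milliseconds
latencies_ms = [2.1, 3.4, 5.0, 7.2, 9.8, 12.5, 20.0, 45.0, 80.0, 150.0]
p50 = percentile(latencies_ms, 50)
p95 = percentile(latencies_ms, 95)
p99 = percentile(latencies_ms, 99)
```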
## Configuration

Performance bounds are loaded from a JSON config file. By default, it loads `benchmark_bounds.json`, but you can override this with the `BENCHMARK_BOUNDS_CONFIG` environment variable:

```json
{
  "min_throughput_msg_per_sec": 1500.0,
  "max_p95_latency_ms": 1500.0,
  "max_error_rate": 0.01,
  "min_success_rate": 0.99,
  "max_p99_latency_ms": 2500.0,
  "max_memory_growth_mb": 600.0,
  "max_buffer_full_rate": 0.03,
  "min_messages_per_poll": 15.0
}
```

Usage:

```bash
# Use default config file
./run_ducktape_test.py

# Use different configs for different environments
BENCHMARK_BOUNDS_CONFIG=ci_bounds.json ./run_ducktape_test.py
BENCHMARK_BOUNDS_CONFIG=production_bounds.json ./run_ducktape_test.py
```

```python
from benchmark_metrics import MetricsBounds

# Loads from BENCHMARK_BOUNDS_CONFIG env var, or benchmark_bounds.json if not set
bounds = MetricsBounds()

# Or load from a specific config file
bounds = MetricsBounds.from_config_file("my_bounds.json")
```
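The loading contract described above (env-var override, filename fallback, defaults for omitted keys) can be sketched in plain Python. This is an illustration of the documented behavior, not the actual `MetricsBounds` implementation; the `load_bounds` function and the `DEFAULT_BOUNDS` subset shown here are assumptions.

```python
import json
import os

# Hypothetical defaults for a subset of the bounds keys shown above.
DEFAULT_BOUNDS = {
    "min_throughput_msg_per_sec": 1500.0,
    "max_p95_latency_ms": 1500.0,
    "max_error_rate": 0.01,
    "min_success_rate": 0.99,
}

def load_bounds(env_var="BENCHMARK_BOUNDS_CONFIG",
                default_path="benchmark_bounds.json"):
    """Sketch of the documented loading contract: read bounds from the file
    named by the env var, fall back to the default filename, and keep the
    built-in defaults for any key the file omits."""
    path = os.environ.get(env_var, default_path)
    bounds = dict(DEFAULT_BOUNDS)
    if os.path.exists(path):
        with open(path) as f:
            data = json.load(f)
        # Skip annotation keys such as "_comment"
        bounds.update({k: v for k, v in data.items() if not k.startswith("_")})
    return bounds
```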

tests/ducktape/benchmark_bounds.json

Lines changed: 11 additions & 0 deletions
```json
{
  "_comment": "Default performance bounds for benchmark tests",
  "min_throughput_msg_per_sec": 1500.0,
  "max_p95_latency_ms": 1500.0,
  "max_error_rate": 0.01,
  "min_success_rate": 0.99,
  "max_p99_latency_ms": 2500.0,
  "max_memory_growth_mb": 600.0,
  "max_buffer_full_rate": 0.03,
  "min_messages_per_poll": 15.0
}
```
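The bounds file's naming convention (`min_*` keys as lower limits, `max_*` keys as upper limits, `_`-prefixed keys as annotations) suggests a straightforward validation step. The following is a hypothetical helper showing how measured metrics could be checked against these bounds for pass/fail reporting; `validate` and the metric key names are assumptions, not part of the framework.

```python
def validate(metrics, bounds):
    """Check measured metrics against the bounds file's convention:
    "min_<name>" is a lower limit on metrics[<name>], "max_<name>" an upper
    limit, and keys starting with "_" are annotations to skip.
    Returns a list of human-readable violations; empty means the run passed."""
    violations = []
    for key, limit in bounds.items():
        if key.startswith("_"):
            continue
        if key.startswith("min_"):
            name, ok = key[4:], metrics[key[4:]] >= limit
        elif key.startswith("max_"):
            name, ok = key[4:], metrics[key[4:]] <= limit
        else:
            continue
        if not ok:
            violations.append(f"{name}={metrics[name]} violates {key}={limit}")
    return violations

# Hypothetical run: throughput passes, error rate exceeds its bound.
bounds = {"min_throughput_msg_per_sec": 1500.0, "max_error_rate": 0.01}
metrics = {"throughput_msg_per_sec": 2100.0, "error_rate": 0.02}
problems = validate(metrics, bounds)
```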
