# Ducktape Producer Tests

Ducktape-based producer tests for the Confluent Kafka Python client with comprehensive performance metrics.

## Prerequisites

- `pip install ducktape confluent-kafka psutil`
- Kafka running on `localhost:9092`

## Running Tests

```bash
# Run all tests with integrated performance metrics
./tests/ducktape/run_ducktape_test.py

# Run specific test with metrics
./tests/ducktape/run_ducktape_test.py SimpleProducerTest.test_basic_produce
```

## Test Cases

- **test_basic_produce**: Basic message production with integrated metrics tracking
- **test_produce_multiple_batches**: Parameterized tests (2s, 5s, 10s durations) with metrics
- **test_produce_with_compression**: Matrix tests (none, gzip, snappy) with compression-aware metrics (the parameterized and matrix shapes are sketched below)
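
The actual tests live under `tests/ducktape/`; as a rough sketch of the shapes named above (the class name, topic, and test bodies here are illustrative, not the repo's code), the parameterized and matrix variants use standard ducktape decorators:

```python
import time

from confluent_kafka import Producer
from ducktape.mark import matrix, parametrize
from ducktape.tests.test import Test


class ProducerTestSketch(Test):
    """Illustrative outline only; see tests/ducktape/ for the real suite."""

    @parametrize(duration_sec=2)
    @parametrize(duration_sec=5)
    @parametrize(duration_sec=10)
    def test_produce_multiple_batches(self, duration_sec):
        # Produce continuously for the requested duration.
        producer = Producer({"bootstrap.servers": "localhost:9092"})
        deadline = time.time() + duration_sec
        sent = 0
        while time.time() < deadline:
            producer.produce("sketch-topic", value=b"payload-%d" % sent)
            producer.poll(0)  # serve delivery callbacks without blocking
            sent += 1
        producer.flush(30)

    @matrix(compression_type=["none", "gzip", "snappy"])
    def test_produce_with_compression(self, compression_type):
        # @matrix generates one test instance per compression codec.
        producer = Producer({
            "bootstrap.servers": "localhost:9092",
            "compression.type": compression_type,
        })
        producer.produce("sketch-topic", value=b"compressible " * 64)
        producer.flush(30)
```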

## Integrated Performance Metrics Features

Every test automatically includes:

- **Latency Tracking**: P50, P95, P99 percentiles with real-time calculation (see the sketch after this list)
- **Per-Topic/Partition Metrics**: Detailed breakdown by topic and partition
- **Memory Monitoring**: Peak memory usage and growth tracking with psutil
- **Batch Efficiency**: Messages per poll and buffer utilization analysis
- **Throughput Validation**: Messages/sec and MB/sec with configurable bounds checking
- **Comprehensive Reporting**: Detailed performance reports with pass/fail validation
- **Automatic Bounds Validation**: Performance assertions against configurable thresholds
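
A minimal sketch of how delivery-callback latency percentiles and psutil memory growth can be measured (the `LatencySketch` class and topic name are hypothetical, not the repo's `benchmark_metrics` code):

```python
import time

import psutil
from confluent_kafka import Producer


class LatencySketch:
    """Illustrative tracker; the real implementation lives in benchmark_metrics."""

    def __init__(self):
        self.latencies_ms = []
        self.start_rss = psutil.Process().memory_info().rss

    def on_delivery(self, sent_at):
        # Bind the send timestamp into a per-message delivery callback.
        def callback(err, msg):
            if err is None:
                self.latencies_ms.append((time.time() - sent_at) * 1000.0)
        return callback

    def percentile(self, p):
        # Nearest-rank percentile over all recorded latencies.
        data = sorted(self.latencies_ms)
        return data[min(len(data) - 1, int(len(data) * p / 100))]

    def memory_growth_mb(self):
        return (psutil.Process().memory_info().rss - self.start_rss) / (1024.0 * 1024.0)


tracker = LatencySketch()
producer = Producer({"bootstrap.servers": "localhost:9092"})
for i in range(1000):
    producer.produce("sketch-topic", b"msg-%d" % i,
                     on_delivery=tracker.on_delivery(time.time()))
    producer.poll(0)
producer.flush(30)
print("P95 latency: %.1f ms, memory growth: %.1f MB"
      % (tracker.percentile(95), tracker.memory_growth_mb()))
```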

## Configuration

Performance bounds are loaded from a JSON config file. By default, it loads `benchmark_bounds.json`, but you can override this with the `BENCHMARK_BOUNDS_CONFIG` environment variable:

```json
{
  "min_throughput_msg_per_sec": 1500.0,
  "max_p95_latency_ms": 1500.0,
  "max_error_rate": 0.01,
  "min_success_rate": 0.99,
  "max_p99_latency_ms": 2500.0,
  "max_memory_growth_mb": 600.0,
  "max_buffer_full_rate": 0.03,
  "min_messages_per_poll": 15.0
}
```

Usage:

```bash
# Use default config file
./run_ducktape_test.py

# Use different configs for different environments
BENCHMARK_BOUNDS_CONFIG=ci_bounds.json ./run_ducktape_test.py
BENCHMARK_BOUNDS_CONFIG=production_bounds.json ./run_ducktape_test.py
```

Bounds can also be loaded programmatically in test code:

```python
from benchmark_metrics import MetricsBounds

# Loads from BENCHMARK_BOUNDS_CONFIG env var, or benchmark_bounds.json if not set
bounds = MetricsBounds()

# Or load from a specific config file
bounds = MetricsBounds.from_config_file("my_bounds.json")
```
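
For orientation, a loader with the documented lookup behavior could be implemented roughly like this (a sketch only; `MetricsBoundsSketch` and `load` are hypothetical names, not the actual `benchmark_metrics` API):

```python
import json
import os
from dataclasses import dataclass


@dataclass
class MetricsBoundsSketch:
    """Illustrative re-implementation; defaults mirror the JSON example above."""

    min_throughput_msg_per_sec: float = 1500.0
    max_p95_latency_ms: float = 1500.0
    max_p99_latency_ms: float = 2500.0
    max_error_rate: float = 0.01
    min_success_rate: float = 0.99
    max_memory_growth_mb: float = 600.0
    max_buffer_full_rate: float = 0.03
    min_messages_per_poll: float = 15.0

    @classmethod
    def from_config_file(cls, path):
        # Known keys in the JSON override the defaults; unknown keys are ignored.
        with open(path) as f:
            raw = json.load(f)
        known = {k: v for k, v in raw.items() if k in cls.__dataclass_fields__}
        return cls(**known)

    @classmethod
    def load(cls):
        # Mirrors the documented lookup: env var first, then benchmark_bounds.json.
        return cls.from_config_file(
            os.environ.get("BENCHMARK_BOUNDS_CONFIG", "benchmark_bounds.json"))
```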