Generate realistic synthetic logs for testing, development, and benchmarking. Define patterns in YAML, control output rates, and stream to files, TCP, UDP, or HTTP endpoints.
```bash
pip install logsynth
```

```bash
# Generate 100 nginx access logs
logsynth run nginx --count 100

# Stream logs at 50/sec for 5 minutes
logsynth run nginx --rate 50 --duration 5m

# Output as JSON to a file
logsynth run nginx --count 1000 --format json --output /var/log/test.log

# See what's available
logsynth presets list
```

| Category | Presets |
|---|---|
| Web | nginx, apache, nginx-error, haproxy |
| Database | redis, postgres, mysql, mongodb |
| Infrastructure | systemd, kubernetes, docker, terraform |
| Security | auth, sshd, firewall, audit |
| Application | java, python, nodejs |
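To see what a preset emits before committing to a run, the documented `--preview` flag prints sample output and exits; for example, with the redis preset:

```bash
# Print sample redis-style lines, then exit
logsynth run redis --preview
```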
```
logsynth run <preset> [options]

  --rate, -r      Lines per second (default: 10)
  --count, -c     Total lines to generate
  --duration, -d  Run time (30s, 5m, 1h)
  --format, -f    Output format: plain, json, logfmt
  --output, -o    Destination: file, tcp://, udp://, http://, https://
  --header, -H    HTTP header (key:value), can be repeated
  --preview, -p   Show sample output and exit
  --seed, -s      Random seed for reproducibility
```
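The flags compose; for example, fixing the seed makes a run reproducible (the values here are illustrative):

```bash
# Same seed, same five lines every time
logsynth run nginx --count 5 --format logfmt --seed 42
```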
Create YAML templates for any log format:

```yaml
name: my-app
format: plain
pattern: "[$ts] $level: $message"
fields:
  ts:
    type: timestamp
    format: "%Y-%m-%d %H:%M:%S"
  level:
    type: choice
    values: [INFO, WARN, ERROR]
    weights: [0.8, 0.15, 0.05]
  message:
    type: choice
    values:
      - "Request completed"
      - "Connection timeout"
      - "Database error"
```

```bash
logsynth run my-app.yaml --count 100
```

| Type | Description | Key Options |
|---|---|---|
| timestamp | Date/time values | format, step, jitter, tz |
| choice | Random from list | values, weights |
| int | Random integer | min, max |
| float | Random decimal | min, max, precision |
| ip | IP addresses | cidr, ipv6 |
| uuid | Random UUIDs | uppercase |
| sequence | Incrementing numbers | start, step |
| literal | Fixed value | value |
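As a sketch of how these types combine in one template (the name, pattern, and option values below are illustrative, not a shipped preset):

```yaml
name: api-audit                    # illustrative, not a built-in preset
format: plain
pattern: "$ts $client_ip req=$req_id n=$seq latency=$latency app=$app"
fields:
  ts:
    type: timestamp
    format: "%Y-%m-%dT%H:%M:%S"
  client_ip:
    type: ip
    cidr: "10.0.0.0/8"             # constrain addresses to a range
  req_id:
    type: uuid
  seq:
    type: sequence
    start: 1
    step: 1
  latency:
    type: float
    min: 0.5
    max: 250.0
    precision: 2
  app:
    type: literal
    value: "checkout"
```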
Run multiple log types simultaneously with independent rates:
```bash
logsynth run nginx redis postgres \
  --stream nginx:rate=100 \
  --stream redis:rate=20 \
  --stream postgres:rate=10 \
  --duration 5m
```

Generate fields only when conditions are met:
```yaml
fields:
  level:
    type: choice
    values: [INFO, ERROR]
  error_code:
    type: int
    min: 1000
    max: 9999
    when: "level == 'ERROR'"
```

Use Jinja2 for complex patterns (auto-detected):
```yaml
pattern: |
  {% if level == "ERROR" %}ALERT {% endif %}{{ ts }} {{ level }}: {{ message }}
```

Inject malformed logs to test error handling:
```bash
logsynth run nginx --count 1000 --corrupt 5  # 5% corrupted
```

Simulate traffic spikes:
```bash
# 100/sec for 5s, then 10/sec for 25s, repeat
logsynth run nginx --burst 100:5s,10:25s --duration 5m
```

Save and reuse settings:
```bash
logsynth profiles create high-volume --rate 1000 --format json
logsynth run nginx --profile high-volume
```

POST logs to HTTP endpoints with batching, retries, and dead-letter support:
```bash
# Basic HTTP POST
logsynth run nginx --output http://localhost:8080/logs --count 1000

# With batching config (batch=N lines, timeout=T seconds)
logsynth run nginx --output "http://localhost:8080/logs?batch=50&timeout=10"

# With custom headers
logsynth run nginx --output http://localhost:8080/logs \
  --header "Authorization:Bearer token" \
  --header "X-Source:logsynth"

# NDJSON format for Elasticsearch-style ingestion
logsynth run nginx --output "http://localhost:9200/_bulk?format=ndjson"
```

URL parameters: batch, timeout, format (json/ndjson/text), retries, dead_letter.
Failed batches are written to ./logsynth-dead-letter.jsonl for retry/debugging.
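These parameters combine on a single URL; a sketch (the endpoint and retry count are illustrative):

```bash
# Batch 100 lines, flush after 5s, retry failed batches 3 times, send NDJSON
logsynth run nginx --duration 1m \
  --output "http://localhost:8080/logs?batch=100&timeout=5&retries=3&format=ndjson"
```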
Extend with Python plugins in ~/.config/logsynth/plugins/:
```python
import hashlib
import random

from logsynth.fields import FieldGenerator, register

class HashGenerator(FieldGenerator):
    def generate(self) -> str:
        # Emit a random 16-character hex token for each log line
        return hashlib.sha256(str(random.random()).encode()).hexdigest()[:16]

    def reset(self) -> None:
        pass  # stateless: nothing to reset between runs

@register("hash")
def create(config: dict) -> FieldGenerator:
    return HashGenerator(config)
```
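Once registered, templates can reference the new type like any built-in (a minimal sketch; the template and field names are illustrative):

```yaml
name: hashed-events      # illustrative template using the custom type
format: plain
pattern: "$ts token=$token"
fields:
  ts:
    type: timestamp
    format: "%H:%M:%S"
  token:
    type: hash           # handled by HashGenerator above
```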
```bash
docker build -t logsynth .
docker run --rm logsynth run nginx --count 100
```
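To keep generated files on the host, a bind mount works as usual (the paths here are illustrative):

```bash
# Write JSON logs to ./test.log on the host
docker run --rm -v "$(pwd):/out" logsynth run nginx --count 100 --format json --output /out/test.log
```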
See the examples/ directory for:

- Jinja2 conditional templates
- Custom plugin implementations
- Profile configurations
- Parallel stream scripts
MIT - see LICENSE