Avalanche is a web application load testing framework designed to simulate user traffic, monitor performance, and analyze system behavior under load.
Avalanche consists of two main components:
- Client Application
- Runner
The Client Application is used to execute Scenarios, store the results, and visualize them through a user interface.
The Runner allows Scenarios to be executed externally, independent of the Client Application. Results generated by the Runner can be stored through the Client’s API.
If no Application URL is provided, results are only displayed in the console.
The Client Application can execute Scenarios independently and does not require the Runner to function.
| Credential | Default Value |
|---|---|
| User | admin |
| Password | admin |
⚠️ Security Notice:
Change the default password immediately after the first login.
Volumes
- /app/data
- /app/scenarios or /scenarios
Environment Variables
| Name | Default Value | Description |
|---|---|---|
| SCENARIO_PATH | ./scenarios | Optional path to the Scenarios directory. Must match the mounted volume path. |
| DATA_PATH | ./data | Optional path for data storage when using SQLite as the datastore. |
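Based on the volume and environment-variable tables above, a standalone Client container using the default SQLite datastore could be started as follows. This is a sketch: the host directories `./scenarios` and `./data` are placeholders, and the published port 8080 is assumed from the Docker Compose example later in this document.

```shell
# Run the Client standalone with SQLite (no PostgreSQL required).
# Host paths ./scenarios and ./data are placeholders; adjust to your layout.
docker run --rm -p 8080:8080 \
  -v ./scenarios:/app/scenarios \
  -v ./data:/app/data \
  -e SCENARIO_PATH=./scenarios \
  -e DATA_PATH=./data \
  registry.gitlab.com/wickedflame/avalanche/client:latest
```

Because neither `AV_DB` nor the other `AV_DB_*` variables are set, the Client falls back to its local SQLite database, persisted in the mounted `./data` directory.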
Volumes
- /app/scenarios or /scenarios
Environment Variables
| Name | Default Value | Description |
|---|---|---|
| AV_DB | pgsql | Enables PostgreSQL as the datastore. |
| AV_DB_SERVER | postgres | Hostname or path to the PostgreSQL server. |
| AV_DB_PORT | 5432 | Port number for the PostgreSQL server. |
| AV_DB_USERNAME | (required) | Username for connecting to the PostgreSQL server. |
| AV_DB_PASSWORD | (required) | Password for the PostgreSQL server user. |
| SCENARIO_PATH | ./scenarios | Optional path to the Scenarios directory. Must match the mounted volume path. |
The following example demonstrates how to deploy the Avalanche Client Application alongside a PostgreSQL database using Docker Compose.
Note:
Using PostgreSQL as the datastore is optional.
If PostgreSQL is not configured, Avalanche will automatically use a local SQLite database.
```yaml
version: '3.8'
services:
  avalanche-client:
    image: registry.gitlab.com/wickedflame/avalanche/client:latest
    ports:
      - "8080:8080"
    environment:
      - AV_DB=pgsql
      - AV_DB_SERVER=postgres
      - AV_DB_PORT=5432
      - AV_DB_USERNAME=avalanche_user
      - AV_DB_PASSWORD=secure_password
      - SCENARIO_PATH=./scenarios
    volumes:
      - ./scenarios:/app/scenarios
    depends_on:
      - postgres
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_USER=avalanche_user
      - POSTGRES_PASSWORD=secure_password
      - POSTGRES_DB=avalanche
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

The Runner is a lightweight execution engine that allows you to run Scenarios externally, typically in a container or as part of an automated pipeline.
It communicates with the Client Application through its REST API to submit results and retrieve configuration data.
It is ideal for CI/CD pipelines or distributed performance testing setups.
The Runner accepts several command-line parameters for controlling scenario execution:
| Short | Long | Description |
|---|---|---|
| -s | --scenario | Name of the Scenario or Scenario file. |
| -u | --url | URL of the Avalanche Client Application. If omitted, results are only logged to the console. |
| -i | --input | Reads the Scenario definition from STDIN. The scenario must be provided using `< filename.yml`. This option blocks the thread if no data is received via STDIN. |
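To illustrate the `--url` behavior described above, a console-only run simply omits `-u`. This sketch assumes the Runner container image and `run` subcommand used in the Docker examples later in this document:

```shell
# Console-only run: no --url, so results are printed to the console
# and are not persisted in the Client Application.
docker run --rm -i \
  -v ./scenarios:/scenarios \
  registry.gitlab.com/wickedflame/avalanche/runner:latest \
  run -s scenario_1
```

This is useful for quick local checks where storing and visualizing results is not needed.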
Before starting the Runner, ensure that:
- The Avalanche Client Application is running and accessible.
  Example: `http://localhost:8080`
- One or more Scenario files exist and are accessible to the Runner.
  Example: `./scenarios/scenario_1.yml`
Note:
Using the Avalanche Client Application is optional.
The Client Application should be used when the test results need to be stored for further analysis.
You can execute Scenarios using the Runner container by mounting your local scenarios directory and connecting it to the Avalanche Client instance.
```shell
docker pull registry.gitlab.com/wickedflame/avalanche/runner:latest

docker run --rm -i \
  -v ./scenarios:/scenarios \
  registry.gitlab.com/wickedflame/avalanche/runner:latest \
  run -s scenario_1 -u http://localhost:8080
```

- `-v ./scenarios:/scenarios` mounts the local directory containing Scenario files.
- `-s scenario_1` specifies the Scenario file to execute (without the .yml extension).
- `-u http://localhost:8080` sets the URL of the Avalanche Client API where results are reported. If the `--url` parameter is omitted, results are displayed only in the container’s console output and are not persisted in the Client.
Alternatively, you can stream a Scenario definition directly to the Runner using STDIN, removing the need for mounted volumes.
```shell
docker run --rm -i \
  registry.gitlab.com/wickedflame/avalanche/runner:latest \
  run -s scenario_1 -u http://localhost:8080 -i < scenarios/scenario_1.yml
```

In this mode:
- The `-i` flag tells the Runner to read from STDIN.
- The `< scenarios/scenario_1.yml` redirection feeds the scenario file content directly into the Runner.
Tips
- Ensure the Scenario name matches the Scenario file name (without the .yml extension).
- Multiple Runner instances can be deployed concurrently for distributed load generation.
- Results sent to the Client Application are automatically aggregated and available for analysis in the web interface.
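Because the Runner can read a Scenario from STDIN, it fits into CI pipelines without volume mounts. The following GitLab CI job is a hypothetical sketch using the standard Docker-in-Docker pattern; the job name, the `AVALANCHE_URL` CI variable, and the scenario path are assumptions, not part of Avalanche:

```yaml
# .gitlab-ci.yml (sketch) - run a load test as a pipeline job.
load-test:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  script:
    # Stream the scenario into the Runner via STDIN; report results to a
    # reachable Client instance configured in the AVALANCHE_URL CI variable.
    - docker run --rm -i registry.gitlab.com/wickedflame/avalanche/runner:latest run -s scenario_1 -u "$AVALANCHE_URL" -i < scenarios/scenario_1.yml
```

If the Client is not reachable from the pipeline, drop the `-u` parameter and the results will appear in the job log instead.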
The Config section is optional.
This is applied to all TestCases that don't define a configuration on their own.
```yaml
Config:
  Users: 5
  Iterations: 10
  Duration: 0
  Interval: 0
  RampupTime: 0
  UseCookies: True
  Delay: 1
TestCases:
- Urls:
  - 'https://testsite.com/'
  Name: Startpage
- Urls:
  - 'https://testsite.com/Home/Privacy'
  Name: Privacy
  Init:
    Url: 'https://testsite.com/Home/Privacy'
  Users: 5
  Iterations: 10
  Duration: 0
  Interval: 0
  RampupTime: 0
  UseCookies: True
  Delay: 1
  Authorization:
    Type: oauth
    GrantType: password
    Authority: https://avalanche.io/api/auth/v1/token
    ClientId: avalancheclient
    ClientSecret: avalanchesecret
    Username:
    Password:
    Scope: test
```
| Name | Type | Description |
|---|---|---|
| Users | INT | Number of users per testrun. Defaults to 1 |
| Iterations | INT | Number of iterations per user |
| Duration | INT | Total duration of the testrun in minutes |
| Interval | INT | Interval of each user call in milliseconds |
| Delay | INT | Delay between each request in seconds |
| RampupTime | INT | Rampup time in seconds |
| UseCookies | BOOL | Use the same cookies for all requests per user? Defaults to true |
| Name | | Name of the Testrun |
| Urls | | List of URLs to call per Testrun |
| Init | | Properties for the initialization. Currently only the Url has to be provided |
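As a sanity check when sizing a testrun, the total number of requests implied by a configuration can be estimated. This assumes the simplest interpretation of the properties above: each user performs `Iterations` passes and calls every URL once per pass. The helper below is illustrative and not part of Avalanche:

```python
def estimated_requests(users: int, iterations: int, url_count: int) -> int:
    """Estimate total requests: each user runs `iterations` passes over all URLs."""
    return users * iterations * url_count

# The Startpage TestCase from the example: 5 users, 10 iterations, 1 URL.
print(estimated_requests(5, 10, 1))  # 50
```

Note that when `Duration` is non-zero, execution is time-bound instead, so the actual request count depends on response times and the configured `Interval` and `Delay`.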
```yaml
Scenario:
  Name: name of the scenario (same as filename)
  TestCases:                # <- refactor to TestCases
  - Name: name of the test
    Users: amount of threads
    Urls:
    - 'https://host.docker.internal:32770/'
    Iterations: 10
    Duration: 0
    Interval: 0
    RampupTime: 0
    UseCookies: True
    Delay: 1
    Init:
      Url: 'https://host.docker.internal:32770/'
```

```yaml
TestRun:
  TestId: unique id for each testrun
  Scenario: name of the scenario
  TestCases:
  - TestName: name of the test, taken from the scenario config   # <- refactor to TestCase
    TestCase: name of the test, taken from the scenario config
    Id: generated id, not relevant
    ThreadId: Id of the thread that the test was run in
    ThreadNumber:                                                # <- refactor to ThreadId
```
```shell
docker build -f dockerfile-runner -t "avalanche_runner:latest" . --no-cache --force-rm=true
docker build -f dockerfile-client -t "avalanche_client:latest" . --no-cache --force-rm=true

docker run --rm -i -v ./scenarios:/scenarios avalanche_runner:latest run -s local
```
Load tests apply an ordinary amount of stress to an application to see how it performs. For example, you may load test an ecommerce application using traffic levels that you've seen during Black Friday or other peak holidays. The goal is to identify any bottlenecks that might arise and address them before new code is deployed.
In the DevOps process, load tests are often run alongside functional tests in a continuous integration and deployment pipeline to catch any issues early.
Stress tests are designed to break the application rather than address bottlenecks. They help you understand the application's limits by applying unrealistic or unlikely load scenarios. By deliberately inducing failures, you can analyze the risks involved at various break points and adjust the application to make it break more gracefully at key junctures.
These tests are usually run on a periodic basis rather than within a DevOps pipeline. For example, you may run a stress test after implementing performance improvements.
Spike tests apply a sudden change in load to identify weaknesses within the application and underlying infrastructure. These tests are often extreme increments or decrements rather than a build-up in load. The goal is to see if all aspects of the system, including server and database, can handle sudden bursts in demand.
These tests are usually run prior to big events. For instance, an ecommerce website might run a spike test before Black Friday.
Endurance tests, also known as soak tests, keep an application under load over an extended period of time to see how it degrades. Oftentimes, an application might handle a short-term increase in load, but memory leaks or other issues could degrade performance over time. The goal is to identify and address these bottlenecks before they reach production.
These tests may be run parallel to a continuous integration pipeline, but their lengthy runtimes mean they may not be run as frequently as load tests.
Scalability tests measure an application's performance when certain elements are scaled up or down. For example, an e-commerce application might test what happens when the number of new customer sign-ups increases or how a decrease in new orders could impact resource usage. They might run at the hardware, software or database level.
These tests tend to run less frequently since they're designed to diagnose specific issues rather than broadly help identify bottlenecks within the entire application.
Volume tests, also known as flood tests, measure how well an application responds to large volumes of data in the database. In addition to simulating network requests, the database is vastly expanded to see whether the larger dataset affects query performance or accessibility as network requests increase. In short, these tests aim to uncover difficult-to-spot bottlenecks.
These tests are usually run before an application expects to see an increase in database size. For instance, an ecommerce application might run the test before adding new products.