Changes from all commits (20 commits)
- `e8f3c10` docs: use pytest-jubilant (james-garner-canonical, Mar 31, 2026)
- `2fb6d38` docs: pin pytest-jubilant and jubilant versions (james-garner-canonical, Mar 31, 2026)
- `cc54d2d` docs: clarify the versions used (james-garner-canonical, Mar 31, 2026)
- `e75b92d` docs: more natural language (james-garner-canonical, Mar 31, 2026)
- `4d80e85` docs: use latest jubilant (james-garner-canonical, Mar 31, 2026)
- `904bf8b` docs: backticks for pytest-jubilant (james-garner-canonical, Mar 31, 2026)
- `64107b2` docs: minor corrections (james-garner-canonical, Mar 31, 2026)
- `ef4321d` docs: explain pytest-jubilant features that aren't explained elsewhere (james-garner-canonical, Mar 31, 2026)
- `0811ae4` docs: restore comment about jubilant docs in example charms (james-garner-canonical, Mar 31, 2026)
- `f4dbcde` docs: apply suggestions from code review (james-garner-canonical, Mar 31, 2026)
- `1b129dc` docs: update the migration guide (james-garner-canonical, Mar 31, 2026)
- `fb1db87` docs: typo (james-garner-canonical, Mar 31, 2026)
- `6720f91` docs: drop reference table, link out, move logs to separate section (james-garner-canonical, Mar 31, 2026)
- `eb1a289` docs: README (james-garner-canonical, Mar 31, 2026)
- `3dfea3c` docs: fix link (james-garner-canonical, Apr 1, 2026)
- `1d5e937` docs: yield -> return (james-garner-canonical, Apr 1, 2026)
- `7cdad55` docs: if not charm (james-garner-canonical, Apr 1, 2026)
- `eacd71e` docs: more helpful message if more than one charm (james-garner-canonical, Apr 1, 2026)
- `f6930d3` docs: too many parents (james-garner-canonical, Apr 1, 2026)
- `6ba6a61` docs: CWD -> current working directory (james-garner-canonical, Apr 1, 2026)
1 change: 1 addition & 0 deletions docs/.custom_wordlist.txt
@@ -23,6 +23,7 @@ Parca
protobuf
pydantic
pytest
README
reinstantiate
replan
repo
4 changes: 2 additions & 2 deletions docs/explanation/testing.md
@@ -112,7 +112,7 @@ When writing an integration test, it is not sufficient to simply check that Juju
### Tools

- [`pytest`](https://pytest.org/) or [`unittest`](https://docs.python.org/3/library/unittest.html) and
Contributor:
Suggested change
- [`pytest`](https://pytest.org/) or [`unittest`](https://docs.python.org/3/library/unittest.html) and
- [`pytest`](https://pytest.org/) or [`unittest`](https://docs.python.org/3/library/unittest.html)

- [Jubilant](https://documentation.ubuntu.com/jubilant/)
- [Jubilant](https://documentation.ubuntu.com/jubilant/), which wraps the Juju CLI, together with [`pytest-jubilant`](https://github.com/canonical/pytest-jubilant), a pytest plugin that manages Juju models during tests
Contributor:
Since we already have a list, how about a separate bullet:

Suggested change
- [Jubilant](https://documentation.ubuntu.com/jubilant/), which wraps the Juju CLI, together with [`pytest-jubilant`](https://github.com/canonical/pytest-jubilant), a pytest plugin that manages Juju models during tests
- [Jubilant](https://documentation.ubuntu.com/jubilant/), which wraps the Juju CLI
- [`pytest-jubilant`](https://github.com/canonical/pytest-jubilant), a pytest plugin that manages Juju models during tests


Integration tests and unit tests should run using the minor version of Python that is shipped with the OS specified in `charmcraft.yaml` (the `base.run-on` key). For example, if Ubuntu 22.04 is specified in `charmcraft.yaml`, you can use the following tox configuration:

@@ -172,7 +172,7 @@ Integration tests are a bit more complex, because these tests require a Juju con…
channel: 1.32-strict/stable
- name: Run integration tests
# Set a predictable model name so it can be consumed by charm-logdump-action
run: tox -e integration -- --model testing
run: tox -e integration -- --juju-model testing
- name: Dump logs
uses: canonical/charm-logdump-action@main
if: failure()
2 changes: 1 addition & 1 deletion docs/howto/index.md
@@ -140,7 +140,7 @@ Harness is a deprecated framework for writing unit tests. You should migrate to

- {doc}`Migrate unit tests from Harness <migrate/migrate-unit-tests-from-harness>`

pytest-operator and python-libjuju are deprecated. You should migrate integration tests to Jubilant.
pytest-operator and python-libjuju are deprecated. You should migrate integration tests to Jubilant and `pytest-jubilant`.

- {doc}`Migrate integration tests from pytest-operator <migrate/migrate-integration-tests-from-pytest-operator>`

2 changes: 1 addition & 1 deletion docs/howto/migrate/index.md
@@ -14,7 +14,7 @@ Migrate unit tests from Harness <migrate-unit-tests-from-harness>

## pytest-operator

pytest-operator and python-libjuju are deprecated. You should migrate integration tests to Jubilant.
pytest-operator and python-libjuju are deprecated. You should migrate integration tests to Jubilant and `pytest-jubilant`.

```{toctree}
:maxdepth: 1
176 changes: 119 additions & 57 deletions docs/howto/migrate/migrate-integration-tests-from-pytest-operator.md
@@ -1,26 +1,25 @@
(pytest-operator-migration)=
# How to migrate integration tests from pytest-operator

Many charm integration tests use [pytest-operator](https://github.com/charmed-kubernetes/pytest-operator) and [python-libjuju](https://github.com/juju/python-libjuju). This guide explains how to migrate your integration tests from those libraries to Jubilant.
Older charm integration tests use [pytest-operator](https://github.com/charmed-kubernetes/pytest-operator) and [python-libjuju](https://github.com/juju/python-libjuju). This guide explains how to migrate your integration tests from those libraries to Jubilant and [`pytest-jubilant`](https://github.com/canonical/pytest-jubilant).

```{tip}
Try bootstrapping your migration with an AI Agent (such as GitHub Copilot or Claude Code). Instruct the agent to clone the canonical/jubilant and canonical/pytest-jubilant repositories, study them, and then migrate the charm integration tests to Jubilant. You should end up with a great starting point to then continue as outlined in the rest of this guide.
Try bootstrapping your migration with an AI Agent (such as GitHub Copilot or Claude Code). Instruct the agent to clone the `canonical/jubilant` and `canonical/pytest-jubilant` repositories, study them, and then migrate the charm integration tests to Jubilant. You should end up with a great starting point to then continue as outlined in the rest of this guide.
```

To get help while you're migrating tests, please keep the {external+jubilant:doc}`Jubilant API Reference <reference/jubilant>` handy, and make use of your IDE's autocompletion -- Jubilant tries to provide good type annotations and docstrings.

Migrating your tests can be broken into three steps:

1. Update your dependencies
2. Add fixtures to `conftest.py`
2. Provide the resources your tests need
Contributor:
Wouldn't they still need to add a fixture for getting the packed charm? Although I suppose not if they're using a Charmcraft profile. So I'm good with your change.

3. Update the tests themselves

Let's look at each of these in turn.


## Update your dependencies

The first thing you'll need to do is add `jubilant` as a dependency to your `tox.ini` or `pyproject.toml` dependencies.
The first thing you'll need to do is add `jubilant` and `pytest-jubilant` as dependencies to your `tox.ini` or `pyproject.toml`. Pin to the current stable major versions, which are maintained with strong backwards compatibility guarantees.

You can also remove the dependencies on `juju` (python-libjuju), `pytest-operator`, and `pytest-asyncio`.

@@ -32,60 +31,90 @@

If you're using `tox.ini`, the diff might look like:

```
boto3
cosl
- juju>=3.0
+ jubilant~=1.0
+ jubilant>=1.8,<2
+ pytest-jubilant>=2,<3
pytest
- pytest-operator
- pytest-asyncio
-r{toxinidir}/requirements.txt
```

If you're migrating a large number of tests, you may want to do it in stages. In that case, keep the old dependencies in place till the end, and migrate tests one at a time, so that both pytest-operator and Jubilant tests can run together.
If you're migrating a large number of tests, you may want to do it in stages. In that case, keep the old dependencies in place till the end, and migrate tests one at a time, so that both `pytest-operator` and Jubilant tests can run together. Note that `pytest-operator` and `pytest-jubilant` use completely different CLI options, so make sure you provide the correct ones for each if you're trying to do something fancy.

## Provide the resources your tests need

## Add fixtures to `conftest.py`
Your integration tests may use a combination of `pytest-operator` features for the resources they need, including packed charms, Juju models, and deployed applications. This section covers how to provide these resources when writing Jubilant-based integration tests.

The pytest-operator library includes pytest fixtures, but Jubilant does not include any fixtures, so you'll need to add one or two fixtures to your `conftest.py`.
### Provide packed charms to your Python tests

(a_juju_model_fixture)=
### A `juju` model fixture
`pytest-operator` provided a `build_charm` helper function. `pytest-jubilant` does not provide an equivalent helper, because it's cleaner to keep packing out of your Python integration tests.

Jubilant expects that a Juju controller has already been set up, either using [Concierge](https://github.com/jnsgruk/concierge) or a manual approach. However, you'll want a fixture that creates a temporary model. We recommend naming the fixture `juju`:
In CI, you may already follow a strategy of first packing your charms (in parallel), and then providing the packed charms to your (perhaps also parallelised) integration tests. A good way to provide the charms is via environment variables.
Contributor:
"via" 🙂 How about "with" instead?


Locally, we recommend decoupling packing from integration testing: pack first, then run the tests. In a simple case, where you have a single charm to test, packing is a single `charmcraft pack` command. Your local `integration` step might then look like this:

```ini
[testenv:integration]
pass_env =
CHARM_PATH
commands =
pytest --tb=native -vv --log-cli-level=DEBUG {toxinidir}/tests/integration {posargs}
```

In your integration tests themselves, you should define a fixture for your charm, which reads the environment variable if set, or falls back to looking for the packed charm in its expected location.

```py
# tests/integration/conftest.py
import os
import pathlib

import pytest


@pytest.fixture(scope="session")
def charm_path():
    # Assuming the current working directory is the charm root:
    yield get_charm_path(env_var="CHARM_PATH", default_dir=pathlib.Path())


def get_charm_path(env_var: str, default_dir: pathlib.Path) -> pathlib.Path:
    charm = os.environ.get(env_var)
    if not charm:
        charms = list(default_dir.glob('*.charm'))
        assert charms, f'No charms were found in {default_dir}'
        assert len(charms) == 1, f'Found more than one charm {charms}'
        charm = charms[0]
    path = pathlib.Path(charm).resolve()
    assert path.is_file(), f'{path} is not a file'
    return path
```
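If you want to sanity-check the helper's two code paths outside of a Juju test run, a quick standalone exercise might look like this (a sketch; the helper is repeated so the snippet runs on its own, and the file and environment variable names are hypothetical):

```python
# Standalone sanity check for the charm-path fallback logic.
import os
import pathlib
import tempfile


def get_charm_path(env_var: str, default_dir: pathlib.Path) -> pathlib.Path:
    # Same helper as in tests/integration/conftest.py above.
    charm = os.environ.get(env_var)
    if not charm:
        charms = list(default_dir.glob('*.charm'))
        assert charms, f'No charms were found in {default_dir}'
        assert len(charms) == 1, f'Found more than one charm {charms}'
        charm = charms[0]
    path = pathlib.Path(charm).resolve()
    assert path.is_file(), f'{path} is not a file'
    return path


with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    packed = root / 'mycharm_ubuntu-22.04-amd64.charm'
    packed.touch()
    # Fallback path: the env var is unset, so the glob finds the one packed charm.
    assert get_charm_path('DEMO_CHARM_PATH', root) == packed.resolve()
    # Override path: the env var takes precedence over the glob.
    os.environ['DEMO_CHARM_PATH'] = str(packed)
    assert get_charm_path('DEMO_CHARM_PATH', root) == packed.resolve()
    del os.environ['DEMO_CHARM_PATH']
```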
Contributor, on lines +78 to +88:
Do you think we should use the same fixture in the how-to guide, in the "Write fixtures" section? How about in canonical/charmcraft#2623?


#### Packing multiple charms

In a more complicated case where you have multiple charms, you should use `*CHARM_PATH` in `pass_env` instead, and use named environment variables to pass each charm's location (e.g. `FOO_CHARM_PATH`, `BAR_CHARM_PATH`).
Contributor:
Would this be slightly clearer?

Suggested change
In a more complicated case where you have multiple charms, you should use `*CHARM_PATH` in `pass_env` instead, and use named environment variables to pass each charm's location (e.g. `FOO_CHARM_PATH`, `BAR_CHARM_PATH`).
In a more complicated case where you have multiple charms, you should use `*_CHARM_PATH` in `pass_env` instead, and use named environment variables to pass each charm's location (e.g. `FOO_CHARM_PATH`, `BAR_CHARM_PATH`).


In this case, you'd want one fixture per charm. If you *don't* make these auto-use fixtures, then running the tests for just one charm won't require packing the others.

When you have multiple charms, it may be useful to provide a local `pack` step like this:
```ini
[testenv:pack]
commands =
bash -c "cd charms/foo && charmcraft pack"
bash -c "cd charms/bar && charmcraft pack"
```

(a_juju_model_fixture)=
### The `juju` and `juju_factory` fixtures

The `pytest-jubilant` plugin provides a module-scoped `juju` fixture that creates a temporary model, destroys it after the tests, and dumps debug logs on failure. It also provides CLI options such as `--no-juju-teardown` (to keep models) and `--juju-model` (to set a custom model name prefix).

`pytest-jubilant` expects that a Juju controller has already been set up, either using [Concierge](https://github.com/jnsgruk/concierge) or a manual approach. The plugin automatically creates a temporary model per test module and tears it down afterward.
Contributor:
Suggested change
`pytest-jubilant` expects that a Juju controller has already been set up, either using [Concierge](https://github.com/jnsgruk/concierge) or a manual approach. The plugin automatically creates a temporary model per test module and tears it down afterward.
`pytest-jubilant` expects that a Juju controller has already been set up, either using [Concierge](https://github.com/canonical/concierge) or a manual approach. The plugin automatically creates a temporary model per test module and tears it down afterward.


In your tests, use the fixture like this:

```python
# tests/integration/test_charm.py

def test_active(juju: jubilant.Juju, charm_path: pathlib.Path):
    juju.deploy(charm_path)
    juju.wait(jubilant.all_active)

    # Or wait for just 'mycharm' to be active (ignoring other apps):
    # … collapsed in the diff view (@@ -94,31 +123,76 @@) …
```

A few things to note about the fixture:

* It includes a command-line parameter `--keep-models`, to match pytest-operator. If the parameter is set, the fixture keeps the temporary model around after running the tests.
* It sets [`juju.wait_timeout`](jubilant.Juju.wait_timeout) to 10 minutes, to match python-libjuju's default `wait_for_idle` timeout.
* If any of the tests fail, it uses `juju.debug_log` to display the last 1000 lines of `juju debug-log` output.
* To keep models around after running the tests (matching pytest-operator's `--keep-models`), pass `--no-juju-teardown`.
* To match python-libjuju's 10-minute `wait_for_idle` timeout, set `juju.wait_timeout = 10 * 60` in a wrapper fixture or at the start of your test.
* If any of the tests fail, the plugin automatically dumps the last 1000 lines of `juju debug-log` output.
* It is module-scoped, like pytest-operator's `ops_test` fixture. This means that a new model is created for every `test_*.py` file, but not for every test.

If your `test_*.py` module needs multiple Juju models (previously managed with `ops_test.track_model`), use the `juju_factory` fixture. This fixture lets you add additional models, each with its own unique suffix; requesting no suffix is equivalent to the `juju` fixture.

```py
import jubilant
import pytest
import pytest_jubilant


@pytest.fixture(scope="module")
def other_model(juju_factory: pytest_jubilant.JujuFactory):
    yield juju_factory.get_juju("other")


def test_cross_model(juju: jubilant.Juju, other_model: jubilant.Juju):
    ...
```

(how_to_migrate_an_application_fixture)=
### An application fixture
### Application setup

If you don't want to deploy your application in each test, you can add a module-scoped `app` fixture that deploys your charm and waits for it to go active.
A lot of the time, you won't want to deploy your application in each test. In this case, you should use tests marked with `juju_setup`. These can be skipped in subsequent test runs using `--no-juju-setup` if you previously kept your models up and the applications deployed with `--no-juju-teardown`. This corresponds to `pytest-operator`'s `skip_if_deployed` functionality.
Contributor:
I had to read this a couple of times to get the point. How about:

Suggested change
A lot of the time, you won't want to deploy your application in each test. In this case, you should use tests marked with `juju_setup`. These can be skipped in subsequent test runs using `--no-juju-setup` if you previously kept your models up and the applications deployed with `--no-juju-teardown`. This corresponds to `pytest-operator`'s `skip_if_deployed` functionality.
A lot of the time, you won't want to deploy your application in each test. In this case, you should use tests marked with `juju_setup`. These tests can be skipped in subsequent test runs using `--no-juju-setup` if you previously used `--no-juju-teardown` to keep your models up and the applications deployed. This corresponds to `pytest-operator`'s `skip_if_deployed` functionality.


```python
# tests/integration/test_actions.py
import pathlib

import jubilant
import pytest

APP = 'mycharm'


@pytest.mark.juju_setup
def test_deploy(juju: jubilant.Juju, charm_path: pathlib.Path):
    juju.deploy(charm_path, APP)
    juju.wait(jubilant.all_active)
    assert ...


@pytest.mark.juju_setup
def test_some_setup_action(juju: jubilant.Juju):
    juju.run(f'{APP}/0', 'some-setup-action')
    assert ...


def test_some_repeatable_action(juju: jubilant.Juju):
    task = juju.run(f'{APP}/0', 'some-repeatable-action')
    assert task.results['...'] == '...'
```

Alternatively, if you just want your tests to depend on the deployed version of your application, you can write an application fixture.

```python
# tests/integration/conftest.py

import pathlib

import jubilant
import pytest

@pytest.fixture(scope='module')
def app(juju: jubilant.Juju, charm_path: pathlib.Path):
    my_app_name = "mycharm"
    juju.deploy(
        charm_path,
        my_app_name,
        resources={
            'mycharm-image': 'ghcr.io/canonical/...',
        },
        # … collapsed in the diff view (@@ -130,21 +204,10 @@) …
    )
    # ... do any other application setup here ...
    juju.wait(jubilant.all_active)

    yield my_app_name
```

In your tests, you'll need to specify that the test depends on both fixtures:
In your tests, you'll need to specify that the test depends on `juju` as well as `app` so that you have a reference to a `jubilant.Juju` object managing the correct model.

```python
# tests/integration/test_charm.py
# … collapsed in the diff view (@@ -154,7 +217,6 @@) …
def test_active(juju: jubilant.Juju, app: str):
    status = juju.status()
    assert status.apps[app].is_active
```


## Update the tests themselves

Many features of pytest-operator and python-libjuju map quite directly to Jubilant, except without using `async`. Here is a summary of what you need to change:
6 changes: 3 additions & 3 deletions docs/howto/write-and-structure-charm-code.md
@@ -100,8 +100,8 @@ unit = [
]
# Dependencies of integration tests
integration = [
"jubilant",
"pytest",
"jubilant>=1.8,<2",
"pytest-jubilant>=2,<3",
]
# Additional groups
docs = [
@@ -447,7 +447,7 @@ a cloud in which to deploy it, is required. This example uses [Concierge](https:…
run: uv tool install tox --with tox-uv
- name: Run integration tests
# Set a predictable model name so it can be consumed by charm-logdump-action
run: tox -e integration -- --model testing
run: tox -e integration -- --juju-model testing
- name: Dump logs
uses: canonical/charm-logdump-action@main
if: failure()