docs: update tutorials and how-tos to use pytest-jubilant #2406
@@ -23,6 +23,7 @@ Parca
protobuf
pydantic
pytest
README
reinstantiate
replan
repo
@@ -112,7 +112,7 @@ When writing an integration test, it is not sufficient to simply check that Juju
### Tools

- [`pytest`](https://pytest.org/) or [`unittest`](https://docs.python.org/3/library/unittest.html) and
- [Jubilant](https://documentation.ubuntu.com/jubilant/), which wraps the Juju CLI, together with [`pytest-jubilant`](https://github.com/canonical/pytest-jubilant), a pytest plugin that manages Juju models during tests
|
Contributor
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Since we already have a list, how about a separate bullet:
Suggested change
|
Integration tests and unit tests should run using the minor version of Python that is shipped with the OS specified in `charmcraft.yaml` (the `base.run-on` key). For example, if Ubuntu 22.04 is specified in `charmcraft.yaml`, you can use the following tox configuration:
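The configuration referenced above falls outside this hunk; as a minimal sketch (assuming Ubuntu 22.04, which ships Python 3.10), the pin might look like:

```ini
# Sketch only -- not the configuration from this repository.
[testenv]
# Ubuntu 22.04 (the base.run-on OS) ships Python 3.10.
basepython = python3.10
```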
@@ -172,7 +172,7 @@ Integration tests are a bit more complex, because these tests require a Juju con
      channel: 1.32-strict/stable
  - name: Run integration tests
    # Set a predictable model name so it can be consumed by charm-logdump-action
    run: tox -e integration -- --juju-model testing
  - name: Dump logs
    uses: canonical/charm-logdump-action@main
    if: failure()
@@ -1,26 +1,25 @@
(pytest-operator-migration)=
# How to migrate integration tests from pytest-operator

Older charm integration tests use [pytest-operator](https://github.com/charmed-kubernetes/pytest-operator) and [python-libjuju](https://github.com/juju/python-libjuju). This guide explains how to migrate your integration tests from those libraries to Jubilant and [`pytest-jubilant`](https://github.com/canonical/pytest-jubilant).

```{tip}
Try bootstrapping your migration with an AI agent (such as GitHub Copilot or Claude Code). Instruct the agent to clone the `canonical/jubilant` and `canonical/pytest-jubilant` repositories, study them, and then migrate the charm integration tests to Jubilant. You should end up with a great starting point from which to continue as outlined in the rest of this guide.
```

To get help while you're migrating tests, please keep the {external+jubilant:doc}`Jubilant API Reference <reference/jubilant>` handy, and make use of your IDE's autocompletion -- Jubilant tries to provide good type annotations and docstrings.

Migrating your tests can be broken into three steps:

1. Update your dependencies
2. Provide the resources your tests need
3. Update the tests themselves

> **Contributor:** Wouldn't they still need to add a fixture for getting the packed charm? Although I suppose not if they're using a Charmcraft profile. So I'm good with your change.

Let's look at each of these in turn.

## Update your dependencies

The first thing you'll need to do is add `jubilant` and `pytest-jubilant` as dependencies in your `tox.ini` or `pyproject.toml`. Pin to the current stable major versions, which are maintained with strong backwards-compatibility guarantees.

You can also remove the dependencies on `juju` (python-libjuju), `pytest-operator`, and `pytest-asyncio`.
@@ -32,60 +31,90 @@ If you're using `tox.ini`, the diff might look like:
    boto3
    cosl
-   juju>=3.0
+   jubilant>=1.8,<2
+   pytest-jubilant>=2,<3
    pytest
-   pytest-operator
-   pytest-asyncio
    -r{toxinidir}/requirements.txt
```

If you're migrating a large number of tests, you may want to do it in stages. In that case, keep the old dependencies in place until the end, and migrate tests one at a time, so that both `pytest-operator` and Jubilant tests can run together. Note that `pytest-operator` and `pytest-jubilant` use completely different CLI options, so make sure you provide the correct ones for each if you're trying to do something fancy.
|
|
||||||
| ## Provide the resources your tests need | ||||||
|
|
||||||
| ## Add fixtures to `conftest.py` | ||||||
| Your integration tests may use a combination of `pytest-operator` features for the resources they need, including packed charms, Juju models, and deployed applications. This section covers how to provide these resources when writing `Jubilant` based integration tests. | ||||||
|
|
||||||
| The pytest-operator library includes pytest fixtures, but Jubilant does not include any fixtures, so you'll need to add one or two fixtures to your `conftest.py`. | ||||||
| ### Provide packed charms to your Python tests | ||||||
|
|
||||||
| (a_juju_model_fixture)= | ||||||
| ### A `juju` model fixture | ||||||
| `pytest-operator` provided a `build_charm` helper function. `pytest-jubilant` does not provide an equivalent helper, because it's cleaner to keep packing out of your Python integration tests. | ||||||
|
|
||||||
| Jubilant expects that a Juju controller has already been set up, either using [Concierge](https://github.com/jnsgruk/concierge) or a manual approach. However, you'll want a fixture that creates a temporary model. We recommend naming the fixture `juju`: | ||||||
| In CI, you may already follow a strategy of first packing your charms (in parallel), and then providing the packed charms to your (perhaps also parallelised) integration tests. A good way to provide the charms is via environment variables. | ||||||
|
Contributor
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. "via" 🙂 How about "with" instead? |
||||||
Locally, we recommend decoupling packing from integration testing by performing packing separately. In a simple case, where you have a single charm to test, this can be done with a single `charmcraft pack` command. Your local `integration` step might then look like this:

```ini
[testenv:integration]
pass_env =
    CHARM_PATH
commands =
    pytest --tb=native -vv --log-cli-level=DEBUG {toxinidir}/tests/integration {posargs}
```

In your integration tests themselves, you should define a fixture for your charm, which reads the environment variable if set, or falls back to looking for the packed charm in its expected location.

```py
# tests/integration/conftest.py
import os
import pathlib

import pytest


@pytest.fixture(scope="session")
def charm_path():
    # Assuming the current working directory is the charm root:
    yield get_charm_path(env_var="CHARM_PATH", default_dir=pathlib.Path())


def get_charm_path(env_var: str, default_dir: pathlib.Path) -> pathlib.Path:
    charm = os.environ.get(env_var)
    if not charm:
        charms = list(default_dir.glob('*.charm'))
        assert charms, f'No charms were found in {default_dir}'
        assert len(charms) == 1, f'Found more than one charm: {charms}'
        charm = charms[0]
    path = pathlib.Path(charm).resolve()
    assert path.is_file(), f'{path} is not a file'
    return path
```
|
Comment on lines
+78
to
+88
Contributor
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Do you think we should use the same fixture in the how-to guide, in the "Write fixtures" section? How about in canonical/charmcraft#2623? |
||||||
|
|
||||||
| if request.session.testsfailed: | ||||||
| log = juju.debug_log(limit=1000) | ||||||
| print(log, end='') | ||||||
| #### Packing multiple charms | ||||||
|
|
||||||
| def pytest_addoption(parser): | ||||||
| parser.addoption( | ||||||
| '--keep-models', | ||||||
| action='store_true', | ||||||
| default=False, | ||||||
| help='keep temporarily-created models', | ||||||
| ) | ||||||
In a more complicated case, where you have multiple charms, use `*CHARM_PATH` in `pass_env` instead, and use named environment variables to pass each charm's location (for example, `FOO_CHARM_PATH` and `BAR_CHARM_PATH`).

> **Contributor:** Would this be slightly clearer?

In this case, you'd want one fixture per charm. If you *don't* make these auto-use fixtures, then when you're running tests for just one charm, the other charms won't need to be packed.
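To see the shape of this, here's a sketch of per-charm fixtures: the `FOO_CHARM_PATH`/`BAR_CHARM_PATH` variable names and the `charms/foo`, `charms/bar` layout are illustrative assumptions, and `charm_path_for` is a hypothetical helper mirroring the single-charm lookup above.

```python
# tests/integration/conftest.py -- a sketch; the FOO/BAR names and the
# charms/foo, charms/bar layout are illustrative assumptions.
import os
import pathlib

import pytest


def charm_path_for(env_var: str, default_dir: pathlib.Path) -> pathlib.Path:
    """Resolve a packed charm from an env var, else glob default_dir."""
    charm = os.environ.get(env_var)
    if not charm:
        charms = list(default_dir.glob('*.charm'))
        assert len(charms) == 1, f'expected exactly one .charm in {default_dir}, found {charms}'
        charm = str(charms[0])
    path = pathlib.Path(charm).resolve()
    assert path.is_file(), f'{path} is not a file'
    return path


# Deliberately not autouse: a run that only uses foo_charm never forces bar to be packed.
@pytest.fixture(scope='session')
def foo_charm():
    return charm_path_for('FOO_CHARM_PATH', pathlib.Path('charms/foo'))


@pytest.fixture(scope='session')
def bar_charm():
    return charm_path_for('BAR_CHARM_PATH', pathlib.Path('charms/bar'))
```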
When you have multiple charms, it may be useful to provide a local `pack` step like this:

```ini
[testenv:pack]
commands =
    bash -c "cd charms/foo && charmcraft pack"
    bash -c "cd charms/bar && charmcraft pack"
```
(a_juju_model_fixture)=
### The `juju` and `juju_factory` fixtures

The `pytest-jubilant` plugin provides a module-scoped `juju` fixture that creates a temporary model per `test_*.py` module, destroys it after the tests, and dumps debug logs on failure. It also provides CLI options such as `--no-juju-teardown` (to keep models) and `--juju-model` (to set a custom model name prefix).

`pytest-jubilant` expects that a Juju controller has already been set up, either using [Concierge](https://github.com/jnsgruk/concierge) or a manual approach.

In your tests, use the fixture like this:
```python
# tests/integration/test_charm.py
import pathlib

import jubilant


def test_active(juju: jubilant.Juju, charm_path: pathlib.Path):
    juju.deploy(charm_path)
    juju.wait(jubilant.all_active)

    # Or wait for just 'mycharm' to be active (ignoring other apps):
@@ -94,31 +123,76 @@ def test_active(juju: jubilant.Juju):
A few things to note about the fixture:

* To keep models around after running the tests (matching pytest-operator's `--keep-models`), pass `--no-juju-teardown`.
* To match python-libjuju's 10-minute `wait_for_idle` timeout, set `juju.wait_timeout = 10 * 60` in a wrapper fixture or at the start of your test.
* If any of the tests fail, the plugin automatically dumps the last 1000 lines of `juju debug-log` output.
* It is module-scoped, like pytest-operator's `ops_test` fixture. This means that a new model is created for every `test_*.py` file, but not for every test.
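One way to sketch the timeout point: `with_default_timeout` below is a hypothetical helper (you could call it from an autouse fixture in `conftest.py`, or at the start of a test); `jubilant.Juju` exposes `wait_timeout` as a plain attribute you can assign.

```python
# A hypothetical helper -- not part of pytest-jubilant.
WAIT_TIMEOUT = 10 * 60  # seconds, matching python-libjuju's wait_for_idle default


def with_default_timeout(juju):
    """Set the project-wide wait timeout on a jubilant.Juju instance."""
    juju.wait_timeout = WAIT_TIMEOUT
    return juju
```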
If your `test_*.py` module needs multiple Juju models (previously managed with `ops_test.track_model`), use the `juju_factory` fixture. This fixture lets you add additional models, each with its own unique suffix -- no suffix is equivalent to the `juju` fixture.
```py
import jubilant
import pytest
import pytest_jubilant


@pytest.fixture(scope="module")
def other_model(juju_factory: pytest_jubilant.JujuFactory):
    yield juju_factory.get_juju("other")


def test_cross_model(juju: jubilant.Juju, other_model: jubilant.Juju):
    ...
```
(how_to_migrate_an_application_fixture)=
### Application setup

A lot of the time, you won't want to deploy your application in each test. In this case, mark your setup tests with `juju_setup`. If you previously kept your models (and deployed applications) with `--no-juju-teardown`, you can skip these setup tests in subsequent runs by passing `--no-juju-setup`. This corresponds to `pytest-operator`'s `skip_if_deployed` functionality.
```python
# tests/integration/test_actions.py
import pathlib

import jubilant
import pytest

APP = 'mycharm'


@pytest.mark.juju_setup
def test_deploy(juju: jubilant.Juju, charm_path: pathlib.Path):
    juju.deploy(charm_path, APP)
    juju.wait(jubilant.all_active)
    assert ...


@pytest.mark.juju_setup
def test_some_setup_action(juju: jubilant.Juju):
    juju.run(f'{APP}/0', 'some-setup-action')
    assert ...


def test_some_repeatable_action(juju: jubilant.Juju):
    task = juju.run(f'{APP}/0', 'some-repeatable-action')
    assert task.results['...'] == '...'
```
Alternatively, if you just want your tests to depend on the deployed version of your application, you can write an application fixture.
```python
# tests/integration/conftest.py
import pathlib

import jubilant
import pytest


@pytest.fixture(scope='module')
def app(juju: jubilant.Juju, charm_path: pathlib.Path):
    my_app_name = "mycharm"
    juju.deploy(
        charm_path,
        my_app_name,
        resources={
            'mycharm-image': 'ghcr.io/canonical/...',
        },
@@ -130,21 +204,10 @@ def app(juju: jubilant.Juju):
    )
    # ... do any other application setup here ...
    juju.wait(jubilant.all_active)

    yield my_app_name
```
In your tests, you'll need to specify that the test depends on `juju` as well as `app`, so that you have a reference to the `jubilant.Juju` object managing the correct model.

```python
# tests/integration/test_charm.py
@@ -154,7 +217,6 @@ def test_active(juju: jubilant.Juju, app: str):
    assert status.apps[app].is_active
```

## Update the tests themselves

Many features of pytest-operator and python-libjuju map quite directly to Jubilant, except without using `async`. Here is a summary of what you need to change: