
docs: update tutorials and how-tos to use pytest-jubilant #2406

Open
james-garner-canonical wants to merge 20 commits into canonical:main from james-garner-canonical:26-03+docs+pytest-jubilant

Conversation

Contributor

@james-garner-canonical james-garner-canonical commented Mar 31, 2026

Following the release of pytest-jubilant 2.0 and its official adoption by Charm Tech, this PR updates our docs to recommend its use. This includes updating the tutorials and example charms to use the juju fixture it provides instead of rolling our own, updating the relevant how-to guides in the same way, and expanding the integration testing how-to guide to cover pytest-jubilant features and the recommended local integration testing workflow.

Preview build:

Comment on lines -346 to -353
You can use Jubilant with several models, in the same cloud or in
different clouds. This way you can, for example, integrate machine charms
with Kubernetes charms easily.
If you need multiple Juju models in a single test module, use the `juju_factory` fixture provided by `pytest-jubilant`:

```python
model_a = jubilant.Juju("some-model")
model_b = jubilant.Juju("another-controller:a-model")
new_model = jubilant.Juju().add_model("a-model", "some-cloud", controller=..., config=..., credential=...)
```
Contributor Author

I'm not entirely sure what to do here.

The workflow described here originally suggests that pytest-jubilant might benefit from the following:

  • `JujuFactory.get_juju` might benefit from the addition of keyword arguments that are forwarded to `Juju.add_model`, like `cloud`, `controller`, `config`, and `credential`.
  • Maybe we'd also want a CLI option like `--juju-cloud` to explicitly set the default cloud, and perhaps `--juju-controller` as well.

Or should we document a simpler workflow for now and see if we get a feature request to support models in different clouds in pytest-jubilant (current approach)?

I'm thinking this might be enough of an oversight on our part to warrant adding the feature preemptively.

Collaborator

I think it's better to wait and add them later if needed. I think it'd likely be the forwarded keyword arguments without more CLI options, and those should be easy enough to add in a subsequent release.

There's the .cli() method if you need to do something that Jubilant doesn't currently support, and there's manually writing fixtures if there's something that pytest-jubilant doesn't currently support.

Contributor Author

Yeah, they can achieve this by bypassing pytest-jubilant, as we documented before, losing all of pytest-jubilant's features for such models (setup/teardown management, model reuse, log dumping). You don't need the `.cli()` method, just `Juju.add_model`.

So I guess what I'm wanting to confirm is that we're fine with removing our current instructions for cross-cloud/controller testing from our docs and trusting that users will open a pytest-jubilant issue if they realise they need that capability?
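For reference, bypassing pytest-jubilant would mean hand-writing a fixture along these lines (a sketch only: it assumes a bootstrapped controller, and the model, cloud, and controller names are placeholders):

```python
# conftest.py sketch: a model on another cloud/controller, managed by hand
# rather than by pytest-jubilant (so no automatic reuse or log dumping).
import jubilant
import pytest


@pytest.fixture(scope="module")
def other_cloud_juju():
    juju = jubilant.Juju()
    # "cross-cloud-test", "some-k8s-cloud", and "other-controller" are placeholders.
    juju.add_model("cross-cloud-test", "some-k8s-cloud", controller="other-controller")
    yield juju
    juju.destroy_model("cross-cloud-test", destroy_storage=True)
```

This can't run without a live Juju controller, so it's illustrative only.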

Comment on lines +401 to +407
| Option | Description |
|---|---|
| `--juju-model PREFIX` | Use a custom model name prefix instead of a random one. Required if using `--no-juju-setup`. Model names are formed as `PREFIX-MODULE` (or `PREFIX-MODULE-SUFFIX` for extra models created via `juju_factory`), where `MODULE` is derived from the test file name. For example, running `tests/integration/test_charm.py` with `--juju-model mytest` creates a model called `mytest-test-charm`. |
| `--no-juju-teardown` | Keep models after the tests finish, instead of destroying them. Also skips tests marked with `@pytest.mark.juju_teardown`. |
| `--no-juju-setup` | Skip tests marked with `@pytest.mark.juju_setup` (for example, deployment tests). The model must already exist. Requires `--juju-model`. |
| `--juju-switch` | Switch to the active test model, so you can monitor it with `juju status` in another terminal. |
| `--juju-dump-logs [PATH]` | Dump `juju debug-log` output to disk for each model. Defaults to `.logs/`. |
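For example, these options combine into a local edit-test loop (assuming a tox `integration` environment that forwards extra arguments to pytest, as used elsewhere in this guide):

```shell
# First run: create a model named "mytest-<module>", deploy, and keep the model.
tox -e integration -- --juju-model mytest --no-juju-teardown --juju-switch
# Later runs: reuse the existing model and skip deployment tests.
tox -e integration -- --juju-model mytest --no-juju-setup --no-juju-teardown
```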
Contributor Author

This is like a little slice of reference docs embedded in the relevant part of the how-to (running your tests -- here are the CLI options you can use). I wonder if we should just link out to pytest-jubilant directly here. It feels a little odd since it doesn't have a proper docs site, just the repository readme.

Contributor Author

Per discussion with @dwilding, we'll drop this table and link out.

Contributor Author

I've dropped the table and moved the logging info to a separate section.

@james-garner-canonical james-garner-canonical marked this pull request as ready for review March 31, 2026 02:19
Comment on lines +409 to +425
If any tests fail, `pytest-jubilant` automatically prints the last 1000 lines of `juju debug-log` to stderr, even without `--juju-dump-logs`.

````{tip}
Use `--juju-dump-logs` in CI with `actions/upload-artifact` to make debug logs available as build artifacts:

```yaml
# In your integration test job
- run: tox -e integration -- --juju-dump-logs
- name: Upload logs
  if: ${{ !cancelled() }}
  uses: actions/upload-artifact@v4
  with:
    name: juju-dump-logs
    path: .logs
```
````

Contributor Author

Highlighting this section for reference in #2405.

Collaborator

@tonyandrewmeyer tonyandrewmeyer left a comment

Thanks! Looks good overall. I've made a few suggestions - the main two themes are (a) let's not copy the bad practice of saying you're writing tests using pytest-x when you're really using something else and getting some pytest niceties from the plugin, and (b) I think the migration guide has strayed a little into migration from Jubilant-only, but should stay focused on migrating from pytest-operator.


import jubilant
import pytest
If you have a hand-written `juju` fixture in your `conftest.py`, you can remove it.
Collaborator

Isn't this pretty unlikely for someone migrating from pytest-operator?

Contributor Author

Good point, I'll rework this.

Contributor Author

I've substantially reworked this document to more clearly cover the pytest-operator -> pytest-jubilant mapping.


Co-authored-by: Tony Meyer <tony.meyer@gmail.com>
Co-authored-by: James Garner <james.garner@canonical.com>
Contributor

@dwilding dwilding left a comment

Really thorough and clear updates, thanks!

One thing I didn't add comments about, but that we discussed internally: instead of replacing `pytest` with `pytest-jubilant` in `pyproject.toml`, we should probably keep both dependencies listed.


- [`pytest`](https://pytest.org/) or [`unittest`](https://docs.python.org/3/library/unittest.html) and
- [Jubilant](https://documentation.ubuntu.com/jubilant/)
- [Jubilant](https://documentation.ubuntu.com/jubilant/), which wraps the Juju CLI, together with [`pytest-jubilant`](https://github.com/canonical/pytest-jubilant), a pytest plugin that manages Juju models during tests
Contributor

Since we already have a list, how about a separate bullet:

Suggested change
- [Jubilant](https://documentation.ubuntu.com/jubilant/), which wraps the Juju CLI, together with [`pytest-jubilant`](https://github.com/canonical/pytest-jubilant), a pytest plugin that manages Juju models during tests
- [Jubilant](https://documentation.ubuntu.com/jubilant/), which wraps the Juju CLI
- [`pytest-jubilant`](https://github.com/canonical/pytest-jubilant), a pytest plugin that manages Juju models during tests

@@ -112,7 +112,7 @@ When writing an integration test, it is not sufficient to simply check that Juju
### Tools

- [`pytest`](https://pytest.org/) or [`unittest`](https://docs.python.org/3/library/unittest.html) and
Contributor

Suggested change
- [`pytest`](https://pytest.org/) or [`unittest`](https://docs.python.org/3/library/unittest.html) and
- [`pytest`](https://pytest.org/) or [`unittest`](https://docs.python.org/3/library/unittest.html)


1. Update your dependencies
2. Add fixtures to `conftest.py`
2. Provide the resources your tests need
Contributor

Wouldn't they still need to add a fixture for getting the packed charm? Although I suppose not if they're using a Charmcraft profile. So I'm good with your change.

`pytest-operator` provided a `build_charm` helper function. `pytest-jubilant` does not provide an equivalent helper, because it's cleaner to keep packing out of your Python integration tests.

Jubilant expects that a Juju controller has already been set up, either using [Concierge](https://github.com/jnsgruk/concierge) or a manual approach. However, you'll want a fixture that creates a temporary model. We recommend naming the fixture `juju`:
In CI, you may already follow a strategy of first packing your charms (in parallel), and then providing the packed charms to your (perhaps also parallelised) integration tests. A good way to provide the charms is via environment variables.
Contributor

"via" 🙂 How about "with" instead?

Comment on lines +78 to +88
```python
def get_charm_path(env_var: str, default_dir: pathlib.Path) -> pathlib.Path:
charm = os.environ.get(env_var)
if not charm:
charms = list(default_dir.glob('*.charm'))
assert charms, f'No charms were found in {default_dir}'
assert len(charms) == 1, f'Found more than one charm {charms}'
charm = charms[0]
path = pathlib.Path(charm).resolve()
assert path.is_file(), f'{path} is not a file'
return path
```
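The fallback branch of this helper can be exercised standalone; a minimal self-contained check, using a temporary directory and a hypothetical environment variable name rather than the real test layout:

```python
import os
import pathlib
import tempfile

def get_charm_path(env_var: str, default_dir: pathlib.Path) -> pathlib.Path:
    # Prefer an explicit path from the environment (for example, set by CI);
    # otherwise fall back to the single *.charm file in default_dir.
    charm = os.environ.get(env_var)
    if not charm:
        charms = list(default_dir.glob('*.charm'))
        assert charms, f'No charms were found in {default_dir}'
        assert len(charms) == 1, f'Found more than one charm {charms}'
        charm = charms[0]
    path = pathlib.Path(charm).resolve()
    assert path.is_file(), f'{path} is not a file'
    return path

# Demonstrate the fallback: one packed charm in a directory, env var unset.
with tempfile.TemporaryDirectory() as tmp:
    d = pathlib.Path(tmp)
    (d / 'demo_amd64.charm').write_bytes(b'')
    resolved = get_charm_path('HYPOTHETICAL_CHARM_PATH_VAR', d)
print(resolved.name)  # demo_amd64.charm
```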
Contributor

Do you think we should use the same fixture in the how-to guide, in the "Write fixtures" section? How about in canonical/charmcraft#2623?

#
# The integration tests use the Jubilant library. See https://documentation.ubuntu.com/jubilant/
# The pytest-jubilant plugin (https://github.com/canonical/pytest-jubilant) provides a
# module-scoped ``juju`` fixture that creates a temporary Juju model.
Contributor

Suggested change
# module-scoped ``juju`` fixture that creates a temporary Juju model.
# module-scoped `juju` fixture that creates a temporary Juju model.


## Write an integration test

Now that our charm integrates with the database, if there's not a database relation, the app will be in `blocked` status instead of `active`. Let's tweak our existing integration test `test_deploy` accordingly, setting the expected status as `blocked` in `juju.wait`:
Now that our charm integrates with the database, if there's not a database relation, the app will be in `blocked` status instead of `active`. Let's tweak our existing integration test `test_deploy` accordingly, setting the expected status as `blocked` in `juju.wait`. Replace the contents of `tests/integration/test_charm.py` with:
Contributor

How about one additional tweak, since we don't literally write `blocked` in the `juju.wait` call?

Suggested change
Now that our charm integrates with the database, if there's not a database relation, the app will be in `blocked` status instead of `active`. Let's tweak our existing integration test `test_deploy` accordingly, setting the expected status as `blocked` in `juju.wait`. Replace the contents of `tests/integration/test_charm.py` with:
Now that our charm integrates with the database, if there's not a database relation, the app will be in `blocked` status instead of `active`. Let's tweak our existing integration test `test_deploy` accordingly, to expect blocked status in `juju.wait`. Replace the contents of `tests/integration/test_charm.py` with:
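Concretely, the tweaked test might look like this (a sketch: `APP_NAME` and the `charm_path` fixture are placeholders for what the tutorial defines elsewhere, and the status check uses Jubilant's `all_blocked` helper):

```python
import jubilant

APP_NAME = 'demo-app'  # placeholder application name

def test_deploy(juju: jubilant.Juju, charm_path):
    juju.deploy(charm_path, app=APP_NAME)
    # Without a database relation, the app should settle in blocked status.
    juju.wait(lambda status: jubilant.all_blocked(status, APP_NAME))
```

Running this requires a live Juju controller, so it's illustrative only.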
