Adopt ruff format #2083

Merged (9 commits) on Feb 21, 2025
5 changes: 3 additions & 2 deletions .pre-commit-config.yaml
@@ -43,6 +43,8 @@ repos:
hooks:
- id: ruff
args: [ '--fix', '--show-fixes' ]
- id: ruff-format
exclude: '(src/xclim/indices/__init__.py|docs/installation.rst)'
- repo: https://github.com/pylint-dev/pylint
rev: v3.3.4
hooks:
@@ -85,8 +87,7 @@ repos:
hooks:
- id: blackdoc
additional_dependencies: [ 'black==25.1.0' ]
exclude: '(src/xclim/indices/__init__.py|docs/installation.rst)'
# - id: blackdoc-autoupdate-black
exclude: '(.py|docs/installation.rst)'
- repo: https://github.com/codespell-project/codespell
rev: v2.4.1
hooks:
5 changes: 5 additions & 0 deletions CHANGELOG.rst
@@ -14,6 +14,11 @@ Breaking changes
Internal changes
^^^^^^^^^^^^^^^^
* `black`, `isort`, and `nbqa` have all been dropped from the development dependencies. (:issue:`1805`, :pull:`2082`).
* `ruff` has been configured to provide code formatting. (:pull:`2083`):
* The maximum line-length is now 120 characters.
* Docstring formatting is now enabled.
    * Line endings in files must now be `Unix`-compatible (`LF`).
* The `blackdoc` pre-commit hook now only examines `.rst` and `.md` files. (:pull:`2083`).
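For reference, formatter settings like the ones this changelog entry describes are normally declared under `[tool.ruff]` in `pyproject.toml`. The fragment below is a hedged sketch of what such a configuration could look like; the PR's actual configuration file is not shown on this page, so the exact keys and values used by xclim are an assumption.

```toml
# Illustrative ruff settings matching the changelog bullets above;
# the keys exist in ruff, but their use here is an assumption.
[tool.ruff]
line-length = 120            # maximum line length of 120 characters

[tool.ruff.format]
docstring-code-format = true # format code examples inside docstrings
line-ending = "lf"           # enforce Unix-compatible (LF) line endings
```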

v0.55.0 (2025-02-17)
--------------------
3 changes: 1 addition & 2 deletions Makefile
@@ -56,8 +56,7 @@ lint: ## check style with flake8 and black
python -m ruff check src/xclim tests
python -m flake8 --config=.flake8 src/xclim tests
python -m vulture src/xclim tests
python -m blackdoc --check --exclude=src/xclim/indices/__init__.py src/xclim
python -m blackdoc --check docs
python -m blackdoc --check README.rst CHANGELOG.rst CONTRIBUTING.rst docs --exclude=".py"
codespell src/xclim tests docs
python -m numpydoc lint src/xclim/*.py src/xclim/ensembles/*.py src/xclim/indices/*.py src/xclim/indicators/*.py src/xclim/testing/*.py
python -m deptry src
4 changes: 1 addition & 3 deletions docs/conf.py
@@ -240,9 +240,7 @@ class XCStyle(AlphaStyle):

# General information about the project.
project = "xclim"
copyright = (
f"2018-{datetime.datetime.now().year}, Ouranos Inc., Travis Logan, and contributors"
)
copyright = f"2018-{datetime.datetime.now().year}, Ouranos Inc., Travis Logan, and contributors"
author = "xclim Project Development Team"

# The version info for the project you're documenting, acts as replacement
8 changes: 2 additions & 6 deletions docs/notebooks/analogs.ipynb
@@ -133,9 +133,7 @@
"metadata": {},
"outputs": [],
"source": [
"results = analog.spatial_analogs(\n",
" sim[[\"tg_mean\"]], obs[[\"tg_mean\"]], method=\"seuclidean\"\n",
")\n",
"results = analog.spatial_analogs(sim[[\"tg_mean\"]], obs[[\"tg_mean\"]], method=\"seuclidean\")\n",
"\n",
"results.plot()\n",
"plt.plot(sim.lon, sim.lat, \"ro\", label=\"Target\")\n",
@@ -169,9 +167,7 @@
"metadata": {},
"outputs": [],
"source": [
"results = analog.spatial_analogs(\n",
" sim[[\"tg_mean\"]], obs[[\"tg_mean\"]], method=\"zech_aslan\"\n",
")\n",
"results = analog.spatial_analogs(sim[[\"tg_mean\"]], obs[[\"tg_mean\"]], method=\"zech_aslan\")\n",
"\n",
"results.plot(center=False)\n",
"plt.plot(sim.lon, sim.lat, \"ro\", label=\"Target\")\n",
8 changes: 2 additions & 6 deletions docs/notebooks/benchmarks/sdba_quantile.ipynb
@@ -100,19 +100,15 @@
"for use_fnq in [True, False]:\n",
" sdba.nbutils.USE_FASTNANQUANTILE = use_fnq\n",
" # heat-up the jit\n",
" sdba.nbutils.quantile(\n",
" xr.DataArray(np.array([0, 1.5])), dim=\"dim_0\", q=np.array([0.5])\n",
" )\n",
" sdba.nbutils.quantile(xr.DataArray(np.array([0, 1.5])), dim=\"dim_0\", q=np.array([0.5]))\n",
" for size in np.arange(250, 2000 + 250, 250):\n",
" da = tx.isel(time=slice(0, size))\n",
" t0 = time.time()\n",
" for _i in range(num_tests):\n",
" sdba.nbutils.quantile(da, **kws).compute()\n",
" timed[use_fnq].append([size, time.time() - t0])\n",
"\n",
"for k, lab in zip(\n",
" [True, False], [\"xclim.core.utils.nan_quantile\", \"fastnanquantile\"], strict=False\n",
"):\n",
"for k, lab in zip([True, False], [\"xclim.core.utils.nan_quantile\", \"fastnanquantile\"], strict=False):\n",
" arr = np.array(timed[k])\n",
" plt.plot(arr[:, 0], arr[:, 1] / num_tests, label=lab)\n",
"plt.legend()\n",
27 changes: 7 additions & 20 deletions docs/notebooks/customize.ipynb
@@ -35,11 +35,7 @@
"metadata": {},
"outputs": [],
"source": [
"tasmax = (\n",
" xr.tutorial.load_dataset(\"air_temperature\")\n",
" .air.resample(time=\"D\")\n",
" .max(keep_attrs=True)\n",
")\n",
"tasmax = xr.tutorial.load_dataset(\"air_temperature\").air.resample(time=\"D\").max(keep_attrs=True)\n",
"tasmax = tasmax.where(tasmax.time.dt.day % 10 != 0)"
]
},
@@ -137,9 +133,7 @@
"outputs": [],
"source": [
"with xclim.set_options(check_missing=\"wmo\"):\n",
" tx_mean = xclim.atmos.tx_mean(\n",
" tasmax=tasmax, freq=\"MS\"\n",
" ) # compute monthly max tasmax\n",
" tx_mean = xclim.atmos.tx_mean(tasmax=tasmax, freq=\"MS\") # compute monthly max tasmax\n",
"tx_mean.sel(time=\"2013\", lat=75, lon=200)"
]
},
@@ -165,7 +159,7 @@
"\n",
"To add additional arguments, one should override the `__init__` (receiving those arguments) and the `validate` static method, which validates them. The options are then stored in the `options` property of the instance. See example below and the docstrings in the module.\n",
"\n",
"When registering the class with the `xclim.core.checks.register_missing_method` decorator, the keyword arguments will be registered as options for the missing method. "
"When registering the class with the `xclim.core.checks.register_missing_method` decorator, the keyword arguments will be registered as options for the missing method."
]
},
{
Expand All @@ -186,12 +180,9 @@
" super().__init__(max_n=max_n)\n",
"\n",
" def is_missing(self, valid, count, freq):\n",
" \"\"\"Return a boolean mask where True values are for elements that are considered missing and masked on the output.\"\"\"\n",
" \"\"\"Return a boolean mask for elements that are considered missing and masked on the output.\"\"\"\n",
" null = ~valid\n",
" return (\n",
" null.resample(time=freq).map(longest_run, dim=\"time\")\n",
" >= self.options[\"max_n\"]\n",
" )\n",
" return null.resample(time=freq).map(longest_run, dim=\"time\") >= self.options[\"max_n\"]\n",
"\n",
" @staticmethod\n",
" def validate(max_n):\n",
@@ -212,12 +203,8 @@
"metadata": {},
"outputs": [],
"source": [
"with xclim.set_options(\n",
" check_missing=\"consecutive\", missing_options={\"consecutive\": {\"max_n\": 2}}\n",
"):\n",
" tx_mean = xclim.atmos.tx_mean(\n",
" tasmax=tasmax, freq=\"MS\"\n",
" ) # compute monthly max tasmax\n",
"with xclim.set_options(check_missing=\"consecutive\", missing_options={\"consecutive\": {\"max_n\": 2}}):\n",
" tx_mean = xclim.atmos.tx_mean(tasmax=tasmax, freq=\"MS\") # compute monthly max tasmax\n",
"tx_mean.sel(time=\"2013\", lat=75, lon=200)"
]
},
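The customization pattern walked through in the customize.ipynb cells above (subclass a base class, override `__init__` and the static `validate` method, and let the validated keyword options land in `self.options`) can be sketched in plain Python. The `MissingBase` class and its validation logic below are illustrative stand-ins for the pattern only, not xclim's actual API.

```python
# Illustrative sketch of the missing-method pattern described in the
# notebook: options passed to __init__ are validated, then stored on
# the instance. Class names here are stand-ins, not xclim's real API.


class MissingBase:
    """Base class: validates keyword options and stores them on the instance."""

    def __init__(self, **options):
        if not self.validate(**options):
            raise ValueError(f"invalid options: {options}")
        self.options = options

    @staticmethod
    def validate(**options):
        # Subclasses override this to check their own options.
        return True


class MissingConsecutive(MissingBase):
    """Flag periods as missing based on a maximum run of invalid values."""

    def __init__(self, max_n=5):
        super().__init__(max_n=max_n)

    @staticmethod
    def validate(max_n):
        return isinstance(max_n, int) and max_n > 0


method = MissingConsecutive(max_n=2)
print(method.options)  # -> {'max_n': 2}
```

In xclim itself, such a class would additionally implement `is_missing` and be registered with the `register_missing_method` decorator so that `set_options(check_missing=...)` can find it, as the notebook shows.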
46 changes: 18 additions & 28 deletions docs/notebooks/ensembles-advanced.ipynb
@@ -3,9 +3,7 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"nbsphinx": "hidden"
},
"metadata": {},
"outputs": [],
"source": [
"# This cell is not visible when the documentation is built.\n",
@@ -19,44 +17,40 @@
"np.random.normal(loc=3.5, scale=1.5, size=50)\n",
"# crit['delta_annual_tavg']\n",
"np.random.seed(0)\n",
"test = xr.DataArray(\n",
" np.random.normal(loc=3, scale=1.5, size=50), dims=[\"realization\"]\n",
").assign_coords(horizon=\"2041-2070\")\n",
"test = xr.DataArray(np.random.normal(loc=3, scale=1.5, size=50), dims=[\"realization\"]).assign_coords(\n",
" horizon=\"2041-2070\"\n",
")\n",
"test = xr.concat(\n",
" (\n",
" test,\n",
" xr.DataArray(\n",
" np.random.normal(loc=5.34, scale=2, size=50), dims=[\"realization\"]\n",
" ).assign_coords(horizon=\"2071-2100\"),\n",
" xr.DataArray(np.random.normal(loc=5.34, scale=2, size=50), dims=[\"realization\"]).assign_coords(\n",
" horizon=\"2071-2100\"\n",
" ),\n",
" ),\n",
" dim=\"horizon\",\n",
")\n",
"\n",
"ds_crit = xr.Dataset()\n",
"\n",
"ds_crit[\"delta_annual_tavg\"] = test\n",
"test = xr.DataArray(\n",
" np.random.normal(loc=5, scale=5, size=50), dims=[\"realization\"]\n",
").assign_coords(horizon=\"2041-2070\")\n",
"test = xr.DataArray(np.random.normal(loc=5, scale=5, size=50), dims=[\"realization\"]).assign_coords(horizon=\"2041-2070\")\n",
"test = xr.concat(\n",
" (\n",
" test,\n",
" xr.DataArray(\n",
" np.random.normal(loc=10, scale=8, size=50), dims=[\"realization\"]\n",
" ).assign_coords(horizon=\"2071-2100\"),\n",
" xr.DataArray(np.random.normal(loc=10, scale=8, size=50), dims=[\"realization\"]).assign_coords(\n",
" horizon=\"2071-2100\"\n",
" ),\n",
" ),\n",
" dim=\"horizon\",\n",
")\n",
"ds_crit[\"delta_annual_prtot\"] = test\n",
"test = xr.DataArray(\n",
" np.random.normal(loc=0, scale=3, size=50), dims=[\"realization\"]\n",
").assign_coords(horizon=\"2041-2070\")\n",
"test = xr.DataArray(np.random.normal(loc=0, scale=3, size=50), dims=[\"realization\"]).assign_coords(horizon=\"2041-2070\")\n",
"test = xr.concat(\n",
" (\n",
" test,\n",
" xr.DataArray(\n",
" np.random.normal(loc=2, scale=4, size=50), dims=[\"realization\"]\n",
" ).assign_coords(horizon=\"2071-2100\"),\n",
" xr.DataArray(np.random.normal(loc=2, scale=4, size=50), dims=[\"realization\"]).assign_coords(\n",
" horizon=\"2071-2100\"\n",
" ),\n",
" ),\n",
" dim=\"horizon\",\n",
")\n",
@@ -242,7 +236,7 @@
"\n",
"`xclim` also makes available a similar ensemble reduction algorithm, `ensembles.kkz_reduce_ensemble`. See: https://doi.org/10.1175/JCLI-D-14-00636.1\n",
"\n",
"The advantage of this algorithm is largely that fewer realizations are needed to reach the same level of representativeness as with the K-means clustering algorithm, since the KKZ method tends to identify members that fall towards the extremes of criteria values.\n",
"The advantage of this algorithm is largely that fewer realizations are needed in order to reach the same level of representative members than the K-means clustering algorithm, as the KKZ methods tends towards identifying members that fall towards the extremes of criteria values.\n",
"\n",
"This technique also produces nested selection results, where an additional increase in desired selection size does not alter the previous choices, which is not the case for the K-means algorithm."
]
@@ -306,11 +300,7 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"metadata": {},
"outputs": [],
"source": [
"# UNNESTED results using k-means\n",
@@ -331,7 +321,7 @@
"\n",
"The **KKZ** algorithm iteratively maximizes the distance from previously selected candidates, potentially resulting in 'off-center' results versus the original ensemble.\n",
"\n",
"The **K-means** algorithm redivides the data space at each iteration, producing results that are consistently centered on the original ensemble but lacking coverage in the extremes."
"The **K-means** algorithm will redivide the data space with each iteration, producing results that are consistently centered on the original ensemble but lacking coverage in the extremes"
]
},
{
Expand Down
Loading
Loading