Temporarily revert star annotation in call signatures - remove all internal xarray imports #2116

Merged · 20 commits · Mar 25, 2025
8 changes: 4 additions & 4 deletions .pre-commit-config.yaml
@@ -34,19 +34,19 @@ repos:
- id: toml-sort-fix
exclude: '.pylintrc.toml'
- repo: https://github.com/adrienverge/yamllint.git
rev: v1.35.1
rev: v1.36.2
hooks:
- id: yamllint
args: [ '--config-file=.yamllint.yaml' ]
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.9.9
rev: v0.11.2
hooks:
- id: ruff
args: [ '--fix', '--show-fixes' ]
- id: ruff-format
exclude: '(src/xclim/indices/__init__.py|docs/installation.rst)'
- repo: https://github.com/pylint-dev/pylint
rev: v3.3.4
rev: v3.3.6
hooks:
- id: pylint
args: [ '--rcfile=.pylintrc.toml', '--errors-only', '--jobs=0', '--disable=import-error' ]
@@ -105,7 +105,7 @@ repos:
hooks:
- id: gitleaks
- repo: https://github.com/python-jsonschema/check-jsonschema
rev: 0.31.2
rev: 0.31.3
hooks:
- id: check-github-workflows
- id: check-readthedocs
5 changes: 5 additions & 0 deletions .zenodo.json
@@ -163,6 +163,11 @@
"name": "Wong, Jack Kit-tai",
"affiliation": "University of Toronto, Toronto, Ontario, Canada",
"orcid": "0000-0003-1408-1502"
},
{
"name": "de Bruijn, Jens",
"affiliation": "Vrije Universiteit, Amsterdam, The Netherlands",
"orcid": "0000-0003-3961-6382"
}
],
"keywords": [
1 change: 1 addition & 0 deletions AUTHORS.rst
@@ -50,3 +50,4 @@ Contributors
* Sebastian Lehner <[email protected]> `@seblehner <https://github.com/seblehner>`_
* Baptiste Hamon <[email protected]> `@baptistehamon <https://github.com/baptistehamon>`_
* Jack Kit-tai Wong <[email protected]> `@jack-ktw <https://github.com/jack-ktw>`_
* Jens de Bruijn <[email protected]> `@jensdebruijn <https://github.com/jensdebruijn>`_
8 changes: 6 additions & 2 deletions CHANGELOG.rst
@@ -4,12 +4,15 @@ Changelog

v0.56.0 (unreleased)
--------------------
Contributors to this version: Trevor James Smith (:user:`Zeitsperre`), Hui-Min Wang (:user:`Hem-W`), Jack Kit-tai Wong(:user:`jack-ktw`), Adrien Lamarche (:user:`LamAdr`), Éric Dupuis (:user:`coxipi`).
Contributors to this version: Trevor James Smith (:user:`Zeitsperre`), Hui-Min Wang (:user:`Hem-W`), Jack Kit-tai Wong (:user:`jack-ktw`), Adrien Lamarche (:user:`LamAdr`), Éric Dupuis (:user:`coxipi`), Jens de Bruijn (:user:`jensdebruijn`).

Bug fixes
^^^^^^^^^
* Fix installation instructions in the Contributing guide (:issue:`2088`, :pull:`2089`).
* Fixed parameter order in typing.cast() causing intermittent errors in solar_zenith_angle calculation. (:issue:`2097`, :pull:`2098`)
* `xclim` now uses `operator` directly instead of `xarray`'s derived `get_op` function. A refactoring in `xarray` changed the location of `get_op`, which caused a bug. (:issue:`2113`, :pull:`2114`).
+ All other uses of `xarray`'s internal API were also removed (:pull:`2116`).
* Fixed an issue with star-annotated call signatures to maintain Python 3.10 compatibility. (:pull:`2116`).

Breaking changes
^^^^^^^^^^^^^^^^
@@ -26,7 +29,8 @@ Internal changes
* Line endings in files now must be `Unix`-compatible (`LF`).
* The `blackdoc` pre-commit hook now only examines `.rst` and `.md` files. (:pull:`2083`).
* The `xclim` documentation now has a ``support`` page for detailing the project's usage and version support policies. (:pull:`2100`).
* The indicator `heat_wave_index` now uses `hot_spell_total_length` index. The `heat_wave_index` index is identitical to `hot_spell_total_length` and will be dropped in future versions. (:issue:`2031`, :pull:`2102`).
* The indicator `heat_wave_index` now uses `hot_spell_total_length` index. The `heat_wave_index` index is identical to `hot_spell_total_length` and will be dropped in future versions. (:issue:`2031`, :pull:`2102`).
* Updated pre-commit hooks to their latest versions. (:pull:`2116`).

New indicators
^^^^^^^^^^^^^^
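A note on the `operator` change recorded in the changelog above: the approach is to map comparison symbols onto the standard library's `operator` module instead of reaching into `xarray`'s internals for `get_op`. The following is a minimal sketch of that pattern; the `_OPERATORS` table and `compare` helper are hypothetical names used for illustration, not xclim's actual implementation.

import operator

# Hypothetical mapping from comparison symbols to standard-library functions.
_OPERATORS = {
    "<": operator.lt,
    "<=": operator.le,
    ">": operator.gt,
    ">=": operator.ge,
    "==": operator.eq,
    "!=": operator.ne,
}

def compare(left, right, op: str):
    # Apply the comparison selected by its symbol, e.g. compare(tas, thresh, ">=").
    return _OPERATORS[op](left, right)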
2 changes: 1 addition & 1 deletion src/xclim/core/bootstrapping.py
@@ -11,7 +11,7 @@
import numpy as np
import xarray
from boltons.funcutils import wraps
from xarray.core.dataarray import DataArray
from xarray import DataArray

import xclim.core.utils
from xclim.core.calendar import parse_offset, percentile_doy
64 changes: 48 additions & 16 deletions src/xclim/core/calendar.py
@@ -15,15 +15,16 @@
import numpy as np
import pandas as pd
import xarray as xr
from xarray.coding.cftime_offsets import to_cftime_datetime
from xarray.coding.cftimeindex import CFTimeIndex
from xarray.core import dtypes
from xarray.core.resample import DataArrayResample, DatasetResample
from packaging.version import Version
from xarray import CFTimeIndex

from xclim.core._types import DayOfYearStr
from xclim.core.formatting import update_xclim_history
from xclim.core.utils import uses_dask

XR2409 = Version(xr.__version__) >= Version("2024.09")


__all__ = [
"DayOfYearStr",
"adjust_doy_calendar",
@@ -51,6 +52,23 @@
"within_bnds_doy",
]


_MONTH_ABBREVIATIONS = {
1: "JAN",
2: "FEB",
3: "MAR",
4: "APR",
5: "MAY",
6: "JUN",
7: "JUL",
8: "AUG",
9: "SEP",
10: "OCT",
11: "NOV",
12: "DEC",
}


# Maximum day of year in each calendar.
max_doy = {
"standard": 366,
@@ -208,6 +226,20 @@ def _convert_doy_date(doy: int, year: int, src, tgt):
return float(same_date.dayofyr) + fracpart


# Copied from xarray.coding.calendar_ops
def _is_leap_year(years, calendar):
func = np.vectorize(cftime.is_leap_year)
return func(years, calendar=calendar)


# Copied from xarray.coding.calendar_ops
def _days_in_year(years, calendar):
"""The number of days in the year according to given calendar."""
if calendar == "360_day":
return xr.full_like(years, 360)
return _is_leap_year(years, calendar).astype(int) + 365


def convert_doy(
source: xr.DataArray | xr.Dataset,
target_cal: str,
@@ -272,7 +304,7 @@ def convert_doy(
max_doy_src = max_doy[source_cal]
else:
max_doy_src = xr.apply_ufunc(
xr.coding.calendar_ops._days_in_year,
_days_in_year,
year_of_the_doy,
vectorize=True,
dask="parallelized",
@@ -282,7 +314,7 @@ def convert_doy(
max_doy_tgt = max_doy[target_cal]
else:
max_doy_tgt = xr.apply_ufunc(
xr.coding.calendar_ops._days_in_year,
_days_in_year,
year_of_the_doy,
vectorize=True,
dask="parallelized",
@@ -736,7 +768,7 @@ def resample_doy(doy: xr.DataArray, arr: xr.DataArray | xr.Dataset) -> xr.DataAr


def time_bnds( # noqa: C901
time: (xr.DataArray | xr.Dataset | CFTimeIndex | pd.DatetimeIndex | DataArrayResample | DatasetResample),
time: (xr.DataArray | xr.Dataset | CFTimeIndex | pd.DatetimeIndex),
freq: str | None = None,
precision: str | None = None,
):
@@ -778,9 +810,10 @@ def time_bnds( # noqa: C901
"""
if isinstance(time, xr.DataArray | xr.Dataset):
time = time.indexes[time.name]
elif isinstance(time, DataArrayResample | DatasetResample):
# elif isinstance(time, DataArrayResample | DatasetResample):
elif hasattr(time, "groupers"):
for grouper in time.groupers:
if isinstance(grouper.grouper, xr.groupers.TimeResampler):
if "time" in grouper.codes.dims:
datetime = grouper.unique_coord.data
freq = freq or grouper.grouper.freq
if datetime.dtype == "O":
@@ -922,11 +955,9 @@ def _doy_days_since_doys(
Number of days (maximum doy) for the year of each value in base.
"""
calendar = get_calendar(base)

base_doy = base.dt.dayofyear

doy_max = xr.apply_ufunc(
xr.coding.calendar_ops._days_in_year,
_days_in_year,
base.dt.year,
vectorize=True,
kwargs={"calendar": calendar},
@@ -1309,8 +1340,8 @@ def select_time(

# Get doy of date, this is now safe because the calendar is uniform.
doys = _get_doys(
to_cftime_datetime(f"2000-{start}", calendar).dayofyr,
to_cftime_datetime(f"2000-{end}", calendar).dayofyr,
cftime.datetime.strptime(f"2000-{start}", "%Y-%m-%d", calendar=calendar).dayofyr,
cftime.datetime.strptime(f"2000-{end}", "%Y-%m-%d", calendar=calendar).dayofyr,
include_bounds,
)
mask = time.time.dt.dayofyear.isin(doys)
@@ -1349,7 +1380,7 @@ def stack_periods(
dim: str = "period",
start: str = "1970-01-01",
align_days: bool = True,
pad_value=dtypes.NA,
pad_value="<NA>",
):
"""
Construct a multi-period array.
@@ -1527,11 +1558,12 @@
# The "fake" axis that all periods share
fake_time = xr.date_range(start, periods=longest, freq=srcfreq, calendar=cal, use_cftime=use_cftime)
# Slice and concat along new dim. We drop the index and add a new one so that xarray can concat them together.
kwargs = {"fill_value": pad_value} if pad_value != "<NA>" else {}
out = xr.concat(
[da.isel(time=slc).drop_vars("time").assign_coords(time=np.arange(slc.stop - slc.start)) for slc in periods],
dim,
join="outer",
fill_value=pad_value,
**kwargs,
)
out = out.assign_coords(
time=(("time",), fake_time, da.time.attrs.copy()),
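The calendar helpers vendored in above can be exercised without touching `xarray`'s private modules. A quick sanity check, assuming only that `cftime`, `numpy`, and the copied functions behave as shown in the diff (values in comments are illustrative):

import cftime
import numpy as np

# Vendored leap-year logic from the diff above; days per year is 365 plus the
# leap flag (the 360_day branch is handled separately with xr.full_like).
def _is_leap_year(years, calendar):
    return np.vectorize(cftime.is_leap_year)(years, calendar=calendar)

years = np.array([1999, 2000, 2100])
print(_is_leap_year(years, "standard").astype(int) + 365)  # [365 366 365]

# Public replacement for xarray's internal to_cftime_datetime: parse a date
# string in a given calendar and read its day of year.
day = cftime.datetime.strptime("2000-07-15", "%Y-%m-%d", calendar="noleap")
print(day.dayofyr)  # 196, since "noleap" has no Feb 29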
32 changes: 29 additions & 3 deletions src/xclim/core/formatting.py
@@ -17,6 +17,7 @@
from inspect import _empty, signature # noqa
from typing import Any

import pandas as pd
import xarray as xr
from boltons.funcutils import wraps

@@ -79,7 +80,7 @@ def __init__(
self.modifiers = modifiers
self.mapping = mapping

def format(self, format_string: str, /, *args: Any, **kwargs: dict) -> str:
def format(self, format_string: str, /, *args, **kwargs: dict) -> str:
r"""
Format a string.

@@ -338,7 +339,7 @@ def _parse_returns(section):

def merge_attributes(
attribute: str,
*inputs_list: xr.DataArray | xr.Dataset,
*inputs_list, # : xr.DataArray | xr.Dataset
new_line: str = "\n",
missing_str: str | None = None,
**inputs_kws: xr.DataArray | xr.Dataset,
@@ -390,7 +391,7 @@ def merge_attributes(

def update_history(
hist_str: str,
*inputs_list: xr.DataArray | xr.Dataset,
*inputs_list, # : xr.DataArray | xr.Dataset,
new_name: str | None = None,
**inputs_kws: xr.DataArray | xr.Dataset,
) -> str:
@@ -770,3 +771,28 @@ def get_percentile_metadata(data: xr.DataArray, prefix: str) -> dict[str, str]:
f"{prefix}_window": data.attrs.get("window", "<unknown window>"),
f"{prefix}_period": clim_bounds,
}


# Adapted from xarray.structure.merge_attrs
def _merge_attrs_drop_conflicts(*objs):
"""Merge attributes from different xarray objects, dropping any attributes that conflict."""
out = {}
dropped = set()
for obj in objs:
attrs = obj.attrs
out.update({key: value for key, value in attrs.items() if key not in out and key not in dropped})
out = {key: value for key, value in out.items() if key not in attrs or _equivalent_attrs(attrs[key], value)}
dropped |= {key for key in attrs if key not in out}
return out


# Adapted from xarray.core.utils.equivalent
def _equivalent_attrs(first, second) -> bool:
"""Return whether two attributes are identical or not."""
if first is second:
return True
if isinstance(first, list) or isinstance(second, list):
if len(first) != len(second):
return False
return all(_equivalent_attrs(f, s) for f, s in zip(first, second, strict=False))
return (first == second) or (pd.isnull(first) and pd.isnull(second))
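
A quick illustration of the intended behaviour of the vendored attribute-merging helper above, assuming it is importable from `xclim.core.formatting` as added in this diff: attributes that agree are kept, attributes whose values conflict are dropped.

import xarray as xr

from xclim.core.formatting import _merge_attrs_drop_conflicts

a = xr.DataArray(0, attrs={"source": "model A", "units": "K"})
b = xr.DataArray(0, attrs={"source": "model B", "units": "K"})

# "units" agrees between the inputs and survives; "source" conflicts and is dropped.
print(_merge_attrs_drop_conflicts(a, b))  # {'units': 'K'}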
7 changes: 4 additions & 3 deletions src/xclim/core/indicator.py
@@ -136,6 +136,7 @@
from xclim.core.cfchecks import cfcheck_from_name
from xclim.core.formatting import (
AttrFormatter,
_merge_attrs_drop_conflicts,
default_formatter,
gen_call_string,
generate_indicator_docstring,
@@ -860,9 +861,9 @@ def __call__(self, *args, **kwds):
das, params, dsattrs = self._parse_variables_from_call(args, kwds)

if OPTIONS[KEEP_ATTRS] is True or (
OPTIONS[KEEP_ATTRS] == "xarray" and xarray.core.options._get_keep_attrs(False)
OPTIONS[KEEP_ATTRS] == "xarray" and xarray.get_options()["keep_attrs"] is True
):
out_attrs = xarray.core.merge.merge_attrs([da.attrs for da in das.values()], "drop_conflicts")
out_attrs = _merge_attrs_drop_conflicts(*das.values())
out_attrs.pop("units", None)
else:
out_attrs = {}
@@ -916,7 +917,7 @@ def __call__(self, *args, **kwds):
if OPTIONS[AS_DATASET]:
out = Dataset({o.name: o for o in outs})
if OPTIONS[KEEP_ATTRS] is True or (
OPTIONS[KEEP_ATTRS] == "xarray" and xarray.core.options._get_keep_attrs(False)
OPTIONS[KEEP_ATTRS] == "xarray" and xarray.get_options()["keep_attrs"] is True
):
out.attrs.update(dsattrs)
out.attrs["history"] = update_history(
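The `keep_attrs` check above now reads xarray's public options interface instead of the private `_get_keep_attrs` helper. A small sketch of that API; note the option defaults to the string "default", so the explicit `is True` comparison only matches a user opt-in.

import xarray as xr

print(xr.get_options()["keep_attrs"])  # "default" unless the user has changed it

with xr.set_options(keep_attrs=True):
    print(xr.get_options()["keep_attrs"] is True)  # True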
2 changes: 1 addition & 1 deletion src/xclim/core/locales.py
@@ -147,7 +147,7 @@ def get_local_dict(locale: str | Sequence[str] | tuple[str, dict]) -> tuple[str,

def get_local_attrs(
indicator: str | Sequence[str],
*locales: str | Sequence[str] | tuple[str, dict],
*locales, # : str | Sequence[str] | tuple[str, dict],
names: Sequence[str] | None = None,
append_locale_name: bool = True,
) -> dict:
3 changes: 2 additions & 1 deletion src/xclim/core/missing.py
@@ -107,7 +107,8 @@ def expected_count(
time = xr.DataArray(time.values, dims=("time",), coords={"time": time.values}, name="time")

if freq:
resamp = time.resample(time=freq).first()
# We only want the resulting time index, the actual resampling method is not important.
resamp = time.resample(time=freq).count()
resamp_time = resamp.indexes["time"]
_, _, is_start, _ = parse_offset(freq)
if is_start:
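On the `expected_count` change above: any reduction produces the same resampled time index, so the inexpensive `count` works as well as `first` did. A minimal sketch with a hypothetical daily axis:

import xarray as xr

time = xr.date_range("2000-01-01", periods=90, freq="D")
time = xr.DataArray(time, dims=("time",), coords={"time": time}, name="time")

# Only the index of the resampled object is used downstream; the choice of reduction is irrelevant.
resamp = time.resample(time="MS").count()
print(resamp.indexes["time"])  # DatetimeIndex(['2000-01-01', '2000-02-01', '2000-03-01'], ...)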
5 changes: 3 additions & 2 deletions src/xclim/core/options.py
@@ -8,6 +8,7 @@
from __future__ import annotations

from collections.abc import Callable
from copy import deepcopy
from inspect import signature

from boltons.funcutils import wraps
@@ -78,7 +79,7 @@ def _valid_missing_options(mopts):

def _set_missing_options(mopts):
for meth, opts in mopts.items():
OPTIONS[MISSING_OPTIONS][meth].update(opts)
OPTIONS[MISSING_OPTIONS][meth].update(**opts)


def _set_metadata_locales(locales):
@@ -275,7 +276,7 @@ def __init__(self, **kwargs):
if k in _VALIDATORS and not _VALIDATORS[k](v):
raise ValueError(f"option {k!r} given an invalid value: {v!r}")

self.old[k] = OPTIONS[k]
self.old[k] = deepcopy(OPTIONS[k])

self._update(kwargs)

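The `deepcopy` added above matters because some options (such as the missing-method settings) are nested dictionaries that are updated in place, so saving only a reference would let changes made inside a `set_options` context leak into the "old" values used for restoration. A minimal sketch of the pitfall, using a made-up option layout rather than xclim's actual one:

from copy import deepcopy

options = {"pct": {"tolerance": 0.05}}

shallow_saved = options["pct"]         # same dict object
deep_saved = deepcopy(options["pct"])  # independent snapshot

options["pct"]["tolerance"] = 0.1  # mutated inside a context
print(shallow_saved["tolerance"])  # 0.1  -- the saved value changed too
print(deep_saved["tolerance"])     # 0.05 -- restoring from the deep copy is safe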