guides/manage_spaces: guides/manage-spaces
guides/webhooks_server: guides/webhooks
hf_transfer: package_reference/environment_variables#hfhubenablehftransfer
how-to-cache: guides/manage-cache
how-to-discussions-and-pull-requests: guides/community
how-to-downstream: guides/download
how-to-inference: guides/inference
how-to-manage: guides/repository
how-to-model-cards: guides/model-cards
how-to-upstream: guides/upload
package_reference/inference_api: package_reference/inference_client
package_reference/login: package_reference/authentication
search-the-hub: guides/search
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/_redirects.yml
.yml
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Quickstart

The [Hugging Face Hub](https://huggingface.co/) is the go-to place for sharing machine learning models, demos, datasets, and metrics. The `huggingface_hub` library helps you interact with the Hub without leaving your development environment. You can create and manage repositories easily, download and upload files, and get useful model and dataset metadata from the Hub.

## Installation

To get started, install the `huggingface_hub` library:

```bash
pip install --upgrade huggingface_hub
```

For more details, check out the [installation](installation) guide.

## Download files

Repositories on the Hub are git version controlled, and users can download a single file or the whole repository. You can use the [`hf_hub_download`] function to download files. This function downloads and caches a file on your local disk. The next time you need that file, it will be loaded from your cache, so you don't need to re-download it.

You will need the repository id and the filename of the file you want to download. For example, to download the [Pegasus](https://huggingface.co/google/pegasus-xsum) model configuration file:

```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download(repo_id="google/pegasus-xsum", filename="config.json")
```

To download a specific version of the file, use the `revision` parameter to specify the branch name, tag, or commit hash. If you choose to use the commit hash, it must be the full-length hash instead of the shorter 7-character commit hash:

```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download(
...     repo_id="google/pegasus-xsum",
...     filename="config.json",
...     revision="4d33b01d79672f27f001f6abade33f22d993b151"
... )
```

For more details and options, see the API reference for [`hf_hub_download`].

<a id="login"></a> <!-- backward compatible anchor -->

## Authentication

In many cases, you must be authenticated with a Hugging Face account to interact with the Hub: download private repos, upload files, create PRs, and more. [Create an account](https://huggingface.co/join) if you don't already have one, and then sign in to get your [User Access Token](https://huggingface.co/docs/hub/security-tokens) from your [Settings page](https://huggingface.co/settings/tokens). The User Access Token is used to authenticate your identity to the Hub.

<Tip>

Tokens can have `read` or `write` permissions. Make sure to have a `write` access token if you want to create or edit a repository. Otherwise, it's best to generate a `read` token to reduce risk in case your token is inadvertently leaked.

</Tip>

### Login command

The easiest way to authenticate is to save the token on your machine. You can do that from the terminal using the [`login`] command:

```bash
huggingface-cli login
```

The command will tell you if you are already logged in and prompt you for your token. The token is then validated and saved in your `HF_HOME` directory (defaults to `~/.cache/huggingface/token`). Any script or library interacting with the Hub will use this token when sending requests.

Alternatively, you can log in programmatically using [`login`] in a notebook or a script:

```py
>>> from huggingface_hub import login
>>> login()
```

You can only be logged in to one account at a time. Logging in to a new account will automatically log you out of the previous one.
To determine your currently active account, simply run the `huggingface-cli whoami` command.

<Tip warning={true}>

Once logged in, all requests to the Hub - even methods that don't necessarily require authentication - will use your access token by default. If you want to disable the implicit use of your token, you should set `HF_HUB_DISABLE_IMPLICIT_TOKEN=1` as an environment variable (see [reference](../package_reference/environment_variables#hfhubdisableimplicittoken)).

</Tip>

### Manage multiple tokens locally

You can save multiple tokens on your machine by simply logging in with the [`login`] command with each token. If you need to switch between these tokens locally, you can use the [`auth switch`] command:

```bash
huggingface-cli auth switch
```

This command will prompt you to select a token by its name from a list of saved tokens. Once selected, the chosen token becomes the _active_ token, and it will be used for all interactions with the Hub.

You can list all available access tokens on your machine with `huggingface-cli auth list`.

### Environment variable

The environment variable `HF_TOKEN` can also be used to authenticate yourself. This is especially useful in a Space where you can set `HF_TOKEN` as a [Space secret](https://huggingface.co/docs/hub/spaces-overview#managing-secrets).

<Tip>

**NEW:** Google Colaboratory lets you define [private keys](https://twitter.com/GoogleColab/status/1719798406195867814) for your notebooks. Define a `HF_TOKEN` secret to be automatically authenticated!

</Tip>

Authentication via an environment variable or a secret has priority over the token stored on your machine.

### Method parameters

Finally, it is also possible to authenticate by passing your token to any method that accepts `token` as a parameter:

```py
from huggingface_hub import whoami

user = whoami(token=...)
```

This is usually discouraged, except in an environment where you don't want to store your token permanently or if you need to handle several tokens at once.

<Tip warning={true}>

Please be careful when passing tokens as a parameter. It is always best practice to load the token from a secure vault instead of hardcoding it in your codebase or notebook. Hardcoded tokens present a major leak risk if you share your code inadvertently.

</Tip>

## Create a repository

Once you've registered and logged in, create a repository with the [`create_repo`] function:

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(repo_id="super-cool-model")
```

If you want your repository to be private, then:

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(repo_id="super-cool-model", private=True)
```

Private repositories will not be visible to anyone except yourself.

<Tip>

To create a repository or to push content to the Hub, you must provide a User Access Token that has the `write` permission. You can choose the permission when creating the token in your [Settings page](https://huggingface.co/settings/tokens).

</Tip>

## Upload files

Use the [`upload_file`] function to add a file to your newly created repository. You need to specify:

1. The path of the file to upload.
2. The path of the file in the repository.
3. The repository id of where you want to add the file.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_file(
...     path_or_fileobj="/home/lysandre/dummy-test/README.md",
...     path_in_repo="README.md",
...     repo_id="lysandre/test-model",
... )
```

To upload more than one file at a time, take a look at the [Upload](./guides/upload) guide, which will introduce you to several methods for uploading files (with or without git).

## Next steps

The `huggingface_hub` library provides an easy way for users to interact with the Hub with Python. To learn more about how you can manage your files and repositories on the Hub, we recommend reading our [how-to guides](./guides/overview) to:

- [Manage your repository](./guides/repository).
- [Download](./guides/download) files from the Hub.
- [Upload](./guides/upload) files to the Hub.
- [Search the Hub](./guides/search) for your desired model or dataset.
- [Access the Inference API](./guides/inference) for fast inference.
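Putting the pieces of this quickstart together, here is a minimal end-to-end sketch that creates a repository, uploads a file, and downloads it back. The repo id `user/quickstart-demo` and the local `README.md` path are placeholders to adapt to your own namespace and files:

```py
>>> from huggingface_hub import HfApi, hf_hub_download

>>> api = HfApi()

# Create a repository (the repo id is a placeholder)
>>> api.create_repo(repo_id="user/quickstart-demo")

# Upload a local file to the root of the repository
>>> api.upload_file(
...     path_or_fileobj="./README.md",
...     path_in_repo="README.md",
...     repo_id="user/quickstart-demo",
... )

# Download the file back; subsequent calls are served from the local cache
>>> hf_hub_download(repo_id="user/quickstart-demo", filename="README.md")
```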
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/quick-start.md
.md
- sections:
  - local: index
    title: Home
  - local: quick-start
    title: Quickstart
  - local: installation
    title: Installation
  title: Get started
- sections:
  - local: guides/overview
    title: Overview
  - local: guides/download
    title: Download files
  - local: guides/upload
    title: Upload files
  - local: guides/cli
    title: Use the CLI
  - local: guides/hf_file_system
    title: HfFileSystem
  - local: guides/repository
    title: Repository
  - local: guides/search
    title: Search
  - local: guides/inference
    title: Inference
  - local: guides/inference_endpoints
    title: Inference Endpoints
  - local: guides/community
    title: Community Tab
  - local: guides/collections
    title: Collections
  - local: guides/manage-cache
    title: Cache
  - local: guides/model-cards
    title: Model Cards
  - local: guides/manage-spaces
    title: Manage your Space
  - local: guides/integrations
    title: Integrate a library
  - local: guides/webhooks
    title: Webhooks
  title: How-to guides
- sections:
  - local: concepts/git_vs_http
    title: Git vs HTTP paradigm
  title: Conceptual guides
- sections:
  - local: package_reference/overview
    title: Overview
  - local: package_reference/authentication
    title: Authentication
  - local: package_reference/environment_variables
    title: Environment variables
  - local: package_reference/repository
    title: Managing local and online repositories
  - local: package_reference/hf_api
    title: Hugging Face Hub API
  - local: package_reference/file_download
    title: Downloading files
  - local: package_reference/mixins
    title: Mixins & serialization methods
  - local: package_reference/inference_types
    title: Inference Types
  - local: package_reference/inference_client
    title: Inference Client
  - local: package_reference/inference_endpoints
    title: Inference Endpoints
  - local: package_reference/hf_file_system
    title: HfFileSystem
  - local: package_reference/utilities
    title: Utilities
  - local: package_reference/community
    title: Discussions and Pull Requests
  - local: package_reference/cache
    title: Cache-system reference
  - local: package_reference/cards
    title: Repo Cards and Repo Card Data
  - local: package_reference/space_runtime
    title: Space runtime
  - local: package_reference/collections
    title: Collections
  - local: package_reference/tensorboard
    title: TensorBoard logger
  - local: package_reference/webhooks_server
    title: Webhooks server
  - local: package_reference/serialization
    title: Serialization
  title: Reference
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/_toctree.yml
.yml
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# 🤗 Hub client library

The `huggingface_hub` library allows you to interact with the [Hugging Face Hub](https://hf.co), a machine learning platform for creators and collaborators. Discover pre-trained models and datasets for your projects, or play with the hundreds of machine learning apps hosted on the Hub. You can also create and share your own models and datasets with the community. The `huggingface_hub` library provides a simple way to do all these things with Python.

Read the [quick start guide](quick-start) to get up and running with the `huggingface_hub` library. You will learn how to download files from the Hub, create a repository, and upload files to the Hub. Keep reading to learn more about how to manage your repositories on the 🤗 Hub, how to interact in discussions, or even how to access the Inference API.

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./guides/overview">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
      <p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use huggingface_hub to solve real-world problems.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/overview">
      <div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
      <p class="text-gray-700">Exhaustive and technical description of huggingface_hub classes and methods.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./concepts/git_vs_http">
      <div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
      <p class="text-gray-700">High-level explanations for building a better understanding of huggingface_hub philosophy.</p>
    </a>
  </div>
</div>

<!-- <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/overview"><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div> <p class="text-gray-700">Learn the basics and become familiar with using huggingface_hub to programmatically interact with the 🤗 Hub!</p></a> -->

## Contribute

All contributions to `huggingface_hub` are welcome and equally valued! 🤗 Besides adding new features or fixing existing issues in the code, you can also help improve the documentation by making sure it is accurate and up-to-date, help answer questions on issues, and request new features you think will improve the library. Take a look at the [contribution guide](https://github.com/huggingface/huggingface_hub/blob/main/CONTRIBUTING.md) to learn more about how to submit a new issue or feature request, how to submit a pull request, and how to test your contributions to make sure everything works as expected.
Contributors should also be respectful of our [code of conduct](https://github.com/huggingface/huggingface_hub/blob/main/CODE_OF_CONDUCT.md) to create an inclusive and welcoming collaborative space for everyone.
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/index.md
.md
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Installation

Before you start, you will need to set up your environment by installing the appropriate packages.

`huggingface_hub` is tested on **Python 3.8+**.

## Install with pip

It is highly recommended to install `huggingface_hub` in a [virtual environment](https://docs.python.org/3/library/venv.html). If you are unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/). A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies.

Start by creating a virtual environment in your project directory:

```bash
python -m venv .env
```

Activate the virtual environment. On Linux and macOS:

```bash
source .env/bin/activate
```

Activate the virtual environment on Windows:

```bash
.env/Scripts/activate
```

Now you're ready to install `huggingface_hub` [from the PyPI registry](https://pypi.org/project/huggingface-hub/):

```bash
pip install --upgrade huggingface_hub
```

Once done, [check that the installation](#check-installation) works correctly.

### Install optional dependencies

Some dependencies of `huggingface_hub` are [optional](https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#optional-dependencies) because they are not required to run the core features of `huggingface_hub`. However, some features of `huggingface_hub` may not be available if the optional dependencies aren't installed.

You can install optional dependencies via `pip`:

```bash
# Install dependencies for tensorflow-specific features
# /!\ Warning: this is not equivalent to `pip install tensorflow`
pip install 'huggingface_hub[tensorflow]'

# Install dependencies for both torch-specific and CLI-specific features.
pip install 'huggingface_hub[cli,torch]'
```

Here is the list of optional dependencies in `huggingface_hub`:

- `cli`: provides a more convenient CLI interface for `huggingface_hub`.
- `fastai`, `torch`, `tensorflow`: dependencies to run framework-specific features.
- `dev`: dependencies to contribute to the lib. Includes `testing` (to run tests), `typing` (to run the type checker) and `quality` (to run linters).

### Install from source

In some cases, it is useful to install `huggingface_hub` directly from source. This allows you to use the bleeding-edge `main` version rather than the latest stable version. The `main` version is useful for staying up-to-date with the latest developments, for instance if a bug has been fixed since the last official release but a new release hasn't been rolled out yet.

However, this means the `main` version may not always be stable. We strive to keep the `main` version operational, and most issues are usually resolved within a few hours or a day. If you run into a problem, please open an Issue so we can fix it even sooner!

```bash
pip install git+https://github.com/huggingface/huggingface_hub
```

When installing from source, you can also specify a specific branch. This is useful if you want to test a new feature or a new bug-fix that has not been merged yet:

```bash
pip install git+https://github.com/huggingface/huggingface_hub@my-feature-branch
```

Once done, [check that the installation](#check-installation) works correctly.
### Editable install

Installing from source allows you to set up an [editable install](https://pip.pypa.io/en/stable/topics/local-project-installs/#editable-installs). This is a more advanced installation if you plan to contribute to `huggingface_hub` and need to test changes in the code. You need to clone a local copy of `huggingface_hub` on your machine.

```bash
# First, clone repo locally
git clone https://github.com/huggingface/huggingface_hub.git

# Then, install with -e flag
cd huggingface_hub
pip install -e .
```

These commands link the folder you cloned the repository into with your Python library paths. Python will now look inside the folder you cloned to in addition to the normal library paths. For example, if your Python packages are typically installed in `./.venv/lib/python3.13/site-packages/`, Python will also search the folder you cloned to, `./huggingface_hub/`.

## Install with conda

If you are more familiar with it, you can install `huggingface_hub` using the [conda-forge channel](https://anaconda.org/conda-forge/huggingface_hub):

```bash
conda install -c conda-forge huggingface_hub
```

Once done, [check that the installation](#check-installation) works correctly.

## Check installation

Once installed, check that `huggingface_hub` works properly by running the following command:

```bash
python -c "from huggingface_hub import model_info; print(model_info('gpt2'))"
```

This command will fetch information from the Hub about the [gpt2](https://huggingface.co/gpt2) model. The output should look like this:

```text
Model Name: gpt2
Tags: ['pytorch', 'tf', 'jax', 'tflite', 'rust', 'safetensors', 'gpt2', 'text-generation', 'en', 'doi:10.57967/hf/0039', 'transformers', 'exbert', 'license:mit', 'has_space']
Task: text-generation
```

## Windows limitations

With our goal of democratizing good ML everywhere, we built `huggingface_hub` to be a cross-platform library and in particular to work correctly on both Unix-based and Windows systems. However, there are a few cases where `huggingface_hub` has some limitations when run on Windows. Here is an exhaustive list of known issues. Please let us know if you encounter any undocumented problem by opening [an issue on GitHub](https://github.com/huggingface/huggingface_hub/issues/new/choose).

- `huggingface_hub`'s cache system relies on symlinks to efficiently cache files downloaded from the Hub. On Windows, you must activate developer mode or run your script as admin to enable symlinks. If they are not activated, the cache-system still works but in a non-optimized manner. Please read [the cache limitations](./guides/manage-cache#limitations) section for more details.
- Filepaths on the Hub can have special characters (e.g. `"path/to?/my/file"`). Windows is more restrictive on [special characters](https://learn.microsoft.com/en-us/windows/win32/intl/character-sets-used-in-file-names), which makes it impossible to download those files on Windows. Hopefully this is a rare case. Please reach out to the repo owner if you think this is a mistake, or to us so that we can figure out a solution.

## Next steps

Once `huggingface_hub` is properly installed on your machine, you might want to [configure environment variables](package_reference/environment_variables) or [check one of our guides](guides/overview) to get started.
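As a follow-up to the editable install section above: if you ever need to confirm which copy of `huggingface_hub` Python is importing (the cloned folder or a regular `site-packages` install), you can print the package's version and location. This is plain Python introspection rather than a `huggingface_hub`-specific command:

```bash
python -c "import huggingface_hub; print(huggingface_hub.__version__, huggingface_hub.__file__)"
```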
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/installation.md
.md
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Authentication

The `huggingface_hub` library allows users to programmatically manage authentication to the Hub. This includes logging in, logging out, switching between tokens, and listing available tokens. For more details about authentication, check out [this section](../quick-start#authentication).

## login

### login

```python
Log the machine in to access the Hub.

The `token` is persisted in cache and set as a git credential. Once done, the machine is logged in and the access token will be available across all `huggingface_hub` components. If `token` is not provided, the user will be prompted for it, either with a widget (in a notebook) or via the terminal.

To log in from outside of a script, one can also use `huggingface-cli login`, a CLI command that wraps [`login`].

<Tip>

[`login`] is a drop-in replacement method for [`notebook_login`] as it wraps and extends its capabilities.

</Tip>

<Tip>

When the token is not passed, [`login`] will automatically detect if the script runs in a notebook or not. However, this detection might not be accurate due to the variety of notebooks that exist nowadays. If that is the case, you can always force the UI by using [`notebook_login`] or [`interpreter_login`].

</Tip>

Args:
    token (`str`, *optional*): User access token, generated from https://huggingface.co/settings/tokens.
    add_to_git_credential (`bool`, defaults to `False`): If `True`, token will be set as git credential. If no git credential helper is configured, a warning will be displayed to the user. If `token` is `None`, the value of `add_to_git_credential` is ignored and the end user will be prompted again.
    new_session (`bool`, defaults to `True`): If `True`, will request a token even if one is already saved on the machine.
    write_permission (`bool`): Ignored and deprecated argument.

Raises:
    [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
        If an organization token is passed. Only personal account tokens are valid to log in.
    [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
        If token is invalid.
    [`ImportError`](https://docs.python.org/3/library/exceptions.html#ImportError)
        If running in a notebook but `ipywidgets` is not installed.
```

## interpreter_login

### interpreter_login

```python
Displays a prompt to log in to the HF website and store the token.

This is equivalent to [`login`] without passing a token when not run in a notebook. [`interpreter_login`] is useful if you want to force the use of the terminal prompt instead of a notebook widget.

For more details, see [`login`].

Args:
    new_session (`bool`, defaults to `True`): If `True`, will request a token even if one is already saved on the machine.
    write_permission (`bool`): Ignored and deprecated argument.
```

## notebook_login

### notebook_login

```python
Displays a widget to log in to the HF website and store the token.

This is equivalent to [`login`] without passing a token when run in a notebook. [`notebook_login`] is useful if you want to force the use of the notebook widget instead of a prompt in the terminal.

For more details, see [`login`].

Args:
    new_session (`bool`, defaults to `True`): If `True`, will request a token even if one is already saved on the machine.
    write_permission (`bool`): Ignored and deprecated argument.
```

## logout

### logout

```python
Log the machine out from the Hub.
Token is deleted from the machine and removed from git credential.

Args:
    token_name (`str`, *optional*): Name of the access token to log out from. If `None`, will log out from all saved access tokens.

Raises:
    [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError):
        If the access token name is not found.
```

## auth_switch

### auth_switch

```python
Switch to a different access token.

Args:
    token_name (`str`): Name of the access token to switch to.
    add_to_git_credential (`bool`, defaults to `False`): If `True`, token will be set as git credential. If no git credential helper is configured, a warning will be displayed to the user. If `token` is `None`, the value of `add_to_git_credential` is ignored and the end user will be prompted again.

Raises:
    [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError):
        If the access token name is not found.
```

## auth_list

### auth_list

```python
List all stored access tokens.
```
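As a quick illustration of the helpers documented above, here is a minimal sketch of a token-management session; the token value and token names are placeholders:

```py
from huggingface_hub import auth_list, auth_switch, login, logout

# Save and activate an access token (value is a placeholder)
login(token="hf_xxx")

# Print all access tokens stored on this machine
auth_list()

# Make another saved token the active one (name is a placeholder)
auth_switch(token_name="my-read-token")

# Log out from a single token; omit token_name to log out from all saved tokens
logout(token_name="my-read-token")
```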
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/authentication.md
.md
# Inference Endpoints

Inference Endpoints provides a secure production solution to easily deploy models on a dedicated and autoscaling infrastructure managed by Hugging Face. An Inference Endpoint is built from a model from the [Hub](https://huggingface.co/models). This page is a reference for `huggingface_hub`'s integration with Inference Endpoints. For more information about the Inference Endpoints product, check out its [official documentation](https://huggingface.co/docs/inference-endpoints/index).

<Tip>

Check out the [related guide](../guides/inference_endpoints) to learn how to use `huggingface_hub` to manage your Inference Endpoints programmatically.

</Tip>

Inference Endpoints can be fully managed via API. The endpoints are documented with [Swagger](https://api.endpoints.huggingface.cloud/). The [`InferenceEndpoint`] class is a simple wrapper built on top of this API.

## Methods

A subset of the Inference Endpoint features are implemented in [`HfApi`]:

- [`get_inference_endpoint`] and [`list_inference_endpoints`] to get information about your Inference Endpoints
- [`create_inference_endpoint`], [`update_inference_endpoint`] and [`delete_inference_endpoint`] to deploy and manage Inference Endpoints
- [`pause_inference_endpoint`] and [`resume_inference_endpoint`] to pause and resume an Inference Endpoint
- [`scale_to_zero_inference_endpoint`] to manually scale an Endpoint to 0 replicas

## InferenceEndpoint

The main dataclass is [`InferenceEndpoint`]. It contains information about a deployed `InferenceEndpoint`, including its configuration and current state. Once deployed, you can run inference on the Endpoint using the [`InferenceEndpoint.client`] and [`InferenceEndpoint.async_client`] properties that respectively return an [`InferenceClient`] and an [`AsyncInferenceClient`] object.

### InferenceEndpoint

```python
Contains information about a deployed Inference Endpoint.

Args:
    name (`str`): The unique name of the Inference Endpoint.
    namespace (`str`): The namespace where the Inference Endpoint is located.
    repository (`str`): The name of the model repository deployed on this Inference Endpoint.
    status ([`InferenceEndpointStatus`]): The current status of the Inference Endpoint.
    url (`str`, *optional*): The URL of the Inference Endpoint, if available. Only a deployed Inference Endpoint will have a URL.
    framework (`str`): The machine learning framework used for the model.
    revision (`str`): The specific model revision deployed on the Inference Endpoint.
    task (`str`): The task associated with the deployed model.
    created_at (`datetime.datetime`): The timestamp when the Inference Endpoint was created.
    updated_at (`datetime.datetime`): The timestamp of the last update of the Inference Endpoint.
    type ([`InferenceEndpointType`]): The type of the Inference Endpoint (public, protected, private).
    raw (`Dict`): The raw dictionary data returned from the API.
    token (`str` or `bool`, *optional*): Authentication token for the Inference Endpoint, if set when requesting the API. Will default to the locally saved token if not provided. Pass `token=False` if you don't want to send your token to the server.

Example:
```python
>>> from huggingface_hub import get_inference_endpoint
>>> endpoint = get_inference_endpoint("my-text-to-image")
>>> endpoint
InferenceEndpoint(name='my-text-to-image', ...)

# Get status
>>> endpoint.status
'running'
>>> endpoint.url
'https://my-text-to-image.region.vendor.endpoints.huggingface.cloud'

# Run inference
>>> endpoint.client.text_to_image(...)
# Pause endpoint to save $$$
>>> endpoint.pause()

# ...
# Resume and wait for deployment
>>> endpoint.resume()
>>> endpoint.wait()
>>> endpoint.client.text_to_image(...)
```
```

- from_raw
- client
- async_client
- all

## InferenceEndpointStatus

### InferenceEndpointStatus

```python
An enumeration.
```

## InferenceEndpointType

### InferenceEndpointType

```python
An enumeration.
```

## InferenceEndpointError

### InferenceEndpointError

```python
Generic exception when dealing with Inference Endpoints.
```
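To complement the reference above, here is a hedged sketch of deploying an Endpoint with [`create_inference_endpoint`] and tearing it down afterwards. The endpoint name and the hardware values (`vendor`, `region`, `instance_size`, `instance_type`) are illustrative placeholders; valid combinations are listed in the Inference Endpoints documentation:

```py
from huggingface_hub import create_inference_endpoint, delete_inference_endpoint

# Deploy a Hub model on dedicated hardware (all values below are placeholders)
endpoint = create_inference_endpoint(
    "my-endpoint-name",
    repository="gpt2",
    framework="pytorch",
    task="text-generation",
    accelerator="cpu",
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_size="x2",
    instance_type="intel-icl",
)

# Block until the Endpoint is deployed, then run inference through its client
endpoint.wait()
endpoint.client.text_generation("Once upon a time,")

# Delete the Endpoint when you no longer need it
delete_inference_endpoint("my-endpoint-name")
```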
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_endpoints.md
.md
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# HfApi Client

Below is the documentation for the `HfApi` class, which serves as a Python wrapper for the Hugging Face Hub's API.

All methods from the `HfApi` are also accessible from the package's root directly. Both approaches are detailed below.

Using the root method is more straightforward, but the [`HfApi`] class gives you more flexibility. In particular, you can pass a token that will be reused in all HTTP calls. This is different from `huggingface-cli login` or [`login`] as the token is not persisted on the machine. It is also possible to provide a different endpoint or configure a custom user-agent.

```python
from huggingface_hub import HfApi, list_models

# Use root method
models = list_models()

# Or configure a HfApi client
hf_api = HfApi(
    endpoint="https://huggingface.co",  # Can be a Private Hub endpoint.
    token="hf_xxx",  # Token is not persisted on the machine.
)
models = hf_api.list_models()
```

## HfApi

### HfApi

```python
Client to interact with the Hugging Face Hub via HTTP.

The client is initialized with some high-level settings used in all requests made to the Hub (HF endpoint, authentication, user agents...). Using the `HfApi` client is preferred but not mandatory as all of its public methods are exposed directly at the root of `huggingface_hub`.

Args:
    endpoint (`str`, *optional*): Endpoint of the Hub. Defaults to <https://huggingface.co>.
    token (`Union[bool, str, None]`, *optional*): A valid user access token (string). Defaults to the locally saved token, which is the recommended method for authentication (see https://huggingface.co/docs/huggingface_hub/quick-start#authentication). To disable authentication, pass `False`.
    library_name (`str`, *optional*): The name of the library that is making the HTTP request. Will be added to the user-agent header. Example: `"transformers"`.
    library_version (`str`, *optional*): The version of the library that is making the HTTP request. Will be added to the user-agent header. Example: `"4.24.0"`.
    user_agent (`str`, `dict`, *optional*): The user agent info in the form of a dictionary or a single string. It will be completed with information about the installed packages.
    headers (`dict`, *optional*): Additional headers to be sent with each request. Example: `{"X-My-Header": "value"}`. Headers passed here take precedence over the default headers.
```

## API Dataclasses

### AccessRequest

### huggingface_hub.hf_api.AccessRequest

```python
Data structure containing information about a user access request.

Attributes:
    username (`str`): Username of the user who requested access.
    fullname (`str`): Fullname of the user who requested access.
    email (`Optional[str]`): Email of the user who requested access. Can only be `None` in the /accepted list if the user was granted access manually.
    timestamp (`datetime`): Timestamp of the request.
    status (`Literal["pending", "accepted", "rejected"]`): Status of the request. Can be one of `["pending", "accepted", "rejected"]`.
    fields (`Dict[str, Any]`, *optional*): Additional fields filled by the user in the gate form.
```

### CommitInfo

### huggingface_hub.hf_api.CommitInfo

```python
Data structure containing information about a newly created commit.

Returned by any method that creates a commit on the Hub: [`create_commit`], [`upload_file`], [`upload_folder`], [`delete_file`], [`delete_folder`].
It inherits from `str` for backward compatibility, but using methods specific to `str` is deprecated.

Attributes:
    commit_url (`str`): URL where to find the commit.
    commit_message (`str`): The summary (first line) of the commit that has been created.
    commit_description (`str`): Description of the commit that has been created. Can be empty.
    oid (`str`): Commit hash id. Example: `"91c54ad1727ee830252e457677f467be0bfd8a57"`.
    pr_url (`str`, *optional*): URL to the PR that has been created, if any. Populated when `create_pr=True` is passed.
    pr_revision (`str`, *optional*): Revision of the PR that has been created, if any. Populated when `create_pr=True` is passed. Example: `"refs/pr/1"`.
    pr_num (`int`, *optional*): Number of the PR discussion that has been created, if any. Populated when `create_pr=True` is passed. Can be passed as `discussion_num` in [`get_discussion_details`]. Example: `1`.
    repo_url (`RepoUrl`): Repo URL of the commit containing info like repo_id, repo_type, etc.
    _url (`str`, *optional*): Legacy URL for `str` compatibility. Can be the URL to the uploaded file on the Hub (if returned by [`upload_file`]), to the uploaded folder on the Hub (if returned by [`upload_folder`]) or to the commit on the Hub (if returned by [`create_commit`]). Defaults to `commit_url`. It is deprecated to use this attribute. Please use `commit_url` instead.
```

### DatasetInfo

### huggingface_hub.hf_api.DatasetInfo

```python
Contains information about a dataset on the Hub.

<Tip>

Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made. In general, the more specific the query, the more information is returned. On the contrary, when listing datasets using [`list_datasets`] only a subset of the attributes are returned.

</Tip>

Attributes:
    id (`str`): ID of dataset.
    author (`str`): Author of the dataset.
    sha (`str`): Repo SHA at this particular revision.
    created_at (`datetime`, *optional*): Date of creation of the repo on the Hub. Note that the lowest value is `2022-03-02T23:29:04.000Z`, corresponding to the date when we began to store creation dates.
    last_modified (`datetime`, *optional*): Date of last commit to the repo.
    private (`bool`): Is the repo private.
    disabled (`bool`, *optional*): Is the repo disabled.
    gated (`Literal["auto", "manual", False]`, *optional*): Is the repo gated. If so, whether there is manual or automatic approval.
    downloads (`int`): Number of downloads of the dataset over the last 30 days.
    downloads_all_time (`int`): Cumulated number of downloads of the dataset since its creation.
    likes (`int`): Number of likes of the dataset.
    tags (`List[str]`): List of tags of the dataset.
    card_data (`DatasetCardData`, *optional*): Dataset Card Metadata as a [`huggingface_hub.repocard_data.DatasetCardData`] object.
    siblings (`List[RepoSibling]`): List of [`huggingface_hub.hf_api.RepoSibling`] objects that constitute the dataset.
    paperswithcode_id (`str`, *optional*): Papers with code ID of the dataset.
    trending_score (`int`, *optional*): Trending score of the dataset.
```

### GitRefInfo

### huggingface_hub.hf_api.GitRefInfo

```python
Contains information about a git reference for a repo on the Hub.

Attributes:
    name (`str`): Name of the reference (e.g. tag name or branch name).
    ref (`str`): Full git ref on the Hub (e.g. `"refs/heads/main"` or `"refs/tags/v1.0"`).
    target_commit (`str`): OID of the target commit for the ref (e.g.
        `"e7da7f221d5bf496a48136c0cd264e630fe9fcc8"`)
```

### GitCommitInfo

### huggingface_hub.hf_api.GitCommitInfo

```python
Contains information about a git commit for a repo on the Hub. Check out [`list_repo_commits`] for more details.

Attributes:
    commit_id (`str`): OID of the commit (e.g. `"e7da7f221d5bf496a48136c0cd264e630fe9fcc8"`)
    authors (`List[str]`): List of authors of the commit.
    created_at (`datetime`): Datetime when the commit was created.
    title (`str`): Title of the commit. This is a free-text value entered by the authors.
    message (`str`): Description of the commit. This is a free-text value entered by the authors.
    formatted_title (`str`): Title of the commit formatted as HTML. Only returned if `formatted=True` is set.
    formatted_message (`str`): Description of the commit formatted as HTML. Only returned if `formatted=True` is set.
```

### GitRefs

### huggingface_hub.hf_api.GitRefs

```python
Contains information about all git references for a repo on the Hub.

Object is returned by [`list_repo_refs`].

Attributes:
    branches (`List[GitRefInfo]`): A list of [`GitRefInfo`] containing information about branches on the repo.
    converts (`List[GitRefInfo]`): A list of [`GitRefInfo`] containing information about "convert" refs on the repo. Converts are refs used (internally) to push preprocessed data in Dataset repos.
    tags (`List[GitRefInfo]`): A list of [`GitRefInfo`] containing information about tags on the repo.
    pull_requests (`List[GitRefInfo]`, *optional*): A list of [`GitRefInfo`] containing information about pull requests on the repo. Only returned if `include_prs=True` is set.
```

### ModelInfo

### huggingface_hub.hf_api.ModelInfo

```python
Contains information about a model on the Hub.

<Tip>

Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made. In general, the more specific the query, the more information is returned. On the contrary, when listing models using [`list_models`] only a subset of the attributes are returned.

</Tip>

Attributes:
    id (`str`): ID of model.
    author (`str`, *optional*): Author of the model.
    sha (`str`, *optional*): Repo SHA at this particular revision.
    created_at (`datetime`, *optional*): Date of creation of the repo on the Hub. Note that the lowest value is `2022-03-02T23:29:04.000Z`, corresponding to the date when we began to store creation dates.
    last_modified (`datetime`, *optional*): Date of last commit to the repo.
    private (`bool`): Is the repo private.
    disabled (`bool`, *optional*): Is the repo disabled.
    downloads (`int`): Number of downloads of the model over the last 30 days.
    downloads_all_time (`int`): Cumulated number of downloads of the model since its creation.
    gated (`Literal["auto", "manual", False]`, *optional*): Is the repo gated. If so, whether there is manual or automatic approval.
    gguf (`Dict`, *optional*): GGUF information of the model.
    inference (`Literal["cold", "frozen", "warm"]`, *optional*): Status of the model on the inference API. Warm models are available for immediate use. Cold models will be loaded on first inference call. Frozen models are not available in Inference API.
    likes (`int`): Number of likes of the model.
    library_name (`str`, *optional*): Library associated with the model.
    tags (`List[str]`): List of tags of the model. Compared to `card_data.tags`, contains extra tags computed by the Hub (e.g. supported libraries, model's arXiv).
    pipeline_tag (`str`, *optional*): Pipeline tag associated with the model.
    mask_token (`str`, *optional*): Mask token used by the model.
    widget_data (`Any`, *optional*): Widget data associated with the model.
    model_index (`Dict`, *optional*): Model index for evaluation.
    config (`Dict`, *optional*): Model configuration.
    transformers_info (`TransformersInfo`, *optional*): Transformers-specific info (auto class, processor, etc.) associated with the model.
    trending_score (`int`, *optional*): Trending score of the model.
    card_data (`ModelCardData`, *optional*): Model Card Metadata as a [`huggingface_hub.repocard_data.ModelCardData`] object.
    siblings (`List[RepoSibling]`): List of [`huggingface_hub.hf_api.RepoSibling`] objects that constitute the model.
    spaces (`List[str]`, *optional*): List of spaces using the model.
    safetensors (`SafeTensorsInfo`, *optional*): Model's safetensors information.
    security_repo_status (`Dict`, *optional*): Model's security scan status.
```

### RepoSibling

### huggingface_hub.hf_api.RepoSibling

```python
Contains basic information about a repo file inside a repo on the Hub.

<Tip>

All attributes of this class are optional except `rfilename`. This is because only the file names are returned when listing repositories on the Hub (with [`list_models`], [`list_datasets`] or [`list_spaces`]). If you need more information like file size, blob id or lfs details, you must request them specifically from one repo at a time (using [`model_info`], [`dataset_info`] or [`space_info`]) as it adds more constraints on the backend server to retrieve these.

</Tip>

Attributes:
    rfilename (`str`): File name, relative to the repo root.
    size (`int`, *optional*): The file's size, in bytes. This attribute is defined when `files_metadata` argument of [`repo_info`] is set to `True`. It's `None` otherwise.
    blob_id (`str`, *optional*): The file's git OID. This attribute is defined when `files_metadata` argument of [`repo_info`] is set to `True`. It's `None` otherwise.
    lfs (`BlobLfsInfo`, *optional*): The file's LFS metadata. This attribute is defined when `files_metadata` argument of [`repo_info`] is set to `True` and the file is stored with Git LFS. It's `None` otherwise.
```

### RepoFile

### huggingface_hub.hf_api.RepoFile

```python
Contains information about a file on the Hub.

Attributes:
    path (`str`): File path relative to the repo root.
    size (`int`): The file's size, in bytes.
    blob_id (`str`): The file's git OID.
    lfs (`BlobLfsInfo`): The file's LFS metadata.
    last_commit (`LastCommitInfo`, *optional*): The file's last commit metadata. Only defined if [`list_repo_tree`] and [`get_paths_info`] are called with `expand=True`.
    security (`BlobSecurityInfo`, *optional*): The file's security scan metadata. Only defined if [`list_repo_tree`] and [`get_paths_info`] are called with `expand=True`.
```

### RepoUrl

### huggingface_hub.hf_api.RepoUrl

```python
Subclass of `str` describing a repo URL on the Hub.

`RepoUrl` is returned by `HfApi.create_repo`. It inherits from `str` for backward compatibility. At initialization, the URL is parsed to populate properties:

- endpoint (`str`)
- namespace (`Optional[str]`)
- repo_name (`str`)
- repo_id (`str`)
- repo_type (`Literal["model", "dataset", "space"]`)
- url (`str`)

Args:
    url (`Any`): String value of the repo url.
    endpoint (`str`, *optional*): Endpoint of the Hub. Defaults to <https://huggingface.co>.
Example:
```py
>>> RepoUrl('https://huggingface.co/gpt2')
RepoUrl('https://huggingface.co/gpt2', endpoint='https://huggingface.co', repo_type='model', repo_id='gpt2')

>>> RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co')
RepoUrl('https://hub-ci.huggingface.co/datasets/dummy_user/dummy_dataset', endpoint='https://hub-ci.huggingface.co', repo_type='dataset', repo_id='dummy_user/dummy_dataset')

>>> RepoUrl('hf://datasets/my-user/my-dataset')
RepoUrl('hf://datasets/my-user/my-dataset', endpoint='https://huggingface.co', repo_type='dataset', repo_id='my-user/my-dataset')

>>> HfApi.create_repo("dummy_model")
RepoUrl('https://huggingface.co/Wauplin/dummy_model', endpoint='https://huggingface.co', repo_type='model', repo_id='Wauplin/dummy_model')
```

Raises:
    [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
        If URL cannot be parsed.
    [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
        If `repo_type` is unknown.
```

### SafetensorsRepoMetadata

### huggingface_hub.utils.SafetensorsRepoMetadata

```python
Metadata for a Safetensors repo.

A repo is considered to be a Safetensors repo if it contains either a 'model.safetensors' weight file (non-shared model) or a 'model.safetensors.index.json' index file (sharded model) at its root.

This class is returned by [`get_safetensors_metadata`].

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.

Attributes:
    metadata (`Dict`, *optional*): The metadata contained in the 'model.safetensors.index.json' file, if it exists. Only populated for sharded models.
    sharded (`bool`): Whether the repo contains a sharded model or not.
    weight_map (`Dict[str, str]`): A map of all weights. Keys are tensor names and values are filenames of the files containing the tensors.
    files_metadata (`Dict[str, SafetensorsFileMetadata]`): A map of all files metadata. Keys are filenames and values are the metadata of the corresponding file, as a [`SafetensorsFileMetadata`] object.
    parameter_count (`Dict[str, int]`): A map of the number of parameters per data type. Keys are data types and values are the number of parameters of that data type.
```

### SafetensorsFileMetadata

### huggingface_hub.utils.SafetensorsFileMetadata

```python
Metadata for a Safetensors file hosted on the Hub.

This class is returned by [`parse_safetensors_file_metadata`].

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.

Attributes:
    metadata (`Dict`): The metadata contained in the file.
    tensors (`Dict[str, TensorInfo]`): A map of all tensors. Keys are tensor names and values are information about the corresponding tensor, as a [`TensorInfo`] object.
    parameter_count (`Dict[str, int]`): A map of the number of parameters per data type. Keys are data types and values are the number of parameters of that data type.
```

### SpaceInfo

### huggingface_hub.hf_api.SpaceInfo

```python
Contains information about a Space on the Hub.

<Tip>

Most attributes of this class are optional. This is because the data returned by the Hub depends on the query made. In general, the more specific the query, the more information is returned. On the contrary, when listing spaces using [`list_spaces`] only a subset of the attributes are returned.

</Tip>

Attributes:
    id (`str`): ID of the Space.
    author (`str`, *optional*): Author of the Space.
    sha (`str`, *optional*): Repo SHA at this particular revision.
    created_at (`datetime`, *optional*): Date of creation of the repo on the Hub. Note that the lowest value is `2022-03-02T23:29:04.000Z`, corresponding to the date when we began to store creation dates.
    last_modified (`datetime`, *optional*): Date of last commit to the repo.
    private (`bool`): Is the repo private.
    gated (`Literal["auto", "manual", False]`, *optional*): Is the repo gated. If so, whether there is manual or automatic approval.
    disabled (`bool`, *optional*): Is the Space disabled.
    host (`str`, *optional*): Host URL of the Space.
    subdomain (`str`, *optional*): Subdomain of the Space.
    likes (`int`): Number of likes of the Space.
    tags (`List[str]`): List of tags of the Space.
    siblings (`List[RepoSibling]`): List of [`huggingface_hub.hf_api.RepoSibling`] objects that constitute the Space.
    card_data (`SpaceCardData`, *optional*): Space Card Metadata as a [`huggingface_hub.repocard_data.SpaceCardData`] object.
    runtime (`SpaceRuntime`, *optional*): Space runtime information as a [`huggingface_hub.hf_api.SpaceRuntime`] object.
    sdk (`str`, *optional*): SDK used by the Space.
    models (`List[str]`, *optional*): List of models used by the Space.
    datasets (`List[str]`, *optional*): List of datasets used by the Space.
    trending_score (`int`, *optional*): Trending score of the Space.
```

### TensorInfo

### huggingface_hub.utils.TensorInfo

```python
Information about a tensor.

For more details regarding the safetensors format, check out https://huggingface.co/docs/safetensors/index#format.

Attributes:
    dtype (`str`): The data type of the tensor ("F64", "F32", "F16", "BF16", "I64", "I32", "I16", "I8", "U8", "BOOL").
    shape (`List[int]`): The shape of the tensor.
    data_offsets (`Tuple[int, int]`): The offsets of the data in the file as a tuple `[BEGIN, END]`.
    parameter_count (`int`): The number of parameters in the tensor.
```

### User

### huggingface_hub.hf_api.User

```python
Contains information about a user on the Hub.

Attributes:
    username (`str`): Name of the user on the Hub (unique).
    fullname (`str`): User's full name.
    avatar_url (`str`): URL of the user's avatar.
    details (`str`, *optional*): User's details.
    is_following (`bool`, *optional*): Whether the authenticated user is following this user.
    is_pro (`bool`, *optional*): Whether the user is a pro user.
    num_models (`int`, *optional*): Number of models created by the user.
    num_datasets (`int`, *optional*): Number of datasets created by the user.
    num_spaces (`int`, *optional*): Number of spaces created by the user.
    num_discussions (`int`, *optional*): Number of discussions initiated by the user.
    num_papers (`int`, *optional*): Number of papers authored by the user.
    num_upvotes (`int`, *optional*): Number of upvotes received by the user.
    num_likes (`int`, *optional*): Number of likes given by the user.
    num_following (`int`, *optional*): Number of users this user is following.
    num_followers (`int`, *optional*): Number of users following this user.
    orgs (list of [`Organization`]): List of organizations the user is part of.
```

### UserLikes

### huggingface_hub.hf_api.UserLikes

```python
Contains information about a user's likes on the Hub.

Attributes:
    user (`str`): Name of the user for which we fetched the likes.
    total (`int`): Total number of likes.
    datasets (`List[str]`): List of datasets liked by the user (as repo_ids).
    models (`List[str]`): List of models liked by the user (as repo_ids).
    spaces (`List[str]`): List of spaces liked by the user (as repo_ids).
```

### WebhookInfo

### huggingface_hub.hf_api.WebhookInfo

```python
Data structure containing information about a webhook.

Attributes:
    id (`str`): ID of the webhook.
    url (`str`): URL of the webhook.
    watched (`List[WebhookWatchedItem]`): List of items watched by the webhook, see [`WebhookWatchedItem`].
    domains (`List[WEBHOOK_DOMAIN_T]`): List of domains the webhook is watching. Can be one of `["repo", "discussions"]`.
    secret (`str`, *optional*): Secret of the webhook.
    disabled (`bool`): Whether the webhook is disabled or not.
```

### WebhookWatchedItem

### huggingface_hub.hf_api.WebhookWatchedItem

```python
Data structure containing information about the items watched by a webhook.

Attributes:
    type (`Literal["dataset", "model", "org", "space", "user"]`): Type of the item to be watched. Can be one of `["dataset", "model", "org", "space", "user"]`.
    name (`str`): Name of the item to be watched. Can be the username, organization name, model name, dataset name or space name.
```

## CommitOperation

Below are the supported values for [`CommitOperation`]:

### CommitOperationAdd

```python
Data structure holding necessary info to upload a file to a repository on the Hub.

Args:
    path_in_repo (`str`): Relative filepath in the repo, for example: `"checkpoints/1fec34a/weights.bin"`
    path_or_fileobj (`str`, `Path`, `bytes`, or `BinaryIO`): Either:
        - a path to a local file (as `str` or `pathlib.Path`) to upload
        - a buffer of bytes (`bytes`) holding the content of the file to upload
        - a "file object" (subclass of `io.BufferedIOBase`), typically obtained with `open(path, "rb")`. It must support `seek()` and `tell()` methods.

Raises:
    [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
        If `path_or_fileobj` is not one of `str`, `Path`, `bytes` or `io.BufferedIOBase`.
    [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
        If `path_or_fileobj` is a `str` or `Path` but not a path to an existing file.
    [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
        If `path_or_fileobj` is a `io.BufferedIOBase` but it doesn't support both `seek()` and `tell()`.
```

### CommitOperationDelete

```python
Data structure holding necessary info to delete a file or a folder from a repository on the Hub.

Args:
    path_in_repo (`str`): Relative filepath in the repo, for example: `"checkpoints/1fec34a/weights.bin"` for a file or `"checkpoints/1fec34a/"` for a folder.
    is_folder (`bool` or `Literal["auto"]`, *optional*): Whether the Delete Operation applies to a folder or not. If "auto", the path type (file or folder) is guessed automatically by looking if path ends with a "/" (folder) or not (file). To explicitly set the path type, you can set `is_folder=True` or `is_folder=False`.
```

### CommitOperationCopy

```python
Data structure holding necessary info to copy a file in a repository on the Hub.

Limitations:
  - Only LFS files can be copied. To copy a regular file, you need to download it locally and re-upload it
  - Cross-repository copies are not supported.

Note: you can combine a [`CommitOperationCopy`] and a [`CommitOperationDelete`] to rename an LFS file on the Hub.

Args:
    src_path_in_repo (`str`): Relative filepath in the repo of the file to be copied, e.g. `"checkpoints/1fec34a/weights.bin"`.
    path_in_repo (`str`): Relative filepath in the repo where to copy the file, e.g. `"checkpoints/1fec34a/weights_copy.bin"`.
    src_revision (`str`, *optional*): The git revision of the file to be copied. Can be any valid git revision. Defaults to the target commit revision.
```

## CommitScheduler

### CommitScheduler

```python
Scheduler to upload a local folder to the Hub at regular intervals (e.g. push to hub every 5 minutes).

The recommended way to use the scheduler is to use it as a context manager. This ensures that the scheduler is properly stopped and the last commit is triggered when the script ends. The scheduler can also be stopped manually with the `stop` method. Check out the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#scheduled-uploads) to learn more about how to use it.

Args:
    repo_id (`str`): The id of the repo to commit to.
    folder_path (`str` or `Path`): Path to the local folder to upload regularly.
    every (`int` or `float`, *optional*): The number of minutes between each commit. Defaults to 5 minutes.
    path_in_repo (`str`, *optional*): Relative path of the directory in the repo, for example: `"checkpoints/"`. Defaults to the root folder of the repository.
    repo_type (`str`, *optional*): The type of the repo to commit to. Defaults to `model`.
    revision (`str`, *optional*): The revision of the repo to commit to. Defaults to `main`.
    private (`bool`, *optional*): Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
    token (`str`, *optional*): The token to use to commit to the repo. Defaults to the token saved on the machine.
    allow_patterns (`List[str]` or `str`, *optional*): If provided, only files matching at least one pattern are uploaded.
    ignore_patterns (`List[str]` or `str`, *optional*): If provided, files matching any of the patterns are not uploaded.
    squash_history (`bool`, *optional*): Whether to squash the history of the repo after each commit. Defaults to `False`. Squashing commits is useful to avoid degraded performances on the repo when it grows too large.
    hf_api (`HfApi`, *optional*): The [`HfApi`] client to use to commit to the Hub. Can be set with custom settings (user agent, token,...).

Example:
```py
>>> from pathlib import Path
>>> from huggingface_hub import CommitScheduler

# Scheduler uploads every 10 minutes
>>> csv_path = Path("watched_folder/data.csv")
>>> CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path=csv_path.parent, every=10)

>>> with csv_path.open("a") as f:
...     f.write("first line")

# Some time later (...)
>>> with csv_path.open("a") as f:
...     f.write("second line")
```

Example using a context manager:
```py
>>> from pathlib import Path
>>> from huggingface_hub import CommitScheduler

>>> with CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path="watched_folder", every=10) as scheduler:
...     csv_path = Path("watched_folder/data.csv")
...     with csv_path.open("a") as f:
...         f.write("first line")
...     (...)
...     with csv_path.open("a") as f:
...         f.write("second line")

# Scheduler is now stopped and the last commit has been triggered
```
```
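Since the `CommitOperation` classes above are meant to be passed to [`create_commit`], here is a short hedged sketch of batching an addition and a deletion into a single commit; the repo id and file paths are placeholders:

```py
from huggingface_hub import CommitOperationAdd, CommitOperationDelete, HfApi

api = HfApi()

# Batch several operations into one atomic commit (repo id and paths are placeholders)
api.create_commit(
    repo_id="my-user/my-model",
    operations=[
        CommitOperationAdd(path_in_repo="weights.bin", path_or_fileobj="./weights.bin"),
        CommitOperationDelete(path_in_repo="old_weights.bin"),
    ],
    commit_message="Replace old weights",
)
```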
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Overview This section contains an exhaustive and technical description of `huggingface_hub` classes and methods.
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Managing collections

Check out the [`HfApi`] documentation page for the reference of methods to manage collections on the Hub.

- Get collection content: [`get_collection`]
- Create new collection: [`create_collection`]
- Update a collection: [`update_collection_metadata`]
- Delete a collection: [`delete_collection`]
- Add an item to a collection: [`add_collection_item`]
- Update an item in a collection: [`update_collection_item`]
- Remove an item from a collection: [`delete_collection_item`]

### Collection

### Collection

```python
Contains information about a Collection on the Hub.

Attributes:
    slug (`str`): Slug of the collection. E.g. `"TheBloke/recent-models-64f9a55bb3115b4f513ec026"`.
    title (`str`): Title of the collection. E.g. `"Recent models"`.
    owner (`str`): Owner of the collection. E.g. `"TheBloke"`.
    items (`List[CollectionItem]`): List of items in the collection.
    last_updated (`datetime`): Date of the last update of the collection.
    position (`int`): Position of the collection in the list of collections of the owner.
    private (`bool`): Whether the collection is private or not.
    theme (`str`): Theme of the collection. E.g. `"green"`.
    upvotes (`int`): Number of upvotes of the collection.
    description (`str`, *optional*): Description of the collection, as plain text.
    url (`str`): (property) URL of the collection on the Hub.
```

### CollectionItem

### CollectionItem

```python
Contains information about an item of a Collection (model, dataset, Space or paper).

Attributes:
    item_object_id (`str`): Unique ID of the item in the collection.
    item_id (`str`): ID of the underlying object on the Hub. Can be either a repo_id or a paper id, e.g. `"jbilcke-hf/ai-comic-factory"`, `"2307.09288"`.
    item_type (`str`): Type of the underlying object. Can be one of `"model"`, `"dataset"`, `"space"` or `"paper"`.
    position (`int`): Position of the item in the collection.
    note (`str`, *optional*): Note associated with the item, as plain text.
```
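To make the reference above concrete, here is a short sketch of how these methods fit together (the item id is a placeholder; the collection slug is the example used in the attributes above):

```py
>>> from huggingface_hub import get_collection, add_collection_item

# Fetch an existing collection and inspect it
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
>>> collection.title
'Recent models'

# Add a model to it (requires write access to the collection)
>>> add_collection_item(collection.slug, item_id="my-username/my-model", item_type="model")
```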
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Interacting with Discussions and Pull Requests

Check out the [`HfApi`] documentation page for the reference of methods enabling interaction with Pull Requests and Discussions on the Hub.

- [`get_repo_discussions`]
- [`get_discussion_details`]
- [`create_discussion`]
- [`create_pull_request`]
- [`rename_discussion`]
- [`comment_discussion`]
- [`edit_discussion_comment`]
- [`change_discussion_status`]
- [`merge_pull_request`]

## Data structures

### Discussion

```python
A Discussion or Pull Request on the Hub.

This dataclass is not intended to be instantiated directly.

Attributes:
    title (`str`): The title of the Discussion / Pull Request.
    status (`str`): The status of the Discussion / Pull Request. It must be one of:
        * `"open"`
        * `"closed"`
        * `"merged"` (only for Pull Requests)
        * `"draft"` (only for Pull Requests)
    num (`int`): The number of the Discussion / Pull Request.
    repo_id (`str`): The id (`"{namespace}/{repo_name}"`) of the repo on which the Discussion / Pull Request was opened.
    repo_type (`str`): The type of the repo on which the Discussion / Pull Request was opened. Possible values are: `"model"`, `"dataset"`, `"space"`.
    author (`str`): The username of the Discussion / Pull Request author. Can be `"deleted"` if the user has been deleted since.
    is_pull_request (`bool`): Whether or not this is a Pull Request.
    created_at (`datetime`): The `datetime` of creation of the Discussion / Pull Request.
    endpoint (`str`): Endpoint of the Hub. Default is https://huggingface.co.
    git_reference (`str`, *optional*): (property) Git reference to which changes can be pushed if this is a Pull Request, `None` otherwise.
    url (`str`): (property) URL of the discussion on the Hub.
```

### DiscussionWithDetails

```python
Subclass of [`Discussion`].

Attributes:
    title (`str`): The title of the Discussion / Pull Request.
    status (`str`): The status of the Discussion / Pull Request. It can be one of:
        * `"open"`
        * `"closed"`
        * `"merged"` (only for Pull Requests)
        * `"draft"` (only for Pull Requests)
    num (`int`): The number of the Discussion / Pull Request.
    repo_id (`str`): The id (`"{namespace}/{repo_name}"`) of the repo on which the Discussion / Pull Request was opened.
    repo_type (`str`): The type of the repo on which the Discussion / Pull Request was opened. Possible values are: `"model"`, `"dataset"`, `"space"`.
    author (`str`): The username of the Discussion / Pull Request author. Can be `"deleted"` if the user has been deleted since.
    is_pull_request (`bool`): Whether or not this is a Pull Request.
    created_at (`datetime`): The `datetime` of creation of the Discussion / Pull Request.
    events (`list` of [`DiscussionEvent`]): The list of [`DiscussionEvent`]s in this Discussion or Pull Request.
    conflicting_files (`Union[List[str], bool, None]`, *optional*): A list of conflicting files if this is a Pull Request. `None` if `self.is_pull_request` is `False`. `True` if there are conflicting files but the list can't be retrieved.
    target_branch (`str`, *optional*): The branch into which changes are to be merged if this is a Pull Request. `None` if `self.is_pull_request` is `False`.
    merge_commit_oid (`str`, *optional*): If this is a merged Pull Request, this is set to the OID / SHA of the merge commit, `None` otherwise.
    diff (`str`, *optional*): The git diff if this is a Pull Request, `None` otherwise.
    endpoint (`str`): Endpoint of the Hub.
Default is https://huggingface.co.
    git_reference (`str`, *optional*): (property) Git reference to which changes can be pushed if this is a Pull Request, `None` otherwise.
    url (`str`): (property) URL of the discussion on the Hub.
```

### DiscussionEvent

```python
An event in a Discussion or Pull Request.

Use concrete classes:
* [`DiscussionComment`]
* [`DiscussionStatusChange`]
* [`DiscussionCommit`]
* [`DiscussionTitleChange`]

Attributes:
    id (`str`): The ID of the event. A hexadecimal string.
    type (`str`): The type of the event.
    created_at (`datetime`): A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime) object holding the creation timestamp for the event.
    author (`str`): The username of the Discussion / Pull Request author. Can be `"deleted"` if the user has been deleted since.
```

### DiscussionComment

```python
A comment in a Discussion / Pull Request.

Subclass of [`DiscussionEvent`].

Attributes:
    id (`str`): The ID of the event. A hexadecimal string.
    type (`str`): The type of the event.
    created_at (`datetime`): A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime) object holding the creation timestamp for the event.
    author (`str`): The username of the Discussion / Pull Request author. Can be `"deleted"` if the user has been deleted since.
    content (`str`): The raw markdown content of the comment. Mentions, links and images are not rendered.
    edited (`bool`): Whether or not this comment has been edited.
    hidden (`bool`): Whether or not this comment has been hidden.
```

### DiscussionStatusChange

```python
A change of status in a Discussion / Pull Request.

Subclass of [`DiscussionEvent`].

Attributes:
    id (`str`): The ID of the event. A hexadecimal string.
    type (`str`): The type of the event.
    created_at (`datetime`): A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime) object holding the creation timestamp for the event.
    author (`str`): The username of the Discussion / Pull Request author. Can be `"deleted"` if the user has been deleted since.
    new_status (`str`): The status of the Discussion / Pull Request after the change. It can be one of:
        * `"open"`
        * `"closed"`
        * `"merged"` (only for Pull Requests)
```

### DiscussionCommit

```python
A commit in a Pull Request.

Subclass of [`DiscussionEvent`].

Attributes:
    id (`str`): The ID of the event. A hexadecimal string.
    type (`str`): The type of the event.
    created_at (`datetime`): A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime) object holding the creation timestamp for the event.
    author (`str`): The username of the Discussion / Pull Request author. Can be `"deleted"` if the user has been deleted since.
    summary (`str`): The summary of the commit.
    oid (`str`): The OID / SHA of the commit, as a hexadecimal string.
```

### DiscussionTitleChange

```python
A rename event in a Discussion / Pull Request.

Subclass of [`DiscussionEvent`].

Attributes:
    id (`str`): The ID of the event. A hexadecimal string.
    type (`str`): The type of the event.
    created_at (`datetime`): A [`datetime`](https://docs.python.org/3/library/datetime.html?highlight=datetime#datetime.datetime) object holding the creation timestamp for the event.
    author (`str`): The username of the Discussion / Pull Request author. Can be `"deleted"` if the user has been deleted since.
    old_title (`str`): The previous title for the Discussion / Pull Request.
    new_title (`str`): The new title.
```
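As a quick orientation for the data structures above, here is a short sketch of listing discussions on a repo and inspecting their events (the repo id is a placeholder):

```py
>>> from huggingface_hub import get_repo_discussions, get_discussion_details

# List Discussions and Pull Requests on a repo
>>> for discussion in get_repo_discussions(repo_id="my-username/my-model"):
...     print(discussion.num, discussion.title, discussion.is_pull_request)

# Fetch the full details of one of them (a DiscussionWithDetails) and iterate over its events
>>> details = get_discussion_details(repo_id="my-username/my-model", discussion_num=1)
>>> for event in details.events:
...     print(type(event).__name__, event.created_at)
```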
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Downloading files

## Download a single file

### hf_hub_download

### huggingface_hub.hf_hub_download

```python
Download a given file if it's not already present in the local cache.

The new cache file layout looks like this:
- The cache directory contains one subfolder per repo_id (namespaced by repo type)
- inside each repo folder:
    - refs is a list of the latest known revision => commit_hash pairs
    - blobs contains the actual file blobs (identified by their git-sha or sha256, depending on whether they're LFS files or not)
    - snapshots contains one subfolder per commit, each "commit" contains the subset of the files that have been resolved at that particular commit. Each filename is a symlink to the blob at that particular commit.

```
[  96]  .
└── [ 160]  models--julien-c--EsperBERTo-small
    ├── [ 160]  blobs
    │   ├── [321M]  403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
    │   ├── [ 398]  7cb18dc9bafbfcf74629a4b760af1b160957a83e
    │   └── [1.4K]  d7edf6bd2a681fb0175f7735299831ee1b22b812
    ├── [  96]  refs
    │   └── [  40]  main
    └── [ 128]  snapshots
        ├── [ 128]  2439f60ef33a0d46d85da5001d52aeda5b00ce9f
        │   ├── [  52]  README.md -> ../../blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812
        │   └── [  76]  pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
        └── [ 128]  bbc77c8132af1cc5cf678da3f1ddf2de43606d48
            ├── [  52]  README.md -> ../../blobs/7cb18dc9bafbfcf74629a4b760af1b160957a83e
            └── [  76]  pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd
```

If `local_dir` is provided, the file structure from the repo will be replicated in this location. When using this option, the `cache_dir` will not be used and a `.cache/huggingface/` folder will be created at the root of `local_dir` to store some metadata related to the downloaded files. While this mechanism is not as robust as the main cache-system, it's optimized for regularly pulling the latest version of a repository.

Args:
    repo_id (`str`): A user or an organization name and a repo name separated by a `/`.
    filename (`str`): The name of the file in the repo.
    subfolder (`str`, *optional*): An optional value corresponding to a folder inside the model repo.
    repo_type (`str`, *optional*): Set to `"dataset"` or `"space"` if downloading from a dataset or space, `None` or `"model"` if downloading from a model. Default is `None`.
    revision (`str`, *optional*): An optional Git revision id which can be a branch name, a tag, or a commit hash.
    library_name (`str`, *optional*): The name of the library to which the object corresponds.
    library_version (`str`, *optional*): The version of the library.
    cache_dir (`str`, `Path`, *optional*): Path to the folder where cached files are stored.
    local_dir (`str` or `Path`, *optional*): If provided, the downloaded file will be placed under this directory.
    user_agent (`dict`, `str`, *optional*): The user-agent info in the form of a dictionary or a string.
    force_download (`bool`, *optional*, defaults to `False`): Whether the file should be downloaded even if it already exists in the local cache.
    proxies (`dict`, *optional*): Dictionary mapping protocol to the URL of the proxy passed to `requests.request`.
    etag_timeout (`float`, *optional*, defaults to `10`): When fetching ETag, how many seconds to wait for the server to send data before giving up, which is passed to `requests.request`.
    token (`str`, `bool`, *optional*): A token to be used for the download.
        - If `True`, the token is read from the HuggingFace config folder.
        - If a string, it's used as the authentication token.
    local_files_only (`bool`, *optional*, defaults to `False`): If `True`, avoid downloading the file and return the path to the local cached file if it exists.
    headers (`dict`, *optional*): Additional headers to be sent with the request.

Returns:
    `str`: Local path of the file, or the last version of the file cached on disk if networking is off.

Raises:
    [`~utils.RepositoryNotFoundError`]
        If the repository to download from cannot be found. This may be because it doesn't exist, or because it is set to `private` and you do not have access.
    [`~utils.RevisionNotFoundError`]
        If the revision to download from cannot be found.
    [`~utils.EntryNotFoundError`]
        If the file to download cannot be found.
    [`~utils.LocalEntryNotFoundError`]
        If network is disabled or unavailable and file is not found in cache.
    [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError)
        If `token=True` but the token cannot be found.
    [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError)
        If ETag cannot be determined.
    [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
        If some parameter value is invalid.
```

### hf_hub_url

### huggingface_hub.hf_hub_url

```python
Construct the URL of a file from the given information.

The resolved address can either be a huggingface.co-hosted url, or a link to Cloudfront (a Content Delivery Network, or CDN) for large files which are more than a few MBs.

Args:
    repo_id (`str`): A namespace (user or an organization) name and a repo name separated by a `/`.
    filename (`str`): The name of the file in the repo.
    subfolder (`str`, *optional*): An optional value corresponding to a folder inside the repo.
    repo_type (`str`, *optional*): Set to `"dataset"` or `"space"` if downloading from a dataset or space, `None` or `"model"` if downloading from a model. Default is `None`.
    revision (`str`, *optional*): An optional Git revision id which can be a branch name, a tag, or a commit hash.

Example:
```python
>>> from huggingface_hub import hf_hub_url

>>> hf_hub_url(
...     repo_id="julien-c/EsperBERTo-small", filename="pytorch_model.bin"
... )
'https://huggingface.co/julien-c/EsperBERTo-small/resolve/main/pytorch_model.bin'
```

<Tip>

Notes:

Cloudfront is replicated over the globe so downloads are way faster for the end user (and it also lowers our bandwidth costs).

Cloudfront aggressively caches files by default (default TTL is 24 hours), however this is not an issue here because we implement a git-based versioning system on huggingface.co, which means that we store the files on S3/Cloudfront in a content-addressable way (i.e., the file name is its hash). Using content-addressable filenames means cache can't ever be stale.

In terms of client-side caching from this library, we base our caching on the objects' entity tag (`ETag`), which is an identifier of a specific version of a resource [1]. An object's ETag is: its git-sha1 if stored in git, or its sha256 if stored in git-lfs.

</Tip>

References:

- [1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag
```

## Download a snapshot of the repo

### huggingface_hub.snapshot_download

```python
Download repo files.
Download a whole snapshot of a repo's files at the specified revision. This is useful when you want all files from a repo, because you don't know which ones you will need a priori. All files are nested inside a folder in order to keep their actual filename relative to that folder. You can also filter which files to download using `allow_patterns` and `ignore_patterns`. If `local_dir` is provided, the file structure from the repo will be replicated in this location. When using this option, the `cache_dir` will not be used and a `.cache/huggingface/` folder will be created at the root of `local_dir` to store some metadata related to the downloaded files. While this mechanism is not as robust as the main cache-system, it's optimized for regularly pulling the latest version of a repository. An alternative would be to clone the repo but this requires git and git-lfs to be installed and properly configured. It is also not possible to filter which files to download when cloning a repository using git. Args: repo_id (`str`): A user or an organization name and a repo name separated by a `/`. repo_type (`str`, *optional*): Set to `"dataset"` or `"space"` if downloading from a dataset or space, `None` or `"model"` if downloading from a model. Default is `None`. revision (`str`, *optional*): An optional Git revision id which can be a branch name, a tag, or a commit hash. cache_dir (`str`, `Path`, *optional*): Path to the folder where cached files are stored. local_dir (`str` or `Path`, *optional*): If provided, the downloaded files will be placed under this directory. library_name (`str`, *optional*): The name of the library to which the object corresponds. library_version (`str`, *optional*): The version of the library. user_agent (`str`, `dict`, *optional*): The user-agent info in the form of a dictionary or a string. proxies (`dict`, *optional*): Dictionary mapping protocol to the URL of the proxy passed to `requests.request`. etag_timeout (`float`, *optional*, defaults to `10`): When fetching ETag, how many seconds to wait for the server to send data before giving up which is passed to `requests.request`. force_download (`bool`, *optional*, defaults to `False`): Whether the file should be downloaded even if it already exists in the local cache. token (`str`, `bool`, *optional*): A token to be used for the download. - If `True`, the token is read from the HuggingFace config folder. - If a string, it's used as the authentication token. headers (`dict`, *optional*): Additional headers to include in the request. Those headers take precedence over the others. local_files_only (`bool`, *optional*, defaults to `False`): If `True`, avoid downloading the file and return the path to the local cached file if it exists. allow_patterns (`List[str]` or `str`, *optional*): If provided, only files matching at least one pattern are downloaded. ignore_patterns (`List[str]` or `str`, *optional*): If provided, files matching any of the patterns are not downloaded. max_workers (`int`, *optional*): Number of concurrent threads to download files (1 thread = 1 file download). Defaults to 8. tqdm_class (`tqdm`, *optional*): If provided, overwrites the default behavior for the progress bar. Passed argument must inherit from `tqdm.auto.tqdm` or at least mimic its behavior. Note that the `tqdm_class` is not passed to each individual download. Defaults to the custom HF progress bar that can be disabled by setting `HF_HUB_DISABLE_PROGRESS_BARS` environment variable. Returns: `str`: folder path of the repo snapshot. 
Raises:
    [`~utils.RepositoryNotFoundError`]
        If the repository to download from cannot be found. This may be because it doesn't exist, or because it is set to `private` and you do not have access.
    [`~utils.RevisionNotFoundError`]
        If the revision to download from cannot be found.
    [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError)
        If `token=True` and the token cannot be found.
    [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError)
        If ETag cannot be determined.
    [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
        If some parameter value is invalid.
```

## Get metadata about a file

### get_hf_file_metadata

### huggingface_hub.get_hf_file_metadata

```python
Fetch metadata of a file versioned on the Hub for a given url.

Args:
    url (`str`): File url, for example returned by [`hf_hub_url`].
    token (`str` or `bool`, *optional*): A token to be used for the download.
        - If `True`, the token is read from the HuggingFace config folder.
        - If `False` or `None`, no token is provided.
        - If a string, it's used as the authentication token.
    proxies (`dict`, *optional*): Dictionary mapping protocol to the URL of the proxy passed to `requests.request`.
    timeout (`float`, *optional*, defaults to 10): How many seconds to wait for the server to send metadata before giving up.
    library_name (`str`, *optional*): The name of the library to which the object corresponds.
    library_version (`str`, *optional*): The version of the library.
    user_agent (`dict`, `str`, *optional*): The user-agent info in the form of a dictionary or a string.
    headers (`dict`, *optional*): Additional headers to be sent with the request.

Returns:
    A [`HfFileMetadata`] object containing metadata such as location, etag, size and commit_hash.
```

### HfFileMetadata

### huggingface_hub.HfFileMetadata

```python
Data structure containing information about a file versioned on the Hub.

Returned by [`get_hf_file_metadata`] based on a URL.

Args:
    commit_hash (`str`, *optional*): The commit_hash related to the file.
    etag (`str`, *optional*): Etag of the file on the server.
    location (`str`): Location where to download the file. Can be a Hub url or not (CDN).
    size (`int`, *optional*): Size of the file. In case of an LFS file, contains the size of the actual LFS file, not the pointer.
```

## Caching

The methods displayed above are designed to work with a caching system that prevents re-downloading files. The caching system was updated in v0.8.0 to become the central cache-system shared across libraries that depend on the Hub.

Read the [cache-system guide](../guides/manage-cache) for a detailed presentation of caching at HF.
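As a recap of this page, here is a short sketch combining the helpers documented above (the repo id is a placeholder):

```py
>>> from huggingface_hub import snapshot_download, hf_hub_url, get_hf_file_metadata

# Download only the JSON files of a repo snapshot
>>> local_folder = snapshot_download(repo_id="my-username/my-model", allow_patterns="*.json")

# Inspect the metadata of a single file without downloading it
>>> metadata = get_hf_file_metadata(hf_hub_url(repo_id="my-username/my-model", filename="config.json"))
>>> metadata.commit_hash, metadata.size
```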
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Serialization `huggingface_hub` provides helpers to save and load ML model weights in a standardized way. This part of the library is still under development and will be improved in future releases. The goal is to harmonize how weights are saved and loaded across the Hub, both to remove code duplication across libraries and to establish consistent conventions. ## DDUF file format DDUF is a file format designed for diffusion models. It allows saving all the information to run a model in a single file. This work is inspired by the [GGUF](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md) format. `huggingface_hub` provides helpers to save and load DDUF files, ensuring the file format is respected. <Tip warning={true}> This is a very early version of the parser. The API and implementation can evolve in the near future. The parser currently does very little validation. For more details about the file format, check out https://github.com/huggingface/huggingface.js/tree/main/packages/dduf. </Tip> ### How to write a DDUF file? Here is how to export a folder containing different parts of a diffusion model using [`export_folder_as_dduf`]: ```python # Export a folder as a DDUF file >>> from huggingface_hub import export_folder_as_dduf >>> export_folder_as_dduf("FLUX.1-dev.dduf", folder_path="path/to/FLUX.1-dev") ``` For more flexibility, you can use [`export_entries_as_dduf`] and pass a list of files to include in the final DDUF file: ```python # Export specific files from the local disk. >>> from huggingface_hub import export_entries_as_dduf >>> export_entries_as_dduf( ... dduf_path="stable-diffusion-v1-4-FP16.dduf", ... entries=[ # List entries to add to the DDUF file (here, only FP16 weights) ... ("model_index.json", "path/to/model_index.json"), ... ("vae/config.json", "path/to/vae/config.json"), ... ("vae/diffusion_pytorch_model.fp16.safetensors", "path/to/vae/diffusion_pytorch_model.fp16.safetensors"), ... ("text_encoder/config.json", "path/to/text_encoder/config.json"), ... ("text_encoder/model.fp16.safetensors", "path/to/text_encoder/model.fp16.safetensors"), ... # ... add more entries here ... ] ... ) ``` The `entries` parameter also supports passing an iterable of paths or bytes. This can prove useful if you have a loaded model and want to serialize it directly into a DDUF file instead of having to serialize each component to disk first and then as a DDUF file. Here is an example of how a `StableDiffusionPipeline` can be serialized as DDUF: ```python # Export state_dicts one by one from a loaded pipeline >>> from diffusers import DiffusionPipeline >>> from typing import Generator, Tuple >>> import safetensors.torch >>> from huggingface_hub import export_entries_as_dduf >>> pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") ... # ... do some work with the pipeline >>> def as_entries(pipe: DiffusionPipeline) -> Generator[Tuple[str, bytes], None, None]: ... # Build a generator that yields the entries to add to the DDUF file. ... # The first element of the tuple is the filename in the DDUF archive (must use UNIX separator!). The second element is the content of the file. ... # Entries will be evaluated lazily when the DDUF file is created (only 1 entry is loaded in memory at a time) ... yield "vae/config.json", pipe.vae.to_json_string().encode() ... 
yield "vae/diffusion_pytorch_model.safetensors", safetensors.torch.save(pipe.vae.state_dict()) ... yield "text_encoder/config.json", pipe.text_encoder.config.to_json_string().encode() ... yield "text_encoder/model.safetensors", safetensors.torch.save(pipe.text_encoder.state_dict()) ... # ... add more entries here >>> export_entries_as_dduf(dduf_path="stable-diffusion-v1-4.dduf", entries=as_entries(pipe)) ``` **Note:** in practice, `diffusers` provides a method to directly serialize a pipeline in a DDUF file. The snippet above is only meant as an example. ### How to read a DDUF file? ```python >>> import json >>> import safetensors.torch >>> from huggingface_hub import read_dduf_file # Read DDUF metadata >>> dduf_entries = read_dduf_file("FLUX.1-dev.dduf") # Returns a mapping filename <> DDUFEntry >>> dduf_entries["model_index.json"] DDUFEntry(filename='model_index.json', offset=66, length=587) # Load model index as JSON >>> json.loads(dduf_entries["model_index.json"].read_text()) {'_class_name': 'FluxPipeline', '_diffusers_version': '0.32.0.dev0', '_name_or_path': 'black-forest-labs/FLUX.1-dev', 'scheduler': ['diffusers', 'FlowMatchEulerDiscreteScheduler'], 'text_encoder': ['transformers', 'CLIPTextModel'], 'text_encoder_2': ['transformers', 'T5EncoderModel'], 'tokenizer': ['transformers', 'CLIPTokenizer'], 'tokenizer_2': ['transformers', 'T5TokenizerFast'], 'transformer': ['diffusers', 'FluxTransformer2DModel'], 'vae': ['diffusers', 'AutoencoderKL']} # Load VAE weights using safetensors >>> with dduf_entries["vae/diffusion_pytorch_model.safetensors"].as_mmap() as mm: ... state_dict = safetensors.torch.load(mm) ``` ### Helpers ### huggingface_hub.export_entries_as_dduf ```python Write a DDUF file from an iterable of entries. This is a lower-level helper than [`export_folder_as_dduf`] that allows more flexibility when serializing data. In particular, you don't need to save the data on disk before exporting it in the DDUF file. Args: dduf_path (`str` or `os.PathLike`): The path to the DDUF file to write. entries (`Iterable[Tuple[str, Union[str, Path, bytes]]]`): An iterable of entries to write in the DDUF file. Each entry is a tuple with the filename and the content. The filename should be the path to the file in the DDUF archive. The content can be a string or a pathlib.Path representing a path to a file on the local disk or directly the content as bytes. Raises: - [`DDUFExportError`]: If anything goes wrong during the export (e.g. invalid entry name, missing 'model_index.json', etc.). Example: ```python # Export specific files from the local disk. >>> from huggingface_hub import export_entries_as_dduf >>> export_entries_as_dduf( ... dduf_path="stable-diffusion-v1-4-FP16.dduf", ... entries=[ # List entries to add to the DDUF file (here, only FP16 weights) ... ("model_index.json", "path/to/model_index.json"), ... ("vae/config.json", "path/to/vae/config.json"), ... ("vae/diffusion_pytorch_model.fp16.safetensors", "path/to/vae/diffusion_pytorch_model.fp16.safetensors"), ... ("text_encoder/config.json", "path/to/text_encoder/config.json"), ... ("text_encoder/model.fp16.safetensors", "path/to/text_encoder/model.fp16.safetensors"), ... # ... add more entries here ... ] ... ) ``` ```python # Export state_dicts one by one from a loaded pipeline >>> from diffusers import DiffusionPipeline >>> from typing import Generator, Tuple >>> import safetensors.torch >>> from huggingface_hub import export_entries_as_dduf >>> pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") ... # ... 
do some work with the pipeline
>>> def as_entries(pipe: DiffusionPipeline) -> Generator[Tuple[str, bytes], None, None]:
...     # Build a generator that yields the entries to add to the DDUF file.
...     # The first element of the tuple is the filename in the DDUF archive (must use UNIX separator!). The second element is the content of the file.
...     # Entries will be evaluated lazily when the DDUF file is created (only 1 entry is loaded in memory at a time)
...     yield "vae/config.json", pipe.vae.to_json_string().encode()
...     yield "vae/diffusion_pytorch_model.safetensors", safetensors.torch.save(pipe.vae.state_dict())
...     yield "text_encoder/config.json", pipe.text_encoder.config.to_json_string().encode()
...     yield "text_encoder/model.safetensors", safetensors.torch.save(pipe.text_encoder.state_dict())
...     # ... add more entries here

>>> export_entries_as_dduf(dduf_path="stable-diffusion-v1-4.dduf", entries=as_entries(pipe))
```
```

### huggingface_hub.export_folder_as_dduf

```python
Export a folder as a DDUF file.

Uses [`export_entries_as_dduf`] under the hood.

Args:
    dduf_path (`str` or `os.PathLike`): The path to the DDUF file to write.
    folder_path (`str` or `os.PathLike`): The path to the folder containing the diffusion model.

Example:
```python
>>> from huggingface_hub import export_folder_as_dduf

>>> export_folder_as_dduf(dduf_path="FLUX.1-dev.dduf", folder_path="path/to/FLUX.1-dev")
```
```

### huggingface_hub.read_dduf_file

```python
Read a DDUF file and return a dictionary of entries.

Only the metadata is read, the data is not loaded in memory.

Args:
    dduf_path (`str` or `os.PathLike`): The path to the DDUF file to read.

Returns:
    `Dict[str, DDUFEntry]`: A dictionary of [`DDUFEntry`] indexed by filename.

Raises:
    - [`DDUFCorruptedFileError`]: If the DDUF file is corrupted (i.e. doesn't follow the DDUF format).

Example:
```python
>>> import json
>>> import safetensors.torch
>>> from huggingface_hub import read_dduf_file

# Read DDUF metadata
>>> dduf_entries = read_dduf_file("FLUX.1-dev.dduf")

# Returns a mapping filename <> DDUFEntry
>>> dduf_entries["model_index.json"]
DDUFEntry(filename='model_index.json', offset=66, length=587)

# Load model index as JSON
>>> json.loads(dduf_entries["model_index.json"].read_text())
{'_class_name': 'FluxPipeline', '_diffusers_version': '0.32.0.dev0', '_name_or_path': 'black-forest-labs/FLUX.1-dev', ...

# Load VAE weights using safetensors
>>> with dduf_entries["vae/diffusion_pytorch_model.safetensors"].as_mmap() as mm:
...     state_dict = safetensors.torch.load(mm)
```
```

### huggingface_hub.DDUFEntry

```python
Object representing a file entry in a DDUF file.

See [`read_dduf_file`] for how to read a DDUF file.

Attributes:
    filename (str): The name of the file in the DDUF archive.
    offset (int): The offset of the file in the DDUF archive.
    length (int): The length of the file in the DDUF archive.
    dduf_path (str): The path to the DDUF archive (for internal use).
```

### Errors

### huggingface_hub.errors.DDUFError

```python
Base exception for errors related to the DDUF format.
```

### huggingface_hub.errors.DDUFCorruptedFileError

```python
Exception thrown when the DDUF file is corrupted.
```

### huggingface_hub.errors.DDUFExportError

```python
Base exception for errors during DDUF export.
```

### huggingface_hub.errors.DDUFInvalidEntryNameError

```python
Exception thrown when the entry name is invalid.
```

## Saving tensors

The main helper of the `serialization` module takes a torch `nn.Module` as input and saves it to disk.
It handles the logic to save shared tensors (see [safetensors explanation](https://huggingface.co/docs/safetensors/torch_shared_tensors)) as well as logic to split the state dictionary into shards, using [`split_torch_state_dict_into_shards`] under the hood. At the moment, only the `torch` framework is supported.

If you want to save a state dictionary (e.g. a mapping between layer names and related tensors) instead of a `nn.Module`, you can use [`save_torch_state_dict`] which provides the same features. This is useful for example if you want to apply custom logic to the state dict before saving it.

### save_torch_model

### huggingface_hub.save_torch_model

```python
Saves a given torch model to disk, handling sharding and shared tensors issues.

See also [`save_torch_state_dict`] to save a state dict with more flexibility. For more information about tensor sharing, check out [this guide](https://huggingface.co/docs/safetensors/torch_shared_tensors).

The model state dictionary is split into shards so that each shard is smaller than a given size. The shards are saved in the `save_directory` with the given `filename_pattern`. If the model is too big to fit in a single shard, an index file is saved in the `save_directory` to indicate where each tensor is saved. This helper uses [`split_torch_state_dict_into_shards`] under the hood. If `safe_serialization` is `True`, the shards are saved as safetensors (the default). Otherwise, the shards are saved as pickle.

Before saving the model, the `save_directory` is cleaned of any previous shard files.

<Tip warning={true}>

If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a size greater than `max_shard_size`.

</Tip>

<Tip warning={true}>

If your model is a `transformers.PreTrainedModel`, you should pass `model._tied_weights_keys` as `shared_tensors_to_discard` to properly handle shared tensors saving. This ensures the correct duplicate tensors are discarded during saving.

</Tip>

Args:
    model (`torch.nn.Module`): The model to save on disk.
    save_directory (`str` or `Path`): The directory in which the model will be saved.
    filename_pattern (`str`, *optional*): The pattern to generate the file names in which the model will be saved. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`. Defaults to `"model{suffix}.safetensors"` or `pytorch_model{suffix}.bin` depending on the `safe_serialization` parameter.
    force_contiguous (`boolean`, *optional*): Whether to force the state_dict to be saved as contiguous tensors. This has no effect on the correctness of the model, but it could potentially change performance if the layout of the tensor was chosen specifically for that reason. Defaults to `True`.
    max_shard_size (`int` or `str`, *optional*): The maximum size of each shard, in bytes. Defaults to 5GB.
    metadata (`Dict[str, str]`, *optional*): Extra information to save along with the model. Some metadata will be added for each dropped tensor. This information will not be enough to recover the entire shared structure but might help understanding things.
    safe_serialization (`bool`, *optional*): Whether to save as safetensors, which is the default behavior. If `False`, the shards are saved as pickle. Safe serialization is recommended for security reasons. Saving as pickle is deprecated and will be removed in a future version.
    is_main_process (`bool`, *optional*): Whether the process calling this is the main process or not.
Useful in distributed training (e.g. on TPUs) where this function needs to be called from all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions. Defaults to True.
    shared_tensors_to_discard (`List[str]`, *optional*): List of tensor names to drop when saving shared tensors. If not provided and shared tensors are detected, it will drop the first name alphabetically.

Example:
```py
>>> from huggingface_hub import save_torch_model
>>> model = ... # A PyTorch model

# Save state dict to "path/to/folder". The model will be split into shards of 5GB each and saved as safetensors.
>>> save_torch_model(model, "path/to/folder")

# Load model back
>>> from huggingface_hub import load_torch_model
>>> load_torch_model(model, "path/to/folder")
```
```

### save_torch_state_dict

### huggingface_hub.save_torch_state_dict

```python
Save a model state dictionary to the disk, handling sharding and shared tensors issues.

See also [`save_torch_model`] to directly save a PyTorch model.

For more information about tensor sharing, check out [this guide](https://huggingface.co/docs/safetensors/torch_shared_tensors).

The model state dictionary is split into shards so that each shard is smaller than a given size. The shards are saved in the `save_directory` with the given `filename_pattern`. If the model is too big to fit in a single shard, an index file is saved in the `save_directory` to indicate where each tensor is saved. This helper uses [`split_torch_state_dict_into_shards`] under the hood. If `safe_serialization` is `True`, the shards are saved as safetensors (the default). Otherwise, the shards are saved as pickle.

Before saving the model, the `save_directory` is cleaned of any previous shard files.

<Tip warning={true}>

If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a size greater than `max_shard_size`.

</Tip>

<Tip warning={true}>

If your model is a `transformers.PreTrainedModel`, you should pass `model._tied_weights_keys` as `shared_tensors_to_discard` to properly handle shared tensors saving. This ensures the correct duplicate tensors are discarded during saving.

</Tip>

Args:
    state_dict (`Dict[str, torch.Tensor]`): The state dictionary to save.
    save_directory (`str` or `Path`): The directory in which the model will be saved.
    filename_pattern (`str`, *optional*): The pattern to generate the file names in which the model will be saved. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`. Defaults to `"model{suffix}.safetensors"` or `pytorch_model{suffix}.bin` depending on the `safe_serialization` parameter.
    force_contiguous (`boolean`, *optional*): Whether to force the state_dict to be saved as contiguous tensors. This has no effect on the correctness of the model, but it could potentially change performance if the layout of the tensor was chosen specifically for that reason. Defaults to `True`.
    max_shard_size (`int` or `str`, *optional*): The maximum size of each shard, in bytes. Defaults to 5GB.
    metadata (`Dict[str, str]`, *optional*): Extra information to save along with the model. Some metadata will be added for each dropped tensor. This information will not be enough to recover the entire shared structure but might help understanding things.
    safe_serialization (`bool`, *optional*): Whether to save as safetensors, which is the default behavior. If `False`, the shards are saved as pickle.
Safe serialization is recommended for security reasons. Saving as pickle is deprecated and will be removed in a future version.
    is_main_process (`bool`, *optional*): Whether the process calling this is the main process or not. Useful in distributed training (e.g. on TPUs) where this function needs to be called from all processes. In this case, set `is_main_process=True` only on the main process to avoid race conditions. Defaults to True.
    shared_tensors_to_discard (`List[str]`, *optional*): List of tensor names to drop when saving shared tensors. If not provided and shared tensors are detected, it will drop the first name alphabetically.

Example:
```py
>>> from huggingface_hub import save_torch_state_dict
>>> model = ... # A PyTorch model

# Save state dict to "path/to/folder". The model will be split into shards of 5GB each and saved as safetensors.
>>> state_dict = model.state_dict()
>>> save_torch_state_dict(state_dict, "path/to/folder")
```
```

The `serialization` module also contains low-level helpers to split a state dictionary into several shards, while creating a proper index in the process. These helpers are available for `torch` and `tensorflow` tensors and are designed to be easily extended to any other ML frameworks.

### split_tf_state_dict_into_shards

### huggingface_hub.split_tf_state_dict_into_shards

```python
Split a model state dictionary in shards so that each shard is smaller than a given size.

The shards are determined by iterating through the `state_dict` in the order of its keys. There is no optimization made to make each shard as close as possible to the maximum size passed. For example, if the limit is 10GB and we have tensors of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB].

<Tip warning={true}>

If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a size greater than `max_shard_size`.

</Tip>

Args:
    state_dict (`Dict[str, Tensor]`): The state dictionary to save.
    filename_pattern (`str`, *optional*): The pattern to generate the file names in which the model will be saved. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`. Defaults to `"tf_model{suffix}.h5"`.
    max_shard_size (`int` or `str`, *optional*): The maximum size of each shard, in bytes. Defaults to 5GB.

Returns:
    [`StateDictSplit`]: A `StateDictSplit` object containing the shards and the index to retrieve them.
```

### split_torch_state_dict_into_shards

### huggingface_hub.split_torch_state_dict_into_shards

```python
Split a model state dictionary in shards so that each shard is smaller than a given size.

The shards are determined by iterating through the `state_dict` in the order of its keys. There is no optimization made to make each shard as close as possible to the maximum size passed. For example, if the limit is 10GB and we have tensors of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB].

<Tip>

To save a model state dictionary to the disk, see [`save_torch_state_dict`]. This helper uses `split_torch_state_dict_into_shards` under the hood.

</Tip>

<Tip warning={true}>

If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a size greater than `max_shard_size`.

</Tip>

Args:
    state_dict (`Dict[str, torch.Tensor]`): The state dictionary to save.
    filename_pattern (`str`, *optional*): The pattern to generate the file names in which the model will be saved. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`. Defaults to `"model{suffix}.safetensors"`.
    max_shard_size (`int` or `str`, *optional*): The maximum size of each shard, in bytes. Defaults to 5GB.

Returns:
    [`StateDictSplit`]: A `StateDictSplit` object containing the shards and the index to retrieve them.

Example:
```py
>>> import json
>>> import os
>>> from typing import Dict
>>> import torch
>>> from safetensors.torch import save_file as safe_save_file
>>> from huggingface_hub import split_torch_state_dict_into_shards

>>> def save_state_dict(state_dict: Dict[str, torch.Tensor], save_directory: str):
...     state_dict_split = split_torch_state_dict_into_shards(state_dict)
...     for filename, tensors in state_dict_split.filename_to_tensors.items():
...         shard = {tensor: state_dict[tensor] for tensor in tensors}
...         safe_save_file(
...             shard,
...             os.path.join(save_directory, filename),
...             metadata={"format": "pt"},
...         )
...     if state_dict_split.is_sharded:
...         index = {
...             "metadata": state_dict_split.metadata,
...             "weight_map": state_dict_split.tensor_to_filename,
...         }
...         with open(os.path.join(save_directory, "model.safetensors.index.json"), "w") as f:
...             f.write(json.dumps(index, indent=2))
```
```

### split_state_dict_into_shards_factory

This is the underlying factory from which each framework-specific helper is derived. In practice, you are not expected to use this factory directly unless you need to adapt it to a framework that is not yet supported. If that is the case, please let us know by [opening a new issue](https://github.com/huggingface/huggingface_hub/issues/new) on the `huggingface_hub` repo.

### huggingface_hub.split_state_dict_into_shards_factory

```python
Split a model state dictionary in shards so that each shard is smaller than a given size.

The shards are determined by iterating through the `state_dict` in the order of its keys. There is no optimization made to make each shard as close as possible to the maximum size passed. For example, if the limit is 10GB and we have tensors of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB].

<Tip warning={true}>

If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a size greater than `max_shard_size`.

</Tip>

Args:
    state_dict (`Dict[str, Tensor]`): The state dictionary to save.
    get_storage_size (`Callable[[Tensor], int]`): A function that returns the size of a tensor when saved on disk in bytes.
    get_storage_id (`Callable[[Tensor], Optional[Any]]`, *optional*): A function that returns a unique identifier to a tensor storage. Multiple different tensors can share the same underlying storage. This identifier is guaranteed to be unique and constant for this tensor's storage during its lifetime. Two tensor storages with non-overlapping lifetimes may have the same id.
    filename_pattern (`str`, *optional*): The pattern to generate the file names in which the model will be saved. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`.
    max_shard_size (`int` or `str`, *optional*): The maximum size of each shard, in bytes. Defaults to 5GB.

Returns:
    [`StateDictSplit`]: A `StateDictSplit` object containing the shards and the index to retrieve them.
```

## Loading tensors

The loading helpers support both single-file and sharded checkpoints in either safetensors or pickle format. [`load_torch_model`] takes a `nn.Module` and a checkpoint path (either a single file or a directory) as input and loads the weights into the model.

### load_torch_model

### huggingface_hub.load_torch_model

```python
Load a checkpoint into a model, handling both sharded and non-sharded checkpoints.

Args:
    model (`torch.nn.Module`): The model in which to load the checkpoint.
    checkpoint_path (`str` or `os.PathLike`): Path to either the checkpoint file or directory containing the checkpoint(s).
    strict (`bool`, *optional*, defaults to `False`): Whether to strictly enforce that the keys in the model state dict match the keys in the checkpoint.
    safe (`bool`, *optional*, defaults to `True`): If `safe` is True, the safetensors files will be loaded. If `safe` is False, the function will first attempt to load safetensors files if they are available, otherwise it will fall back to loading pickle files. The `filename_pattern` parameter takes precedence over the `safe` parameter.
    weights_only (`bool`, *optional*, defaults to `False`): If True, only loads the model weights without optimizer states and other metadata. Only supported in PyTorch >= 1.13.
    map_location (`str` or `torch.device`, *optional*): A `torch.device` object, string or a dict specifying how to remap storage locations. It indicates the location where all tensors should be loaded.
    mmap (`bool`, *optional*, defaults to `False`): Whether to use memory-mapped file loading. Memory mapping can improve loading performance for large models in PyTorch >= 2.1.0 with zipfile-based checkpoints.
    filename_pattern (`str`, *optional*): The pattern to look for the index file. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`. Defaults to `"model{suffix}.safetensors"`.

Returns:
    `NamedTuple`: A named tuple with `missing_keys` and `unexpected_keys` fields.
        - `missing_keys` is a list of str containing the missing keys, i.e. keys that are in the model but not in the checkpoint.
        - `unexpected_keys` is a list of str containing the unexpected keys, i.e. keys that are in the checkpoint but not in the model.

Raises:
    [`FileNotFoundError`](https://docs.python.org/3/library/exceptions.html#FileNotFoundError)
        If the checkpoint file or directory does not exist.
    [`ImportError`](https://docs.python.org/3/library/exceptions.html#ImportError)
        If safetensors or torch is not installed when trying to load a .safetensors file or a PyTorch checkpoint respectively.
    [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
        If the checkpoint path is invalid or if the checkpoint format cannot be determined.

Example:
```python
>>> from huggingface_hub import load_torch_model
>>> model = ... # A PyTorch model
>>> load_torch_model(model, "path/to/checkpoint")
```
```

### load_state_dict_from_file

### huggingface_hub.load_state_dict_from_file

```python
Loads a checkpoint file, handling both safetensors and pickle checkpoint formats.

Args:
    checkpoint_file (`str` or `os.PathLike`): Path to the checkpoint file to load. Can be either a safetensors or pickle (`.bin`) checkpoint.
    map_location (`str` or `torch.device`, *optional*): A `torch.device` object, string or a dict specifying how to remap storage locations. It indicates the location where all tensors should be loaded.
weights_only (`bool`, *optional*, defaults to `False`): If True, only loads the model weights without optimizer states and other metadata. Only supported for pickle (`.bin`) checkpoints with PyTorch >= 1.13. Has no effect when loading safetensors files. mmap (`bool`, *optional*, defaults to `False`): Whether to use memory-mapped file loading. Memory mapping can improve loading performance for large models in PyTorch >= 2.1.0 with zipfile-based checkpoints. Has no effect when loading safetensors files, as the `safetensors` library uses memory mapping by default. Returns: `Union[Dict[str, "torch.Tensor"], Any]`: The loaded checkpoint. - For safetensors files: always returns a dictionary mapping parameter names to tensors. - For pickle files: returns any Python object that was pickled (commonly a state dict, but could be an entire model, optimizer state, or any other Python object). Raises: [`FileNotFoundError`](https://docs.python.org/3/library/exceptions.html#FileNotFoundError) If the checkpoint file does not exist. [`ImportError`](https://docs.python.org/3/library/exceptions.html#ImportError) If safetensors or torch is not installed when trying to load a .safetensors file or a PyTorch checkpoint respectively. [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) If the checkpoint file format is invalid or if git-lfs files are not properly downloaded. [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) If the checkpoint file path is empty or invalid. Example: ```python >>> from huggingface_hub import load_state_dict_from_file # Load a PyTorch checkpoint >>> state_dict = load_state_dict_from_file("path/to/model.bin", map_location="cpu") >>> model.load_state_dict(state_dict) # Load a safetensors checkpoint >>> state_dict = load_state_dict_from_file("path/to/model.safetensors") >>> model.load_state_dict(state_dict) ``` ``` ## Tensors helpers ### get_torch_storage_id ### huggingface_hub.get_torch_storage_id ```python Return unique identifier to a tensor storage. Multiple different tensors can share the same underlying storage. This identifier is guaranteed to be unique and constant for this tensor's storage during its lifetime. Two tensor storages with non-overlapping lifetimes may have the same id. In the case of meta tensors, we return None since we can't tell if they share the same storage. Taken from https://github.com/huggingface/transformers/blob/1ecf5f7c982d761b4daaa96719d162c324187c64/src/transformers/pytorch_utils.py#L278. ``` ### get_torch_storage_size ### huggingface_hub.get_torch_storage_size ```python Taken from https://github.com/huggingface/safetensors/blob/08db34094e9e59e2f9218f2df133b7b4aaff5a99/bindings/python/py_src/safetensors/torch.py#L31C1-L41C59 ```
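To illustrate the two storage helpers, here is a short sketch (assuming `torch` is installed) showing that two views of the same tensor share a storage id, which is how shared tensors are detected before sharding:

```py
>>> import torch
>>> from huggingface_hub import get_torch_storage_id, get_torch_storage_size

>>> weight = torch.zeros(10, 10)
>>> tied_weight = weight.view(100)  # a view sharing the same underlying storage

>>> get_torch_storage_id(weight) == get_torch_storage_id(tied_weight)
True
>>> get_torch_storage_size(weight)  # size in bytes of the shared storage (100 float32 values)
400
```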
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# TensorBoard logger

TensorBoard is a visualization toolkit for machine learning experimentation. TensorBoard allows tracking and visualizing metrics such as loss and accuracy, visualizing the model graph, viewing histograms, displaying images and much more. TensorBoard is well integrated with the Hugging Face Hub. The Hub automatically detects TensorBoard traces (such as `tfevents` files) when they are pushed to the Hub and starts an instance to visualize them. To get more information about TensorBoard integration on the Hub, check out [this guide](https://huggingface.co/docs/hub/tensorboard).

To benefit from this integration, `huggingface_hub` provides a custom logger to push logs to the Hub. It works as a drop-in replacement for [SummaryWriter](https://tensorboardx.readthedocs.io/en/latest/tensorboard.html) with no extra code needed. Traces are still saved locally and a background job pushes them to the Hub at regular intervals.

## HFSummaryWriter

### HFSummaryWriter

```python
Wrapper around the tensorboard's `SummaryWriter` to push training logs to the Hub.

Data is logged locally and then pushed to the Hub asynchronously. Pushing data to the Hub is done in a separate thread to avoid blocking the training script. In particular, if the upload fails for any reason (e.g. a connection issue), the main script will not be interrupted. Data is automatically pushed to the Hub every `commit_every` minutes (defaults to every 5 minutes).

<Tip warning={true}>

`HFSummaryWriter` is experimental. Its API is subject to change in the future without prior notice.

</Tip>

Args:
    repo_id (`str`): The id of the repo to which the logs will be pushed.
    logdir (`str`, *optional*): The directory where the logs will be written. If not specified, a local directory will be created by the underlying `SummaryWriter` object.
    commit_every (`int` or `float`, *optional*): The frequency (in minutes) at which the logs will be pushed to the Hub. Defaults to 5 minutes.
    squash_history (`bool`, *optional*): Whether to squash the history of the repo after each commit. Defaults to `False`. Squashing commits is useful to avoid degraded performance on the repo when it grows too large.
    repo_type (`str`, *optional*): The type of the repo to which the logs will be pushed. Defaults to "model".
    repo_revision (`str`, *optional*): The revision of the repo to which the logs will be pushed. Defaults to "main".
    repo_private (`bool`, *optional*): Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
    path_in_repo (`str`, *optional*): The path to the folder in the repo where the logs will be pushed. Defaults to "tensorboard/".
    repo_allow_patterns (`List[str]` or `str`, *optional*): A list of patterns to include in the upload. Defaults to `"*.tfevents.*"`. Check out the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder) for more details.
    repo_ignore_patterns (`List[str]` or `str`, *optional*): A list of patterns to exclude in the upload. Check out the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder) for more details.
    token (`str`, *optional*): Authentication token. Will default to the stored token.
See https://huggingface.co/settings/token for more details kwargs: Additional keyword arguments passed to `SummaryWriter`. Examples: ```diff # Taken from https://pytorch.org/docs/stable/tensorboard.html - from torch.utils.tensorboard import SummaryWriter + from huggingface_hub import HFSummaryWriter import numpy as np - writer = SummaryWriter() + writer = HFSummaryWriter(repo_id="username/my-trained-model") for n_iter in range(100): writer.add_scalar('Loss/train', np.random.random(), n_iter) writer.add_scalar('Loss/test', np.random.random(), n_iter) writer.add_scalar('Accuracy/train', np.random.random(), n_iter) writer.add_scalar('Accuracy/test', np.random.random(), n_iter) ``` ```py >>> from huggingface_hub import HFSummaryWriter # Logs are automatically pushed every 15 minutes (5 by default) + when exiting the context manager >>> with HFSummaryWriter(repo_id="test_hf_logger", commit_every=15) as logger: ... logger.add_scalar("a", 1) ... logger.add_scalar("b", 2) ``` ```
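To make the options above concrete, here is a hedged sketch of a writer configured with a custom push frequency and repo layout; the repo id and local log directory are placeholder values, not from the official docs.

```py
>>> from huggingface_hub import HFSummaryWriter

>>> logger = HFSummaryWriter(
...     repo_id="username/my-experiments",  # placeholder repo id
...     logdir="runs/exp1",                 # local folder where tfevents files are written
...     commit_every=10,                    # push to the Hub every 10 minutes instead of 5
...     path_in_repo="tensorboard/exp1/",   # subfolder of the repo receiving the traces
...     squash_history=True,                # squash commits to keep the repo small
... )
>>> logger.add_scalar("loss", 0.42, global_step=1)
```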
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/tensorboard.md
.md
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Inference Inference is the process of using a trained model to make predictions on new data. As this process can be compute-intensive, running on a dedicated server can be an interesting option. The `huggingface_hub` library provides an easy way to call a service that runs inference for hosted models. There are several services you can connect to: - [Inference API](https://huggingface.co/docs/api-inference/index): a service that allows you to run accelerated inference on Hugging Face's infrastructure for free. This service is a fast way to get started, test different models, and prototype AI products. - [Inference Endpoints](https://huggingface.co/inference-endpoints): a product to easily deploy models to production. Inference is run by Hugging Face in a dedicated, fully managed infrastructure on a cloud provider of your choice. These services can be called with the [`InferenceClient`] object. Please refer to [this guide](../guides/inference) for more information on how to use it. ## Inference Client ### InferenceClient ```python Initialize a new Inference Client. [`InferenceClient`] aims to provide a unified experience to perform inference. The client can be used seamlessly with either the (free) Inference API or self-hosted Inference Endpoints. Args: model (`str`, `optional`): The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `meta-llama/Meta-Llama-3-8B-Instruct` or a URL to a deployed Inference Endpoint. Defaults to None, in which case a recommended model is automatically selected for the task. Note: for better compatibility with OpenAI's client, `model` has been aliased as `base_url`. Those 2 arguments are mutually exclusive. If using `base_url` for chat completion, the `/chat/completions` suffix path will be appended to the base URL (see the [TGI Messages API](https://huggingface.co/docs/text-generation-inference/en/messages_api) documentation for details). When passing a URL as `model`, the client will not append any suffix path to it. token (`str` or `bool`, *optional*): Hugging Face token. Will default to the locally saved token if not provided. Pass `token=False` if you don't want to send your token to the server. Note: for better compatibility with OpenAI's client, `token` has been aliased as `api_key`. Those 2 arguments are mutually exclusive and have the exact same behavior. timeout (`float`, `optional`): The maximum number of seconds to wait for a response from the server. Loading a new model in Inference API can take up to several minutes. Defaults to None, meaning it will loop until the server is available. headers (`Dict[str, str]`, `optional`): Additional headers to send to the server. By default only the authorization and user-agent headers are sent. Values in this dictionary will override the default values. cookies (`Dict[str, str]`, `optional`): Additional cookies to send to the server. proxies (`Any`, `optional`): Proxies to use for the request. base_url (`str`, `optional`): Base URL to run inference. This is a duplicated argument from `model` to make [`InferenceClient`] follow the same pattern as `openai.OpenAI` client. Cannot be used if `model` is set. Defaults to None. api_key (`str`, `optional`): Token to use for authentication. This is a duplicated argument from `token` to make [`InferenceClient`] follow the same pattern as `openai.OpenAI` client. 
Cannot be used if `token` is set. Defaults to None. ``` ## Async Inference Client An async version of the client is also provided, based on `asyncio` and `aiohttp`. To use it, you can either install `aiohttp` directly or use the `[inference]` extra: ```sh pip install --upgrade huggingface_hub[inference] # or # pip install aiohttp ``` ### AsyncInferenceClient ```python Initialize a new Inference Client. [`InferenceClient`] aims to provide a unified experience to perform inference. The client can be used seamlessly with either the (free) Inference API or self-hosted Inference Endpoints. Args: model (`str`, `optional`): The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `meta-llama/Meta-Llama-3-8B-Instruct` or a URL to a deployed Inference Endpoint. Defaults to None, in which case a recommended model is automatically selected for the task. Note: for better compatibility with OpenAI's client, `model` has been aliased as `base_url`. Those 2 arguments are mutually exclusive. If using `base_url` for chat completion, the `/chat/completions` suffix path will be appended to the base URL (see the [TGI Messages API](https://huggingface.co/docs/text-generation-inference/en/messages_api) documentation for details). When passing a URL as `model`, the client will not append any suffix path to it. token (`str` or `bool`, *optional*): Hugging Face token. Will default to the locally saved token if not provided. Pass `token=False` if you don't want to send your token to the server. Note: for better compatibility with OpenAI's client, `token` has been aliased as `api_key`. Those 2 arguments are mutually exclusive and have the exact same behavior. timeout (`float`, `optional`): The maximum number of seconds to wait for a response from the server. Loading a new model in Inference API can take up to several minutes. Defaults to None, meaning it will loop until the server is available. headers (`Dict[str, str]`, `optional`): Additional headers to send to the server. By default only the authorization and user-agent headers are sent. Values in this dictionary will override the default values. cookies (`Dict[str, str]`, `optional`): Additional cookies to send to the server. trust_env (`bool`, `optional`): Trust environment settings for proxy configuration if the parameter is `True` (`False` by default). proxies (`Any`, `optional`): Proxies to use for the request. base_url (`str`, `optional`): Base URL to run inference. This is a duplicated argument from `model` to make [`InferenceClient`] follow the same pattern as `openai.OpenAI` client. Cannot be used if `model` is set. Defaults to None. api_key (`str`, `optional`): Token to use for authentication. This is a duplicated argument from `token` to make [`InferenceClient`] follow the same pattern as `openai.OpenAI` client. Cannot be used if `token` is set. Defaults to None. ``` ## InferenceTimeoutError ### InferenceTimeoutError ```python Error raised when a model is unavailable or the request times out. ``` ### ModelStatus ### huggingface_hub.inference._common.ModelStatus ```python This dataclass represents the model status in the Hugging Face Inference API. Args: loaded (`bool`): If the model is currently loaded into Hugging Face's Inference API. Models are loaded on-demand, leading to the user's first request taking longer. If a model is loaded, you can be assured that it is in a healthy state. state (`str`): The current state of the model. This can be 'Loaded', 'Loadable', 'TooBig'. 
If a model's state is 'Loadable', it's not too big and has a supported backend. Loadable models are automatically loaded when the user first requests inference on the endpoint. This means it is transparent for the user to load a model, except that the first call takes longer to complete. compute_type (`Dict`): Information about the compute resource the model is using or will use, such as 'gpu' type and number of replicas. framework (`str`): The name of the framework that the model was built with, such as 'transformers' or 'text-generation-inference'. ``` ## InferenceAPI [`InferenceAPI`] is the legacy way to call the Inference API. The interface is simpler and requires knowing the input parameters and output format for each task. It also lacks the ability to connect to other services like Inference Endpoints or AWS SageMaker. [`InferenceAPI`] will soon be deprecated, so we recommend using [`InferenceClient`] whenever possible. Check out [this guide](../guides/inference#legacy-inferenceapi-client) to learn how to switch from [`InferenceAPI`] to [`InferenceClient`] in your scripts. ### InferenceApi ```python Client to configure requests and make calls to the Hugging Face Inference API. Example: ```python >>> from huggingface_hub.inference_api import InferenceApi >>> # Mask-fill example >>> inference = InferenceApi("bert-base-uncased") >>> inference(inputs="The goal of life is [MASK].") [{'sequence': 'the goal of life is life.', 'score': 0.10933292657136917, 'token': 2166, 'token_str': 'life'}] >>> # Question Answering example >>> inference = InferenceApi("deepset/roberta-base-squad2") >>> inputs = { ... "question": "What's my name?", ... "context": "My name is Clara and I live in Berkeley.", ... } >>> inference(inputs) {'score': 0.9326569437980652, 'start': 11, 'end': 16, 'answer': 'Clara'} >>> # Zero-shot example >>> inference = InferenceApi("typeform/distilbert-base-uncased-mnli") >>> inputs = "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!" >>> params = {"candidate_labels": ["refund", "legal", "faq"]} >>> inference(inputs, params) {'sequence': 'Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!', 'labels': ['refund', 'faq', 'legal'], 'scores': [0.9378499388694763, 0.04914155602455139, 0.013008488342165947]} >>> # Overriding configured task >>> inference = InferenceApi("bert-base-uncased", task="feature-extraction") >>> # Text-to-image >>> inference = InferenceApi("stabilityai/stable-diffusion-2-1") >>> inference("cat") <PIL.PngImagePlugin.PngImageFile image (...)> >>> # Return as raw response to parse the output yourself >>> inference = InferenceApi("mio/amadeus") >>> response = inference("hello world", raw_response=True) >>> response.headers {"Content-Type": "audio/flac", ...} >>> response.content # raw bytes from server b'(...)' ``` ``` - __init__ - __call__ - all
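As a migration aid, here is a hedged sketch of the mask-fill example above rewritten with [`InferenceClient`] and its async counterpart; the model id mirrors the legacy example, and the exact output format may vary.

```py
>>> from huggingface_hub import InferenceClient

>>> client = InferenceClient()
>>> client.fill_mask("The goal of life is [MASK].", model="bert-base-uncased")

>>> # Same call with the async client (requires `aiohttp`)
>>> import asyncio
>>> from huggingface_hub import AsyncInferenceClient

>>> async def main():
...     client = AsyncInferenceClient()
...     return await client.fill_mask("The goal of life is [MASK].", model="bert-base-uncased")

>>> asyncio.run(main())
```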
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_client.md
.md
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Filesystem API The `HfFileSystem` class provides a pythonic file interface to the Hugging Face Hub based on [`fsspec`](https://filesystem-spec.readthedocs.io/en/latest/). ## HfFileSystem `HfFileSystem` is based on [fsspec](https://filesystem-spec.readthedocs.io/en/latest/), so it is compatible with most of the APIs that it offers. For more details, check out [our guide](../guides/hf_file_system) and fsspec's [API Reference](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem). ### HfFileSystem - __init__ - all
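As a quick orientation, here is a hedged sketch of typical fsspec-style usage with `HfFileSystem`; the repo id and file name are placeholders.

```py
>>> from huggingface_hub import HfFileSystem

>>> fs = HfFileSystem()

>>> # List files in a repo (dataset repos are prefixed with "datasets/")
>>> fs.ls("datasets/username/my-dataset", detail=False)

>>> # Read a remote file as if it were local
>>> with fs.open("datasets/username/my-dataset/data.csv", "r") as f:
...     data = f.read()
```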
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/hf_file_system.md
.md
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Managing local and online repositories The `Repository` class is a helper class that wraps `git` and `git-lfs` commands. It provides tooling adapted to managing repositories, which can be very large. It was historically the recommended tool whenever a `git` operation was involved or when collaboration around the repository was a point of focus, but it is now deprecated in favor of the HTTP-based alternatives in [`HfApi`] (see the warning below). ## The Repository class ### Repository ```python Helper class to wrap the git and git-lfs commands. The aim is to facilitate interacting with huggingface.co hosted model or dataset repos, though not a lot here (if any) is actually specific to huggingface.co. <Tip warning={true}> [`Repository`] is deprecated in favor of the http-based alternatives implemented in [`HfApi`]. Given its large adoption in legacy code, the complete removal of [`Repository`] will only happen in release `v1.0`. For more details, please read https://huggingface.co/docs/huggingface_hub/concepts/git_vs_http. </Tip> ``` - __init__ - current_branch - all ## Helper methods ### huggingface_hub.repository.is_git_repo ```python Check if the folder is the root or part of a git repository. Args: folder (`str`): The folder in which to run the command. Returns: `bool`: `True` if the folder is the root or part of a git repository, `False` otherwise. ``` ### huggingface_hub.repository.is_local_clone ```python Check if the folder is a local clone of the `remote_url`. Args: folder (`str` or `Path`): The folder in which to run the command. remote_url (`str`): The url of a git repository. Returns: `bool`: `True` if the repository is a local clone of the remote repository specified, `False` otherwise. ``` ### huggingface_hub.repository.is_tracked_with_lfs ```python Check if the file passed is tracked with git-lfs. Args: filename (`str` or `Path`): The filename to check. Returns: `bool`: `True` if the file passed is tracked with git-lfs, `False` otherwise. ``` ### huggingface_hub.repository.is_git_ignored ```python Check if file is git-ignored. Supports nested .gitignore files. Args: filename (`str` or `Path`): The filename to check. Returns: `bool`: `True` if the file passed is ignored by `git`, `False` otherwise. ``` ### huggingface_hub.repository.files_to_be_staged ```python Returns a list of filenames that are to be staged. Args: pattern (`str` or `Path`): The pattern of filenames to check. Put `.` to get all files. folder (`str` or `Path`): The folder in which to run the command. Returns: `List[str]`: List of files that are to be staged. ``` ### huggingface_hub.repository.is_tracked_upstream ```python Check if the current checked-out branch is tracked upstream. Args: folder (`str` or `Path`): The folder in which to run the command. Returns: `bool`: `True` if the current checked-out branch is tracked upstream, `False` otherwise. ``` ### huggingface_hub.repository.commits_to_push ```python Check the number of commits that would be pushed upstream. Args: folder (`str` or `Path`): The folder in which to run the command. upstream (`str`, *optional*): The name of the upstream repository with which the comparison should be made. Returns: `int`: Number of commits that would be pushed upstream were a `git push` to proceed. 
``` ## Following asynchronous commands The `Repository` utility offers several methods which can be launched asynchronously: - `git_push` - `git_pull` - `push_to_hub` - The `commit` context manager See below for utilities to manage such asynchronous methods. ### Repository ```python Helper class to wrap the git and git-lfs commands. The aim is to facilitate interacting with huggingface.co hosted model or dataset repos, though not a lot here (if any) is actually specific to huggingface.co. <Tip warning={true}> [`Repository`] is deprecated in favor of the http-based alternatives implemented in [`HfApi`]. Given its large adoption in legacy code, the complete removal of [`Repository`] will only happen in release `v1.0`. For more details, please read https://huggingface.co/docs/huggingface_hub/concepts/git_vs_http. </Tip> ``` - commands_failed - commands_in_progress - wait_for_commands ### huggingface_hub.repository.CommandInProgress ```python Utility to follow commands launched asynchronously. ```
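For illustration, here is a hedged sketch of launching a push asynchronously and waiting for it to complete; the local folder and repo id are placeholders, and keep in mind that [`Repository`] itself is deprecated in favor of [`HfApi`].

```py
>>> from huggingface_hub import Repository

>>> repo = Repository(local_dir="my-model", clone_from="username/my-model")
>>> repo.git_add(".")
>>> repo.git_commit("Add new weights")
>>> repo.git_push(blocking=False)  # returns immediately; the push runs in a background thread

>>> # ... continue training or other work while the push is in progress ...

>>> repo.commands_in_progress  # commands still running
>>> repo.wait_for_commands()   # block until all asynchronous commands complete
>>> repo.commands_failed       # commands that exited with an error, if any
```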
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md
.md
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Utilities ## Configure logging The `huggingface_hub` package exposes a `logging` utility to control the logging level of the package itself. You can import it as such: ```py from huggingface_hub import logging ``` Then, set the verbosity level to control the amount of logs you'll see: ```python from huggingface_hub import logging logging.set_verbosity_error() logging.set_verbosity_warning() logging.set_verbosity_info() logging.set_verbosity_debug() logging.set_verbosity(...) ``` The levels should be understood as follows: - `error`: only show critical logs about usage which may result in an error or unexpected behavior. - `warning`: show logs that aren't critical but usage may result in unintended behavior. Additionally, important informative logs may be shown. - `info`: show most logs, including some verbose logging regarding what is happening under the hood. If something is behaving in an unexpected manner, we recommend switching the verbosity level to this in order to get more information. - `debug`: show all logs, including some internal logs which may be used to track exactly what's happening under the hood. ### logging.get_verbosity ### logging.set_verbosity ### logging.set_verbosity_info ### logging.set_verbosity_debug ### logging.set_verbosity_warning ### logging.set_verbosity_error ### logging.disable_propagation ### logging.enable_propagation ### Repo-specific helper methods The methods exposed below are relevant when modifying modules from the `huggingface_hub` library itself. You shouldn't need them if you use `huggingface_hub` without modifying its modules. ### logging.get_logger ## Configure progress bars Progress bars are a useful tool to display information to the user while a long-running task is being executed (e.g. when downloading or uploading files). `huggingface_hub` exposes a [`~utils.tqdm`] wrapper to display progress bars in a consistent way across the library. By default, progress bars are enabled. You can disable them globally by setting the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable. You can also enable/disable them using [`~utils.enable_progress_bars`] and [`~utils.disable_progress_bars`]. If set, the environment variable takes precedence over the helpers. 
```py >>> from huggingface_hub import snapshot_download >>> from huggingface_hub.utils import are_progress_bars_disabled, disable_progress_bars, enable_progress_bars >>> # Disable progress bars globally >>> disable_progress_bars() >>> # Progress bar will not be shown ! >>> snapshot_download("gpt2") >>> are_progress_bars_disabled() True >>> # Re-enable progress bars globally >>> enable_progress_bars() ``` ### Group-specific control of progress bars You can also enable or disable progress bars for specific groups. This allows you to manage progress bar visibility more granularly within different parts of your application or library. When a progress bar is disabled for a group, all subgroups under it are also affected unless explicitly overridden. ```py # Disable progress bars for a specific group >>> disable_progress_bars("peft.foo") >>> assert not are_progress_bars_disabled("peft") >>> assert not are_progress_bars_disabled("peft.something") >>> assert are_progress_bars_disabled("peft.foo") >>> assert are_progress_bars_disabled("peft.foo.bar") # Re-enable progress bars for a subgroup >>> enable_progress_bars("peft.foo.bar") >>> assert are_progress_bars_disabled("peft.foo") >>> assert not are_progress_bars_disabled("peft.foo.bar") # Use groups with tqdm >>> from huggingface_hub.utils import tqdm # No progress bar for `name="peft.foo"` >>> for _ in tqdm(range(5), name="peft.foo"): ... pass # Progress bar will be shown for `name="peft.foo.bar"` >>> for _ in tqdm(range(5), name="peft.foo.bar"): ... pass 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5/5 [00:00<00:00, 117817.53it/s] ``` ### are_progress_bars_disabled ### huggingface_hub.utils.are_progress_bars_disabled ```python Check if progress bars are disabled globally or for a specific group. This function returns whether progress bars are disabled for a given group or globally. It checks the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable first, then the programmatic settings. Args: name (`str`, *optional*): The group name to check; if None, checks the global setting. Returns: `bool`: True if progress bars are disabled, False otherwise. ``` ### disable_progress_bars ### huggingface_hub.utils.disable_progress_bars ```python Disable progress bars either globally or for a specified group. This function updates the state of progress bars based on a group name. If no group name is provided, all progress bars are disabled. The operation respects the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable's setting. Args: name (`str`, *optional*): The name of the group for which to disable the progress bars. If None, progress bars are disabled globally. Raises: Warning: If the environment variable precludes changes. ``` ### enable_progress_bars ### huggingface_hub.utils.enable_progress_bars ```python Enable progress bars either globally or for a specified group. This function sets the progress bars to enabled for the specified group or globally if no group is specified. The operation is subject to the `HF_HUB_DISABLE_PROGRESS_BARS` environment setting. Args: name (`str`, *optional*): The name of the group for which to enable the progress bars. If None, progress bars are enabled globally. Raises: Warning: If the environment variable precludes changes. ``` ## Configure HTTP backend In some environments, you might want to configure how HTTP calls are made, for example if you are using a proxy. `huggingface_hub` lets you configure this globally using [`configure_http_backend`]. 
All requests made to the Hub will then use your settings. Under the hood, `huggingface_hub` uses `requests.Session` so you might want to refer to the [`requests` documentation](https://requests.readthedocs.io/en/latest/user/advanced) to learn more about the available parameters. Since `requests.Session` is not guaranteed to be thread-safe, `huggingface_hub` creates one session instance per thread. Using sessions allows us to keep the connection open between HTTP calls and ultimately save time. If you are integrating `huggingface_hub` in a third-party library and want to make a custom call to the Hub, use [`get_session`] to get a Session configured by your users (i.e. replace any `requests.get(...)` call by `get_session().get(...)`). ### configure_http_backend ```python Configure the HTTP backend by providing a `backend_factory`. Any HTTP calls made by `huggingface_hub` will use a Session object instantiated by this factory. This can be useful if you are running your scripts in a specific environment requiring custom configuration (e.g. custom proxy or certificates). Use [`get_session`] to get a configured Session. Since `requests.Session` is not guaranteed to be thread-safe, `huggingface_hub` creates one Session instance per thread. They are all instantiated using the same `backend_factory` set in [`configure_http_backend`]. An LRU cache is used to cache the created sessions (and connections) between calls. Max size is 128 to avoid memory leaks if thousands of threads are spawned. See [this issue](https://github.com/psf/requests/issues/2766) to learn more about thread-safety in `requests`. Example: ```py import requests from huggingface_hub import configure_http_backend, get_session # Create a factory function that returns a Session with configured proxies def backend_factory() -> requests.Session: session = requests.Session() session.proxies = {"http": "http://10.10.1.10:3128", "https": "https://10.10.1.11:1080"} return session # Set it as the default session factory configure_http_backend(backend_factory=backend_factory) # In practice, this is mostly done internally in `huggingface_hub` session = get_session() ``` ``` ### get_session ```python Get a `requests.Session` object, using the session factory from the user. Use [`get_session`] to get a configured Session. Since `requests.Session` is not guaranteed to be thread-safe, `huggingface_hub` creates one Session instance per thread. They are all instantiated using the same `backend_factory` set in [`configure_http_backend`]. An LRU cache is used to cache the created sessions (and connections) between calls. Max size is 128 to avoid memory leaks if thousands of threads are spawned. See [this issue](https://github.com/psf/requests/issues/2766) to learn more about thread-safety in `requests`. Example: ```py import requests from huggingface_hub import configure_http_backend, get_session # Create a factory function that returns a Session with configured proxies def backend_factory() -> requests.Session: session = requests.Session() session.proxies = {"http": "http://10.10.1.10:3128", "https": "https://10.10.1.11:1080"} return session # Set it as the default session factory configure_http_backend(backend_factory=backend_factory) # In practice, this is mostly done internally in `huggingface_hub` session = get_session() ``` ``` ## Handle HTTP errors `huggingface_hub` defines its own HTTP errors to refine the `HTTPError` raised by `requests` with additional information sent back by the server. 
### Raise for status [`~utils.hf_raise_for_status`] is meant to be the central method to "raise for status" from any request made to the Hub. It wraps the base `requests.raise_for_status` to provide additional information. Any `HTTPError` thrown is converted into a `HfHubHTTPError`. ```py import requests from huggingface_hub.utils import hf_raise_for_status, HfHubHTTPError response = requests.post(...) try: hf_raise_for_status(response) except HfHubHTTPError as e: print(str(e)) # formatted message e.request_id, e.server_message # details returned by server # Complete the error message with additional information once it's raised e.append_to_message("\n`create_commit` expects the repository to exist.") raise ``` ### huggingface_hub.utils.hf_raise_for_status ```python Internal version of `response.raise_for_status()` that will refine a potential HTTPError. Raised exception will be an instance of `HfHubHTTPError`. This helper is meant to be the unique method to raise_for_status when making a call to the Hugging Face Hub. Example: ```py import requests from huggingface_hub.utils import get_session, hf_raise_for_status, HfHubHTTPError response = get_session().post(...) try: hf_raise_for_status(response) except HfHubHTTPError as e: print(str(e)) # formatted message e.request_id, e.server_message # details returned by server # Complete the error message with additional information once it's raised e.append_to_message(" `create_commit` expects the repository to exist.") raise ``` Args: response (`Response`): Response from the server. endpoint_name (`str`, *optional*): Name of the endpoint that has been called. If provided, the error message will be more complete. <Tip warning={true}> Raises when the request has failed: - [`~utils.RepositoryNotFoundError`] If the repository to download from cannot be found. This may be because it doesn't exist, because `repo_type` is not set correctly, or because the repo is `private` and you do not have access. - [`~utils.GatedRepoError`] If the repository exists but is gated and the user is not on the authorized list. - [`~utils.RevisionNotFoundError`] If the repository exists but the revision couldn't be found. - [`~utils.EntryNotFoundError`] If the repository exists but the entry (e.g. the requested file) couldn't be found. - [`~utils.BadRequestError`] If the request failed with an HTTP 400 BadRequest error. - [`~utils.HfHubHTTPError`] If the request failed for a reason not listed above. </Tip> ``` ### HTTP errors Here is a list of HTTP errors thrown in `huggingface_hub`. #### HfHubHTTPError `HfHubHTTPError` is the parent class for any HF Hub HTTP error. It takes care of parsing the server response and formatting the error message to provide as much information to the user as possible. ### huggingface_hub.utils.HfHubHTTPError ```python HTTPError to inherit from for any custom HTTP Error raised in HF Hub. Any HTTPError is converted at least into a `HfHubHTTPError`. If some information is sent back by the server, it will be added to the error message. Added details: - Request id from the "X-Request-Id" header if it exists. If not, fallback to the "X-Amzn-Trace-Id" header if it exists. - Server error message from the header "X-Error-Message". - Server error message if we can find one in the response body. Example: ```py import requests from huggingface_hub.utils import get_session, hf_raise_for_status, HfHubHTTPError response = get_session().post(...) 
try: hf_raise_for_status(response) except HfHubHTTPError as e: print(str(e)) # formatted message e.request_id, e.server_message # details returned by server # Complete the error message with additional information once it's raised e.append_to_message(" `create_commit` expects the repository to exist.") raise ``` ``` #### RepositoryNotFoundError ### huggingface_hub.utils.RepositoryNotFoundError ```python Raised when trying to access a hf.co URL with an invalid repository name, or with a private repo name the user does not have access to. Example: ```py >>> from huggingface_hub import model_info >>> model_info("<non_existent_repository>") (...) huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: PvMw_VjBMjVdMz53WKIzP) Repository Not Found for url: https://huggingface.co/api/models/%3Cnon_existent_repository%3E. Please make sure you specified the correct `repo_id` and `repo_type`. If the repo is private, make sure you are authenticated. Invalid username or password. ``` ``` #### GatedRepoError ### huggingface_hub.utils.GatedRepoError ```python Raised when trying to access a gated repository for which the user is not on the authorized list. Note: derives from `RepositoryNotFoundError` to ensure backward compatibility. Example: ```py >>> from huggingface_hub import model_info >>> model_info("<gated_repository>") (...) huggingface_hub.utils._errors.GatedRepoError: 403 Client Error. (Request ID: ViT1Bf7O_026LGSQuVqfa) Cannot access gated repo for url https://huggingface.co/api/models/ardent-figment/gated-model. Access to model ardent-figment/gated-model is restricted and you are not in the authorized list. Visit https://huggingface.co/ardent-figment/gated-model to ask for access. ``` ``` #### RevisionNotFoundError ### huggingface_hub.utils.RevisionNotFoundError ```python Raised when trying to access a hf.co URL with a valid repository but an invalid revision. Example: ```py >>> from huggingface_hub import hf_hub_download >>> hf_hub_download('bert-base-cased', 'config.json', revision='<non-existent-revision>') (...) huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Mwhe_c3Kt650GcdKEFomX) Revision Not Found for url: https://huggingface.co/bert-base-cased/resolve/%3Cnon-existent-revision%3E/config.json. ``` ``` #### EntryNotFoundError ### huggingface_hub.utils.EntryNotFoundError ```python Raised when trying to access a hf.co URL with a valid repository and revision but an invalid filename. Example: ```py >>> from huggingface_hub import hf_hub_download >>> hf_hub_download('bert-base-cased', '<non-existent-file>') (...) huggingface_hub.utils._errors.EntryNotFoundError: 404 Client Error. (Request ID: 53pNl6M0MxsnG5Sw8JA6x) Entry Not Found for url: https://huggingface.co/bert-base-cased/resolve/main/%3Cnon-existent-file%3E. ``` ``` #### BadRequestError ### huggingface_hub.utils.BadRequestError ```python Raised by `hf_raise_for_status` when the server returns an HTTP 400 error. Example: ```py >>> resp = requests.post("hf.co/api/check", ...) >>> hf_raise_for_status(resp, endpoint_name="check") huggingface_hub.utils._errors.BadRequestError: Bad request for check endpoint: {details} (Request ID: XXX) ``` ``` #### LocalEntryNotFoundError ### huggingface_hub.utils.LocalEntryNotFoundError ```python Raised when trying to access a file or snapshot that is not on the disk when the network is disabled or unavailable (connection issue). The entry may exist on the Hub. Note: `ValueError` type is to ensure backward compatibility. 
Note: `LocalEntryNotFoundError` derives from `HTTPError` (through `EntryNotFoundError`) even when the error is not a network issue. Example: ```py >>> from huggingface_hub import hf_hub_download >>> hf_hub_download('bert-base-cased', '<non-cached-file>', local_files_only=True) (...) huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False. ``` ``` #### OfflineModeIsEnabled ### huggingface_hub.utils.OfflineModeIsEnabled ```python Raised when a request is made but `HF_HUB_OFFLINE=1` is set as an environment variable. ``` ## Telemetry `huggingface_hub` includes a helper to send telemetry data. This information helps us debug issues and prioritize new features. Users can disable telemetry collection at any time by setting the `HF_HUB_DISABLE_TELEMETRY=1` environment variable. Telemetry is also disabled in offline mode (i.e. when setting `HF_HUB_OFFLINE=1`). If you are a maintainer of a third-party library, sending telemetry data is as simple as making a call to [`send_telemetry`]. Data is sent in a separate thread to minimize the impact on users. ### utils.send_telemetry ## Validators `huggingface_hub` includes custom validators to validate method arguments automatically. Validation is inspired by the work done in [Pydantic](https://pydantic-docs.helpmanual.io/) to validate type hints but with more limited features. ### Generic decorator [`~utils.validate_hf_hub_args`] is a generic decorator to encapsulate methods that have arguments following `huggingface_hub`'s naming. By default, all arguments that have a validator implemented will be validated. If an input is not valid, a [`~utils.HFValidationError`] is raised. Only the first invalid value raises an error and stops the validation process. Usage: ```py >>> from huggingface_hub.utils import validate_hf_hub_args >>> @validate_hf_hub_args ... def my_cool_method(repo_id: str): ... print(repo_id) >>> my_cool_method(repo_id="valid_repo_id") valid_repo_id >>> my_cool_method("other..repo..id") huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'. >>> my_cool_method(repo_id="other..repo..id") huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'. >>> @validate_hf_hub_args ... def my_cool_auth_method(token: str): ... print(token) >>> my_cool_auth_method(token="a token") "a token" >>> my_cool_auth_method(use_auth_token="a use_auth_token") "a use_auth_token" >>> my_cool_auth_method(token="a token", use_auth_token="a use_auth_token") UserWarning: Both `token` and `use_auth_token` are passed (...). `use_auth_token` value will be ignored. "a token" ``` #### validate_hf_hub_args ### utils.validate_hf_hub_args #### HFValidationError ### utils.HFValidationError ### Argument validators Validators can also be used individually. Here is a list of all arguments that can be validated. 
#### repo_id ### utils.validate_repo_id #### smoothly_deprecate_use_auth_token Not exactly a validator, but it is run as well. ### utils.smoothly_deprecate_use_auth_token
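For completeness, here is a hedged sketch of calling the `repo_id` validator directly, reusing the invalid id from the decorator example above; the exact error message may vary between versions.

```py
>>> from huggingface_hub.utils import validate_repo_id

>>> validate_repo_id(repo_id="valid_repo_id")  # passes silently

>>> validate_repo_id(repo_id="other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.
```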
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md
.md
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Managing your Space runtime Check the [`HfApi`] documentation page for the reference of methods to manage your Space on the Hub. - Duplicate a Space: [`duplicate_space`] - Fetch current runtime: [`get_space_runtime`] - Manage secrets: [`add_space_secret`] and [`delete_space_secret`] - Manage hardware: [`request_space_hardware`] - Manage state: [`pause_space`], [`restart_space`], [`set_space_sleep_time`] ## Data structures ### SpaceRuntime ### SpaceRuntime ```python Contains information about the current runtime of a Space. Args: stage (`str`): Current stage of the space. Example: RUNNING. hardware (`str` or `None`): Current hardware of the space. Example: "cpu-basic". Can be `None` if Space is `BUILDING` for the first time. requested_hardware (`str` or `None`): Requested hardware. Can be different from `hardware`, especially if the request has just been made. Example: "t4-medium". Can be `None` if no hardware has been requested yet. sleep_time (`int` or `None`): Number of seconds the Space will be kept alive after the last request. By default (if value is `None`), the Space will never go to sleep if it's running on upgraded hardware, while it will go to sleep after 48 hours on the free 'cpu-basic' hardware. For more details, see https://huggingface.co/docs/hub/spaces-gpus#sleep-time. raw (`dict`): Raw response from the server. Contains more information about the Space runtime such as the number of replicas, number of CPUs, memory size, etc. ``` ### SpaceHardware ### SpaceHardware ```python Enumeration of hardware options available to run your Space on the Hub. Value can be compared to a string: ```py assert SpaceHardware.CPU_BASIC == "cpu-basic" ``` Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceInfo.ts#L73 (private url). ``` ### SpaceStage ### SpaceStage ```python Enumeration of the possible stages of a Space on the Hub. Value can be compared to a string: ```py assert SpaceStage.BUILDING == "BUILDING" ``` Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceInfo.ts#L61 (private url). ``` ### SpaceStorage ### SpaceStorage ```python Enumeration of persistent storage available for your Space on the Hub. Value can be compared to a string: ```py assert SpaceStorage.SMALL == "small" ``` Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceHardwareFlavor.ts#L24 (private url). ``` ### SpaceVariable ### SpaceVariable ```python Contains information about the current variables of a Space. Args: key (`str`): Variable key. Example: `"MODEL_REPO_ID"` value (`str`): Variable value. Example: `"the_model_repo_id"`. description (`str` or `None`): Description of the variable. Example: `"Model Repo ID of the implemented model"`. updatedAt (`datetime` or `None`): datetime of the last update of the variable (if the variable has been updated at least once). ```
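To tie these methods and data structures together, here is a hedged sketch of inspecting and managing a Space's runtime; the Space id is a placeholder.

```py
>>> from huggingface_hub import HfApi

>>> api = HfApi()
>>> runtime = api.get_space_runtime("username/my-space")
>>> runtime.stage      # e.g. "RUNNING"
>>> runtime.hardware   # e.g. "cpu-basic"

>>> # Request upgraded hardware and configure the sleep time
>>> api.request_space_hardware("username/my-space", "t4-medium")
>>> api.set_space_sleep_time("username/my-space", sleep_time=3600)

>>> # Pause and restart the Space
>>> api.pause_space("username/my-space")
>>> api.restart_space("username/my-space")
```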
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/space_runtime.md
.md
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Webhooks Server Webhooks are a foundation for MLOps-related features. They allow you to listen for new changes on specific repos or on all repos belonging to particular users/organizations you're interested in following. To learn more about webhooks on the Hugging Face Hub, you can read the Webhooks [guide](https://huggingface.co/docs/hub/webhooks). <Tip> Check out this [guide](../guides/webhooks_server) for a step-by-step tutorial on how to set up your webhooks server and deploy it as a Space. </Tip> <Tip warning={true}> This is an experimental feature. This means that we are still working on improving the API. Breaking changes might be introduced in the future without prior notice. Make sure to pin the version of `huggingface_hub` in your requirements. A warning is triggered when you use an experimental feature. You can disable it by setting `HF_HUB_DISABLE_EXPERIMENTAL_WARNING=1` as an environment variable. </Tip> ## Server The server is a [Gradio](https://gradio.app/) app. It has a UI to display instructions for you or your users and an API to listen to webhooks. Implementing a webhook endpoint is as simple as decorating a function. You can then debug it by redirecting the webhooks to your machine (using a Gradio tunnel) before deploying it to a Space. ### WebhooksServer ### huggingface_hub.WebhooksServer ```python The [`WebhooksServer`] class lets you create an instance of a Gradio app that can receive Hugging Face webhooks. These webhooks can be registered using the [`~WebhooksServer.add_webhook`] decorator. Webhook endpoints are added to the app as POST endpoints on the FastAPI router. Once all the webhooks are registered, the `launch` method has to be called to start the app. It is recommended to accept [`WebhookPayload`] as the first argument of the webhook function. It is a Pydantic model that contains all the information about the webhook event. The data will be parsed automatically for you. Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to set up your WebhooksServer and deploy it on a Space. <Tip warning={true}> `WebhooksServer` is experimental. Its API is subject to change in the future. </Tip> <Tip warning={true}> You must have `gradio` installed to use `WebhooksServer` (`pip install --upgrade gradio`). </Tip> Args: ui (`gradio.Blocks`, optional): A Gradio UI instance to be used as the Space landing page. If `None`, a UI displaying instructions about the configured webhooks is created. webhook_secret (`str`, optional): A secret key to verify incoming webhook requests. You can set this value to any secret you want as long as you also configure it in your [webhooks settings panel](https://huggingface.co/settings/webhooks). You can also set this value as the `WEBHOOK_SECRET` environment variable. If no secret is provided, the webhook endpoints are opened without any security. Example: ```python import gradio as gr from huggingface_hub import WebhooksServer, WebhookPayload with gr.Blocks() as ui: ... app = WebhooksServer(ui=ui, webhook_secret="my_secret_key") @app.add_webhook("/say_hello") async def hello(payload: WebhookPayload): return {"message": "hello"} app.launch() ``` ``` ### @webhook_endpoint ### huggingface_hub.webhook_endpoint ```python Decorator to start a [`WebhooksServer`] and register the decorated function as a webhook endpoint. 
This is a helper to get started quickly. If you need more flexibility (custom landing page or webhook secret), you can use [`WebhooksServer`] directly. You can register multiple webhook endpoints (to the same server) by using this decorator multiple times. Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to set up your server and deploy it on a Space. <Tip warning={true}> `webhook_endpoint` is experimental. Its API is subject to change in the future. </Tip> <Tip warning={true}> You must have `gradio` installed to use `webhook_endpoint` (`pip install --upgrade gradio`). </Tip> Args: path (`str`, optional): The URL path to register the webhook function. If not provided, the function name will be used as the path. In any case, all webhooks are registered under `/webhooks`. Examples: The default usage is to register a function as a webhook endpoint. The function name will be used as the path. The server will be started automatically at exit (i.e. at the end of the script). ```python from huggingface_hub import webhook_endpoint, WebhookPayload @webhook_endpoint async def trigger_training(payload: WebhookPayload): if payload.repo.type == "dataset" and payload.event.action == "update": # Trigger a training job if a dataset is updated ... # Server is automatically started at the end of the script. ``` Advanced usage: register a function as a webhook endpoint and start the server manually. This is useful if you are running it in a notebook. ```python from huggingface_hub import webhook_endpoint, WebhookPayload @webhook_endpoint async def trigger_training(payload: WebhookPayload): if payload.repo.type == "dataset" and payload.event.action == "update": # Trigger a training job if a dataset is updated ... # Start the server manually trigger_training.launch() ``` ``` ## Payload [`WebhookPayload`] is the main data structure that contains the payload from webhooks. This is a `pydantic` class, which makes it very easy to use with FastAPI. If you pass it as a parameter to a webhook endpoint, it will be automatically validated and parsed as a Python object. For more information about webhooks payload, you can refer to the Webhooks Payload [guide](https://huggingface.co/docs/hub/webhooks#webhook-payloads). ### huggingface_hub.WebhookPayload ### WebhookPayload ### huggingface_hub.WebhookPayload ### WebhookPayloadComment ### huggingface_hub.WebhookPayloadComment ### WebhookPayloadDiscussion ### huggingface_hub.WebhookPayloadDiscussion ### WebhookPayloadDiscussionChanges ### huggingface_hub.WebhookPayloadDiscussionChanges ### WebhookPayloadEvent ### huggingface_hub.WebhookPayloadEvent ### WebhookPayloadMovedTo ### huggingface_hub.WebhookPayloadMovedTo ### WebhookPayloadRepo ### huggingface_hub.WebhookPayloadRepo ### WebhookPayloadUrl ### huggingface_hub.WebhookPayloadUrl ### WebhookPayloadWebhook ### huggingface_hub.WebhookPayloadWebhook
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
.md