In most cases, the [`upload_folder`] method and `huggingface-cli upload` command should be the go-to solutions to upload files to the Hub. They ensure a single commit will be made, handle a lot of use cases, and fail explicitly when something goes wrong. However, when dealing with a large amount of data, you will usually prefer a resilient process, even if it leads to more commits or requires more CPU usage. The [`upload_large_folder`] method has been implemented in that spirit:
- it is resumable: the upload process is split into many small tasks (hashing files, pre-uploading them, and committing them). Each time a task is completed, the result is cached locally in a `./.cache/huggingface/` folder inside the folder you are trying to upload. Restarting the process after an interruption will therefore resume all completed tasks.
- it is multi-threaded: hashing large files and pre-uploading them benefits a lot from multithreading if your machine allows it.
- it is resilient to errors: a high-level retry mechanism has been added to retry each independent task indefinitely until it passes (no matter if it's an `OSError`, `ConnectionError`, `PermissionError`, etc.). This mechanism is double-edged: if transient errors happen, the process will continue and retry, but if permanent errors happen (e.g. permission denied), it will retry indefinitely without solving the root cause.
If you want more technical details about how `upload_large_folder` is implemented under the hood, please have a look at the [`upload_large_folder`] package reference.
Here is how to use [`upload_large_folder`] in a script. The method signature is very similar to [`upload_folder`]:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_large_folder(
... repo_id="HuggingFaceM4/Docmatix",
... repo_type="dataset",
... folder_path="/path/to/local/docmatix",
... )
```
You will see the following output in your terminal:
```
Repo created: https://huggingface.co/datasets/HuggingFaceM4/Docmatix
Found 5 candidate files to upload
Recovering from metadata files: 100%|█████████████████████████████████████| 5/5 [00:00<00:00, 542.66it/s]
---------- 2024-07-22 17:23:17 (0:00:00) ----------
Files: hashed 5/5 (5.0G/5.0G) | pre-uploaded: 0/5 (0.0/5.0G) | committed: 0/5 (0.0/5.0G) | ignored: 0
Workers: hashing: 0 | get upload mode: 0 | pre-uploading: 5 | committing: 0 | waiting: 11
---------------------------------------------------
```
First, the repo is created if it didn't exist before. Then, the local folder is scanned for files to upload. For each file, we try to recover metadata information (from a previously interrupted upload). From there, workers are launched and a status update is printed every minute. Here, we can see that 5 files have already been hashed but not pre-uploaded. 5 workers are pre-uploading files while the 11 others are waiting for a task.
A command-line equivalent is also provided. You can define the number of workers and the level of verbosity in the terminal:
```sh
huggingface-cli upload-large-folder HuggingFaceM4/Docmatix --repo-type=dataset /path/to/local/docmatix --num-workers=16
```
<Tip>
For large uploads, you have to set `repo_type="model"` or `--repo-type=model` explicitly. Usually, this information is implicit in all other `HfApi` methods. This requirement exists to avoid having data uploaded to a repository with a wrong type: if that happens, you'll have to re-upload everything.
</Tip>
<Tip warning={true}>
While much more robust for uploading large folders, `upload_large_folder` is more limited than [`upload_folder`] feature-wise. In practice:
- you cannot set a custom `path_in_repo`. If you want to upload to a subfolder, you need to set the proper structure locally.
- you cannot set a custom `commit_message` and `commit_description` since multiple commits are created.
- you cannot delete from the repo while uploading. Please make a separate commit first.
- you cannot create a PR directly. Please create a PR first (from the UI or using [`create_pull_request`]) and then commit to it by passing `revision`, as sketched after this tip.
</Tip>
There are some limitations to be aware of when dealing with a large amount of data in your repo. Given the time it takes to stream the data, getting an upload/push to fail at the end of the process or encountering a degraded experience, be it on hf.co or when working locally, can be very annoying.
Check out our [Repository limitations and recommendations](https://huggingface.co/docs/hub/repositories-recommendations) guide for best practices on how to structure your repositories on the Hub. Let's move on with some practical tips to make your upload process as smooth as possible.
- **Start small**: We recommend starting with a small amount of data to test your upload script. It's easier to iterate on a script when failing takes only a little time.
- **Expect failures**: Streaming large amounts of data is challenging. You don't know what can happen, but it's always best to consider that something will fail at least once, no matter if it's due to your machine, your connection, or our servers. For example, if you plan to upload a large number of files, it's best to keep track locally of which files you already uploaded before uploading the next batch (see the sketch after this list). You are ensured that an LFS file that is already committed will never be re-uploaded twice, but checking it client-side can still save some time. This is what [`upload_large_folder`] does for you.
- **Use `hf_transfer`**: this is a Rust-based [library](https://github.com/huggingface/hf_transfer) meant to speed up uploads on machines with very high bandwidth. To use `hf_transfer`:
1. Specify the `hf_transfer` extra when installing `huggingface_hub`
(i.e., `pip install huggingface_hub[hf_transfer]`).
2. Set `HF_HUB_ENABLE_HF_TRANSFER=1` as an environment variable.
<Tip warning={true}>
`hf_transfer` is a power user tool! It is tested and production-ready, but it lacks user-friendly features like advanced error handling or proxies. For more details, please take a look at this [section](https://huggingface.co/docs/huggingface_hub/hf_transfer).
</Tip>
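As mentioned in the **Expect failures** tip above, here is a minimal sketch of client-side bookkeeping, assuming hypothetical local files (an `uploaded.json` tracker, `data/*.parquet` shards) and a placeholder repo name:
```py
>>> import json
>>> from pathlib import Path
>>> from huggingface_hub import HfApi

>>> api = HfApi()
>>> tracker = Path("uploaded.json")  # hypothetical local tracker file
>>> uploaded = set(json.loads(tracker.read_text())) if tracker.exists() else set()
>>> for path in sorted(Path("data/").glob("*.parquet")):
...     if path.name in uploaded:
...         continue  # already uploaded during a previous run
...     api.upload_file(repo_id="username/my-dataset", repo_type="dataset",
...                     path_in_repo=path.name, path_or_fileobj=path)
...     uploaded.add(path.name)
...     tracker.write_text(json.dumps(sorted(uploaded)))  # persist after each success
```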
In most cases, you won't need more than [`upload_file`] and [`upload_folder`] to upload your files to the Hub.
However, `huggingface_hub` has more advanced features to make things easier. Let's have a look at them!
In some cases, you want to push data without blocking your main thread. This is particularly useful to upload logs and
artifacts while a training run is still ongoing. To do so, you can use the `run_as_future` argument in both [`upload_file`] and
[`upload_folder`]. This will return a [`concurrent.futures.Future`](https://docs.python.org/3/library/concurrent.futures.html#future-objects)
object that you can use to check the status of the upload.
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> future = api.upload_folder( # Upload in the background (non-blocking action)
... repo_id="username/my-model",
... folder_path="checkpoints-001",
... run_as_future=True,
... )
>>> future
Future(...)
>>> future.done()
False
>>> future.result() # Wait for the upload to complete (blocking action)
...
```
<Tip>
Background jobs are queued when using `run_as_future=True`. This means that you are guaranteed that the jobs will be
executed in the correct order.
</Tip>
Even though background jobs are mostly useful to upload data/create commits, you can queue any method you like using
[`run_as_future`]. For instance, you can use it to create a repo and then upload data to it in the background. The
built-in `run_as_future` argument in upload methods is just an alias around it.
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.run_as_future(api.create_repo, "username/my-model", exists_ok=True)
Future(...)
>>> api.upload_file(
... repo_id="username/my-model",
... path_in_repo="file.txt",
... path_or_fileobj=b"file content",
... run_as_future=True,
... )
Future(...)
```
[`upload_folder`] makes it easy to upload an entire folder to the Hub. However, for large folders (thousands of files or
hundreds of GB), we recommend using [`upload_large_folder`], which splits the upload into multiple commits. See the [Upload a large folder](#upload-a-large-folder) section for more details.
The Hugging Face Hub makes it easy to save and version data. However, there are some limitations when updating the same file thousands of times. For instance, you might want to save logs of a training process or user
feedback on a deployed Space. In these cases, uploading the data as a dataset on the Hub makes sense, but it can be hard to do properly. The main reason is that you don't want to version every update of your data because it'll make the git repository unusable. The [`CommitScheduler`] class offers a solution to this problem.
The idea is to run a background job that regularly pushes a local folder to the Hub. Let's assume you have a
Gradio Space that takes as input some text and generates two translations of it. Then, the user can select their preferred translation. For each run, you want to save the input, output, and user preference to analyze the results. This is a
perfect use case for [`CommitScheduler`]; you want to save data to the Hub (potentially millions of user feedback entries), but
you don't _need_ to save in real-time each user's input. Instead, you can save the data locally in a JSON file and
upload it every 10 minutes. For example:
```py
>>> import json
>>> import uuid
>>> from pathlib import Path
>>> import gradio as gr
>>> from huggingface_hub import CommitScheduler
# Define the file where to save the data. Use a UUID to make sure not to overwrite existing data from a previous run.
>>> feedback_file = Path("user_feedback/") / f"data_{uuid.uuid4()}.json"
>>> feedback_folder = feedback_file.parent
# Schedule regular uploads. Remote repo and local folder are created if they don't already exist.
>>> scheduler = CommitScheduler(
... repo_id="report-translation-feedback",
... repo_type="dataset",
... folder_path=feedback_folder,
... path_in_repo="data",
... every=10,
... )
# Define the function that will be called when the user submits their feedback (to be called in Gradio)
>>> def save_feedback(input_text: str, output_1: str, output_2: str, user_choice: int) -> None:
... """
... Append input/outputs and user feedback to a JSON Lines file using a thread lock to avoid concurrent writes from different users.
... """
... with scheduler.lock:
... with feedback_file.open("a") as f: | 35_10_6 |
...             f.write(json.dumps({"input": input_text, "output_1": output_1, "output_2": output_2, "user_choice": user_choice}))
...             f.write("\n")
# Start Gradio
>>> with gr.Blocks() as demo:
>>> ... # define Gradio demo + use `save_feedback`
>>> demo.launch()
```
And that's it! User inputs/outputs and feedback will be available as a dataset on the Hub. By using a unique JSON file name, you are guaranteed you won't overwrite data from a previous run or data from other
Spaces/replicas pushing concurrently to the same repository.
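Later on, the pushed JSON Lines files can be loaded back, e.g. with the 🤗 Datasets library (a sketch; the repo name is a placeholder and it assumes the scheduler above pushed to the `data/` folder):
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("username/report-translation-feedback", data_files="data/*.json", split="train")
```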
For more details about the [`CommitScheduler`], here is what you need to know:
- **append-only:**
It is assumed that you will only add content to the folder. You must only append data to existing files or create
new files. Deleting or overwriting a file might corrupt your repository.
- **git history**:
The scheduler will commit the folder every `every` minutes. To avoid polluting the git repository too much, it is
recommended to set a minimum value of 5 minutes. Besides, the scheduler is designed to avoid empty commits. If no
new content is detected in the folder, the scheduled commit is dropped.
- **errors:**
The scheduler runs as a background thread. It is started when you instantiate the class and never stops. In particular,
if an error occurs during the upload (e.g. a connection issue), the scheduler will silently ignore it and retry
at the next scheduled commit. If you need to commit or stop it explicitly, see the sketch after this list.
- **thread-safety:**
In most cases it is safe to assume that you can write to a file without having to worry about a lock file. The
scheduler will not crash or be corrupted if you write content to the folder while it's uploading. In practice,
_it is possible_ that concurrency issues happen for heavily loaded apps. In this case, we advise using the
`scheduler.lock` lock to ensure thread-safety. The lock is blocked only when the scheduler scans the folder for
changes, not when it uploads data. You can safely assume that it will not affect the user experience on your Space.
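Since the scheduler never stops on its own, you can control it explicitly when your job ends. A minimal sketch, assuming the `trigger`/`stop` helpers available on recent `huggingface_hub` versions:
```py
>>> scheduler.trigger().result()  # force an immediate commit and wait for it to finish
>>> scheduler.stop()              # no scheduled commits will run after this point
```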
Persisting data from a Space to a Dataset on the Hub is the main use case for [`CommitScheduler`]. Depending on the use
case, you might want to structure your data differently. The structure has to be robust to concurrent users and
restarts, which often implies generating UUIDs. Besides robustness, you should upload data in a format readable by the 🤗 Datasets library for later reuse. We created a [Space](https://huggingface.co/spaces/Wauplin/space_to_dataset_saver)
that demonstrates how to save several different data formats (you may need to adapt it for your own specific needs).
[`CommitScheduler`] assumes your data is append-only and should be uploaded "as is". However, you
might want to customize the way data is uploaded. You can do that by creating a class inheriting from [`CommitScheduler`]
and overriding the `push_to_hub` method (feel free to override it any way you want). You are guaranteed it will
be called every `every` minutes in a background thread. You don't have to worry about concurrency and errors but you
must be careful about other aspects, such as pushing empty commits or duplicated data.
In the (simplified) example below, we override `push_to_hub` to zip all PNG files in a single archive to avoid
overloading the repo on the Hub:
```py
import tempfile
import zipfile
from pathlib import Path

from huggingface_hub import CommitScheduler


class ZipScheduler(CommitScheduler):
    def push_to_hub(self):
        # 1. List PNG files
        png_files = list(self.folder_path.glob("*.png"))
        if len(png_files) == 0:
            return None  # return early if nothing to commit

        # 2. Zip png files in a single archive
        with tempfile.TemporaryDirectory() as tmpdir:
            archive_path = Path(tmpdir) / "train.zip"
            with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as archive:
                for png_file in png_files:
                    archive.write(filename=png_file, arcname=png_file.name)

            # 3. Upload archive
            self.api.upload_file(..., path_or_fileobj=archive_path)

        # 4. Delete local png files to avoid re-uploading them later
        for png_file in png_files:
            png_file.unlink()
```
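A custom scheduler is then instantiated exactly like the base class (a sketch with placeholder repo and folder names):
```py
>>> scheduler = ZipScheduler(
...     repo_id="username/my-image-dataset",
...     repo_type="dataset",
...     folder_path="images/",
...     every=30,  # zip and push the PNG files every 30 minutes
... )
```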
When you override `push_to_hub`, you have access to the attributes of [`CommitScheduler`], in particular:
- [`HfApi`] client: `api`
- Folder parameters: `folder_path` and `path_in_repo`
- Repo parameters: `repo_id`, `repo_type`, `revision`
- The thread lock: `lock`
<Tip>
For more examples of custom schedulers, check out our [demo Space](https://huggingface.co/spaces/Wauplin/space_to_dataset_saver)
containing different implementations depending on your use cases.
</Tip>
The [`upload_file`] and [`upload_folder`] functions are high-level APIs that are generally convenient to use. We recommend
trying these functions first if you don't need to work at a lower level. However, if you want to work at the commit level,
you can use the [`create_commit`] function directly.
There are three types of operations supported by [`create_commit`]:
- [`CommitOperationAdd`] uploads a file to the Hub. If the file already exists, the file contents are overwritten. This operation accepts two arguments:
- `path_in_repo`: the repository path to upload a file to.
- `path_or_fileobj`: either a path to a file on your filesystem or a file-like object. This is the content of the file to upload to the Hub.
- [`CommitOperationDelete`] removes a file or a folder from a repository. This operation accepts `path_in_repo` as an argument.
- [`CommitOperationCopy`] copies a file within a repository. This operation accepts three arguments:
- `src_path_in_repo`: the repository path of the file to copy.
- `path_in_repo`: the repository path where the file should be copied.
- `src_revision`: optional - the revision of the file to copy if you want to copy a file from a different branch/revision.
For example, if you want to upload two files and delete a file in a Hub repository:
1. Use the appropriate `CommitOperation` to add, delete, or copy files and to delete a folder:
```py
>>> from huggingface_hub import HfApi, CommitOperationAdd, CommitOperationDelete, CommitOperationCopy
>>> api = HfApi()
>>> operations = [
... CommitOperationAdd(path_in_repo="LICENSE.md", path_or_fileobj="~/repo/LICENSE.md"),
... CommitOperationAdd(path_in_repo="weights.h5", path_or_fileobj="~/repo/weights-final.h5"),
... CommitOperationDelete(path_in_repo="old-weights.h5"),
... CommitOperationDelete(path_in_repo="logs/"), | 35_13_4 |
... CommitOperationCopy(src_path_in_repo="image.png", path_in_repo="duplicate_image.png"),
... ]
```
2. Pass your operations to [`create_commit`]:
```py
>>> api.create_commit(
... repo_id="lysandre/test-model",
... operations=operations,
... commit_message="Upload my model weights and license",
... )
```
In addition to [`upload_file`] and [`upload_folder`], the following functions also use [`create_commit`] under the hood:
- [`delete_file`] deletes a single file from a repository on the Hub.
- [`delete_folder`] deletes an entire folder from a repository on the Hub.
- [`metadata_update`] updates a repository's metadata.
For more detailed information, take a look at the [`HfApi`] reference.
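As an illustration, [`delete_file`] is essentially a single [`CommitOperationDelete`] wrapped in its own commit (repo and file names are placeholders):
```py
>>> api.delete_file(path_in_repo="old-weights.h5", repo_id="lysandre/test-model")
```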
In some cases, you might want to upload huge files to S3 **before** making the commit call. For example, if you are
committing a dataset in several shards that are generated in-memory, you would need to upload the shards one by one
to avoid an out-of-memory issue. A solution is to upload each shard as a separate commit on the repo. While being
committing a dataset in several shards that are generated in-memory, you would need to upload the shards one by one
to avoid an out-of-memory issue. A solution is to upload each shard as a separate commit on the repo. While being
perfectly valid, this solution has the drawback of potentially messing up the git history by generating tens of commits.
To overcome this issue, you can upload your files one by one to S3 and then create a single commit at the end. This
is possible using [`preupload_lfs_files`] in combination with [`create_commit`].
<Tip warning={true}>
This is a power-user method. Directly using [`upload_file`], [`upload_folder`] or [`create_commit`] instead of handling
the low-level logic of pre-uploading files is the way to go in the vast majority of cases. The main caveat of
[`preupload_lfs_files`] is that until the commit is actually made, the uploaded files are not accessible on the repo on
the Hub. If you have a question, feel free to ping us on our Discord or in a GitHub issue.
</Tip>
Here is a simple example illustrating how to pre-upload files:
```py
>>> from huggingface_hub import CommitOperationAdd, preupload_lfs_files, create_commit, create_repo
>>> repo_id = create_repo("test_preupload").repo_id
>>> operations = [] # List of all `CommitOperationAdd` objects that will be generated
>>> for i in range(5):
...     content = ...  # generate binary content
...     addition = CommitOperationAdd(path_in_repo=f"shard_{i}_of_5.bin", path_or_fileobj=content)
...     preupload_lfs_files(repo_id, additions=[addition])
...     operations.append(addition)
>>> # Create commit
>>> create_commit(repo_id, operations=operations, commit_message="Commit all shards")
```
First, we create the [`CommitOperationAdd`] objects one by one. In a real-world example, those would contain the
generated shards. Each file is uploaded before generating the next one. During the [`preupload_lfs_files`] step, **the
`CommitOperationAdd` object is mutated**. You should only use it to pass it directly to [`create_commit`]. The main
update of the object is that **the binary content is removed** from it, meaning that it will be garbage-collected if
you don't store another reference to it. This is expected as we don't want to keep in memory the content that is
already uploaded. Finally, we create the commit by passing all the operations to [`create_commit`]. You can pass
additional operations (add, delete or copy) that have not been processed yet and they will be handled correctly.
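For instance, you could append a deletion to the pre-uploaded additions before committing (the deleted filename is a placeholder):
```py
>>> from huggingface_hub import CommitOperationDelete
>>> operations.append(CommitOperationDelete(path_in_repo="obsolete_shard.bin"))
>>> create_commit(repo_id, operations=operations, commit_message="Commit all shards, drop obsolete one")
```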
All the methods described above use the Hub's API to upload files. This is the recommended way to upload files to the Hub.
However, we also provide [`Repository`], a wrapper around the git tool to manage a local repository.
However, we also provide [`Repository`], a wrapper around the git tool to manage a local repository.
<Tip warning={true}>
Although [`Repository`] is not formally deprecated, we recommend using the HTTP-based methods described above instead.
For more details about this recommendation, please have a look at [this guide](../concepts/git_vs_http) explaining the
core differences between HTTP-based and Git-based approaches.
</Tip>
Git LFS automatically handles files larger than 10MB. However, for very large files (>5GB), you need to install a custom transfer agent for Git LFS:
```bash
huggingface-cli lfs-enable-largefiles path/to/local/repo
```
You should install this for each repository that has a very large file. Once installed, you'll be able to push files larger than 5GB.
The `commit` context manager handles four of the most common Git commands: pull, add, commit, and push. `git-lfs` automatically tracks any file larger than 10MB. In the following example, the `commit` context manager:
1. Pulls from the `text-files` repository.
2. Adds a change made to `file.txt`.
3. Commits the change.
4. Pushes the change to the `text-files` repository.
```python
>>> import json
>>> from huggingface_hub import Repository
>>> with Repository(local_dir="text-files", clone_from="<user>/text-files").commit(commit_message="My first file :)"):
...     with open("file.txt", "w+") as f:
...         f.write(json.dumps({"hey": 8}))
```
Here is another example of how to use the `commit` context manager to save and upload a file to a repository:
```python
>>> import torch
>>> model = torch.nn.Transformer()
>>> with Repository("torch-model", clone_from="<user>/torch-model", token=True).commit(commit_message="My cool model :)"):
...     torch.save(model.state_dict(), "model.pt")
```
Set `blocking=False` if you would like to push your commits asynchronously. Non-blocking behavior is helpful when you want to continue running your script while your commits are being pushed.
```python
>>> with repo.commit(commit_message="My cool model :)", blocking=False):
...     torch.save(model.state_dict(), "model.pt")
```
You can check the status of your push with the `command_queue` attribute:
```python
>>> last_command = repo.command_queue[-1]
>>> last_command.status
```
Refer to the table below for the possible statuses:
| Status | Description |
| -------- | ------------------------------------ |
| -1 | The push is ongoing. |
| 0        | The push has completed successfully. |
| Non-zero | An error has occurred. |
When `blocking=False`, commands are tracked, and your script will only exit when all pushes are completed, even if other errors occur in your script. Some additional useful commands for checking the status of a push include:
```python
# Inspect an error.
>>> last_command.stderr
# Check whether a push is completed or ongoing.
>>> last_command.is_done
# Check whether a push command has errored.
>>> last_command.failed
```
The [`Repository`] class has a [`~Repository.push_to_hub`] function to add files, make a commit, and push them to a repository. Unlike the `commit` context manager, you'll need to pull from a repository first before calling [`~Repository.push_to_hub`].
For example, if you've already cloned a repository from the Hub, then you can initialize the `repo` from the local directory:
```python
>>> from huggingface_hub import Repository
>>> repo = Repository(local_dir="path/to/local/repo")
```
Update your local clone with [`~Repository.git_pull`] and then push your file to the Hub:
```py
>>> repo.git_pull()
>>> repo.push_to_hub(commit_message="Commit my-awesome-file to the Hub")
```
However, if you aren't ready to push a file yet, you can use [`~Repository.git_add`] and [`~Repository.git_commit`] to only add and commit your file:
```py
>>> repo.git_add("path/to/file")
>>> repo.git_commit(commit_message="add my first model config file :)")
```
When you're ready, push the file to your repository with [`~Repository.git_push`]:
```py
>>> repo.git_push()
```
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
The `huggingface_hub` library provides a Python interface to create, share, and update Model Cards.
Visit [the dedicated documentation page](https://huggingface.co/docs/hub/models-cards)
for a deeper view of what Model Cards on the Hub are, and how they work under the hood.
<Tip>
[New (beta)! Try our experimental Model Card Creator App](https://huggingface.co/spaces/huggingface/Model_Cards_Writing_Tool)
</Tip>
To load an existing card from the Hub, you can use the [`ModelCard.load`] function. Here, we'll load the card from [`nateraw/vit-base-beans`](https://huggingface.co/nateraw/vit-base-beans).
```python
from huggingface_hub import ModelCard
card = ModelCard.load('nateraw/vit-base-beans')
```
This card has some helpful attributes that you may want to access/leverage:
- `card.data`: Returns a [`ModelCardData`] instance with the model card's metadata. Call `.to_dict()` on this instance to get the representation as a dictionary.
- `card.text`: Returns the text of the card, *excluding the metadata header*.
- `card.content`: Returns the text content of the card, *including the metadata header*.
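As a quick illustration of these attributes (output depends on the card's actual content):
```python
print(card.data.to_dict())  # metadata as a plain dictionary
print(card.text[:200])      # card body, without the YAML metadata header
print(card.content[:200])   # full card, including the YAML metadata header
```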
To initialize a Model Card from text, just pass the text content of the card to `ModelCard` on init.
```python
content = """
---
language: en
license: mit
---
# My Model Card
""" | 36_3_0 |
card = ModelCard(content)
card.data.to_dict() == {'language': 'en', 'license': 'mit'} # True
```
Another way you might want to do this is with f-strings. In the following example, we:
- Use [`ModelCardData.to_yaml`] to convert metadata we defined to YAML so we can use it to insert the YAML block in the model card.
- Show how you might use a template variable via Python f-strings.
```python
from huggingface_hub import ModelCard, ModelCardData

card_data = ModelCardData(language='en', license='mit', library='timm')
example_template_var = 'nateraw'
content = f"""
---
{ card_data.to_yaml() }
---
# My Model Card
This model was created by [@{example_template_var}](https://github.com/{example_template_var})
"""
card = ModelCard(content)
print(card)
```
The above example would leave us with a card that looks like this:
```
---
language: en
license: mit
library: timm
---
# My Model Card
This model was created by [@nateraw](https://github.com/nateraw)
```
If you have `Jinja2` installed, you can create Model Cards from a jinja template file. Let's see a basic example:
```python
from pathlib import Path
from huggingface_hub import ModelCard, ModelCardData
# Define your jinja template
template_text = """
---
{{ card_data }}
---
# Model Card for MyCoolModel
This model does this and that.
This model was created by [@{{ author }}](https://hf.co/{{author}}).
""".strip()
# Write the template to a file
Path('custom_template.md').write_text(template_text)
# Define card metadata
card_data = ModelCardData(language='en', license='mit', library_name='keras')
# Create card from template, passing it any jinja template variables you want.
# In our case, we'll pass author
card = ModelCard.from_template(card_data, template_path='custom_template.md', author='nateraw')
card.save('my_model_card_1.md')
print(card)
```
The resulting card's markdown looks like this:
```
---
language: en
license: mit
library_name: keras
---
# Model Card for MyCoolModel
This model does this and that.
This model was created by [@nateraw](https://hf.co/nateraw).
```
If you update any `card.data` attributes, the change will be reflected in the card itself.
```python
card.data.library_name = 'timm'
card.data.language = 'fr'
card.data.license = 'apache-2.0'
print(card)
```
Now, as you can see, the metadata header has been updated:
```
---
language: fr
license: apache-2.0
library_name: timm
---
# Model Card for MyCoolModel
This model does this and that.
This model was created by [@nateraw](https://hf.co/nateraw).
```
As you update the card data, you can validate that the card is still valid against the Hub by calling [`ModelCard.validate`]. This ensures that the card passes any validation rules set up on the Hugging Face Hub.
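A minimal sketch of that check; it raises an error if the card fails the Hub's validation rules:
```python
card.validate(repo_type='model')
```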
Instead of using your own template, you can also use the [default template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md), which is a fully featured model card with tons of sections you may want to fill out. Under the hood, it uses [Jinja2](https://jinja.palletsprojects.com/en/3.1.x/) to fill out a template file.
<Tip>
Note that you will have to have Jinja2 installed to use `from_template`. You can do so with `pip install Jinja2`.
</Tip>
```python
card_data = ModelCardData(language='en', license='mit', library_name='keras')
card = ModelCard.from_template(
card_data,
model_id='my-cool-model',
model_description="this model does this and that",
developers="Nate Raw",
repo="https://github.com/huggingface/huggingface_hub",
)
card.save('my_model_card_2.md')
print(card)
```
If you're authenticated with the Hugging Face Hub (either by using `huggingface-cli login` or [`login`]), you can push cards to the Hub by simply calling [`ModelCard.push_to_hub`]. Let's take a look at how to do that...
First, we'll create a new repo called 'hf-hub-modelcards-pr-test' under the authenticated user's namespace:
```python
from huggingface_hub import whoami, create_repo
user = whoami()['name']
repo_id = f'{user}/hf-hub-modelcards-pr-test'
url = create_repo(repo_id, exist_ok=True)
```
Then, we'll create a card from the default template (same as the one defined in the section above):
```python
card_data = ModelCardData(language='en', license='mit', library_name='keras')
card = ModelCard.from_template(
card_data,
model_id='my-cool-model',
model_description="this model does this and that",
developers="Nate Raw",
repo="https://github.com/huggingface/huggingface_hub",
)
```
Finally, we'll push that up to the Hub:
```python
card.push_to_hub(repo_id)
```
You can check out the resulting card [here](https://huggingface.co/nateraw/hf-hub-modelcards-pr-test/blob/main/README.md).
If you instead want to push a card as a pull request, you can just pass `create_pr=True` when calling `push_to_hub`:
```python
card.push_to_hub(repo_id, create_pr=True)
```
A resulting PR created from this command can be seen [here](https://huggingface.co/nateraw/hf-hub-modelcards-pr-test/discussions/3).
In this section, we will see what metadata repo cards contain and how to update it.
`metadata` refers to a hash map (or key-value) context that provides some high-level information about a model, dataset or Space. That information can include details such as the model's `pipeline type`, `model_id` or `model_description`. For more detail, you can take a look at these guides: [Model Card](https://huggingface.co/docs/hub/model-cards#model-card-metadata), [Dataset Card](https://huggingface.co/docs/hub/datasets-cards#dataset-card-metadata) and [Spaces Settings](https://huggingface.co/docs/hub/spaces-settings#spaces-settings).
Now let's see some examples of how to update that metadata.
Let's start with a first example:
```python
>>> from huggingface_hub import metadata_update
>>> metadata_update("username/my-cool-model", {"pipeline_tag": "image-classification"})
```
With these two lines of code, you will update the metadata to set a new `pipeline_tag`.
By default, you cannot update a key that already exists on the card. If you want to do so, you must pass
`overwrite=True` explicitly:
```python
>>> from huggingface_hub import metadata_update
>>> metadata_update("username/my-cool-model", {"pipeline_tag": "text-generation"}, overwrite=True)
```
It often happens that you want to suggest some changes to a repository
on which you don't have write permission. You can do that by creating a PR on that repo, which will allow the owners to
review and merge your suggestions.
```python
>>> from huggingface_hub import metadata_update
>>> metadata_update("someone/model", {"pipeline_tag": "text-classification"}, create_pr=True)
```
To include evaluation results in the metadata `model-index`, you can pass an [`EvalResult`] or a list of `EvalResult` with your associated evaluation results. Under the hood it'll create the `model-index` when you call `card.data.to_dict()`. For more information on how this works, you can check out [this section of the Hub docs](https://huggingface.co/docs/hub/models-cards#evaluation-results).
<Tip>
Note that using this function requires you to include the `model_name` attribute in [`ModelCardData`].
</Tip>
```python
from huggingface_hub import ModelCard, ModelCardData, EvalResult

card_data = ModelCardData(
language='en',
license='mit',
model_name='my-cool-model',
eval_results = EvalResult(
task_type='image-classification',
dataset_type='beans',
dataset_name='Beans',
metric_type='accuracy',
metric_value=0.7
)
)
card = ModelCard.from_template(card_data)
print(card.data)
```
The resulting `card.data` should look like this:
```
language: en
license: mit
model-index:
- name: my-cool-model
  results:
  - task:
      type: image-classification
    dataset:
      name: Beans
      type: beans
    metrics:
    - type: accuracy
      value: 0.7
```
If you have more than one evaluation result you'd like to share, just pass a list of `EvalResult`:
```python
card_data = ModelCardData(
language='en',
license='mit',
model_name='my-cool-model',
eval_results = [
EvalResult(
task_type='image-classification',
dataset_type='beans',
dataset_name='Beans',
metric_type='accuracy',
metric_value=0.7
),
EvalResult(
task_type='image-classification',
dataset_type='beans',
dataset_name='Beans',
metric_type='f1',
metric_value=0.65
)
]
)
card = ModelCard.from_template(card_data)
card.data
```
Which should leave you with the following `card.data`:
```
language: en
license: mit
model-index:
- name: my-cool-model
  results:
  - task:
      type: image-classification
    dataset:
      name: Beans
      type: beans
    metrics:
    - type: accuracy
      value: 0.7
    - type: f1
      value: 0.65
```