
Commit 314ef41

Merge pull request #206 from huggingface/main
Merge changes
2 parents 51cfa06 + d9023a6 commit 314ef41

Large commits have some content hidden by default; only a subset of the changed files is rendered below.

50 files changed: +4506 −166 lines

docs/source/en/api/pipelines/hunyuan_video.md

Lines changed: 2 additions & 1 deletion
@@ -50,7 +50,8 @@ The following models are available for the image-to-video pipeline:
| Model name | Description |
|:---|:---|
| [`Skywork/SkyReels-V1-Hunyuan-I2V`](https://huggingface.co/Skywork/SkyReels-V1-Hunyuan-I2V) | Skywork's custom finetune of HunyuanVideo (de-distilled). Performs best at `97x544x960` resolution, `guidance_scale=1.0`, `true_cfg_scale=6.0` and a negative prompt. |
-| [`hunyuanvideo-community/HunyuanVideo-I2V`](https://huggingface.co/hunyuanvideo-community/HunyuanVideo-I2V) | Tencent's official HunyuanVideo I2V model. Performs best at resolutions of 480, 720, 960, 1280. A higher `shift` value when initializing the scheduler is recommended (good values are between 7 and 20). |
+| [`hunyuanvideo-community/HunyuanVideo-I2V-33ch`](https://huggingface.co/hunyuanvideo-community/HunyuanVideo-I2V) | Tencent's official HunyuanVideo 33-channel I2V model. Performs best at resolutions of 480, 720, 960, 1280. A higher `shift` value when initializing the scheduler is recommended (good values are between 7 and 20). |
+| [`hunyuanvideo-community/HunyuanVideo-I2V`](https://huggingface.co/hunyuanvideo-community/HunyuanVideo-I2V) | Tencent's official HunyuanVideo 16-channel I2V model. Performs best at resolutions of 480, 720, 960, 1280. A higher `shift` value when initializing the scheduler is recommended (good values are between 7 and 20). |

## Quantization
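A minimal sketch (not from this diff) of what the table's `shift` recommendation looks like in practice, assuming the public `HunyuanVideoImageToVideoPipeline` and `FlowMatchEulerDiscreteScheduler` classes; verify the class names and the `from_config` override against your installed diffusers version:

```python
# Sketch only: the model ID comes from the table above; shift=7.0 follows the
# "good values are between 7 and 20" recommendation.
import torch
from diffusers import FlowMatchEulerDiscreteScheduler, HunyuanVideoImageToVideoPipeline

pipe = HunyuanVideoImageToVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo-I2V", torch_dtype=torch.bfloat16
)
# Re-create the scheduler with a higher shift, as recommended for this checkpoint.
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipe.scheduler.config, shift=7.0)
pipe.to("cuda")
```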

docs/source/en/api/pipelines/wan.md

Lines changed: 394 additions & 11 deletions
Large diffs are not rendered by default.

docs/source/en/installation.md

Lines changed: 7 additions & 5 deletions
@@ -161,10 +161,10 @@ Your Python environment will find the `main` version of 🤗 Diffusers on the ne

Model weights and files are downloaded from the Hub to a cache which is usually your home directory. You can change the cache location by specifying the `HF_HOME` or `HUGGINGFACE_HUB_CACHE` environment variables or configuring the `cache_dir` parameter in methods like [`~DiffusionPipeline.from_pretrained`].

-Cached files allow you to run 🤗 Diffusers offline. To prevent 🤗 Diffusers from connecting to the internet, set the `HF_HUB_OFFLINE` environment variable to `True` and 🤗 Diffusers will only load previously downloaded files in the cache.
+Cached files allow you to run 🤗 Diffusers offline. To prevent 🤗 Diffusers from connecting to the internet, set the `HF_HUB_OFFLINE` environment variable to `1` and 🤗 Diffusers will only load previously downloaded files in the cache.

```shell
-export HF_HUB_OFFLINE=True
+export HF_HUB_OFFLINE=1
```

For more details about managing and cleaning the cache, take a look at the [caching](https://huggingface.co/docs/huggingface_hub/guides/manage-cache) guide.
@@ -179,14 +179,16 @@ Telemetry is only sent when loading models and pipelines from the Hub,
and it is not collected if you're loading local files.

We understand that not everyone wants to share additional information, and we respect your privacy.
-You can disable telemetry collection by setting the `DISABLE_TELEMETRY` environment variable from your terminal:
+You can disable telemetry collection by setting the `HF_HUB_DISABLE_TELEMETRY` environment variable from your terminal:

On Linux/MacOS:
+
```bash
-export DISABLE_TELEMETRY=YES
+export HF_HUB_DISABLE_TELEMETRY=1
```

On Windows:
+
```bash
-set DISABLE_TELEMETRY=YES
+set HF_HUB_DISABLE_TELEMETRY=1
```
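For readers who prefer setting these variables from Python rather than the shell, a minimal sketch (not from this diff), assuming they are set before `diffusers`/`huggingface_hub` are imported so the Hub client picks them up:

```python
import os

# Assumption: these must be set before any Hub access is attempted.
os.environ["HF_HUB_OFFLINE"] = "1"            # only use previously cached files
os.environ["HF_HUB_DISABLE_TELEMETRY"] = "1"  # opt out of telemetry

from diffusers import DiffusionPipeline

# Resolves entirely from the local cache and sends no telemetry.
pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
```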

docs/source/en/optimization/memory.md

Lines changed: 20 additions & 0 deletions
@@ -198,6 +198,18 @@ export_to_video(video, "output.mp4", fps=8)

Group offloading (for CUDA devices with support for asynchronous data transfer streams) overlaps data transfer and computation to reduce the overall execution time compared to sequential offloading. This is enabled using layer prefetching with CUDA streams. The next layer to be executed is loaded onto the accelerator device while the current layer is being executed - this increases the memory requirements slightly. Group offloading also supports leaf-level offloading (equivalent to sequential CPU offloading) but can be made much faster when using streams.

+<Tip>
+
+- Group offloading may not work with all models out-of-the-box. If the forward implementations of the model contain weight-dependent device-casting of inputs, it may clash with the offloading mechanism's handling of device-casting.
+- The `offload_type` parameter can be set to either `block_level` or `leaf_level`. `block_level` offloads groups of `torch.nn.ModuleList` or `torch.nn.Sequential` modules based on the configurable attribute `num_blocks_per_group`. For example, if you set `num_blocks_per_group=2` on a standard transformer model containing 40 layers, it will onload/offload 2 layers at a time for a total of 20 onloads/offloads. This drastically reduces the VRAM requirements. `leaf_level` offloads individual layers at the lowest level, which is equivalent to sequential offloading. However, unlike sequential offloading, group offloading can be made much faster when using streams, with minimal compromise to end-to-end generation time.
+- The `use_stream` parameter can be used with CUDA devices to enable prefetching layers for onload. It defaults to `False`. Layer prefetching allows overlapping computation and data transfer of model weights, which drastically reduces the overall execution time compared to other offloading methods. However, it can significantly increase CPU RAM usage. Ensure that the available CPU RAM is at least twice the size of the model when setting `use_stream=True`. You can find more information about CUDA streams [here](https://pytorch.org/docs/stable/generated/torch.cuda.Stream.html).
+- If specifying `use_stream=True` on VAEs with tiling enabled, make sure to do a dummy forward pass (possibly with dummy inputs) before the actual inference to avoid device-mismatch errors. This may not work on all implementations. Please open an issue if you encounter any problems.
+- The `low_cpu_mem_usage` parameter can be set to `True` to reduce CPU memory usage when using streams for group offloading. This is useful when CPU memory is the bottleneck, but it may counteract the benefits of using streams and increase the overall execution time. The CPU memory savings come from creating pinned tensors on-the-fly instead of pre-pinning them. This parameter is better suited for `leaf_level` offloading.
+
+For more information about available parameters and an explanation of how group offloading works, refer to [`~hooks.group_offloading.apply_group_offloading`] (a usage sketch follows this tip).
+
+</Tip>
+
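A minimal sketch (not from this diff) of the block-level group offloading described in the tip above, assuming `apply_group_offloading` is importable from `diffusers.hooks` and accepts the parameters named there (`offload_type`, `num_blocks_per_group`, `use_stream`); check the exact signature against your installed version:

```python
# Sketch only: offloads the video transformer in groups of blocks and prefetches
# the next group on a CUDA stream while the current one runs.
import torch
from diffusers import HunyuanVideoPipeline
from diffusers.hooks import apply_group_offloading

pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", torch_dtype=torch.bfloat16
)

apply_group_offloading(
    pipe.transformer,
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="block_level",
    num_blocks_per_group=4,  # onload/offload 4 transformer blocks at a time
    use_stream=True,         # prefetch the next group; needs roughly 2x the model size in CPU RAM
)
```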
## FP8 layerwise weight-casting

PyTorch supports `torch.float8_e4m3fn` and `torch.float8_e5m2` as weight storage dtypes, but they can't be used for computation in many different tensor operations due to unimplemented kernel support. However, you can use these dtypes to store model weights in fp8 precision and upcast them on-the-fly when the layers are used in the forward pass. This is known as layerwise weight-casting.
@@ -235,6 +247,14 @@ In the above example, layerwise casting is enabled on the transformer component

However, you gain more control and flexibility by directly utilizing the [`~hooks.layerwise_casting.apply_layerwise_casting`] function instead of [`~ModelMixin.enable_layerwise_casting`].

+<Tip>
+
+- Layerwise casting may not work with all models out-of-the-box. Sometimes, the forward implementations of the model might contain internal typecasting of weight values. Such implementations are not supported due to the currently simplistic implementation of layerwise casting, which assumes that the forward pass is independent of the weight precision and that the input dtypes are always in `compute_dtype`. An example of an incompatible implementation can be found [here](https://github.com/huggingface/transformers/blob/7f5077e53682ca855afc826162b204ebf809f1f9/src/transformers/models/t5/modeling_t5.py#L294-L299).
+- Layerwise casting may fail on custom modeling implementations that make use of [PEFT](https://github.com/huggingface/peft) layers. Some minimal checks to handle this case are implemented but are not extensively tested or guaranteed to work in all cases.
+- It can also be applied partially to specific layers of a model. Partial application can either be done manually by calling the `apply_layerwise_casting` function on specific internal modules, or by specifying the `skip_modules_pattern` and `skip_modules_classes` parameters for a root module. These parameters are particularly useful for layers such as normalization and modulation (see the sketch after this tip).
+
+</Tip>
+
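A minimal sketch (not from this diff) of partially applied layerwise casting via `apply_layerwise_casting`, assuming it is importable from `diffusers.hooks` and accepts `skip_modules_pattern` as described in the tip above; verify against your installed version:

```python
# Sketch only: store the transformer weights in fp8 and upcast to bf16 during the
# forward pass, while skipping normalization and patch-embedding layers.
import torch
from diffusers import HunyuanVideoPipeline
from diffusers.hooks import apply_layerwise_casting

pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", torch_dtype=torch.bfloat16
)

apply_layerwise_casting(
    pipe.transformer,
    storage_dtype=torch.float8_e4m3fn,  # low-precision storage dtype
    compute_dtype=torch.bfloat16,       # dtype used during the forward pass
    skip_modules_pattern=["norm", "patch_embed"],  # keep sensitive layers in bf16
)
```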
## Channels-last memory format

The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worse performance but you should still try and see if it works for your model.

docs/source/en/using-diffusers/loading.md

Lines changed: 17 additions & 0 deletions
@@ -95,6 +95,23 @@ Use the Space below to gauge a pipeline's memory requirements before you downloa
></iframe>
</div>

+### Specifying Component-Specific Data Types
+
+You can customize the data types for individual sub-models by passing a dictionary to the `torch_dtype` parameter. This allows you to load different components of a pipeline in different floating point precisions. For instance, if you want to load the transformer with `torch.bfloat16` and all other components with `torch.float16`, you can pass a dictionary mapping:
+
+```python
+from diffusers import HunyuanVideoPipeline
+import torch
+
+pipe = HunyuanVideoPipeline.from_pretrained(
+    "hunyuanvideo-community/HunyuanVideo",
+    torch_dtype={'transformer': torch.bfloat16, 'default': torch.float16},
+)
+print(pipe.transformer.dtype, pipe.vae.dtype)  # (torch.bfloat16, torch.float16)
+```
+
+If a component is not explicitly specified in the dictionary and no `default` is provided, it will be loaded with `torch.float32`.
+
### Local pipeline

To load a pipeline locally, use [git-lfs](https://git-lfs.github.com/) to manually download a checkpoint to your local disk.

docs/source/ko/training/controlnet.md

Lines changed: 0 additions & 6 deletions
@@ -66,12 +66,6 @@ from accelerate.utils import write_basic_config
write_basic_config()
```

-## Circle-filling dataset
-
-The original dataset is hosted in the ControlNet [repo](https://huggingface.co/lllyasviel/ControlNet/blob/main/training/fill50k.zip), but we re-uploaded it [here](https://huggingface.co/datasets/fusing/fill50k) to make it compatible with 🤗 Datasets, so that the training script can handle the data loading.
-
-Our training example uses [`stable-diffusion-v1-5/stable-diffusion-v1-5`](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5), which was originally used to train ControlNet. However, a ControlNet can be trained to augment any compatible Stable Diffusion model, such as [`CompVis/stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) or [`stabilityai/stable-diffusion-2-1`](https://huggingface.co/stabilityai/stable-diffusion-2-1).
-
To use your own dataset, check out the [Create a dataset for training](create_dataset) guide.

## Training

examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py

Lines changed: 32 additions & 33 deletions
@@ -71,6 +71,7 @@
    convert_unet_state_dict_to_peft,
    is_wandb_available,
)
+from diffusers.utils.hub_utils import load_or_create_model_card, populate_model_card
from diffusers.utils.import_utils import is_xformers_available
from diffusers.utils.torch_utils import is_compiled_module

@@ -101,7 +102,7 @@ def determine_scheduler_type(pretrained_model_name_or_path, revision):
def save_model_card(
    repo_id: str,
    use_dora: bool,
-    images=None,
+    images: list = None,
    base_model: str = None,
    train_text_encoder=False,
    train_text_encoder_ti=False,
@@ -111,20 +112,17 @@ def save_model_card(
    repo_folder=None,
    vae_path=None,
):
-    img_str = "widget:\n"
    lora = "lora" if not use_dora else "dora"
-    for i, image in enumerate(images):
-        image.save(os.path.join(repo_folder, f"image_{i}.png"))
-        img_str += f"""
-        - text: '{validation_prompt if validation_prompt else ' ' }'
-          output:
-            url:
-                "image_{i}.png"
-        """
-    if not images:
-        img_str += f"""
-        - text: '{instance_prompt}'
-        """
+
+    widget_dict = []
+    if images is not None:
+        for i, image in enumerate(images):
+            image.save(os.path.join(repo_folder, f"image_{i}.png"))
+            widget_dict.append(
+                {"text": validation_prompt if validation_prompt else " ", "output": {"url": f"image_{i}.png"}}
+            )
+    else:
+        widget_dict.append({"text": instance_prompt})
    embeddings_filename = f"{repo_folder}_emb"
    instance_prompt_webui = re.sub(r"<s\d+>", "", re.sub(r"<s\d+>", embeddings_filename, instance_prompt, count=1))
    ti_keys = ", ".join(f'"{match}"' for match in re.findall(r"<s\d+>", instance_prompt))
@@ -169,23 +167,7 @@ def save_model_card(
to trigger concept `{key}` → use `{tokens}` in your prompt \n
"""

-    yaml = f"""---
-tags:
-- stable-diffusion-xl
-- stable-diffusion-xl-diffusers
-- diffusers-training
-- text-to-image
-- diffusers
-- {lora}
-- template:sd-lora
-{img_str}
-base_model: {base_model}
-instance_prompt: {instance_prompt}
-license: openrail++
----
-"""
-
-    model_card = f"""
+    model_description = f"""
# SDXL LoRA DreamBooth - {repo_id}

<Gallery />
@@ -234,8 +216,25 @@ def save_model_card(

{license}
"""
-    with open(os.path.join(repo_folder, "README.md"), "w") as f:
-        f.write(yaml + model_card)
+    model_card = load_or_create_model_card(
+        repo_id_or_path=repo_id,
+        from_training=True,
+        license="openrail++",
+        base_model=base_model,
+        prompt=instance_prompt,
+        model_description=model_description,
+        widget=widget_dict,
+    )
+    tags = [
+        "text-to-image",
+        "stable-diffusion-xl",
+        "stable-diffusion-xl-diffusers",
+        "text-to-image",
+        "diffusers",
+        lora,
+        "template:sd-lora",
+    ]
+    model_card = populate_model_card(model_card, tags=tags)


def log_validation(
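A minimal standalone sketch (not from this diff) of the model-card helpers this refactor switches to, using only the arguments that appear in the hunks above; the example values are hypothetical, and it assumes the returned object is a huggingface_hub `ModelCard` so `.save()` is available:

```python
# Sketch only: exercises load_or_create_model_card / populate_model_card with the
# same argument names used in the refactored save_model_card above.
from diffusers.utils.hub_utils import load_or_create_model_card, populate_model_card

model_card = load_or_create_model_card(
    repo_id_or_path="my-user/my-sdxl-lora",  # hypothetical repo id
    from_training=True,
    license="openrail++",
    base_model="stabilityai/stable-diffusion-xl-base-1.0",
    prompt="a photo of sks dog",             # hypothetical instance prompt
    model_description="# SDXL LoRA DreamBooth - my-user/my-sdxl-lora",
    widget=[{"text": "a photo of sks dog", "output": {"url": "image_0.png"}}],
)
model_card = populate_model_card(model_card, tags=["text-to-image", "lora", "template:sd-lora"])
model_card.save("README.md")  # write the populated card to disk
```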
