
Commit fd2a29d

Fix more typos (#40627)
Fix typos

Signed-off-by: Yuanyuan Chen <[email protected]>
1 parent bb8e9cd commit fd2a29d

4 files changed, +5 -5 lines changed


docs/source/en/kv_cache.md

Lines changed: 1 addition & 1 deletion
@@ -102,7 +102,7 @@ You may want to consider offloading if you have a small GPU and you're getting o
 Offloading is available for both [`DynamicCache`] and [`StaticCache`]. You can enable it by configuring `cache_implementation="offloaded"` for the dynamic version, or `cache_implementation="offloaded_static"` for the static version, in either [`GenerationConfig`] or [`~GenerationMixin.generate`].
 Additionally, you can also instantiate your own [`DynamicCache`] or [`StaticCache`] with the `offloading=True` option, and pass this cache in `generate` or your model's `forward` (for example, `past_key_values=DynamicCache(config=model.config, offloading=True)` for a dynamic cache).
 
-Note that the 2 [`Cache`] classes mentionned above have an additional option when instantiating them directly, `offload_only_non_sliding`.
+Note that the 2 [`Cache`] classes mentioned above have an additional option when instantiating them directly, `offload_only_non_sliding`.
 This additional argument decides if the layers using sliding window/chunk attention (if any), will be offloaded as well. Since
 these layers are usually short anyway, it may be better to avoid offloading them, as offloading may incur a speed penalty. By default, this option is `False` for [`DynamicCache`], and `True` for [`StaticCache`].
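To see how the options in this hunk fit together, here is a minimal sketch of offloaded generation; the checkpoint name and prompt are placeholders, and the cache arguments are taken from the doc text above, so treat it as illustrative rather than the page's own example.

```python
# Minimal sketch: offloaded KV cache during generation.
# "my-org/my-model" and the prompt are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

model = AutoModelForCausalLM.from_pretrained("my-org/my-model", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("my-org/my-model")
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)

# offloading=True moves cached keys/values off the accelerator between steps;
# offload_only_non_sliding=True keeps any sliding-window layers resident,
# since offloading those short layers may cost more than it saves.
cache = DynamicCache(config=model.config, offloading=True, offload_only_non_sliding=True)
out = model.generate(**inputs, past_key_values=cache, max_new_tokens=20)
```

Equivalently, per the doc text, passing `cache_implementation="offloaded"` in [`GenerationConfig`] or [`~GenerationMixin.generate`] enables the same behavior without constructing the cache yourself.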

docs/source/en/model_doc/gptsan-japanese.md

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ The `generate()` method can be used to generate text using GPTSAN-Japanese model
 >>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").to(device)
 >>> x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
 >>> torch.manual_seed(0)
->>> gen_tok = model.generate(x_tok.input_ids.to(model.device), token_type_ids=x_tok.token_type_ids.to(mdoel.device), max_new_tokens=20)
+>>> gen_tok = model.generate(x_tok.input_ids.to(model.device), token_type_ids=x_tok.token_type_ids.to(model.device), max_new_tokens=20)
 >>> tokenizer.decode(gen_tok[0])
 '織田信長は、2004年に『戦国BASARA』のために、豊臣秀吉'
 ```

docs/source/en/tasks/visual_document_retrieval.md

Lines changed: 1 addition & 1 deletion
@@ -79,7 +79,7 @@ Index the images offline, and during inference, return the query text embeddings
 Store the image and image embeddings by writing them to the dataset with [`~datasets.Dataset.map`] as shown below. Add an `embeddings` column that contains the indexed embeddings. ColPali embeddings take up a lot of storage, so remove them from the accelerator and store them in the CPU as NumPy vectors.
 
 ```python
-ds_with_embeddings = dataset.map(lambda example: {'embeddings': model(**processor(images=example["image"]).to(devide), return_tensors="pt").embeddings.to(torch.float32).detach().cpu().numpy()})
+ds_with_embeddings = dataset.map(lambda example: {'embeddings': model(**processor(images=example["image"]).to(device), return_tensors="pt").embeddings.to(torch.float32).detach().cpu().numpy()})
 ```
 
 For online inference, create a function to search the image embeddings in batches and retrieve the k-most relevant images. The function below returns the indices in the dataset and their scores for a given indexed dataset, text embeddings, number of top results, and the batch size.
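As a rough illustration of the batched search that paragraph describes (the actual function lives further down the doc page and is not part of this hunk), a hedged sketch might look like the following; the function name, the batching strategy, and the use of the processor's late-interaction scorer are assumptions here.

```python
# Illustrative sketch of batched top-k retrieval over the indexed dataset.
# `search` and its internals are assumptions, not the doc page's code.
import torch

def search(query_embeddings, ds_with_embeddings, processor, k=5, batch_size=4):
    scores = []
    for i in range(0, len(ds_with_embeddings), batch_size):
        batch = ds_with_embeddings[i : i + batch_size]["embeddings"]
        # Drop the leading batch dim that the map() above stores per example.
        passage_embeddings = [torch.tensor(e, dtype=torch.float32).squeeze(0) for e in batch]
        # ColPali-style late-interaction (MaxSim) scoring between the query
        # and each page image; the scorer is assumed to take (queries, passages).
        scores.append(processor.score_retrieval(query_embeddings, passage_embeddings))
    scores = torch.cat(scores, dim=1)
    top = torch.topk(scores.squeeze(0), k)
    return top.indices.tolist(), top.values.tolist()
```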

examples/modular-transformers/modeling_new_task_model.py

Lines changed: 2 additions & 2 deletions
@@ -232,7 +232,7 @@ def get_placeholder_mask(
         self, input_ids: torch.LongTensor, inputs_embeds: torch.FloatTensor, image_features: torch.FloatTensor
     ):
         """
-        Obtains multimodal placeholdr mask from `input_ids` or `inputs_embeds`, and checks that the placeholder token count is
+        Obtains multimodal placeholder mask from `input_ids` or `inputs_embeds`, and checks that the placeholder token count is
         equal to the length of multimodal features. If the lengths are different, an error is raised.
         """
         if input_ids is None:
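The docstring fixed here describes a count check between placeholder tokens and image features; in isolation, that pattern looks roughly like the sketch below (the token id, shapes, and function name are illustrative, not this model's actual code).

```python
# Illustrative stand-alone version of the placeholder/feature count check.
# `image_token_id` and the feature shapes are made-up example values.
import torch

def check_placeholder_count(input_ids: torch.LongTensor, image_features: torch.FloatTensor, image_token_id: int = 32000):
    placeholder_mask = input_ids == image_token_id
    n_placeholders = int(placeholder_mask.sum())
    n_features = image_features.shape[0] * image_features.shape[1]  # images x tokens per image
    if n_placeholders != n_features:
        raise ValueError(
            f"Placeholder tokens ({n_placeholders}) do not match image features ({n_features})"
        )
    return placeholder_mask
```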
@@ -406,7 +406,7 @@ def get_decoder(self):
     def get_image_features(self, pixel_values):
         return self.model.get_image_features(pixel_values)
 
-    # Make modules available throught conditional class for BC
+    # Make modules available through conditional class for BC
     @property
     def language_model(self):
         return self.model.language_model
