1 parent 818e93e commit 88e066f
content/learning-paths/servers-and-cloud-computing/distributed-inference-with-llama-cpp/how-to-1.md
@@ -81,7 +81,7 @@ Add the following code:
 ```python
 import os
 from huggingface_hub import snapshot_download
-model_id = "meta-llama/llama-3.1-70B"
+model_id = "meta-llama/Llama-3.1-70B"
 local_dir = "llama-hf"
 
 # Create the directory if it doesn't exist
@@ -188,4 +188,4 @@ Allowed quantization types:
 32 or BF16 : 14.00G, -0.0050 ppl @ Mistral-7B
 0 or F32 : 26.00G @ 7B
 COPY : only copy tensors, no quantizing
-```
+```
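The one-character fix matters because repository ids on the Hugging Face Hub are case-sensitive, so `meta-llama/llama-3.1-70B` does not resolve while `meta-llama/Llama-3.1-70B` does. The corrected snippet can be sketched end to end as follows; this is a minimal sketch, not the full learning-path code, and the guarded download assumes `huggingface_hub` is installed and that your token has accepted the gated Llama license:

```python
import os

model_id = "meta-llama/Llama-3.1-70B"  # Hub repo ids are case-sensitive
local_dir = "llama-hf"

# Create the target directory if it doesn't exist
os.makedirs(local_dir, exist_ok=True)

if __name__ == "__main__":
    # Heavy, gated download: requires huggingface_hub, an authenticated
    # token (huggingface-cli login), and accepted Llama license terms.
    from huggingface_hub import snapshot_download

    snapshot_download(repo_id=model_id, local_dir=local_dir)
```

Keeping the `snapshot_download` call behind the `__main__` guard lets the module be imported (for example, to reuse `model_id`) without triggering a multi-gigabyte download.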