Conversation

maziyarpanahi

Adding information regarding the parameters accepted by `load_model()`:

- `name: str` accepts `mini`, `base`, `standard`, `large`, and `huge`
- `dtype: str=None` accepts `float16` (the default when `name` is set to `huge`) and `float32`
- `num_gpus: int=None` accepts the total number of GPUs to use (defaults to `8`)

Will resolve issue #3.
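The defaults described above can be sketched as a small, self-contained stub. This is a hypothetical illustration of the documented parameter handling, not the library's actual `load_model()` implementation; only the parameter names, accepted values, and defaults come from the list above, and everything else is an assumption:

```python
# Hypothetical sketch of how load_model() might resolve its parameters,
# based solely on the documentation in this PR.
def load_model(name: str, dtype: str = None, num_gpus: int = None) -> dict:
    valid_names = {"mini", "base", "standard", "large", "huge"}
    if name not in valid_names:
        raise ValueError(f"name must be one of {sorted(valid_names)}, got {name!r}")

    # Assumption: float16 is the default for the huge model when dtype is unset.
    if dtype is None:
        dtype = "float16" if name == "huge" else "float32"
    if dtype not in {"float16", "float32"}:
        raise ValueError(f"dtype must be 'float16' or 'float32', got {dtype!r}")

    # The PR states the default number of GPUs is 8.
    if num_gpus is None:
        num_gpus = 8

    return {"name": name, "dtype": dtype, "num_gpus": num_gpus}

print(load_model("base"))
# {'name': 'base', 'dtype': 'float32', 'num_gpus': 8}
print(load_model("huge"))
# {'name': 'huge', 'dtype': 'float16', 'num_gpus': 8}
print(load_model("large", dtype="float16", num_gpus=4))
# {'name': 'large', 'dtype': 'float16', 'num_gpus': 4}
```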
@maziyarpanahi (Author)

Just resolved the conflict, @mkardas.

Successfully merging this pull request may close this issue: RuntimeError: CUDA error: invalid device ordinal