I first train meetkai/functionary-small-v3.2 using `deepspeed functionary/train/train_lora.py` with your provided params. Then I run the following script to serve LoRA adapters at startup:

```
python server_vllm.py --model meetkai/functionary-small-v3.2 --enable-lora --lora-modules {name}={path} --host 0.0.0.0 --port 8000
```
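For reference, a request of the kind I send looks roughly like the sketch below, using the OpenAI-compatible route the server exposes. The adapter name "my-lora" is a placeholder for the {name} registered with --lora-modules, and the API key is a dummy value:

```python
# Sketch of a request against the server above; "my-lora" stands in for
# the {name} registered with --lora-modules, and the API key is a dummy.
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="dummy")

resp = client.chat.completions.create(
    model="my-lora",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```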
The issue comes when I send such a request to the model: vLLM fails with the following message:
```
INFO:     131.226.33.184:53594 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/vllm/lora/worker_manager.py", line 105, in _load_adapter
    lora = self._lora_model_cls.from_local_checkpoint(
  File "/opt/conda/lib/python3.10/site-packages/vllm/lora/models.py", line 221, in from_local_checkpoint
    peft_helper = PEFTHelper.from_dict(config)
  File "/opt/conda/lib/python3.10/site-packages/vllm/lora/peft_helper.py", line 80, in from_dict
    return cls(**filtered_dict)
  File "<string>", line 14, in __init__
  File "/opt/conda/lib/python3.10/site-packages/vllm/lora/peft_helper.py", line 45, in __post_init__
    self._validate_features()
  File "/opt/conda/lib/python3.10/site-packages/vllm/lora/peft_helper.py", line 42, in _validate_features
    raise ValueError(f"{', '.join(error_msg)}")
ValueError: vLLM only supports modules_to_save being None.
```
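The error comes from vLLM's PEFTHelper validation: the adapter's adapter_config.json apparently sets modules_to_save (LoRA training setups that resize embeddings often save lm_head/embed_tokens there), and vLLM refuses to load such adapters. A minimal way to confirm what is being rejected, assuming a local adapter directory (the path is a placeholder):

```python
# Minimal sketch: check whether the adapter config sets modules_to_save,
# which is exactly what vLLM's PEFTHelper rejects in the traceback above.
# "/path/to/adapter" is a placeholder for the directory passed to --lora-modules.
import json

with open("/path/to/adapter/adapter_config.json") as f:
    cfg = json.load(f)

# vLLM can only load the adapter if this prints None.
print(cfg.get("modules_to_save"))
```

If it is set, one common workaround (not discussed in this thread) is to merge the adapter into the base model with PEFT's merge_and_unload() and serve the merged weights instead of a LoRA adapter.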
@selectorseb Hi, may I ask whether you installed the dependencies following this? It seems like you could be using a newer vLLM version. I tried with my own LoRA adapter after installing the dependencies, and it works for me.
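To rule out a version mismatch, a quick check of the installed package against the version pinned in the repo's installation instructions:

```python
# Print the installed vLLM version to compare against the version pinned
# by functionary's installation instructions.
import vllm

print(vllm.__version__)
```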
I would love to ask about the process of fine-tuning meetkai/functionary-small-v3.2 on your custom dataset. Could you provide more information about your fine-tuning process? For example, the training data samples, the parameters used during fine-tuning, or the commands used to fine-tune that model.