Currently, the repository converts decoder-only LLMs into embedding models through full-model fine-tuning or conversion. To improve efficiency and reduce computational cost, we should add support for LoRA (Low-Rank Adaptation) and similar PEFT methods, allowing users to adapt base models while training only a small fraction of the parameters.
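To illustrate why LoRA reduces the trainable parameter count, here is a minimal NumPy sketch of the core idea: the base weight `W` stays frozen, and only a low-rank pair `A`, `B` is trained, with the adapted forward pass computing `W x + (alpha/r) * B A x`. This is an illustrative standalone example, not the proposed integration; in practice the feature would likely build on an existing PEFT library (e.g. Hugging Face `peft`'s `LoraConfig` / `get_peft_model`). All class and parameter names below are hypothetical.

```python
import numpy as np

class LoRALinear:
    """Sketch of a LoRA-adapted linear layer (hypothetical, for illustration).

    The frozen base weight W is augmented with a trainable low-rank
    update (alpha / r) * B @ A, where r << min(d_in, d_out).
    """

    def __init__(self, d_in, d_out, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))   # frozen base weight
        self.A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small init
        self.B = np.zeros((d_out, r))  # trainable, zero-init so the update starts at 0
        self.scale = alpha / r

    def forward(self, x):
        # Base projection plus the scaled low-rank correction.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

    def trainable_params(self):
        return self.A.size + self.B.size  # only A and B are updated

    def total_base_params(self):
        return self.W.size


# For a 4096x4096 projection with r=8, LoRA trains well under 1% of the weights.
layer = LoRALinear(d_in=4096, d_out=4096, r=8)
ratio = layer.trainable_params() / layer.total_base_params()
```

Because `B` is zero-initialized, the adapted layer is exactly equivalent to the frozen base model at the start of training, which is the standard LoRA initialization choice.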