

Florence-2 QLoRA support #108

Open
2 tasks done
0xD4rky opened this issue Jan 11, 2025 · 5 comments
Assignees
Labels
enhancement New feature or request help wanted Extra attention is needed model Request to add / extend support for the model.

Comments

0xD4rky commented Jan 11, 2025

Search before asking

  • I have searched the Multimodal Maestro issues and found no similar feature requests.

Description

I was going through the maestro repo and found that neither the PaliGemma nor the Florence-2 models support 4-bit quantization (i.e. fine-tuning with a QLoRA config).

Use case

With QLoRA, we could fine-tune vision-language models even on low-end devices without losing much precision. As models grow, we will eventually need QLoRA to keep fine-tuning fast and feasible under memory constraints.
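For illustration, the requested QLoRA setup would look something like the following. This is a minimal sketch assuming the standard transformers / peft / bitsandbytes stack; the checkpoint name and the `target_modules` list are placeholders and would need to match Florence-2's actual module names, which this sketch does not verify.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization config, as used in the QLoRA paper
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the base model in 4-bit precision
# (checkpoint name is an assumption for this sketch)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-base-ft",
    quantization_config=bnb_config,
    trust_remote_code=True,
)

# Prepare the quantized model for training (casts norm layers,
# enables input gradients for checkpointing)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; target_modules below are illustrative and
# must be replaced with the real Florence-2 projection layer names
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only adapter weights are trainable
```

The key difference from the existing LoRA path would be the `BitsAndBytesConfig` passed at load time plus the `prepare_model_for_kbit_training` call; the LoRA adapter wiring itself stays the same.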

Additional

I'd like to hear your take on implementing quantization.

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
@0xD4rky 0xD4rky added the enhancement New feature or request label Jan 11, 2025
@SkalskiP SkalskiP added the model Request to add / extend support for the model. label Feb 6, 2025
SkalskiP (Collaborator) commented Feb 6, 2025

Hi @0xD4rky 👋🏻 Thank you for your interest in maestro. Sorry for the late reply, I've been heavily involved in delivering maestro-1.0.0 over the past few weeks. I've managed to add QLoRA support for PaliGemma 2 and Qwen2.5-VL. Unfortunately, for Florence-2 we only have LoRA for now. It would be great if someone in the community would like to add QLoRA.

@0xD4rky I saw that you checked the Yes I'd like to help by submitting a PR! checkbox. Would you like to give it a try?

@SkalskiP SkalskiP changed the title fine-tuning via quantization Florence-2 QLoRA support Feb 6, 2025
@SkalskiP SkalskiP added the help wanted Extra attention is needed label Feb 6, 2025
0xD4rky (Author) commented Feb 6, 2025

Yes @SkalskiP, it would be absolutely great to contribute to maestro. I'll go through the codebase, develop the QLoRA approach for Florence-2, and then raise a PR. Does that work?

SkalskiP (Collaborator) commented Feb 6, 2025

@0xD4rky that would be awesome! 🔥 I'm excited. We are building something cool here.

SkalskiP (Collaborator) commented Feb 7, 2025

@0xD4rky I'll assign this issue to you!

BTW I have a question: I'm thinking about launching a Discord server dedicated to VLM fine-tuning with Maestro, and while discussing current issues I'm trying to gauge whether people would like such a server to be created.

0xD4rky (Author) commented Feb 9, 2025

@SkalskiP, people would love to hop on a Roboflow Discord server in general. It would be great to discuss fine-tuning approaches using Maestro; people would definitely be interested!
