I am trying to do federated training of adapters using the Flower framework, but I cannot find a way to get and set the adapter state_dict, similar to what set_peft_model_state_dict does for PEFT. Here is the standard Flower code for getting and setting parameters:
from collections import OrderedDict

import torch
from flwr.common import NDArrays
from peft import get_peft_model_state_dict, set_peft_model_state_dict


def set_parameters(model, parameters: NDArrays) -> None:
    """Change the parameters of the model using the given ones."""
    peft_state_dict_keys = get_peft_model_state_dict(model).keys()
    params_dict = zip(peft_state_dict_keys, parameters)
    state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})
    set_peft_model_state_dict(model, state_dict)


def get_parameters(model) -> NDArrays:
    """Return the parameters of the current net."""
    state_dict = get_peft_model_state_dict(model)
    return [val.cpu().numpy() for _, val in state_dict.items()]
How do I go about doing this with adapters instead of peft?
Any help is appreciated.
This returns a dictionary containing all adapter modules in the following format: {<layer id>: {<module location>: <nn.Module>}}, which can be manipulated directly and/or copied, e.g. to set weights in the LoRA module of the last transformer layer:
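Here is a minimal sketch of how that nested dictionary could be bridged to Flower's NDArrays. It assumes the call referred to above is model.get_adapter(...) from the adapters library, an adapter named "my_adapter", and that the module location key for LoRA attention weights is "selfattn_lora"; inspect the returned dictionary on your own model to confirm the actual keys.

from collections import OrderedDict

import torch

ADAPTER_NAME = "my_adapter"  # hypothetical name; use the name you passed when adding the adapter


def get_parameters(model):
    """Return all adapter weights as a flat list of NumPy arrays."""
    # Assumed to return {<layer id>: {<module location>: <nn.Module>}}
    adapter_modules = model.get_adapter(ADAPTER_NAME)
    params = []
    for layer_id in sorted(adapter_modules.keys()):
        for location in sorted(adapter_modules[layer_id].keys()):
            module = adapter_modules[layer_id][location]
            params.extend(
                val.detach().cpu().numpy() for val in module.state_dict().values()
            )
    return params


def set_parameters(model, parameters):
    """Load a flat list of NumPy arrays back into the adapter modules."""
    adapter_modules = model.get_adapter(ADAPTER_NAME)
    idx = 0
    # Iterate in the same (sorted) order used in get_parameters
    for layer_id in sorted(adapter_modules.keys()):
        for location in sorted(adapter_modules[layer_id].keys()):
            module = adapter_modules[layer_id][location]
            keys = list(module.state_dict().keys())
            state_dict = OrderedDict(
                (k, torch.tensor(parameters[idx + i])) for i, k in enumerate(keys)
            )
            module.load_state_dict(state_dict, strict=True)
            idx += len(keys)


# E.g. to overwrite only the LoRA weights of the last transformer layer
# ("selfattn_lora" is an assumed location key, check your model's dict):
# adapter_modules = model.get_adapter(ADAPTER_NAME)
# last_layer = max(adapter_modules.keys())
# adapter_modules[last_layer]["selfattn_lora"].load_state_dict(new_state_dict)

Iterating the dictionary in a fixed (sorted) order in both functions keeps the flat list of arrays aligned between clients and the server, which is what Flower requires when aggregating parameters.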