Custom (MCORE) FSDP interface #12391

Open · wants to merge 7 commits into base: main
5 changes: 5 additions & 0 deletions nemo/collections/llm/recipes/CONFIGURATION-HIERARCHY.md
@@ -57,6 +57,11 @@
bucket_size: Optional[int] = None # Maximum number of parameters in each bucket
average_in_collective: bool = False # If true, compute average in collective directly, as opposed to dividing by the dp_size first and then computing sum in the collective
fp8_param_gather: bool = False # If true, keep the compute param in fp8 (do not use any other intermediate dtype) and perform the param all-gather in fp8
use_custom_fsdp: bool = False # If true, use MCore's custom FSDP implementation; recipe.model.config.gradient_accumulation_fusion must be False when this is enabled
data_parallel_sharding_strategy: str = "no_shard" # Sharding strategy when using custom FSDP, choices=['no_shard', 'optim', 'optim_grads', 'optim_grads_params']
suggested_communication_unit_size: int = 400_000_000 # When using custom FSDP and communication is batched across multiple buckets, this value guides the size of each communication unit
preserve_fp32_weights: bool = True # If true, preserve fp32 weights in the custom FSDP ParamAndGradBuffer
keep_fp8_transpose_cache_when_using_custom_fsdp: bool = False # If true, keep the fp8 transpose cache when using custom FSDP
```
</blockquote>
</details>
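For context, a minimal sketch of how these options might be wired into a NeMo 2.0 recipe is shown below. The ddp field names come from the hierarchy snippet above; the specific recipe (`llama3_8b`) and the `run.Config` wiring are illustrative assumptions, not part of this PR.

```python
# Illustrative sketch (not part of this PR): enabling MCore custom FSDP in a NeMo 2.0 recipe.
import nemo_run as run
from megatron.core.distributed import DistributedDataParallelConfig
from nemo.collections import llm

recipe = llm.llama3_8b.pretrain_recipe(name="llama3_8b_custom_fsdp", num_gpus_per_node=8)

# Custom FSDP requires gradient accumulation fusion to be off
# (see the assertion added to megatron_parallel.py below).
recipe.model.config.gradient_accumulation_fusion = False

recipe.trainer.strategy.ddp = run.Config(
    DistributedDataParallelConfig,
    use_custom_fsdp=True,
    data_parallel_sharding_strategy="optim_grads_params",  # shard optimizer state, grads, and params
    preserve_fp32_weights=True,
)
```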
6 changes: 6 additions & 0 deletions nemo/lightning/megatron_parallel.py
@@ -689,6 +689,12 @@ def init_ddp(self):
) # We need to do this explicitly since this is an attr PyTorch uses
model_chunk.__class__.__getattr__ = getattr_proxy # type: ignore

# Ensure that if using custom FSDP, gradient_accumulation_fusion is disabled on the model config.
if self.ddp_config.use_custom_fsdp:
    assert not module.config.gradient_accumulation_fusion, (
        "gradient_accumulation_fusion cannot be used with custom FSDP"
    )

# param_sync_func is set in nemo.lightning.pytorch.optim.megatron
no_sync_func, grad_sync_func = extract_ddp_funcs(self.ddp_config, self)
for module in self:
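As a standalone illustration of the constraint enforced above, a hypothetical pre-flight check (an assumption for illustration, not code from this PR) could look like:

```python
# Hypothetical helper mirroring the assertion in init_ddp: custom FSDP and
# gradient accumulation fusion are mutually exclusive.
def validate_custom_fsdp(ddp_config, model_config) -> None:
    """Raise early if the DDP and model configs are incompatible."""
    if getattr(ddp_config, "use_custom_fsdp", False) and model_config.gradient_accumulation_fusion:
        raise ValueError(
            "use_custom_fsdp=True requires gradient_accumulation_fusion=False; "
            "disable the fusion on the model config before building the trainer."
        )
```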