Implemented baseline LoRA PEFT with FSDP integration, tested on one node. (#5)
* Implemented baseline LoRA PEFT for a single NVIDIA GPU.
* Added support for saving LoRA adapters.
Added support for non-FSDP models.
* save_utils: added support for non-FSDP optimizers.
trainer: replaced clip_grad_norm_ with nn.utils.clip_grad_norm_ for LoRA compatibility (see the gradient-clipping sketch after this commit list).
* example_lora: highlighted current LoRA (non-FSDP) limitations.
* Added instructions for running LoRA on a single GPU.
* Added an example script for launching LoRA.
* Revised the instructions for LoRA on a single GPU.
* Implemented LoRA FSDP (a sketch of the auto-wrap policy appears after this commit list).
Also see https://github.com/facebookresearch/llama-recipes/blob/674b37ee66f59a7845cbc3868948f4d7fa69c679/src/llama_recipes/utils/fsdp_utils.py#L9
* Reverted automatic formatter changes in README.md
* Eliminated non-FSDP logic from save_utils.
Set model path to local copy of llama-2-7b in example config.
* Moved lora config out of example config.yaml.
* Implemented LoRA benchmarking logic for worker.
* model_utils: Refactored get_lora_model to reduce interface width. (this method no longer wraps load_model_and_tokenizer)
test_modelling: revised base model fixture scope since torch FSDP wrap is in-place.
launch_benchmark: added confirmation before launching.
* test_modelling: moved text output to data/.
* added example yaml config for lora benchmarking.
* launch_benchmark: marked qos flag as optional.
* launch_benchmark: added option to limit number of jobs launched.
* launch_benchmark: implemented torch profiler integration.
* Merged changes from low CPU memory usage feature (#6) into jjt/lora-benchmarking
* Added changes to implement the low CPU memory usage feature.
* Implemented new ruff linting changes and ran a fix across files.
* Revised launch_benchmark.py to use new profiling path.
* Enabled automatic creation of data/trace folder.
* Added instructions for profiling tools.
* Cleaned up duplicate imports from merge.
* Cleaned up parse_benchmark.py
* Integrated LoRA logic into llama_example.py.
* Moved lora_configs into train_parameters in config yaml. Adjusted docs/config.md accordingly.
* Revised handling of nproc-per-node in benchmark script.
* Included parameter_count info in benchmark output.
* Implemented basic util for parsing benchmarking output.
* model_utils: Enabled low_cpu_mem_usage in auto model from_pretrained by default.
* launch_lora_benchmark.sh: implemented automatic identification of num_gpus.
lora-benchmark: switched
parse_benchmark: implemented option to specify benchmark artifact folder to load.
* requirements.txt: included accelerate to support low_cpu_mem loading.
* benchmark.py: adjusted BenchmarkingDataset to avoid StopIteration exception.
* benchmark.py: added env var flag to toggle export_trace
* parse_benchmark: included profiler table in output file.
launch_benchmark: automated folder creation.
launch_lora_benchmark: included model info in slurm output.
* get_lora_model_from_base_model: enabled peft for models loaded via low_cpu_mem.
More investigation might be needed.
* model_utils: revised dtype handling for peft-wrapped models.
* parse_benchmark: implemented sorting of profiler table output.
launch_benchmark: revised default run time limit.
* Merged example_lora into examples/llama_example.py.
* Added instructions related to parse_benchmark
* parse_benchmark: implemented aggregation across repeated metrics.
* Implemented non-LoRA profiling and benchmarking.
* Various static typechecking and formatting fixes.
* Implemented restoring LoRA train state from the filesystem.
During training, the adapter weights are saved to and loaded from the filesystem, while the base model weights are loaded separately (see the adapter save/restore sketch after this commit list).
Revised reference to optim_state_dict_to_load in load_optimizer.
* Included train step number in LoRA adapter output path.
* Added reference throughput table to documentation.
* Added unit description to reference throughput table.
Applied markdown formatting via prettier.
* Benchmark: added option to override max_length of pre-trained model.
* Deleted unused `accelerate` dependency from requirements.txt
* Benchmark: added comment on max_length.
* Benchmark: added comment on batch size.
* Benchmark: added option to override batch size.
* Benchmark throughput documentation: revised word choices.
* Moved profiling-tracking logic out of Trainer.
* Eliminated hasattr check related to no_sync since FSDP is always enabled.
* Replaced peft fsdp_auto_wrap_policy to eliminate implicit `accelerate` dependency.
Eliminated redundant bfloat16 type conversion.
Fixed scope of placeholder for `is_peft_adapter_restored`.
* Configured the LoRA auto-wrap policy as off by default; the policy is enabled only when LoRA is required.
* Revised punctuation in lora_requires_grad_policy_fn.
* Renamed `enable_lora` to the more descriptive `is_lora_enabled`.
* Replaced optimizer.load_state_dict with load_sharded_optimizer_state_dict for the PEFT optimizer.
Added LoRA/PEFT documentation.
* benchmarking: deleted unused TypeVar in parse_benchmark.py
* Replaced config getattr and hasattr with dict methods.
* Deleted redundant lora-specific launch scripts.
* Added launch_benchmark.sh for throughput benchmarks.
* Benchmark: run `makedirs` only `if __name__ == "__main__"`.
* Replaced peft class attributes in Trainer with instance attributes.
Added information about benchmarking environment.
Additional formatting fixes.
---------
Co-authored-by: Adil <[email protected]>
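Regarding the gradient-clipping change above: the FSDP wrapper exposes its own `clip_grad_norm_` method, while the commit switches the trainer to the generic PyTorch utility. A minimal sketch of the generic call, assuming a `max_norm` of 1.0 (the repository's actual trainer code and clipping threshold are not shown here):

```python
import torch
from torch import nn


def clip_gradients(model: nn.Module, max_norm: float = 1.0) -> torch.Tensor:
    """Generic gradient clipping via torch.nn.utils.clip_grad_norm_.

    Unlike the FSDP-specific `model.clip_grad_norm_(max_norm)` method, this
    utility accepts any iterable of parameters; parameters that currently
    hold no gradient (e.g. frozen base weights under LoRA) are skipped
    internally.
    """
    return torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
```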
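For the LoRA FSDP integration above, the key idea is an auto-wrap policy that places trainable adapter weights in FSDP units separate from the frozen base weights. The sketch below follows the llama-recipes helper linked in the commit message; the repository's own `lora_requires_grad_policy_fn` may differ in detail, and the transformer layer class is left as a parameter rather than hard-coded:

```python
import functools
from typing import Callable, Type

from torch import nn
from torch.distributed.fsdp.wrap import (
    _or_policy,
    lambda_auto_wrap_policy,
    transformer_auto_wrap_policy,
)


def get_lora_auto_wrap_policy(transformer_layer_cls: Type[nn.Module]) -> Callable:
    """Wrap trainable leaf modules (LoRA adapters) separately from frozen ones."""

    def lora_requires_grad_policy_fn(module: nn.Module) -> bool:
        # Leaf modules whose weight is trainable (the LoRA A/B matrices) get
        # their own FSDP unit, so frozen and trainable parameters are never
        # flattened into the same FlatParameter.
        return (
            len(list(module.named_children())) == 0
            and getattr(module, "weight", None) is not None
            and module.weight.requires_grad
        )

    lambda_policy = functools.partial(
        lambda_auto_wrap_policy, lambda_fn=lora_requires_grad_policy_fn
    )
    transformer_policy = functools.partial(
        transformer_auto_wrap_policy,
        transformer_layer_cls={transformer_layer_cls},
    )
    return functools.partial(_or_policy, policies=[lambda_policy, transformer_policy])
```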
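For the adapter save/restore flow described above, a minimal sketch using the `peft` API; the checkpoint paths and base model name are hypothetical, and the repository's save_utils may structure this differently:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model_name = "meta-llama/Llama-2-7b-hf"        # hypothetical base checkpoint
adapter_dir = "checkpoints/step_1000/lora_adapter"  # hypothetical adapter path

# Restore: the base model weights are loaded separately, then the saved LoRA
# adapter is attached; is_trainable=True keeps the adapter updatable so that
# training can resume from this state.
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)
peft_model = PeftModel.from_pretrained(base_model, adapter_dir, is_trainable=True)

# Save: a PEFT-wrapped model writes only the adapter weights and config,
# e.g. at a path that includes the current train step.
peft_model.save_pretrained(adapter_dir)
```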
README.md: 4 additions & 5 deletions
@@ -61,11 +61,10 @@ We implement several training optimizations that can be reviewed under [`docs/tr
 
 We have provided an example script to show what a regular workflow would look like for the user. It assumes a preprocessed dataset has already been created. The [`examples/launch.sh`](examples/launch.sh) script begins dense finetuning a Llama-2 7B chat model sharded across a node of 4x A100-80GB GPUs. With the Python environment activated, this can be launched using `sbatch launch.sh`. We also provide a script to launch the same training run in a multinode setting across two A100 nodes at [`examples/launch_multinode.sh`](examples/launch_multinode.sh). Please note that hybrid sharding strategies need to be employed as you scale to multinode settings to minimize communication bottlenecks. More information regarding this can be found in [`docs/config.md`](docs/config.md).
 
-At the end of training, a consolidated model will be saved under your output directory as a `.bin` file. You can simply just run [`vectorlm/utils/convert_to_hf.py`](vectorlm/utils/convert_to_hf.py) to convert it to the regular HuggingFace model format. The script uses the main config file to determine save locations.
-
-## Roadmap
-- PEFT methods (LoRA).
+At the end of training, a consolidated model will be saved under your output directory.
+- If LoRA is enabled, the output will be a PEFT adapter repository that can be loaded directly via [AutoModel.from_pretrained](https://huggingface.co/docs/transformers/main/en/peft#load-a-peft-adapter).
+- Otherwise, the output would be a `.bin` file. You can simply just run [`vectorlm/utils/convert_to_hf.py`](vectorlm/utils/convert_to_hf.py) to convert it to the regular HuggingFace model format. The script uses the main config file to determine save locations.
 
 # Contributors
 
-Adil Asif, Ziwen Han, John Willes.
+Adil Asif, Ziwen Han, John Willes, Jacob-Junqi Tian.
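As the updated README notes, a LoRA run produces a PEFT adapter repository that the `AutoModel` classes can load directly. A minimal sketch, assuming `peft` is installed alongside `transformers`; the output directory name is hypothetical:

```python
from transformers import AutoModelForCausalLM

# Hypothetical path to the adapter directory written at the end of training.
adapter_dir = "output/lora_adapter"

# With `peft` installed, transformers reads adapter_config.json, fetches the
# base model it references, and attaches the LoRA weights automatically.
model = AutoModelForCausalLM.from_pretrained(adapter_dir)
```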
docs/config.md: 1 addition & 0 deletions
@@ -29,6 +29,7 @@ The key-value pairs stored under `wandb_config` are directly passed into the [`w
 *`use_activation_checkpointing`: Whether to use activation checkpointing. This greatly reduces memory footprint as only a few intermediate activations as saved during the forward pass, and are then recomputed for the backward pass on the fly. However, the tradeoff between compute vs. memory usually makes this worth it.
 *`use_flash_attention`: Whether to use Flash Attention. If it is supported for your model in HuggingFace, you can enable this option.
 *`low_cpu_mem_usage`: Whether to efficiently load the model. If enabled, the model weights are only loaded once on rank 0 and are broadcasted to the rest of the world from the main rank. It will prevent the CPU memory from exploding when loading large models (e.g. LLaMa-70B).
+-`lora_peft_config`: Optionally, fine-tune the model using low-rank adaptation via HuggingFace PEFT. Uncomment this section to enable LoRA. All parameters specified under this section are forwarded to [peft.LoRAConfig](https://huggingface.co/docs/peft/main/en/package_reference/lora#peft.LoraConfig).
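Since everything under `lora_peft_config` is forwarded to `peft.LoraConfig`, the YAML keys correspond one-to-one to its keyword arguments. Below is a sketch of the equivalent construction in Python; the hyperparameter values are illustrative assumptions, not the project's defaults:

```python
from peft import LoraConfig

# Illustrative values; any keyword accepted by peft.LoraConfig can be listed
# under `lora_peft_config` in the training YAML.
lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,                     # low-rank ("hidden") dimension of the adapter
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
```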
We've benchmarked VectorLM on the Vaughan cluster for a number of model architectures across a variety of node configurations.

In experiments labelled as LoRA, we set the hidden dimension to 8. During testing, the NVIDIA driver version was 525.105.17, with CUDA Runtime 12.1.105 and torch 2.2.2.

For consistency, we use a batch size of 8 and the maximum context length that the pre-trained LLM supports, capped at 65536. Note that, especially for smaller models, it might be possible to further increase throughput by switching to a larger batch size.

Entries that read NaN represent combinations where the node configuration does not have enough GPU memory for the training run to complete. An exception is gemma-2b, which currently does not support full-rank FSDP fine-tuning.

All values in the table below represent the median training throughput in tokens per second across all training steps, aggregated across all GPU devices.
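As a concrete illustration of how such a figure can be computed (a sketch with made-up numbers, not output of the actual benchmarking scripts): per-step throughput is the number of tokens processed across all devices divided by the step time, and the reported value is the median over steps.

```python
import statistics

# Hypothetical measurements for a 4-GPU run with batch size 8 and a
# 4096-token context; these numbers are illustrative only.
num_devices = 4
tokens_per_step_per_device = 8 * 4096
step_times_seconds = [0.92, 0.91, 0.98, 0.90, 0.93]

per_step_throughput = [
    num_devices * tokens_per_step_per_device / t for t in step_times_seconds
]
print(f"median throughput: {statistics.median(per_step_throughput):,.0f} tokens/s")
```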
To modify the specific SLURM resource types to benchmark, adjust the launcher script `launch_benchmark.py` as needed. Modify `profiling/configs/lora-benchmark.yaml` to adjust parameters such as batch size and token width.

On the Vector cluster, run the following to launch the benchmarks:

```bash
$ mkdir data/
$ python3 launch_benchmark.py

# The launcher script will print a list of
# SLURM commands it plans to run. Press ENTER
# to accept and automatically invoke the commands.
```

After the SLURM jobs complete, profiler output can be found under `data/benchmark`. Invoke the following to generate a Markdown summary of the results: