Commit

introduce additional_arguments in config.yaml
additional_arguments allows ilab to implicitly support more complex flags from the libraries it calls without adding them directly to the config

Signed-off-by: Charlie Doern <[email protected]>
cdoern committed Jul 3, 2024
1 parent 1a65993 commit ac74aa5
Showing 2 changed files with 123 additions and 0 deletions.
2 changes: 2 additions & 0 deletions .spellcheck-en-custom.txt
@@ -22,6 +22,7 @@ Conda
config
Containerfile
cpp
ctx
cuBLAS
CUDA
customizations
@@ -165,3 +166,4 @@ XT
XTX
Xu
YAML
yaml
121 changes: 121 additions & 0 deletions docs/cli/ilab-config-structure.md
@@ -0,0 +1,121 @@
# Modifying the ilab config structure to be more streamlined

Currently, the `config.yaml` is large, with major expansions in training, evaluation, and serving. These expansions help the user understand the various configuration options for each of these commands. However, there is a fine line between a verbose config and a cluttered one.

This document describes a new structure for various parts of the `config.yaml` and begins to outline how different levels of expertise among `ilab` users should dictate which options are available by default.

## `additional_arguments` as a field in training, serving and evaluation

The current training config looks like this:

```yaml
train:
  torch_args:
    nnodes: 1
    node_rank: 0
    nproc_per_node: 1
    rdzv_endpoint: 127.0.0.1:12222
    rdzv_id: 123
  train_args:
    chat_tmpl_path: /home/ec2-user/instructlab/venv/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py
    ckpt_output_dir: checkpoints
    data_output_dir: train-output
    data_path: ./taxonomy_data
    deepspeed_options:
      cpu_offload_optimizer: true
      cpu_offload_optimizer_pin_memory: false
      cpu_offload_optimizer_ratio: 1
      save_samples: null
    effective_batch_size: 100
    is_padding_free: false
    learning_rate: 2e-6
    lora:
      alpha: 32
      dropout: 0.1
      quantize_data_type: nf4
      rank: 2
      target_modules:
        - q_proj
        - k_proj
        - v_proj
        - o_proj
    max_batch_len: 1000
    max_seq_len: 96
    mock_data: false
    mock_data_len: 0
    model_path: instructlab/granite-7b-lab
    num_epochs: 1
    random_seed: 42
    save_samples: 100
    warmup_steps: 10
```

While useful and clear to the user, this config is hard to maintain, and most users will not care about a large portion of the options.
Keeping only key options such as `num_epochs`, `deepspeed`, `lora`, and the important directories, a slimmer training config could look like:

```yaml
train:
  train_args:
    ckpt_output_dir: checkpoints
    data_output_dir: train-output
    data_path: ./taxonomy_data
    deepspeed_options:
      cpu_offload_optimizer: true
      cpu_offload_optimizer_pin_memory: false
      cpu_offload_optimizer_ratio: 1
      save_samples: null
    learning_rate: 2e-6
    lora:
      alpha: 32
      dropout: 0.1
      quantize_data_type: nf4
      rank: 2
      target_modules:
        - q_proj
        - k_proj
        - v_proj
        - o_proj
    max_batch_len: 1000
    model_path: instructlab/granite-7b-lab
    num_epochs: 1
    save_samples: 100
    warmup_steps: 10
    additional_arguments: ["--is-padding-free=False"...]
```

`additional_arguments` holds the rest of the training arguments. `ilab` would validate these against an internally maintained list of supported options before passing them to the respective library.
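
A minimal sketch of that validation step, assuming a hypothetical allow-list and helper name (neither is actual `ilab` code), could look like this:

```python
# Hypothetical allow-list of extra training flags ilab would accept; the real list
# would live inside ilab and be kept in sync with the training library.
SUPPORTED_TRAIN_FLAGS = {
    "--is-padding-free",
    "--mock-data",
    "--mock-data-len",
    "--random-seed",
}


def validate_additional_arguments(args: list[str]) -> list[str]:
    """Return args unchanged if every flag is on the allow-list, otherwise raise."""
    for arg in args:
        flag = arg.split("=", 1)[0]  # "--is-padding-free=False" -> "--is-padding-free"
        if flag not in SUPPORTED_TRAIN_FLAGS:
            raise ValueError(f"unsupported training argument: {flag}")
    return args


# Value read from train.train_args.additional_arguments in config.yaml
print(validate_additional_arguments(["--is-padding-free=False"]))
```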

The same structure can easily be applied to the serve config. Currently, this config looks like:

```yaml
serve:
  backend: ''
  host_port: 127.0.0.1:8000
  llama_cpp:
    gpu_layers: -1
    llm_family: ''
    max_ctx_size: 4096
  model_path: models/merlinite-7b-lab-Q4_K_M.gguf
  vllm:
    vllm_args: []
```

This has the opposite problem from training. Neither of these approaches is necessarily wrong, but having the verbose structure of the training config juxtaposed against the practically hidden structure of `vllm_args` is not good design practice. We should aim to merge the two approaches into a common design language: one that exposes the key arguments users commonly edit without making the config confusing.

In very general terms, this would look something like:

```yaml
serve:
  backend: 'vllm'
  host_port: 127.0.0.1:8000
  max_ctx_size: 5120
  gpus: 2
  llm_family: ''
  model_path: models/merlinite-7b-lab-Q4_K_M.gguf
  served_model_name: "merlinite"
  additional_arguments: ["--block-size=16", "--dtype=fp8"...]
```

Backends like vllm have a large number of command-line options. Adding each and every one of these to our config.yaml is out of the question. However, supporting a large number of them implicitly via `additional_arguments` is a good compromise. Additionally, the above structure lets us choose which options deserve a spot in the config and which are more uncommon or reserved for power users.

This structure also allows us to flatten the config, which is beneficial for flag mapping and config parsing. Options like `max_ctx_size` can apply to both vllm and llama-cpp. Options that apply to only a single backend can be validated internally. Nested configurations within our config.yaml create a barrier both for users and for flexible parsing of the config within `ilab`. `additional_arguments` will hopefully allow us to move away from nested configurations in both training and serving.
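
To make the flattening and pass-through concrete, here is a minimal sketch assuming a hypothetical `FLAG_MAP` and `build_serve_command` helper (not actual `ilab` code) that turns the flat serve config plus `additional_arguments` into a vLLM server command line:

```python
# Hypothetical mapping from flat serve config keys to vLLM CLI flags. The helper and
# FLAG_MAP are illustrative; only the flags themselves are existing vLLM options.
FLAG_MAP = {
    "max_ctx_size": "--max-model-len",
    "gpus": "--tensor-parallel-size",
    "served_model_name": "--served-model-name",
}


def build_serve_command(serve_cfg: dict) -> list[str]:
    cmd = [
        "python", "-m", "vllm.entrypoints.openai.api_server",
        "--model", serve_cfg["model_path"],
    ]
    for key, flag in FLAG_MAP.items():
        value = serve_cfg.get(key)
        if value not in ("", None):
            cmd += [flag, str(value)]
    # Flags the config does not model explicitly are passed through verbatim.
    cmd += serve_cfg.get("additional_arguments", [])
    return cmd


print(build_serve_command({
    "model_path": "models/merlinite-7b-lab-Q4_K_M.gguf",
    "max_ctx_size": 5120,
    "gpus": 2,
    "served_model_name": "merlinite",
    "additional_arguments": ["--block-size=16", "--dtype=fp8"],
}))
```

Because the config is flat, each key maps to a backend flag with a single lookup, and keys that only apply to one backend can be rejected in the same place they are mapped.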
