jbuchananr

Issue:

Currently, there is no way to pass an evaluation set to be used during fine-tuning to guard against overfitting.

Fix:

Modify internvl_chat_finetune.py to accept an optional dataset, specified via a meta JSON file, to be used for evaluation.
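Presumably the eval meta file would follow the same layout as the training meta file. A sketch of what such a file might contain (the dataset name and field values here are illustrative assumptions, not taken from the repository):

```json
{
  "valid_set": {
    "root": "/home/ubuntu/images/",
    "annotation": "/home/ubuntu/valid.jsonl",
    "repeat_time": 1,
    "length": 500
  }
}
```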

Example Usage:

torchrun \
  ...
  --meta_path "/home/ubuntu/meta_train.json" \
  ...
  --eval_meta_path "/home/ubuntu/valid_meta.json" \
  --do_eval True \
  --eval_strategy "steps" \
  --eval_steps 100 \
  ...
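A minimal sketch of how the new flag could be parsed and the eval meta file loaded. The argument names mirror the example above; `load_meta` and the meta JSON layout are assumptions, and the actual script would register these options through transformers' HfArgumentParser / TrainingArguments rather than raw argparse:

```python
import argparse
import json

def str2bool(value):
    # argparse's type=bool treats any non-empty string (including "False")
    # as True, so parse the flag text explicitly to support "--do_eval True".
    return str(value).lower() in ("true", "1", "yes")

def parse_args(argv=None):
    # Hypothetical argument set mirroring the flags in the example above.
    parser = argparse.ArgumentParser()
    parser.add_argument("--meta_path", required=True)
    parser.add_argument("--eval_meta_path", default=None,
                        help="optional meta JSON describing the eval set")
    parser.add_argument("--do_eval", type=str2bool, default=False)
    parser.add_argument("--eval_strategy", default="no")
    parser.add_argument("--eval_steps", type=int, default=None)
    return parser.parse_args(argv)

def load_meta(meta_path):
    # Each top-level key names a dataset; its value points at the files.
    # Layout is assumed to match the training meta file.
    with open(meta_path) as f:
        return json.load(f)
```

With this in place, an `eval_dataset` could be built from the entries in `args.eval_meta_path` and passed to the Trainer only when the flag is supplied, leaving existing training runs unchanged.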
