- Use `decode_icl_llama.py` to generate responses with vanilla Llama-2 models using in-context alignment.
- Use the other `decode_*.py` files to generate responses with baseline models.
- Use `eval_outputs.py` for automatic evaluations.
- Example commands can be found in the headers of the files.
- For more details, please refer to our paper or email Han at [email protected] :)
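As a rough illustration of the idea (not this repo's actual code), in-context alignment prepends a few instruction–response demonstrations to the prompt so that a vanilla (not fine-tuned) language model imitates an aligned assistant when completing the final turn. The demo pairs and the prompt template below are hypothetical placeholders; see the script headers in the repo for the real format.

```python
# Hypothetical sketch of in-context alignment: prepend a few
# instruction/response demonstrations to a prompt so a vanilla
# (non-fine-tuned) LM continues in an aligned-assistant style.
# The template and demo pairs are illustrative assumptions,
# not the repository's actual data or format.

def build_icl_prompt(demos, user_query):
    """Concatenate demonstration pairs, then append the new query
    with an open-ended response slot for the model to complete."""
    parts = []
    for instruction, response in demos:
        parts.append(
            f"### Instruction:\n{instruction}\n### Response:\n{response}\n"
        )
    # The final section is left open for the model's completion.
    parts.append(f"### Instruction:\n{user_query}\n### Response:\n")
    return "\n".join(parts)

demos = [
    ("What is the capital of France?", "The capital of France is Paris."),
    ("Summarize: The cat sat on the mat.", "A cat sat on a mat."),
]
prompt = build_icl_prompt(demos, "Explain photosynthesis briefly.")
# The prompt contains one section per demo plus one for the query.
print(prompt.count("### Instruction:"))
```

The assembled string would then be fed to the vanilla model's generation call; the demonstrations, not any fine-tuning, steer the style of the completion.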
# In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning

Repository: xhan77/in-context-alignment