This repository provides the code to systematically investigate the impact of adding parallel data on LLMs' multilingual capabilities, as reported in the following publication:
Just Go Parallel: Improving the Multilingual Capabilities of Large Language Models
Muhammad Reza Qorib, Junyi Li, and Hwee Tou Ng
The 63rd Annual Meeting of the Association for Computational Linguistics (to appear)
The codebase is built upon TinyLlama.
Coming soon.
Coming soon.
Please refer to PRETRAIN.md for instructions on how to pretrain our model.
Please use ALMA to evaluate translation performance and LM-Evaluation-Harness to evaluate commonsense reasoning.
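As a rough illustration of the commonsense-reasoning step, LM-Evaluation-Harness (v0.4+) exposes a Python API that can score a checkpoint loaded through the Hugging Face backend. The sketch below is only an example, not the exact configuration used in the paper: the checkpoint path and task list are placeholders you should replace with your own.

```python
# Minimal sketch: commonsense-reasoning evaluation with LM-Evaluation-Harness (v0.4+).
# The checkpoint path and task list are placeholders, not the settings used in the paper.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                  # Hugging Face transformers backend
    model_args="pretrained=path/to/checkpoint",  # placeholder checkpoint path
    tasks=["hellaswag", "piqa", "arc_easy"],     # placeholder task list
    num_fewshot=0,
    batch_size=8,
)

# Per-task metrics (e.g., accuracy) are stored under the "results" key.
for task, metrics in results["results"].items():
    print(task, metrics)
```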
This repository is licensed under the Apache-2.0 license.
This repository is built upon TinyLlama, which was built upon lit-gpt and flash-attention.
@misc{zhang2024tinyllama,
  title = {TinyLlama: An Open-Source Small Language Model},
  author = {Peiyuan Zhang and Guangtao Zeng and Tianduo Wang and Wei Lu},
  year = {2024},
  eprint = {2401.02385},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}

@online{lit-gpt,
  author = {Lightning AI},
  title = {Lit-GPT},
  url = {https://github.com/Lightning-AI/lit-gpt},
  year = {2023}
}

@article{dao2023flashattention2,
  title = {Flash{A}ttention-2: Faster Attention with Better Parallelism and Work Partitioning},
  author = {Dao, Tri},
  year = {2023}
}