Time consumption? #2
Comments
Another question would be like: [...]

Hi, thank you for your insightful comments on our work. It took 3h15min for LLaMA-7B to train on WN18RR and 38h29min on YAGO3-10 with an A100 GPU. In our experiments, we found the original LLaMA and ChatGLM as well as ChatGPT [...]. For Hits@3, Hits@10 and MRR, maybe we can design an effective prompt to obtain 3 or 10 answers, e.g., "Please give three/ten possible entities, with the more reliable answers listed higher", and we can give some few-shot examples to enhance the prompt.
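For illustration, here is a minimal sketch (not from the repository's code) of how such a top-k prompt could be assembled and how Hits@k and MRR could be computed from the returned ranked list. The function names, the few-shot format, and the parsing convention are assumptions:

```python
# Sketch only: prompt construction and ranking metrics for LLM-based link prediction.
# build_topk_prompt and hits_and_mrr are hypothetical helpers, not part of the repo.

def build_topk_prompt(head, relation, k, few_shot):
    """Assemble a few-shot prompt asking for k candidate tail entities, best first."""
    lines = [
        f"Please give {k} possible entities, with the more reliable answers listed higher.",
        "Answer with one entity per line.",
        "",
    ]
    for h, r, t in few_shot:  # demonstrations with a known gold answer listed first
        lines += [f"Query: ({h}, {r}, ?)", f"Answers:\n{t}", ""]
    lines += [f"Query: ({head}, {relation}, ?)", "Answers:"]
    return "\n".join(lines)


def hits_and_mrr(ranked, gold, ks=(1, 3, 10)):
    """Compute Hits@k and reciprocal rank for one query from a ranked candidate list."""
    try:
        rank = ranked.index(gold) + 1  # 1-based rank of the correct entity
    except ValueError:
        rank = None  # gold entity not among the returned candidates
    metrics = {f"hits@{k}": int(rank is not None and rank <= k) for k in ks}
    metrics["rr"] = 0.0 if rank is None else 1.0 / rank
    return metrics


# Example: the model returned three candidates and the correct one is ranked second.
print(hits_and_mrr(["Germany", "France", "Italy"], gold="France"))
# {'hits@1': 0, 'hits@3': 1, 'hits@10': 1, 'rr': 0.5}
```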
Thanks for the impressive work.
I was also trying to fine-tune LLaMA-7B for KG link prediction on my own dataset.
I was using the Hugging Face Trainer, and fine-tuning cost me a huge amount of time.
As the training time is not indicated in your paper, may I ask how much time you spent training an entity prediction model?
This is important because traditional KG completion models can achieve good performance at a much lower training cost.
If LLMs cannot outperform them even with much larger time consumption, they would not be that practical.
Please correct me if I have misunderstood anything.
Cheers
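For reference, a minimal sketch of the kind of Hugging Face Trainer setup described above, fine-tuning a causal LM on verbalized triples. The model path, prompt format, dataset, and hyperparameters are placeholders, not the paper's actual recipe:

```python
# Rough sketch of causal-LM fine-tuning on verbalized KG triples with the HF Trainer.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_PATH = "path/to/llama-7b"  # placeholder for local LLaMA-7B weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

# Each training example verbalizes one (head, relation, tail) triple as text.
triples = [
    {"text": "Query: (Berlin, capital_of, ?)\nAnswer: Germany"},
    {"text": "Query: (Paris, capital_of, ?)\nAnswer: France"},
]
dataset = Dataset.from_list(triples).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama-kgc",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-5,
        fp16=True,
        logging_steps=10,
    ),
    train_dataset=dataset,
    # mlm=False -> standard next-token-prediction labels for a causal LM
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```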