Thanks for your great work in creating this dataset. I have a question that came up while evaluating `llama2-7b-chat` on it: the accuracy metric stays at 0 as training progresses. Here is my code:
```python
def acc(eval_preds: EvalPrediction):
    logits, labels = eval_preds
    preds = tokenizer.batch_decode(logits, skip_special_tokens=True)
    labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    save_results(preds, labels)  # save results to a JSON file
    preds = [last_boxed_only_string(s) for s in preds]
    correct = 0
    total = 0
    for pred, label in zip(preds, labels):
        if is_equiv(pred, label):
            correct += 1
        total += 1
    return {"accuracy": correct / total}

return acc  # returned from the enclosing metric-factory function
```
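One thing worth checking (a hedged sketch, not a confirmed diagnosis of this issue): with the Hugging Face `Trainer`, `compute_metrics` typically receives raw logits of shape `(batch, seq_len, vocab)` and labels padded with `-100` for ignored positions, so passing them straight to `batch_decode` can produce strings that never match. A minimal preprocessing step, shown here in plain Python with nested lists standing in for the real arrays (`pad_token_id=0` is an assumption, not from the thread):

```python
def argmax(row):
    # Index of the largest value in a list of scores.
    return max(range(len(row)), key=row.__getitem__)

def prepare_for_decoding(logits, labels, pad_token_id=0):
    # logits: [batch][seq][vocab] scores; labels: [batch][seq] token ids.
    # 1) Take argmax over the vocab axis to get predicted token ids.
    preds = [[argmax(token_scores) for token_scores in seq] for seq in logits]
    # 2) Replace the -100 ignore-index with a real pad id so decoding works.
    labels = [[t if t != -100 else pad_token_id for t in seq] for seq in labels]
    return preds, labels

# Toy example: batch of 1, sequence of 3, vocab of 4.
logits = [[[0.1, 0.9, 0.0, 0.0],
           [0.0, 0.0, 0.8, 0.2],
           [0.7, 0.1, 0.1, 0.1]]]
labels = [[1, 2, -100]]
preds, labels = prepare_for_decoding(logits, labels)
# preds   -> [[1, 2, 0]]
# labels  -> [[1, 2, 0]]
```

The resulting id sequences could then be fed to `tokenizer.batch_decode` before the string comparison in `acc` above.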