Tags: Text Generation · Transformers · Safetensors · Japanese · qwen2 · conversational · text-generation-inference

Matsu-7B

Description

Matsu-7B is an instruction-tuned model built on Qwen2.5-7B as its base and fine-tuned on the oasst2 and Malum-230 datasets.
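
Usage

The card is tagged for conversational text generation with Transformers and the weights are stored in BF16, so a minimal inference sketch might look like the following. The chat-template call, prompt, and sampling parameters are assumptions for illustration rather than settings specified by the model card.

```python
# Minimal inference sketch; prompt and sampling settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Manual-Dataset-Creation-Project/Matsu-7B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # the published weights are BF16
    device_map="auto",
)

# Build a chat-style prompt using the tokenizer's bundled chat template
# (assumed to follow the Qwen2.5 conversation format).
messages = [{"role": "user", "content": "自己紹介をしてください。"}]  # "Please introduce yourself."
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```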

Series

Contributors

Acknowledgments

We would like to express our gratitude to VOLTMIND for providing the computational resources used to train this model.

Model size: 7.62B parameters · Tensor type: BF16 (Safetensors)

Model tree for Manual-Dataset-Creation-Project/Matsu-7B

Base model: Qwen/Qwen2.5-7B (this model is one of its fine-tunes)
Quantizations: 3 models

Datasets used to train Manual-Dataset-Creation-Project/Matsu-7B

oasst2
Malum-230