MiniThinky 1B

This is a newer checkpoint of MiniThinky-1B-Llama-3.2 (version 1), in which the training loss decreased from 0.7 to 0.5.

Link to GGUF version: click here

The chat template is the same as Llama 3, but the response will be formatted as follows:

<|thinking|>{thinking_process}
<|answer|>
{real_answer}
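
For downstream use, the two sections can be separated by splitting on these markers. Below is a minimal sketch in plain Python (the helper name and fallback behavior are my own assumptions, not part of the model card); it assumes the decoded model output follows the format above:

def split_thinking(output: str) -> tuple[str, str]:
    # Split a MiniThinky response into (thinking, answer).
    # Assumes the output follows the <|thinking|>...<|answer|>... format;
    # falls back to treating the whole string as the answer otherwise.
    if "<|answer|>" in output:
        thinking, _, answer = output.partition("<|answer|>")
        thinking = thinking.replace("<|thinking|>", "").strip()
        return thinking, answer.strip()
    return "", output.strip()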

IMPORTANT: System message

The model is very sensitive to the system message. Make sure you use this system message (system role) at the beginning of the conversation:

You are MiniThinky, a helpful AI assistant. You always think before giving the answer. Use <|thinking|> before thinking and <|answer|> before giving the answer.
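
For reference, here is a minimal usage sketch with the transformers library. The model ID and system message are taken from this card; the user prompt, max_new_tokens, and decoding choices are illustrative assumptions:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ngxson/MiniThinky-v2-1B-Llama-3.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

SYSTEM_MESSAGE = (
    "You are MiniThinky, a helpful AI assistant. You always think before "
    "giving the answer. Use <|thinking|> before thinking and <|answer|> "
    "before giving the answer."
)

messages = [
    {"role": "system", "content": SYSTEM_MESSAGE},
    {"role": "user", "content": "What is the capital of France?"},  # example prompt (assumption)
]

# Apply the Llama 3 chat template and generate a response
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=512)

# Keep special tokens so the <|thinking|>/<|answer|> markers stay visible
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=False)
print(response)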

Q&A

Hardware used to train it?
I used an HF Space with 4xL40S GPUs, trained for 5 hours (v1) plus an additional 6 hours (v2).

Benchmark?
I don't have time to do it alone. If you can help, please open a discussion!

Can it count number of "r" in "raspberry"?
Unfortunately no

Other things I can tune?
Maybe lower the temperature, or set top_k=1 (see the sketch below).
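
A quick sketch of what those settings might look like, reusing model, tokenizer, and inputs from the earlier example (the exact values here are illustrative assumptions, not tuned recommendations):

# Near-greedy decoding: top_k=1 always picks the most likely token,
# and a lower temperature reduces sampling noise when top_k is larger.
outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.3,  # lower than the default 1.0 (illustrative value)
    top_k=1,          # as suggested above
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=False))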


TODO: include more info here + maybe do some benchmarks? (Please open a discussion if you're interested)
