Model Card: mkurman/llama-3.2-MEDIT-3B-o1

This model is an o1-style reasoning variant fine-tuned from MedIT Solutions Llama 3.2 3B Instruct (itself a variant of Meta Llama 3.2 3B Instruct). The model introduces dedicated tags (<Thought> and <Output>) for chain-of-thought-style text generation, with a focus on instruct-style reasoning tasks. Because it was fine-tuned for exact matching rather than for producing a diverse output distribution, I recommend testing it with do_sample=False or temperature=0.0 for deterministic outputs.


Model Details

Model name: mkurman/llama-3.2-MEDIT-3B-o1
Type: Small Language Model (SLM)
Base model: MedIT Solutions Llama 3.2 3B Instruct (derived from Meta Llama 3.2 3B Instruct)
Architecture: 3 billion parameters
License: llama3.2

Intended Use Cases:

  • General question answering
  • Instruction-based generation
  • Reasoning and chain-of-thought exploration

Not Recommended For:

  • Sensitive, real-world medical diagnosis without expert verification
  • Highly domain-specific or regulated fields outside the model’s training scope

Usage

Important Notes on Usage

  1. Stop strings:
    Because the model uses <Thought> and <Output> tags to separate internal reasoning from the final answer, you must supply </Output> as a stop sequence (or include it among multiple stop sequences, if your framework allows) to prevent the model from generating indefinitely.

  2. Preventing <|python_tag|> bug:
    Sometimes the model starts with <|python_tag|> instead of the intended <Thought>. As a workaround, append "<Thought>\n\n" to the end of your generation prompt (in your chat template) so the model opens the reasoning block correctly.

  3. Libraries/Tools:

    • Ollama and LM Studio: Via GGUF file.
    • Jupyter Notebook (or similar): Using the Transformers library.

In Ollama or LM Studio

If you are loading the GGUF file, follow the instructions provided by Ollama or LM Studio. Typically, it involves placing the model file in the appropriate directory and selecting it within the interface.

Example (in Ollama CLI):

ollama run hf.co/mkurman/llama-3.2-MEDIT-3B-o1

You can then issue prompts. Make sure to set </Output> as a stop sequence (and possibly </Thought> as well, if your environment supports multiple stop sequences).
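
For programmatic access, here is a minimal sketch that passes </Output> as a stop sequence through Ollama's HTTP API. It assumes a local Ollama server on the default port (11434) and that the model was pulled under the name shown in the CLI example above.

import requests

# Minimal sketch: query a local Ollama server and stop generation at '</Output>'.
# The endpoint, model name, and options below assume a default Ollama setup.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/mkurman/llama-3.2-MEDIT-3B-o1",
        "prompt": "Talk about the impact of regular exercise on cardiovascular health",
        "options": {"stop": ["</Output>"], "temperature": 0.0},
        "stream": False,
    },
)
print(response.json()["response"])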


In a Jupyter Notebook or Python Script (Transformers)

from transformers import AutoTokenizer, AutoModelForCausalLM

# 1. Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("mkurman/llama-3.2-MEDIT-3B-o1")
model = AutoModelForCausalLM.from_pretrained("mkurman/llama-3.2-MEDIT-3B-o1")

# 2. Define and encode your prompt
#   Add '<Thought>\n\n' at the end if you want to ensure 
#   the model uses the correct reasoning tag.
prompt = [{'role': 'user', 'content': 'Write a short instagram post about hypertension in children. Finish with 3 hashtags'}]
input_ids = tokenizer(
    tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True) + '<Thought>\n\n',
    return_tensors='pt',
).input_ids

# 3. Generate the response with stop sequences (if your generation method supports them).
#    If your method doesn't support stop sequences directly,
#    you can manually slice the model's output at '</Output>'.
output = model.generate(
    input_ids=input_ids,
    max_new_tokens=256,
    do_sample=False,  # greedy decoding for deterministic output (a sampling temperature is not needed)
    # Some generation methods or libraries allow specifying stop sequences;
    # for example, recent Transformers versions accept:
    # stop_strings=["</Output>"], tokenizer=tokenizer,
)

# 4. Decode the output
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print(decoded_output)

Note: If your generation library does not allow direct stop sequences, you can manually parse and remove any tokens that appear after </Output>.
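
Alternatively, in Transformers you can enforce the stop yourself with a custom StoppingCriteria. The following is a minimal sketch reusing the tokenizer, model, and input_ids from the example above; the class name StopOnSubstring is purely illustrative.

from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnSubstring(StoppingCriteria):
    """Stop generation once a given substring appears in the decoded text."""
    def __init__(self, tokenizer, stop_string="</Output>"):
        self.tokenizer = tokenizer
        self.stop_string = stop_string

    def __call__(self, input_ids, scores, **kwargs):
        # Decode only the last few tokens; '</Output>' may span several tokens.
        tail = self.tokenizer.decode(input_ids[0][-10:], skip_special_tokens=True)
        return self.stop_string in tail

output = model.generate(
    input_ids=input_ids,
    max_new_tokens=256,
    do_sample=False,
    stopping_criteria=StoppingCriteriaList([StopOnSubstring(tokenizer)]),
)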


Example Prompt/Response

Prompt:

<Talk about the impact of regular exercise on cardiovascular health>
<Thought>

(Remember to add <Thought>\n\n at the end if you see the <|python_tag|> bug.)

Model’s Reasoning (<Thought> block):

Exercise improves heart function by ...

Model’s Final Answer (<Output> block):

Regular exercise has been shown to ...
</Output>

You would display the <Output> portion as the final user-facing answer.
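
As a minimal sketch of that post-processing step (the helper name split_thought_and_output is purely illustrative), you could separate the two blocks from the decoded text like this:

def split_thought_and_output(text: str):
    """Split raw model output into the reasoning block and the final answer."""
    thought = text.split("<Thought>")[-1].split("</Thought>")[0].strip()
    answer = text.split("<Output>")[-1].split("</Output>")[0].strip()
    return thought, answer

thought, answer = split_thought_and_output(decoded_output)
print(answer)  # show only the <Output> portion to the end user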


Limitations and Bias

  • Hallucination: The model may generate plausible-sounding but incorrect or nonsensical answers.
  • Medical Information: Never rely on this model as a source of truth. It is not a certified medical professional; always verify its output with qualified experts before acting on any medical advice.
  • Biases: The model’s outputs may reflect biases present in the training data. Users should evaluate content for fairness and accuracy.

License and Citation

Please refer to the base model’s Llama 3.2 Community License Agreement and any additional licenses from MedIT Solutions. If you use this model in your work, please cite:

@misc{mkurman2025llama3medit3bo1,
  title={{mkurman/llama-3.2-MEDIT-3B-o1}: A fine-tuned Llama 3.2 3B Instruct model for reasoning tasks},
  author={Kurman, Mariusz},
  year={2025},
  howpublished={\url{https://huggingface.co/mkurman/llama-3.2-MEDIT-3B-o1}}
}

Contact

For questions, comments, or issues related to mkurman/llama-3.2-MEDIT-3B-o1, please open an issue on the model repository or contact mkurman.
