kairawal/Llama-3.2-3B-Instruct-PT-SynthDolly-1A-E1

Text generation · Concurrency cost: 1 · Model size: 3.2B · Quantization: BF16 · Context length: 32k · Published: Apr 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

kairawal/Llama-3.2-3B-Instruct-PT-SynthDolly-1A-E1 is a 3.2-billion-parameter instruction-tuned Llama model developed by kairawal. It was fine-tuned from unsloth/llama-3.2-3b-Instruct using Unsloth together with Hugging Face's TRL library, which enabled roughly 2x faster training. The model is intended for general instruction-following tasks.


Model Overview

kairawal/Llama-3.2-3B-Instruct-PT-SynthDolly-1A-E1 is a 3.2-billion-parameter instruction-tuned language model. Developed by kairawal, it is a fine-tune in the Llama 3.2 family, derived from unsloth/llama-3.2-3b-Instruct.

Key Characteristics

  • Architecture: Llama-based, instruction-tuned.
  • Parameter Count: 3.2 billion parameters.
  • Training Efficiency: Fine-tuned using Unsloth and Hugging Face's TRL library, reportedly about 2x faster than standard fine-tuning.
  • Context Length: Supports a context window of 32,768 tokens.
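Because the model was fine-tuned from a Llama 3.2 Instruct base, it presumably expects the standard Llama 3 chat template. Below is a minimal sketch of that prompt format built by hand; the special tokens are assumptions based on the Llama 3 family, so confirm them against this model's tokenizer (e.g. via `apply_chat_template`) before relying on them:

```python
# Sketch of the Llama 3-style chat prompt this model likely expects.
# The special tokens below follow the published Llama 3 template;
# verify against the model's own tokenizer config before use.
def build_llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a helpful assistant.",
    "Name three prime numbers.",
)
print(prompt)
```

In practice, passing a list of `{"role": ..., "content": ...}` messages to the tokenizer's `apply_chat_template` method produces the canonical prompt and avoids hand-building these token sequences.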

Use Cases

This model is suitable for general instruction-following applications, building on its Llama 3.2 Instruct foundation. Its compact 3B-class size and efficient fine-tuning process make it a reasonable choice for deployments where resource efficiency and rapid iteration matter.