kairawal/Llama-3.2-3B-Instruct-GA-SynthDolly-1A-E5

Text Generation · Concurrency Cost: 1 · Model Size: 3B · Quant: BF16 · Ctx Length: 32k · Published: Apr 6, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

kairawal/Llama-3.2-3B-Instruct-GA-SynthDolly-1A-E5 is a 3 billion parameter instruction-tuned causal language model from the Llama 3.2 family, developed by kairawal and fine-tuned from unsloth/llama-3.2-3b-Instruct. It was trained with Unsloth and Hugging Face's TRL library, which the author reports enabled 2x faster training. With a 32,768-token context window, it is suited to efficient instruction-following tasks.


Model Overview

kairawal/Llama-3.2-3B-Instruct-GA-SynthDolly-1A-E5 is a 3 billion parameter instruction-tuned language model, developed by kairawal and fine-tuned from the unsloth/llama-3.2-3b-Instruct base model. A key characteristic of its development is the training methodology: fine-tuning with Unsloth and Hugging Face's TRL library, which achieved a 2x speedup over a standard fine-tuning run.
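For orientation, here is a minimal loading-and-generation sketch using Hugging Face `transformers`. The `torch_dtype`, `device_map`, and `max_new_tokens` values are illustrative assumptions, not settings taken from this card:

```python
MODEL_ID = "kairawal/Llama-3.2-3B-Instruct-GA-SynthDolly-1A-E5"

def build_messages(system: str, user: str) -> list[dict]:
    """Assemble the chat-format message list expected by apply_chat_template."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

def main() -> None:
    # transformers is imported lazily so the helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="bfloat16",  # matches the BF16 quant listed above
        device_map="auto",       # assumption: let accelerate place the weights
    )
    messages = build_messages(
        "You are a helpful assistant.",
        "Explain what instruction tuning is in one sentence.",
    )
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=128)
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))

if __name__ == "__main__":
    main()
```

Since the checkpoint applies the standard Llama chat template, it should also work unchanged with any runtime that consumes Hugging Face-format Llama weights.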

Key Capabilities

  • Instruction Following: Designed to respond effectively to user instructions, making it suitable for conversational agents and task-oriented applications.
  • Efficient Training: Benefits from the Unsloth framework, which optimizes fine-tuning for speed and lower memory use.
  • Extended Context: Features a substantial context window of 32768 tokens, allowing it to process and generate longer, more coherent responses based on extensive input.

Good For

  • Applications requiring a compact yet capable instruction-following model.
  • Scenarios where efficient deployment and inference of a Llama-based model are crucial.
  • Tasks that benefit from a large context window for understanding complex prompts or generating detailed outputs.
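To make the 32,768-token window concrete: a long-running conversation eventually has to be trimmed to fit it. The helper below is a hypothetical sketch of that bookkeeping; the whitespace-based token estimate is a rough stand-in for a real tokenizer count:

```python
# Hypothetical helper: keep the most recent chat turns inside the model's
# 32,768-token context window, leaving headroom for the generated reply.

MAX_CONTEXT = 32_768

def approx_tokens(text: str) -> int:
    # Rough estimate: ~1 token per whitespace-separated word. In practice,
    # use the model tokenizer for an exact count.
    return len(text.split())

def trim_to_context(messages, budget=MAX_CONTEXT, count=approx_tokens, reserve=512):
    """Drop the oldest messages until the remainder fits in budget - reserve."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-first
        cost = count(msg["content"])
        if used + cost > budget - reserve:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))     # restore chronological order
```

For example, a history whose oldest turn alone exceeds the budget is trimmed down to just the recent turns that fit.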