kairawal/Llama-3.2-3B-Instruct-PT-SynthDolly-1A-E5

Text Generation · Open Weights

  • Model Size: 3.2B
  • Quantization: BF16
  • Context Length: 32k
  • Concurrency Cost: 1
  • Published: Apr 6, 2026
  • License: apache-2.0
  • Architecture: Transformer

kairawal/Llama-3.2-3B-Instruct-PT-SynthDolly-1A-E5 is a 3.2-billion-parameter instruction-tuned model in the Llama 3.2 family, published by kairawal. It was finetuned from unsloth/llama-3.2-3b-Instruct using Unsloth together with Hugging Face's TRL library, a combination Unsloth reports trains up to 2x faster than standard finetuning. The model targets instruction-following and conversational text-generation tasks.
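Because the base is a Llama 3.2 instruct model, it presumably expects the standard Llama 3 chat prompt format. The sketch below renders a message list into that format; the special tokens are the usual Llama 3 ones, on the assumption that this finetune kept the base model's template unchanged (the card does not confirm this).

```python
# Sketch of the Llama 3 chat prompt format this model likely expects.
# Assumption: the finetune reuses the base model's standard template.

def build_llama3_prompt(messages: list[dict]) -> str:
    """Render a list of {"role": ..., "content": ...} messages as a prompt."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open an assistant header so the model generates the reply next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what instruction tuning is."},
])
```

In practice you would not hand-build this string: `AutoTokenizer.from_pretrained(...).apply_chat_template(messages, add_generation_prompt=True)` from the transformers library applies the model's own template automatically.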


kairawal/Llama-3.2-3B-Instruct-PT-SynthDolly-1A-E5 Overview

This model is a 3.2-billion-parameter instruction-tuned variant of the Llama architecture, developed by kairawal. It was finetuned from the unsloth/llama-3.2-3b-Instruct base model.

Key Characteristics

  • Efficient Training: The model was trained using Unsloth and Hugging Face's TRL library, which Unsloth reports speeds training by roughly 2x over standard finetuning.
  • Instruction-Tuned: Designed to follow instructions effectively, making it suitable for various conversational and task-oriented applications.
  • Base Model: Built on the Llama 3.2 3B Instruct foundation, inheriting its general language-understanding capabilities.

Good For

  • Applications requiring a compact yet capable instruction-following model.
  • Scenarios where efficient training and deployment of Llama-based models are crucial.
  • General-purpose text generation and understanding tasks that benefit from instruction tuning.
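When serving an instruction-tuned Llama 3 model, raw completions can run past the end of the assistant turn. A minimal post-processing sketch, again assuming the standard Llama 3 stop tokens (not stated on this card):

```python
# Llama 3 models emit <|eot_id|> to end an assistant turn. If a serving
# stack returns raw text, trim the reply at the first stop token.
# Token names are the standard Llama 3 ones (assumption).

STOP_TOKENS = ("<|eot_id|>", "<|end_of_text|>")

def trim_at_stop(text: str, stop_tokens=STOP_TOKENS) -> str:
    """Cut generated text at the earliest stop token, if any is present."""
    cut = len(text)
    for tok in stop_tokens:
        idx = text.find(tok)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut].rstrip()

raw = "Instruction tuning adapts a model to follow prompts.<|eot_id|><|start_header_id|>"
reply = trim_at_stop(raw)  # → "Instruction tuning adapts a model to follow prompts."
```

Most inference servers accept these as `stop` strings directly, which avoids client-side trimming altogether.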