kairawal/Llama-3.2-3B-Instruct-GA-SynthDolly-1A-E8

Text generation · Concurrency cost: 1 · Model size: 3.2B · Quant: BF16 · Context length: 32k · Published: Apr 6, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

kairawal/Llama-3.2-3B-Instruct-GA-SynthDolly-1A-E8 is a 3.2-billion-parameter instruction-tuned causal language model developed by kairawal. It was fine-tuned from unsloth/llama-3.2-3b-Instruct using Unsloth together with Hugging Face's TRL library, a combination Unsloth reports as roughly 2x faster than standard fine-tuning. With a 32,768-token context length, it is designed for general instruction-following tasks.


Model Overview

kairawal/Llama-3.2-3B-Instruct-GA-SynthDolly-1A-E8 was developed by kairawal and fine-tuned from the unsloth/llama-3.2-3b-Instruct base model. A notable aspect of its development is training efficiency: fine-tuning used the Unsloth library in conjunction with Hugging Face's TRL library, which Unsloth reports accelerates training by roughly 2x.

Key Capabilities

  • Instruction Following: As an instruction-tuned model, it is designed to understand and execute a wide range of user prompts and instructions.
  • Efficient Training: Leverages Unsloth for accelerated fine-tuning, which makes further adaptation or iteration on the model comparatively cheap.
  • Llama 3.2 Architecture: Benefits from the underlying Llama 3.2 architecture, providing a solid foundation for language understanding and generation.
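Because the model is instruction-tuned on the Llama 3.2 base, prompts are expected in the Llama 3 chat layout. The sketch below builds a single-turn prompt by hand to make the format visible; in practice you would call `tokenizer.apply_chat_template()`, and the exact special tokens should be verified against the `tokenizer_config.json` in the model repo.

```python
# Sketch: hand-building a Llama 3-style chat prompt. The special tokens below
# are the commonly documented Llama 3 family format, assumed (not verified
# against this specific repo's tokenizer config).

def build_llama3_prompt(system: str, user: str) -> str:
    """Format a single-turn conversation in the Llama 3 chat layout."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a concise assistant.",
    "List three uses of a 3B instruction-tuned model.",
)
print(prompt)
```

The trailing assistant header leaves the prompt open for the model to generate its reply.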

Good For

  • General Purpose Instruction Tasks: Suitable for various applications requiring a model to follow explicit instructions.
  • Resource-Efficient Deployment: Its 3.2B-parameter size makes it a good candidate for scenarios where GPU memory or inference cost is constrained.
  • Experimentation with Unsloth: Demonstrates the practical application of Unsloth for fine-tuning Llama-based models.
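To make the resource-efficiency claim concrete, here is a back-of-the-envelope memory estimate for serving the model in BF16 at the full 32k context. The parameter count and context length come from the model card; the layer/head/dimension figures are the commonly published Llama 3.2 3B configuration and are assumptions to be checked against the repo's `config.json`.

```python
# Rough serving-memory estimate for a 3.2B BF16 model with a 32k context.
# PARAMS and CTX are from the model card; N_LAYERS, N_KV_HEADS, and HEAD_DIM
# are assumed Llama 3.2 3B config values, not verified against this repo.

PARAMS = 3.2e9          # parameters (model card)
BYTES_PER_PARAM = 2     # BF16 stores each weight in 2 bytes

N_LAYERS = 28           # assumed transformer depth
N_KV_HEADS = 8          # assumed grouped-query KV heads
HEAD_DIM = 128          # assumed per-head dimension
CTX = 32_768            # context length (model card)

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
# KV cache: 2 tensors (K and V) per layer, each N_KV_HEADS * HEAD_DIM wide,
# one entry per cached token, 2 bytes per value in BF16.
kv_gb = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES_PER_PARAM * CTX / 1e9

print(f"Weights: ~{weights_gb:.1f} GB, full-context KV cache: ~{kv_gb:.1f} GB")
```

Under these assumptions the weights alone fit in about 6.4 GB, so the model plus a full-length KV cache stays within a single consumer-grade GPU.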