kairawal/Llama-3.2-3B-Instruct-ZH-SynthDolly-1A-E5

Text Generation · Model Size: 3.2B · Quant: BF16 · Context Length: 32k · Concurrency Cost: 1 · Published: Apr 6, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

kairawal/Llama-3.2-3B-Instruct-ZH-SynthDolly-1A-E5 is a 3.2 billion parameter instruction-tuned model based on Llama-3.2-Instruct, developed by kairawal and fine-tuned with Unsloth and Hugging Face's TRL library. It targets instruction-following tasks and supports a 32,768-token context length, making it suitable for applications that need to process longer inputs.


Model Overview

kairawal/Llama-3.2-3B-Instruct-ZH-SynthDolly-1A-E5 is an instruction-tuned language model with 3.2 billion parameters, developed by kairawal. It is based on the Llama-3.2-Instruct architecture and was fine-tuned using the Unsloth library together with Hugging Face's TRL library, enabling faster training.
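Because the model keeps the standard Llama-3.2 layout, inference can be sketched with the generic `transformers` causal-LM API. This is a minimal sketch, not code from the model card: the `run_chat` helper name and the generation parameters are illustrative assumptions.

```python
def run_chat(user_message: str,
             model_id: str = "kairawal/Llama-3.2-3B-Instruct-ZH-SynthDolly-1A-E5",
             max_new_tokens: int = 256) -> str:
    """Sketch: load the model in BF16 and answer a single user turn."""
    # Heavy imports live inside the function so the sketch can be
    # imported and read without pulling in torch/transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    messages = [{"role": "user", "content": user_message}]
    # apply_chat_template serializes the turn with Llama 3's special tokens.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:],
                            skip_special_tokens=True)
```

Note that the first call downloads the BF16 weights (several GB), so a GPU with sufficient memory is assumed here.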

Key Capabilities

  • Instruction Following: Tuned to follow user instructions reliably, making it suitable for NLP tasks that require directed responses.
  • Efficient Training: Benefits from Unsloth's optimizations, which allowed for a 2x faster fine-tuning process.
  • Llama-3.2 Base: Inherits the robust capabilities of the Llama-3.2-Instruct foundational model.
  • Extended Context: Supports a context length of 32768 tokens, facilitating the processing of longer prompts and documents.
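The instruction-following and extended-context behavior above rests on the Llama 3 chat format, where each turn is wrapped in header and end-of-turn tokens. As a rough illustration, here is how a conversation serializes; `format_llama3_prompt` is a hypothetical helper (in practice the tokenizer's `apply_chat_template` does this), and the template details are a sketch of the Llama 3 format rather than this model's exact configuration.

```python
# Sketch: manual construction of a Llama-3-style chat prompt.
def format_llama3_prompt(messages):
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        # Each turn: role header, blank line, content, end-of-turn marker.
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>"
            f"\n\n{msg['content']}<|eot_id|>"
        )
    # Open an assistant header so the model generates the reply next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize Dolly-style instruction tuning."},
]
prompt = format_llama3_prompt(messages)
```

The trailing assistant header is what cues the model to produce a response instead of continuing the user's text.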

Good For

  • Applications requiring a compact yet capable instruction-following model.
  • Scenarios where efficient inference and deployment of a Llama-based model are crucial.
  • Tasks benefiting from a model fine-tuned with performance-enhancing libraries like Unsloth.
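For the deployment scenarios above, a back-of-the-envelope weight-memory estimate follows directly from the parameter count and BF16 precision listed on this card. This is a sketch of the weights alone; KV-cache and activation memory come on top and scale with batch size and context length.

```python
# Rough serving-memory estimate for the model weights alone,
# using the figures from this card: 3.2B parameters in BF16.
PARAMS = 3.2e9           # parameter count
BYTES_PER_PARAM = 2      # bfloat16 = 2 bytes per parameter

weight_bytes = PARAMS * BYTES_PER_PARAM
weight_gib = weight_bytes / 2**30   # bytes -> GiB
print(f"Approximate weight memory: {weight_gib:.1f} GiB")
```

At the full 32k context the KV cache can add a substantial fraction of this again, so plan GPU memory with headroom.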