kairawal/Llama-3.2-3B-Instruct-ZH-SynthDolly-1A-E3

Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Apr 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Status: Cold

kairawal/Llama-3.2-3B-Instruct-ZH-SynthDolly-1A-E3 is a 3.2-billion-parameter instruction-tuned Llama model developed by kairawal and fine-tuned from unsloth/llama-3.2-3b-Instruct. It was trained with Unsloth and Hugging Face's TRL library, reportedly achieving up to 2x faster training. With a 32,768-token context length, it is suited to applications that require efficient processing of longer sequences.


Model Overview

kairawal/Llama-3.2-3B-Instruct-ZH-SynthDolly-1A-E3 is an instruction-tuned Llama model with 3.2 billion parameters, developed by kairawal. It was fine-tuned from the unsloth/llama-3.2-3b-Instruct base model using the Unsloth library together with Hugging Face's TRL, reportedly achieving 2x faster fine-tuning.
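
Because this is a standard Llama-architecture checkpoint, it should load with the usual Hugging Face transformers APIs. Below is a minimal inference sketch; the prompt and generation settings are illustrative assumptions, not values taken from the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kairawal/Llama-3.2-3B-Instruct-ZH-SynthDolly-1A-E3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# Illustrative prompt; swap in your own instruction.
messages = [
    {"role": "user", "content": "Explain the tradeoffs of small language models in three sentences."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```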

Key Characteristics

  • Architecture: Transformer, based on Llama-3.2-3B-Instruct.
  • Parameter Count: 3.2 billion.
  • Training Efficiency: Fine-tuned with Unsloth for significantly faster training (see the loading sketch after this list).
  • Context Length: Supports a substantial 32,768-token (32k) context window.
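
Since the card highlights Unsloth-based training, a continued fine-tune would typically start by loading the checkpoint through Unsloth's FastLanguageModel. The following is a hedged sketch assuming the public Unsloth API; all hyperparameters are placeholders, not the values used to produce this model.

```python
from unsloth import FastLanguageModel

# Load the checkpoint for further fine-tuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="kairawal/Llama-3.2-3B-Instruct-ZH-SynthDolly-1A-E3",
    max_seq_length=32768,  # full context window listed above
    load_in_4bit=True,     # optional QLoRA-style loading to cut memory use
)

# Attach LoRA adapters before continued instruction tuning;
# ranks and target modules here are illustrative placeholders.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```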

Potential Use Cases

This model is well suited to applications where a compact yet capable instruction-following model with a large context window is beneficial, such as long-document question answering or multi-turn assistants on modest hardware. Its Unsloth-optimized training pipeline also makes it a practical starting point for developers who want to fine-tune and deploy Llama-based models efficiently.