kairawal/Llama-3.2-3B-Instruct-ZH-SynthDolly-1A-E1

Text generation

  • Concurrency cost: 1
  • Model size: 3.2B
  • Quantization: BF16
  • Context length: 32k
  • Published: Apr 9, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

kairawal/Llama-3.2-3B-Instruct-ZH-SynthDolly-1A-E1 is a 3.2 billion parameter instruction-tuned Llama model developed by kairawal. It was fine-tuned with Unsloth and Hugging Face's TRL library, enabling faster training, and is intended for general instruction-following tasks, leveraging the Llama architecture for efficient performance.


Model Overview

kairawal/Llama-3.2-3B-Instruct-ZH-SynthDolly-1A-E1 is a 3.2 billion parameter instruction-tuned language model based on the Llama architecture and developed by kairawal. It was fine-tuned from unsloth/llama-3.2-3b-Instruct using the Unsloth library together with Hugging Face's TRL library, a combination the author reports made training roughly 2x faster.
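As a fine-tune of a Llama 3.2 instruct model, it should load through Hugging Face's standard `transformers` chat pipeline. A minimal sketch, assuming the weights are published under the model ID shown on this card (the system/user prompts are placeholders):

```python
# Sketch of loading this model with Hugging Face transformers.
# The model ID is taken from this card; prompts and generation
# settings are illustrative, not prescribed by the author.
MODEL_ID = "kairawal/Llama-3.2-3B-Instruct-ZH-SynthDolly-1A-E1"


def build_messages(system: str, user: str) -> list[dict]:
    """Arrange a system prompt and one user turn in the chat-message
    format that transformers' text-generation pipeline accepts."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]


def load_generator(model_id: str = MODEL_ID):
    """Build a text-generation pipeline for the model.

    Imported lazily: this downloads several GB of BF16 weights and
    requires `transformers` and `torch` to be installed.
    """
    from transformers import pipeline

    return pipeline("text-generation", model=model_id, torch_dtype="bfloat16")
```

With a generator in hand, `load_generator()(build_messages("You are a helpful assistant.", "..."), max_new_tokens=128)` would run one chat turn; the pipeline applies the model's own chat template to the message list.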

Key Characteristics

  • Architecture: Llama 3.2, fine-tuned from the 3B instruct base model.
  • Training Efficiency: Fine-tuned with Unsloth for accelerated training.
  • Parameter Count: 3.2 billion parameters, offering a balance between performance and computational cost.
  • Context Length: Supports a context length of 32768 tokens.
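When prompts approach the 32,768-token window, the oldest tokens must be dropped to leave room for the reply. A minimal sketch of that budgeting logic, independent of any particular tokenizer (the token IDs here are illustrative; in practice they come from the model's tokenizer):

```python
# Sketch: keep a prompt within the 32,768-token context window while
# reserving space for the generated reply. Pure bookkeeping; the
# actual token IDs would come from the model's tokenizer.
CTX_LEN = 32_768


def truncate_for_generation(input_ids: list[int], max_new_tokens: int) -> list[int]:
    """Drop the oldest tokens so prompt + reply fits in the context."""
    budget = CTX_LEN - max_new_tokens
    if budget <= 0:
        raise ValueError("max_new_tokens exceeds the context window")
    # Keep the most recent `budget` tokens (left truncation).
    return input_ids[-budget:]
```

Left truncation is the usual choice for chat models, since the most recent turns carry the context the reply depends on.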

Potential Use Cases

  • Instruction Following: Suitable for tasks requiring the model to adhere to specific instructions.
  • General Language Generation: Can be used for various text generation tasks where a compact yet capable model is needed.
  • Research and Development: Provides a foundation for further experimentation and fine-tuning on specific datasets.
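For the fine-tuning use case, the "SynthDolly" part of the model name suggests a Dolly-style instruction dataset. A minimal sketch of converting such records into the conversational format TRL's SFTTrainer accepts; the `instruction`/`context`/`response` field names mirror the original databricks-dolly-15k schema and are an assumption about the SynthDolly data:

```python
# Sketch: map a Dolly-style record (instruction / context / response)
# to a chat-format training example. Field names follow the
# databricks-dolly-15k schema and are assumed, not confirmed, for
# the SynthDolly dataset used here.
def to_chat_example(record: dict) -> dict:
    user = record["instruction"]
    if record.get("context"):
        # Prepend the supporting context to the user turn.
        user = f"{record['context']}\n\n{user}"
    return {
        "messages": [
            {"role": "user", "content": user},
            {"role": "assistant", "content": record["response"]},
        ]
    }
```

A dataset mapped through this function yields a `messages` column in the conversational format that TRL's SFTTrainer can consume directly, applying the model's chat template during training.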