kairawal/Llama-3.2-3B-Instruct-DA-SynthDolly-1A-E8

TEXT GENERATION · Concurrency cost: 1 · Model size: 3.2B · Quantization: BF16 · Context length: 32k · Published: Apr 6, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

kairawal/Llama-3.2-3B-Instruct-DA-SynthDolly-1A-E8 is a 3.2-billion-parameter instruction-tuned Llama model developed by kairawal, fine-tuned from unsloth/llama-3.2-3b-Instruct. It was trained using Unsloth and Hugging Face's TRL library, which the card credits with roughly 2x faster training. With a 32,768-token context length, it is optimized for instruction-following tasks.


Model Overview

kairawal/Llama-3.2-3B-Instruct-DA-SynthDolly-1A-E8 is an instruction-tuned large language model with 3.2 billion parameters, developed by kairawal. It is fine-tuned from the unsloth/llama-3.2-3b-Instruct base model, using the Unsloth library together with Hugging Face's TRL for efficient training.
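The model can be used through the standard Hugging Face Transformers API. The sketch below is a minimal inference example; it assumes the repository ships the usual Llama-3.2 chat template, and the prompt and generation settings are illustrative only.

```python
# Minimal inference sketch using Hugging Face Transformers.
# Assumes the repo includes the standard Llama-3.2 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kairawal/Llama-3.2-3B-Instruct-DA-SynthDolly-1A-E8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 weights listed above
    device_map="auto",
)

# Instruct-tuned Llama models expect chat-formatted input; the chat
# template inserts the required special tokens.
messages = [
    {"role": "user", "content": "Summarize the trade-offs of small instruction-tuned models."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```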

Key Characteristics

  • Architecture: Decoder-only transformer; Llama-3.2-3B-Instruct base.
  • Parameter Count: 3.2 billion parameters.
  • Context Length: 32,768-token context window.
  • Training Efficiency: Fine-tuned with Unsloth for roughly 2x faster training (see the sketch after this list).
  • License: Distributed under the Apache-2.0 license.
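The card credits Unsloth and TRL for the 2x training speedup but does not publish the training configuration. The sketch below follows Unsloth's documented supervised fine-tuning workflow as a rough illustration of how such a model is produced; the dataset file, LoRA settings, and hyperparameters are placeholders, not the author's recipe, and the SFTTrainer signature shown is the classic one (newer TRL versions move some arguments into SFTConfig).

```python
# Illustrative Unsloth + TRL supervised fine-tuning sketch.
# NOT the author's actual recipe: the dataset path, LoRA settings, and
# hyperparameters below are placeholders.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 32768  # matches the advertised context length; lower it on small GPUs

# Load the base model the card names, patched by Unsloth for faster training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-3b-Instruct",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Unsloth's standard workflow attaches LoRA adapters; whether this model
# used LoRA or full fine-tuning is not stated on the card.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset file with a pre-formatted "text" column.
dataset = load_dataset("json", data_files="synth_dolly.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,  # placeholder value
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```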

Use Cases

This model is suited to instruction-following applications such as question answering, summarization, and task-oriented chat, with the 32,768-token context window accommodating long prompts and documents. Its compact size and efficient training make it a practical choice for developers seeking a capable Llama-based model for instruction-tuned tasks.