kairawal/Llama-3.2-3B-Instruct-ES-SynthDolly-1A-E3

Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Apr 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

kairawal/Llama-3.2-3B-Instruct-ES-SynthDolly-1A-E3 is a 3.2 billion parameter instruction-tuned language model developed by kairawal and finetuned from unsloth/llama-3.2-3b-Instruct. The model was trained with Unsloth and Hugging Face's TRL library, enabling faster training. It is designed for instruction-following tasks, building on the Llama-3.2 architecture with a 32,768-token context length.
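
The checkpoint should load with the standard transformers chat interface. The snippet below is a minimal sketch, assuming the weights are published on the Hugging Face Hub under the repo id above and that the bundled tokenizer ships the Llama-3.2 chat template; the prompt and generation settings are placeholders, so adjust dtype and device placement to your hardware.

```python
# Minimal inference sketch (assumes the repo id above resolves on the Hugging Face Hub
# and the tokenizer provides the Llama-3.2 chat template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kairawal/Llama-3.2-3B-Instruct-ES-SynthDolly-1A-E3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# Example instruction; replace with your own prompt.
messages = [
    {"role": "user", "content": "Explain what instruction tuning is in one paragraph."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```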

Model Overview

kairawal/Llama-3.2-3B-Instruct-ES-SynthDolly-1A-E3 is an instruction-tuned language model with 3.2 billion parameters, developed by kairawal. It is based on the Llama-3.2 architecture and was finetuned from unsloth/llama-3.2-3b-Instruct. A notable aspect of its development is the use of Unsloth together with Hugging Face's TRL library, which made training roughly 2x faster.

Key Characteristics

  • Architecture: Llama-3.2; finetuned from the unsloth/llama-3.2-3b-Instruct base model.
  • Parameter Count: 3.2 billion parameters, offering a balance between capability and computational cost.
  • Context Length: Supports a context window of 32,768 tokens.
  • Training Efficiency: Leverages Unsloth for accelerated finetuning (see the sketch after this list).
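
For further finetuning along the same lines, the base model can be loaded through Unsloth. The following is a minimal sketch assuming a recent unsloth installation; the LoRA hyperparameters and the 4-bit setting are illustrative placeholders, not the configuration actually used to produce this checkpoint.

```python
# Sketch of Unsloth-accelerated finetuning setup (hypothetical hyperparameters;
# the card does not publish the exact training configuration).
from unsloth import FastLanguageModel

max_seq_length = 32768  # matches the context length listed above

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-3b-Instruct",  # base model named on this card
    max_seq_length=max_seq_length,
    dtype=None,           # auto-detect; bf16 on supported GPUs
    load_in_4bit=False,   # set True to reduce memory during training
)

# Attach LoRA adapters so only a small fraction of weights is updated.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0.0,
    bias="none",
    use_gradient_checkpointing="unsloth",
)
```

From here, training would typically proceed with TRL's SFTTrainer on an instruction dataset formatted with the Llama-3.2 chat template, which is the Unsloth + TRL workflow the card credits for the speedup.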

Intended Use Cases

This model is primarily suited to instruction-following tasks, benefiting from its instruction-tuned nature. Its modest 3B-class parameter count makes it a good candidate for applications that need a capable yet resource-conscious language model, while the efficient Unsloth-based training setup keeps iterative finetuning and rapid redeployment practical.