kairawal/Llama-3.2-1B-Instruct-ES-SynthDolly-1A-E3

Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

kairawal/Llama-3.2-1B-Instruct-ES-SynthDolly-1A-E3 is a 1 billion parameter instruction-tuned Llama 3.2 model developed by kairawal and fine-tuned from unsloth/llama-3.2-1b-Instruct. It was trained 2x faster using Unsloth and Hugging Face's TRL library, and it offers a 32,768-token context length. Its primary differentiator is its efficient training methodology, which makes it suitable for applications that need a compact yet capable instruction-following model.


Model Overview

kairawal/Llama-3.2-1B-Instruct-ES-SynthDolly-1A-E3 is a 1 billion parameter instruction-tuned language model developed by kairawal. It is based on the Llama 3.2 architecture and was fine-tuned from the unsloth/llama-3.2-1b-Instruct base model. A key characteristic is its optimized training process, which used Unsloth together with Hugging Face's TRL library to train roughly 2x faster, allowing quicker iteration and deployment of instruction-following capabilities in a compact model.
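The card does not publish a training script, but the Unsloth + TRL workflow it references typically follows the pattern sketched below. This is a minimal sketch only: the dataset file, LoRA settings, and trainer hyperparameters are illustrative placeholders rather than the author's actual configuration, and exact argument names can differ between Unsloth and TRL releases.

```python
# Minimal sketch of the Unsloth + TRL fine-tuning pattern referenced in the card.
# Dataset, LoRA settings, and trainer hyperparameters are illustrative placeholders,
# not the author's published configuration.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model named in the card with the 32k context window
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-1b-Instruct",
    max_seq_length=32768,
    dtype=None,           # auto-detect (BF16 on supported GPUs)
    load_in_4bit=False,
)

# Attach LoRA adapters (hypothetical settings)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder instruction dataset with a pre-formatted "text" column
dataset = load_dataset("json", data_files="synth_dolly_es.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=32768,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```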

Key Capabilities

  • Efficient Training: Fine-tuned with Unsloth for significantly faster training.
  • Instruction Following: Designed to follow user instructions effectively (see the usage sketch below).
  • Compact Size: At 1 billion parameters, it balances capability with resource efficiency.
  • Extended Context: Supports a context length of 32,768 tokens.
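For inference, the model can be loaded with the standard Transformers text-generation pipeline. The following is a minimal sketch, assuming the weights are available on the Hugging Face Hub under the repository ID above; the prompt and generation settings are illustrative.

```python
# Minimal inference sketch using the Transformers chat-style text-generation pipeline.
# Assumes the weights can be downloaded from the Hugging Face Hub under this repo ID.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kairawal/Llama-3.2-1B-Instruct-ES-SynthDolly-1A-E3",
    torch_dtype=torch.bfloat16,   # matches the card's BF16 weights
    device_map="auto",
)

# Chat-formatted input; the pipeline applies the model's chat template
messages = [
    {"role": "user", "content": "Summarize the key ideas behind instruction tuning in two sentences."},
]

output = generator(messages, max_new_tokens=256)
print(output[0]["generated_text"][-1]["content"])  # assistant reply
```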

Good For

  • Applications requiring a small, fast-to-train instruction-tuned model.
  • Edge deployments or environments with limited computational resources.
  • Rapid prototyping and experimentation with Llama 3.2-based instruction models.