kairawal/Llama-3.2-3B-Instruct-ES-SynthDolly-1A-E8

Text generation · Open weights

  • Model size: 3.2B parameters
  • Quantization: BF16
  • Context length: 32k tokens
  • Published: Apr 6, 2026
  • License: apache-2.0
  • Architecture: Transformer

kairawal/Llama-3.2-3B-Instruct-ES-SynthDolly-1A-E8 is a 3.2-billion-parameter instruction-tuned language model developed by kairawal and fine-tuned from unsloth/llama-3.2-3b-Instruct. It was trained with Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster fine-tuning. With a 32,768-token context window, it is suited to efficient instruction-following tasks.


Model Overview

kairawal/Llama-3.2-3B-Instruct-ES-SynthDolly-1A-E8 is a 3.2-billion-parameter instruction-tuned language model. It was developed by kairawal and fine-tuned from the unsloth/llama-3.2-3b-Instruct base model.

Key Characteristics

  • Efficient Training: This model was trained using the Unsloth library in conjunction with Hugging Face's TRL library, which facilitated a roughly 2x faster fine-tuning process.
  • Instruction-Tuned: Designed to follow instructions effectively, making it suitable for a variety of conversational and task-oriented applications.
  • Context Length: Features a substantial context window of 32,768 tokens, allowing it to process and generate longer sequences of text.
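Because the model is instruction-tuned from Llama 3.2, prompts are normally rendered with the Llama 3.x chat format before generation. A minimal sketch of that format, assuming the standard Llama 3.x special tokens (in practice, prefer `tokenizer.apply_chat_template`, which reads the template shipped with the model):

```python
def build_llama3_prompt(messages):
    """Render a list of {role, content} dicts into the Llama 3.x chat format.

    Illustrative only: the authoritative template is the one bundled with
    the model's tokenizer config.
    """
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Open an assistant header so the model generates the reply next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this model card in one sentence."},
])
```

The rendered string (system turn, user turn, then an open assistant header) is what the tokenizer would produce for these messages, and every turn is closed with `<|eot_id|>`.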

Intended Use

This model is well-suited for applications requiring a compact yet capable instruction-following LLM, particularly where training efficiency and a large context window are beneficial. Its Apache-2.0 license permits commercial use, modification, and redistribution.
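For reference, a minimal inference sketch using the Hugging Face `transformers` pipeline API. The model id comes from this card; the dtype, device mapping, and generation settings are assumptions, and `device_map="auto"` additionally requires the `accelerate` package:

```python
MODEL_ID = "kairawal/Llama-3.2-3B-Instruct-ES-SynthDolly-1A-E8"


def generate_reply(user_message, max_new_tokens=128):
    # Imported lazily so the sketch can be inspected without transformers installed.
    from transformers import pipeline

    # BF16 matches the precision listed on this card.
    generator = pipeline(
        "text-generation",
        model=MODEL_ID,
        torch_dtype="bfloat16",
        device_map="auto",
    )
    messages = [{"role": "user", "content": user_message}]
    out = generator(messages, max_new_tokens=max_new_tokens)
    # The pipeline returns the full chat history; the last turn is the reply.
    return out[0]["generated_text"][-1]["content"]


if __name__ == "__main__":
    print(generate_reply("Explica en una frase qué es un LLM."))
```

Passing a list of message dicts (rather than a raw string) lets the pipeline apply the model's own chat template automatically.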