kairawal/Llama-3.2-1B-Instruct-EL-SynthDolly-1A-E5

Text generation | Concurrency cost: 1 | Model size: 1B | Quantization: BF16 | Context length: 32k | Published: Apr 5, 2026 | License: apache-2.0 | Architecture: Transformer | Open weights

kairawal/Llama-3.2-1B-Instruct-EL-SynthDolly-1A-E5 is a 1-billion-parameter instruction-tuned Llama model developed by kairawal. It was fine-tuned from unsloth/llama-3.2-1b-Instruct using Unsloth together with Hugging Face's TRL library, a combination Unsloth advertises as roughly doubling training speed. The model targets general instruction-following tasks, with its compact size and efficient training making it practical to iterate on and deploy.


Model Overview

kairawal/Llama-3.2-1B-Instruct-EL-SynthDolly-1A-E5 is a 1-billion-parameter instruction-tuned language model developed by kairawal and based on unsloth/llama-3.2-1b-Instruct. It was fine-tuned with the Unsloth library in conjunction with Hugging Face's TRL library, a workflow that Unsloth reports delivers roughly a 2x training speedup.
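The exact training recipe is not published. As a rough illustration of the Unsloth + TRL supervised fine-tuning workflow described above, here is a minimal sketch; the dataset file, LoRA configuration, and hyperparameters are assumptions for illustration, not the author's actual settings.

```python
# Minimal Unsloth + TRL SFT sketch. The dataset path, LoRA settings, and
# hyperparameters below are illustrative assumptions, not the published
# recipe. Exact SFTTrainer kwargs vary across trl versions.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

# Load the stated base model through Unsloth's fast loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-1b-Instruct",
    max_seq_length=32768,  # the context length listed on this card
)

# Attach LoRA adapters (assumed config; Unsloth fine-tunes adapters
# rather than full weights in its standard workflow).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical instruction dataset with a pre-formatted "text" column.
dataset = load_dataset("json", data_files="synth_dolly.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        max_seq_length=32768,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,  # assumed
        output_dir="outputs",
    ),
)
trainer.train()
```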

Key Characteristics

  • Base Model: Fine-tuned from unsloth/llama-3.2-1b-Instruct.
  • Efficient Training: Uses Unsloth for faster fine-tuning.
  • Parameter Count: 1 billion parameters, balancing capability against computational cost.
  • Context Length: Supports up to 32,768 tokens.

Use Cases

This model is suitable for instruction-following tasks where a compact yet capable language model is required. Its small footprint and efficient training make it a reasonable candidate for applications that need rapid iteration or deployment in resource-constrained environments.
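As a minimal usage sketch, the checkpoint should load through the standard transformers text-generation pipeline like any Llama 3.2 instruct model; the prompt and generation settings below are illustrative, not recommendations from the model author.

```python
# Minimal inference sketch using the standard transformers chat pipeline.
# Prompt and generation parameters are illustrative only.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kairawal/Llama-3.2-1B-Instruct-EL-SynthDolly-1A-E5",
    torch_dtype=torch.bfloat16,  # matches the listed BF16 weights
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Give three tips for writing clear commit messages."},
]
output = generator(messages, max_new_tokens=256)
print(output[0]["generated_text"][-1]["content"])
```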