kairawal/Llama-3.2-1B-Instruct-PT-SynthDolly-1A-E1

Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

kairawal/Llama-3.2-1B-Instruct-PT-SynthDolly-1A-E1 is a 1-billion-parameter instruction-tuned Llama model developed by kairawal. It was finetuned from unsloth/llama-3.2-1b-Instruct using Unsloth and Hugging Face's TRL library for faster training. A 32,768-token context length makes it suitable for tasks that require long inputs, and its efficient training methodology offers a performant yet compact option for instruction-following work.
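Since the checkpoint is published in BF16 with open weights, it can presumably be loaded with the standard transformers API. A minimal sketch, assuming the repo id above is available on the Hugging Face Hub:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kairawal/Llama-3.2-1B-Instruct-PT-SynthDolly-1A-E1"

# Load the tokenizer and weights in BF16, matching the published quant.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16, as listed in the model metadata
    device_map="auto",           # place layers on available GPU/CPU
)
```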


Model Overview

kairawal/Llama-3.2-1B-Instruct-PT-SynthDolly-1A-E1 is a compact, 1-billion-parameter instruction-tuned language model. Developed by kairawal, it is a finetuned version of unsloth/llama-3.2-1b-Instruct and is notable for its training efficiency.

Key Characteristics

  • Architecture: Llama 3.2 transformer; finetuned from the Llama-3.2-1B-Instruct base model.
  • Parameter Count: 1 billion parameters, balancing capability against resource cost.
  • Context Length: Supports 32,768 tokens, enabling it to handle extensive input sequences.
  • Training Optimization: Finetuned with Unsloth and Hugging Face's TRL library, which the card reports as roughly 2x faster training; see the sketch after this list.
  • License: Released under the Apache-2.0 license.
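The exact finetuning recipe is not published on the card. The following is a sketch of the typical Unsloth + TRL (SFTTrainer) workflow the card alludes to, in the style of the Unsloth example notebooks. The dataset (databricks/databricks-dolly-15k) and all hyperparameters are illustrative placeholders, not the values used to produce this checkpoint:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the same base checkpoint the card names, with Unsloth's patched kernels.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-1b-Instruct",
    max_seq_length=2048,   # placeholder; the checkpoint supports up to 32k
    load_in_4bit=True,     # QLoRA-style memory saving (assumption)
)

# Attach LoRA adapters; rank and target modules are common defaults, not the card's.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Stand-in instruction dataset; the actual "SynthDolly" data is not public.
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

def to_text(example):
    # Flatten instruction/context/response into a single training string.
    return {"text": f"{example['instruction']}\n{example['context']}\n{example['response']}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,  # the "E1" suffix may denote one epoch (assumption)
        learning_rate=2e-4,
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```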

Good For

  • Instruction Following: Tuned to follow explicit task instructions; see the inference example after this list.
  • Resource-Constrained Environments: Its 1 billion parameter size makes it suitable for deployment where computational resources are limited.
  • Rapid Prototyping: The optimized training process suggests it can be quickly adapted or further finetuned for specific applications.
  • Long Context Applications: The 32768 token context window is beneficial for tasks involving lengthy documents or conversations.
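To illustrate the instruction-following use case, here is a minimal inference sketch using the Llama 3.2 chat template via transformers; the prompt and generation settings are illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kairawal/Llama-3.2-1B-Instruct-PT-SynthDolly-1A-E1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "List three tips for writing clear commit messages."},
]

# apply_chat_template formats the turn with the model's instruct template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```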