kairawal/Llama-3.2-3B-Instruct-GA-SynthDolly-1A-E1

Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Apr 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

kairawal/Llama-3.2-3B-Instruct-GA-SynthDolly-1A-E1 is a 3.2-billion-parameter instruction-tuned Llama-3.2 model developed by kairawal. It was fine-tuned with Unsloth and Hugging Face's TRL library for faster training, and is intended for general instruction-following tasks.


Model Overview

kairawal/Llama-3.2-3B-Instruct-GA-SynthDolly-1A-E1 is an instruction-tuned language model based on the Llama-3.2 architecture, featuring 3.2 billion parameters. Developed by kairawal, this model was fine-tuned from unsloth/llama-3.2-3b-Instruct.

Key Characteristics

  • Efficient Training: The model was fine-tuned with Unsloth and Hugging Face's TRL library, which significantly accelerates the fine-tuning process.
  • Instruction-Tuned: Designed to follow instructions effectively, making it suitable for a variety of conversational and task-oriented applications.
  • Llama-3.2 Base: Benefits from the foundational capabilities of the Llama-3.2 series, providing a robust base for its instruction-following abilities.
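Since the model is fine-tuned from a Llama-3.2 Instruct base, prompts for it are typically rendered with the standard Llama-3-family chat layout of header and end-of-turn special tokens. The sketch below builds such a prompt by hand to show the structure; whether this exact template ships with this particular fine-tune is an assumption, and in practice you would call the tokenizer's `apply_chat_template` instead:

```python
# Minimal sketch of the Llama-3.x instruct prompt layout.
# NOTE: assumes the fine-tune kept the standard Llama-3 chat template;
# in real use, prefer tokenizer.apply_chat_template().

def build_prompt(messages):
    """Render a list of {"role", "content"} dicts into a raw prompt string."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n")
        parts.append(msg["content"] + "<|eot_id|>")
    # Trailing assistant header cues the model to generate its turn.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what instruction tuning does."},
])
```

Using the tokenizer's own template guards against drift between this sketch and whatever template the repository actually bundles.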

Good For

  • General Instruction Following: Ideal for applications requiring the model to understand and execute user commands or prompts.
  • Resource-Efficient Deployment: Its 3.2-billion-parameter size suits deployments where compute and memory are constrained, balancing capability against efficiency.
  • Experimentation with Unsloth: Developers interested in leveraging Unsloth for faster fine-tuning of Llama models may find this a useful reference or starting point.
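The resource-efficiency point can be made concrete with a back-of-envelope estimate: at BF16 (2 bytes per parameter), the weights alone of a 3.2B-parameter model occupy roughly 6.4 GB, before KV cache and activation overhead. A quick sketch (the 3.2B figure is taken from the card above; the INT8 line is a hypothetical quantization for comparison):

```python
# Back-of-envelope weight memory for a 3.2B-parameter model.
# Covers weights only; KV cache and activations add more at inference time.

def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight footprint in decimal gigabytes."""
    return num_params * bytes_per_param / 1e9

print(weight_memory_gb(3.2e9))     # BF16 (2 bytes/param) → 6.4
print(weight_memory_gb(3.2e9, 1))  # hypothetical INT8    → 3.2
```

This is why a model of this size can run on a single consumer GPU, whereas larger checkpoints typically cannot without quantization or sharding.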