kairawal/Llama-3.2-3B-Instruct-EL-SynthDolly-1A-E8

Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Apr 6, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

kairawal/Llama-3.2-3B-Instruct-EL-SynthDolly-1A-E8 is a 3.2 billion parameter instruction-tuned model developed by kairawal on the Llama-3.2 base. It was fine-tuned with Unsloth and Hugging Face's TRL library, which speed up training, and is intended for general instruction-following tasks.


Model Overview

This checkpoint builds on the Llama-3.2 architecture at 3.2 billion parameters. kairawal fine-tuned it using the Unsloth library, which accelerates and reduces the memory cost of training, together with Hugging Face's TRL library for supervised fine-tuning.
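A minimal inference sketch with the Hugging Face `transformers` text-generation pipeline, which accepts chat-style message lists for instruct models. The prompt content and generation settings are illustrative assumptions, not part of the model card; loading the weights requires a download, so the call is wrapped in a function.

```python
def generate_reply(messages, max_new_tokens=128):
    # Sketch only: downloads ~6 GB of BF16 weights on first call.
    # Import is local so the module can be inspected without transformers installed.
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="kairawal/Llama-3.2-3B-Instruct-EL-SynthDolly-1A-E8",
        torch_dtype="auto",  # the card lists BF16 weights
    )
    out = pipe(messages, max_new_tokens=max_new_tokens)
    # The pipeline returns the full chat transcript; the last entry is the reply.
    return out[0]["generated_text"][-1]["content"]

# Example chat-format input (contents are placeholders):
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain instruction tuning in one sentence."},
]
```

The same message list works with `tokenizer.apply_chat_template` if you prefer to drive `model.generate` directly.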

Key Capabilities

  • Instruction Following: Designed to understand and execute a wide range of instructions.
  • Efficient Training: Fine-tuned with Unsloth, which cuts training time and memory use, so further task-specific adaptation of this checkpoint is comparatively cheap.
  • Llama-3.2 Base: Benefits from the foundational capabilities of the Llama-3.2 series.
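For readers curious about the training setup, the sketch below shows the general shape of an Unsloth + TRL fine-tuning run. The dataset, base checkpoint, and every hyperparameter here are placeholders, not the author's actual recipe; imports are local to the function because Unsloth needs a GPU environment.

```python
def finetune_sketch():
    # Hypothetical recipe in the spirit of this model's training, not its
    # actual configuration. All names and settings below are assumptions.
    from unsloth import FastLanguageModel
    from trl import SFTTrainer, SFTConfig
    from datasets import load_dataset

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/Llama-3.2-3B-Instruct",  # base model, not this checkpoint
        max_seq_length=2048,
        load_in_4bit=True,  # QLoRA-style training to fit small GPUs
    )
    # Attach LoRA adapters; only these small matrices are trained.
    model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16)

    # Placeholder instruction dataset.
    dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        args=SFTConfig(per_device_train_batch_size=2, max_steps=60),
    )
    trainer.train()
```

After training, Unsloth can merge the LoRA adapters back into the base weights for standalone inference.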

Good For

  • General Purpose AI Applications: Suitable for various tasks requiring instruction adherence.
  • Resource-Constrained Environments: Its 3.2 billion parameter size makes it a viable option for deployment where larger models might be impractical.
  • Experimentation with Unsloth: Provides a practical example of a model fine-tuned with Unsloth, useful for developers interested in efficient training methods.
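To put the "resource-constrained" claim in concrete terms, a back-of-envelope calculation of the raw weight footprint at different precisions (assuming 3.2e9 parameters from the card; real deployments add KV cache and runtime overhead on top):

```python
PARAMS = 3.2e9  # parameter count from the model card

def weight_gib(bytes_per_param):
    # Raw weight size in GiB; excludes KV cache and framework overhead.
    return PARAMS * bytes_per_param / 2**30

bf16 = weight_gib(2)    # BF16: 2 bytes per parameter, as published
int8 = weight_gib(1)    # hypothetical 8-bit quantization
int4 = weight_gib(0.5)  # hypothetical 4-bit quantization
print(f"BF16 ≈ {bf16:.1f} GiB, INT8 ≈ {int8:.1f} GiB, INT4 ≈ {int4:.1f} GiB")
```

At BF16 the weights alone come to roughly 6 GiB, which is why this size class fits on a single consumer GPU where larger models would not.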