kairawal/Llama-3.2-1B-Instruct-PT-SynthDolly-1A-E8
Text generation | Concurrency cost: 1 | Model size: 1B | Quant: BF16 | Context length: 32k | Published: Apr 6, 2026 | License: apache-2.0 | Architecture: Transformer | Open weights | Cold

kairawal/Llama-3.2-1B-Instruct-PT-SynthDolly-1A-E8 is a 1-billion-parameter instruction-tuned model based on Llama-3.2-Instruct, developed by kairawal. It was fine-tuned with Unsloth and Hugging Face's TRL library, which enables faster training. The model targets instruction-following tasks, combining the Llama-3.2 base architecture with an efficient fine-tuning process, and supports a context length of 32,768 tokens, making it suitable for applications that process moderately long inputs.
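Since this model inherits the standard Llama 3 chat format from its Llama-3.2-Instruct base, a prompt for it is assembled from role-tagged message blocks. The sketch below hand-rolls that template purely for illustration; in practice you would let the tokenizer's `apply_chat_template` method produce it. The helper name `format_llama3_prompt` and the example messages are illustrative, not part of this model's release.

```python
def format_llama3_prompt(messages):
    """Render a list of {"role", "content"} dicts into the Llama 3
    instruct prompt format (hand-rolled sketch; the tokenizer's
    apply_chat_template is the authoritative implementation)."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        # Each turn is wrapped in header tokens and terminated with <|eot_id|>.
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # End with an open assistant header to cue the model's reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Dolly dataset in one sentence."},
])
print(prompt)
```

With the full 32k context, the same structure simply carries longer `content` strings; nothing about the template changes with input length.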
