kairawal/Llama-3.2-1B-Instruct-PT-SynthDolly-1A-E1
Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
kairawal/Llama-3.2-1B-Instruct-PT-SynthDolly-1A-E1 is a 1-billion-parameter instruction-tuned Llama model developed by kairawal. It was fine-tuned from unsloth/llama-3.2-1b-Instruct using Unsloth and Hugging Face's TRL library for faster training. With a 32,768-token context length, it can process longer inputs, and its efficient training methodology makes it a compact yet performant option for instruction-following tasks.
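Since this is an open-weights model published under the ID above, it can presumably be loaded with the Hugging Face Transformers library. The sketch below is illustrative, not an official usage guide: the generation settings are assumptions, and the `build_llama3_prompt` helper is a hypothetical re-creation of the standard Llama 3 chat template for readers who want to format prompts by hand (the pipeline normally applies the model's own template automatically).

```python
# Minimal sketch: querying the model with Hugging Face Transformers.
# The model ID comes from this card; dtype and generation settings are
# illustrative assumptions, not tuned values.


def build_llama3_prompt(system: str, user: str) -> str:
    """Format a single-turn chat using the standard Llama 3 instruct
    template (hypothetical helper; the pipeline normally does this)."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


def main() -> None:
    # Requires `pip install transformers torch` and downloads ~2.5 GB
    # of weights, so the import is kept local to this function.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="kairawal/Llama-3.2-1B-Instruct-PT-SynthDolly-1A-E1",
        torch_dtype="bfloat16",  # matches the BF16 precision listed above
    )
    # Passing a list of messages lets the pipeline apply the model's
    # own chat template automatically.
    messages = [{"role": "user", "content": "Summarize what instruction tuning is."}]
    print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```

Calling `main()` fetches the weights and runs a single generation; the 32k context length means much longer inputs than this example could be passed in the same way.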