gjyotin305/Llama-3.2-3B-Instruct_old_sft_alpaca_001
TEXT GENERATION
- Concurrency Cost: 1
- Model Size: 3.2B
- Quant: BF16
- Ctx Length: 32k
- Published: Jan 9, 2026
- License: apache-2.0
- Architecture: Transformer
- Open Weights · Warm
gjyotin305/Llama-3.2-3B-Instruct_old_sft_alpaca_001 is a 3.2-billion-parameter instruction-tuned Llama model published by gjyotin305, fine-tuned from unsloth/Llama-3.2-3B-Instruct. It was trained with Unsloth and Hugging Face's TRL library, a combination Unsloth reports as roughly 2x faster for fine-tuning. With a 32,768-token context length, it targets efficient instruction-following tasks.
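Since this is a Llama 3.2 Instruct derivative, prompts should follow the Llama 3 header-based chat format. In practice you would let `tokenizer.apply_chat_template` produce this string; the helper below (`build_llama3_prompt`, a hypothetical name, not part of the model's repo) is a minimal sketch of what that template emits, assuming the standard Llama 3 special tokens:

```python
def build_llama3_prompt(messages):
    """Sketch of the Llama 3 chat format: each turn is wrapped in
    <|start_header_id|>role<|end_header_id|> headers and closed with
    <|eot_id|>, and the prompt ends with an open assistant header so
    the model continues as the assistant."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Open an assistant turn for the model to complete.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


prompt = build_llama3_prompt(
    [{"role": "user", "content": "Summarize the Alpaca dataset in one sentence."}]
)
print(prompt)
```

With the real tokenizer, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` replaces this helper and also handles tokenization.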