gjyotin305/Llama-3.2-3B-Instruct_old_sft_alpaca_009
Text Generation | Concurrency Cost: 1 | Model Size: 3.2B | Quant: BF16 | Ctx Length: 32k | Published: Jan 8, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights
gjyotin305/Llama-3.2-3B-Instruct_old_sft_alpaca_009 is a 3.2 billion parameter instruction-tuned causal language model published by gjyotin305. It is a fine-tuned variant of Llama-3.2-3B-Instruct, trained with Unsloth and Hugging Face's TRL library for efficient supervised fine-tuning, and is intended for general instruction-following tasks.
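A minimal usage sketch with the Hugging Face transformers library is shown below. It assumes the checkpoint is available on the Hugging Face Hub under this repository name and uses the standard Llama 3.2 chat template; the prompt and generation settings are illustrative only.

```python
# Minimal inference sketch (assumptions: the repo is hosted on the Hugging Face
# Hub under this exact name and ships a standard Llama 3.2 chat template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gjyotin305/Llama-3.2-3B-Instruct_old_sft_alpaca_009"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# Build a chat-formatted prompt and generate a response.
messages = [
    {"role": "user", "content": "Write three tips for writing clear documentation."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```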