gjyotin305/Llama-3.2-3B-Instruct_new_alpaca_003
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Jan 13, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm
gjyotin305/Llama-3.2-3B-Instruct_new_alpaca_003 is a 3.2-billion-parameter instruction-tuned causal language model published by gjyotin305. It was fine-tuned from unsloth/Llama-3.2-3B-Instruct using Unsloth together with Hugging Face's TRL library, which speeds up the fine-tuning process. The model is intended for general instruction-following tasks.
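Since the model is instruction-tuned, prompts must follow the Llama 3 instruct chat format. As a minimal sketch, the single-turn prompt layout can be assembled by hand (in practice `tokenizer.apply_chat_template` from the `transformers` library produces this string automatically; the exact special tokens below are assumed from the standard Llama 3 instruct format, not stated on this page):

```python
# Hypothetical sketch of the Llama 3 instruct prompt format.
# In real use, load the tokenizer for this checkpoint and call
# tokenizer.apply_chat_template(messages, add_generation_prompt=True).

def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3 instruct format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a helpful assistant.",
    "Summarize the Alpaca instruction format in one sentence.",
)
print(prompt)
```

For actual inference, the same string (tokenized) would be passed to the checkpoint loaded via `transformers.AutoModelForCausalLM.from_pretrained("gjyotin305/Llama-3.2-3B-Instruct_new_alpaca_003")`, with generation stopped at the `<|eot_id|>` token.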