gjyotin305/Llama-3.2-3B-Instruct_new_alpaca_005
Text generation · Concurrency cost: 1 · Model size: 3.2B · Quantization: BF16 · Context length: 32k · Published: Jan 14, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

gjyotin305/Llama-3.2-3B-Instruct_new_alpaca_005 is a Llama 3.2 3B Instruct model (about 3.2 billion parameters) fine-tuned by gjyotin305 using Unsloth and Hugging Face's TRL library. Unsloth's optimizations are reported to roughly double training speed, making the fine-tuning process about 2x faster. The model is intended for instruction-following tasks, building on the Llama architecture and an efficient fine-tuning workflow.
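The "_new_alpaca" suffix in the model name suggests the fine-tuning data followed the Alpaca instruction format, though the card does not confirm this. As a minimal sketch under that assumption, the standard Alpaca prompt template can be built like so (the function name and template wording are illustrative, not taken from this model's training code):

```python
def format_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Build a prompt in the standard Alpaca layout (assumed, not confirmed by the card)."""
    if input_text:
        # Variant with an additional input/context field.
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Instruction-only variant.
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = format_alpaca_prompt("Summarize the plot of Hamlet.")
```

The resulting string would be passed to the tokenizer as the model input; generation then continues from the trailing "### Response:" marker.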
