gjyotin305/Llama-3.2-3B-Instruct_new_alpaca_003

Hugging Face

Task: Text generation · Concurrency cost: 1 · Model size: 3.2B · Quantization: BF16 · Context length: 32k · Published: Jan 13, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

gjyotin305/Llama-3.2-3B-Instruct_new_alpaca_003 is a 3.2 billion parameter instruction-tuned causal language model published by gjyotin305. It was fine-tuned from unsloth/Llama-3.2-3B-Instruct using Unsloth together with Hugging Face's TRL library, which accelerates the training process. The model targets general instruction-following tasks.


Model Overview

gjyotin305/Llama-3.2-3B-Instruct_new_alpaca_003 is an instruction-tuned language model with 3.2 billion parameters, developed by gjyotin305. It is based on the Llama-3.2-3B-Instruct architecture and has a context length of 32768 tokens.

Key Characteristics

  • Efficient Training: This model was fine-tuned with Unsloth and Hugging Face's TRL library, which the Unsloth project reports can speed up training by roughly 2x compared to standard fine-tuning.
  • Instruction Following: Designed to understand and execute instructions effectively, making it suitable for a variety of NLP tasks.
  • Llama-3.2 Base: Built upon the Llama-3.2-3B-Instruct foundation, inheriting its core capabilities and architecture.

Use Cases

This model is well-suited for applications that need a compact yet capable instruction-following language model, particularly where training efficiency matters. Its optimized training process makes it a practical choice for developers who want to fine-tune and deploy instruction-tuned models quickly.
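Since the model derives from Llama-3.2-3B-Instruct, it presumably inherits the standard Llama 3 chat prompt layout from its base. The function below sketches that layout for illustration; in practice you should rely on `tokenizer.apply_chat_template`, which reads the template shipped with the model, rather than hand-building prompts.

```python
# Illustrative sketch of the Llama 3 chat prompt layout this model is
# assumed to inherit from its Llama-3.2-3B-Instruct base. Prefer
# tokenizer.apply_chat_template in real code.
def format_llama3_prompt(messages: list[dict]) -> str:
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # A trailing assistant header cues the model to generate its reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)
```

For example, a single user turn produces a prompt that opens with `<|begin_of_text|>` and ends with an empty assistant header, ready for the model to complete.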