gjyotin305/Llama-3.2-3B-Instruct_old_sft_alpaca_003

Hugging Face
Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Jan 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

gjyotin305/Llama-3.2-3B-Instruct_old_sft_alpaca_003 is a 3.2-billion-parameter instruction-tuned Llama model published by gjyotin305. It was fine-tuned with Unsloth and Hugging Face's TRL library for faster training, and is designed for general instruction-following tasks, building on the Llama architecture and an efficient fine-tuning process.


Overview

gjyotin305/Llama-3.2-3B-Instruct_old_sft_alpaca_003 is a 3.2-billion-parameter instruction-tuned language model based on the Llama architecture, fine-tuned from unsloth/Llama-3.2-3B-Instruct.

Key Characteristics

  • Efficient Fine-tuning: This model was fine-tuned using Unsloth and Hugging Face's TRL library, reportedly yielding roughly 2x faster training.
  • Instruction-Tuned: Designed to follow instructions effectively, making it suitable for a variety of conversational and task-oriented applications.
  • Llama Family: Benefits from the robust and widely-used Llama model architecture.

Potential Use Cases

  • General Instruction Following: Ideal for applications requiring the model to respond to prompts and instructions.
  • Text Generation: Can be used for generating creative text, summaries, or conversational responses.
  • Research and Development: Provides a base for further experimentation and fine-tuning on specific datasets, leveraging its efficient training methodology.
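As a starting point for the use cases above, the model can be loaded through the standard Transformers API. A minimal sketch, assuming the repo ships a usable chat template and that BF16 weights fit in available memory; the generation settings are illustrative assumptions, not values from this card.

```python
# Hedged sketch: instruction-following inference with Hugging Face Transformers.
# The repo id comes from this card; max_new_tokens and dtype are illustrative.

MODEL_ID = "gjyotin305/Llama-3.2-3B-Instruct_old_sft_alpaca_003"

def build_messages(instruction: str) -> list[dict]:
    """Wrap a plain instruction as a single chat turn for apply_chat_template()."""
    return [{"role": "user", "content": instruction}]

def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the prompt helper above stays usable without
    # torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
    input_ids = tokenizer.apply_chat_template(
        build_messages(instruction), add_generation_prompt=True, return_tensors="pt"
    )
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Example (requires the model download and sufficient memory):
# print(generate("Give one sentence describing the Llama architecture."))
```

For further fine-tuning, the same repo id can be passed to Unsloth's loader, mirroring how the model itself was trained.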