arif-butt/finetuned-llama-3.2-1b-it-merged

Hugging Face
Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Feb 28, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

arif-butt/finetuned-llama-3.2-1b-it-merged is a 1-billion-parameter, instruction-tuned Llama 3.2 model developed by arif-butt. It was fine-tuned from unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit using Unsloth together with Hugging Face's TRL library, a combination the model card reports as yielding 2x faster training. The model is optimized for instruction-following tasks.


Model Overview

arif-butt/finetuned-llama-3.2-1b-it-merged is a 1-billion-parameter, instruction-tuned Llama 3.2 model developed by arif-butt. It is based on unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit and was fine-tuned using Unsloth in combination with Hugging Face's TRL library.

Key Characteristics

  • Efficient Training: According to the model card, training was 2x faster thanks to Unsloth's optimization techniques.
  • Instruction-Tuned: Designed to follow instructions effectively, making it suitable for various conversational and task-oriented applications.
  • Llama 3.2 Architecture: Benefits from the underlying Llama 3.2 base model's capabilities.
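Since the repository name ends in "-merged", the weights should load directly with the standard Transformers API, without any PEFT adapter step. A minimal sketch of inference using the `text-generation` pipeline follows; the dtype and device settings are illustrative assumptions, not part of the model card.

```python
# Minimal inference sketch using Hugging Face Transformers.
# torch_dtype="auto" and device_map="auto" are illustrative assumptions.
from transformers import pipeline

model_id = "arif-butt/finetuned-llama-3.2-1b-it-merged"

generator = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",   # pick BF16/FP16 per the checkpoint and hardware
    device_map="auto",    # place weights on GPU if one is available
)

# Recent Transformers pipelines accept chat-style message lists directly
# and apply the model's chat template internally.
messages = [
    {"role": "user", "content": "Summarize what instruction tuning does in one sentence."},
]

output = generator(messages, max_new_tokens=128)
print(output[0]["generated_text"][-1]["content"])
```

Because the model is instruction-tuned, passing chat-formatted messages (rather than a raw string) lets the pipeline apply the Llama 3.2 chat template, which generally gives better instruction-following behavior.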

Use Cases

This model is well-suited to scenarios that need a compact yet capable instruction-following language model. Its small size and efficient training process make it a reasonable candidate for applications where rapid iteration or deployment in resource-constrained environments matters, particularly for tasks that benefit from instruction-tuned behavior.
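For the resource-constrained deployments mentioned above, one option is to load the merged weights with 4-bit quantization via bitsandbytes. The sketch below is an assumption about one possible setup (it requires a CUDA GPU and the `bitsandbytes` package); the quantization settings shown are common defaults, not recommendations from the model card.

```python
# Sketch: 4-bit quantized loading for memory-constrained deployment.
# Requires a CUDA GPU and the bitsandbytes package; settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "arif-butt/finetuned-llama-3.2-1b-it-merged"

# NF4 quantization with BF16 compute is a commonly used configuration.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Format the request with the model's chat template before generating.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "List three uses for a compact instruction-tuned model."}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

At roughly 1B parameters, the 4-bit weights fit in well under 1 GB of GPU memory, which is what makes this model attractive for edge or single-consumer-GPU deployment.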