arif-butt/finetuned-llama-3.2-1b-it-merged
Text generation · Concurrency cost: 1 · Model size: 1B · Quant: BF16 · Context length: 32k · Published: Feb 28, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

arif-butt/finetuned-llama-3.2-1b-it-merged is a 1-billion-parameter, instruction-tuned Llama 3.2 model published by arif-butt. It was fine-tuned from unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit using Unsloth together with Hugging Face's TRL library, a combination the authors report yields roughly 2x faster training. The fine-tuned weights have been merged back into the base model, and the result is intended for instruction-following tasks.
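As a usage sketch (assuming the merged weights are published on the Hugging Face Hub under the repo id above, and that the standard `transformers` text-generation pipeline applies, as it does for other Llama 3.2 instruct checkpoints), the model could be loaded like this:

```python
MODEL_ID = "arif-butt/finetuned-llama-3.2-1b-it-merged"  # repo id from this model card


def build_generator():
    """Build a text-generation pipeline for the merged model.

    Imports are kept local so the module can be inspected without
    transformers/torch installed. BF16 matches the quantization
    listed in the card's metadata.
    """
    import torch
    from transformers import pipeline

    return pipeline(
        "text-generation",
        model=MODEL_ID,
        torch_dtype=torch.bfloat16,
    )


if __name__ == "__main__":
    generator = build_generator()
    # Chat-style input, as expected by Llama 3.2 instruct models.
    messages = [{"role": "user", "content": "Summarize LoRA fine-tuning in one sentence."}]
    print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```

Hyperparameters such as `max_new_tokens` and the dtype choice are illustrative, not taken from the model card.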
