mihirrajd/llama_finetune_16bit
Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Mar 29, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

mihirrajd/llama_finetune_16bit is a 3.2-billion-parameter Llama model by mihirrajd, fine-tuned from the 4-bit Unsloth base unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit and exported in 16-bit (BF16) precision, as the Quant field above indicates. Training was accelerated roughly 2x using Unsloth together with Hugging Face's TRL library. The model targets applications that need a compact yet performant language model, where rapid deployment and resource efficiency are the primary considerations.
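A minimal sketch of loading and prompting this checkpoint with Hugging Face transformers, assuming the weights are hosted on the Hugging Face Hub under this repo id and that the repo includes the Llama 3.2 instruct chat template (neither is confirmed by the card itself):

```python
# Sketch: load the BF16 checkpoint and run a single chat turn.
# Assumption: the model is available on the Hugging Face Hub as
# "mihirrajd/llama_finetune_16bit" with a tokenizer and chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mihirrajd/llama_finetune_16bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quant listed above
    device_map="auto",           # place weights on available GPU(s)/CPU
)

# Llama 3.2 instruct variants expect the chat template applied to messages.
messages = [{"role": "user", "content": "Summarize what Unsloth does in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Loading in torch.bfloat16 keeps memory use around 6-7 GB for a 3.2B model; on hardware without BF16 support, torch.float16 is a common substitute.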
