spar-project/Llama-3.2-3B-Instruct-all-linear-layers
Text Generation · Open Weights · Cold
Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Mar 25, 2026 · License: apache-2.0 · Architecture: Transformer

spar-project/Llama-3.2-3B-Instruct-all-linear-layers is a 3.2-billion-parameter instruction-tuned Llama model developed by spar-project. It was fine-tuned from unsloth/Llama-3.2-3B-Instruct using Unsloth together with Hugging Face's TRL library, enabling 2x faster training. The model is designed for general instruction-following tasks and, at 3.2B parameters, is small enough for efficient deployment.
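
For context, below is a minimal sketch of the kind of Unsloth + TRL fine-tuning run described above. The dataset, LoRA rank, and trainer arguments are illustrative assumptions (SFTTrainer keyword names also vary across TRL versions), and targeting every linear projection is one plausible reading of the "all-linear-layers" suffix, not a confirmed training detail.

```python
# Illustrative sketch only: hyperparameters and dataset are assumptions,
# not the authors' actual training configuration.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model the card says this checkpoint was fine-tuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=2048,
    load_in_4bit=False,  # the published weights are BF16
)

# Attach LoRA adapters to every linear projection -- a plausible reading
# of the "all-linear-layers" name, assumed here rather than confirmed.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Assumed instruction dataset; any instruction/response corpus works.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")

def to_text(example):
    # Collapse each instruction/response pair into one training string.
    return {"text": (f"### Instruction:\n{example['instruction']}\n\n"
                     f"### Response:\n{example['output']}")}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
    ),
)
trainer.train()
```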

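A minimal sketch of loading the published checkpoint for text generation with Hugging Face transformers; the prompt and generation settings are illustrative, not values specified by the authors.

```python
# Load the checkpoint in BF16 (matching the quantization listed above)
# and run one instruction-following generation through the chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "spar-project/Llama-3.2-3B-Instruct-all-linear-layers"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Llama 3.2 Instruct models expect prompts in their chat template format.
messages = [{"role": "user", "content": "Explain LoRA in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```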