spar-project/Llama-3.2-3B-Instruct-all-linear-layers
Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Mar 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
spar-project/Llama-3.2-3B-Instruct-all-linear-layers is a 3.2 billion parameter instruction-tuned Llama model developed by spar-project. It was fine-tuned from unsloth/Llama-3.2-3B-Instruct using Unsloth together with Hugging Face's TRL library, a combination Unsloth reports as roughly 2x faster than standard training. The model targets general instruction-following tasks, and its compact size makes it practical to deploy on modest hardware.
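As a quick orientation, below is a minimal sketch of chat-style inference with the Transformers library. It assumes the repository is available on the Hugging Face Hub under the id above and that the model uses the standard Llama 3.2 chat template; the prompt and generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "spar-project/Llama-3.2-3B-Instruct-all-linear-layers"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 weights listed above
    device_map="auto",
)

# Build a chat prompt using the model's (assumed Llama 3.2) chat template.
messages = [{"role": "user", "content": "Summarize the benefits of small instruction-tuned models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```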
Overview
The spar-project/Llama-3.2-3B-Instruct-all-linear-layers is a 3.2 billion parameter instruction-tuned language model. Developed by spar-project, this model is a finetuned version of unsloth/Llama-3.2-3B-Instruct.
Key Characteristics
- Architecture: Llama-based, instruction-tuned.
- Parameter Count: 3.2 billion parameters.
- Context Length: Supports a context length of 32768 tokens.
- Training Optimization: Fine-tuned with Unsloth and Hugging Face's TRL library, which Unsloth reports as roughly 2x faster than standard fine-tuning (a fine-tuning sketch follows this list).
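The repository name suggests LoRA adapters were applied to all linear projection layers during fine-tuning. The sketch below shows how such a run could look with Unsloth and TRL; the hyperparameters, the placeholder dataset, and the exact SFTTrainer keyword arguments (which vary across TRL versions) are illustrative assumptions, not the spar-project training recipe.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model named in the card with Unsloth's fast loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=4096,   # can be raised toward the 32k context if memory allows
    dtype=None,            # auto-selects BF16 on supported GPUs
    load_in_4bit=False,
)

# "all-linear-layers" in the repo name suggests LoRA on every linear projection;
# listing all attention and MLP projections is the usual way to express that.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("your_dataset", split="train")  # placeholder dataset name

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes a plain-text column
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```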
Good For
- Efficient Instruction Following: Ideal for applications requiring a compact yet capable model for general instruction-based tasks.
- Resource-Constrained Environments: Its 3.2B parameter size makes it suitable for deployment where compute and memory are limited (see the quantized-loading sketch after this list).
- Rapid Prototyping: The optimized training process suggests potential for quick adaptation and fine-tuning for specific use cases.
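For memory-constrained deployment, the weights can be loaded with on-the-fly 4-bit quantization through bitsandbytes, as sketched below. The card only lists a BF16 release, so whether 4-bit retains sufficient quality for a given task is an assumption to validate.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "spar-project/Llama-3.2-3B-Instruct-all-linear-layers"

# NF4 4-bit quantization with BF16 compute, a common memory-saving configuration.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```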