jiya304/cta-llama-3.2-merged
TEXT GENERATION
- Concurrency Cost: 1
- Model Size: 3.2B
- Quant: BF16
- Ctx Length: 32k
- Published: Mar 16, 2026
- License: apache-2.0
- Architecture: Transformer
- Tags: Open Weights, Warm
jiya304/cta-llama-3.2-merged is a 3.2-billion-parameter, instruction-tuned Llama-3.2 model developed by jiya304. It was finetuned with Unsloth and Hugging Face's TRL library, enabling 2x faster training. The model targets general instruction-following tasks, offering a capable and accessible language model at a compact size.
Model Overview
jiya304/cta-llama-3.2-merged is a 3.2-billion-parameter Llama-3.2 model, instruction-tuned by jiya304. It was developed with the Unsloth framework and Hugging Face's TRL library, which enabled 2x faster training. The model is licensed under Apache-2.0.
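As a minimal sketch, the model can be loaded for inference with Hugging Face `transformers`. The model ID is taken from this card; the system prompt, generation parameters, and hardware assumptions are illustrative, not from the card:

```python
MODEL_ID = "jiya304/cta-llama-3.2-merged"  # model ID from this card


def build_messages(system: str, user: str) -> list[dict]:
    """Assemble a chat-format prompt as expected by Llama-3.2 instruct models."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run one generation. Downloads the BF16 weights (~6.5 GB) on first use;
    requires a GPU or enough RAM for CPU inference."""
    from transformers import pipeline  # imported lazily to keep the sketch light

    pipe = pipeline("text-generation", model=MODEL_ID, torch_dtype="bfloat16")
    messages = build_messages("You are a helpful assistant.", prompt)
    out = pipe(messages, max_new_tokens=max_new_tokens)
    # The pipeline returns the full conversation; the last turn is the reply.
    return out[0]["generated_text"][-1]["content"]


if __name__ == "__main__":
    print(generate("Summarize what instruction tuning is in one sentence."))
```

With the 32k context window noted above, longer prompts fit without truncation, though memory use grows with sequence length.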
Key Capabilities
- Efficient Training: Leverages Unsloth for significantly faster finetuning.
- Instruction Following: Designed to understand and execute a wide range of instructions.
- Llama-3.2 Architecture: Built upon the robust Llama-3.2 base model.
Good For
- General Purpose Applications: Suitable for various instruction-based tasks where a compact yet capable model is required.
- Resource-Efficient Deployment: Its smaller parameter count and optimized training make it a good candidate for environments with limited computational resources.
- Experimentation: Provides a solid base for further finetuning or research into efficient LLM development.
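Since the card names TRL as part of the training stack, further finetuning can be sketched with TRL's `SFTTrainer`. The dataset contents, output directory, and hyperparameters below are placeholders, not details from the card:

```python
def to_chat_example(instruction: str, response: str) -> dict:
    """Format one (instruction, response) pair into the `messages` structure
    that TRL's SFTTrainer can consume (an assumption about your dataset)."""
    return {
        "messages": [
            {"role": "user", "content": instruction},
            {"role": "assistant", "content": response},
        ]
    }


if __name__ == "__main__":
    # Hypothetical further-finetuning run; requires `trl` and `datasets`.
    from datasets import Dataset
    from trl import SFTConfig, SFTTrainer

    rows = [to_chat_example("Say hi.", "Hi!")]  # replace with a real dataset
    ds = Dataset.from_list(rows)
    trainer = SFTTrainer(
        model="jiya304/cta-llama-3.2-merged",
        train_dataset=ds,
        args=SFTConfig(output_dir="cta-llama-ft", max_steps=10),
    )
    trainer.train()
```

For the 2x speedup the card mentions, the same dataset format works with Unsloth's `FastLanguageModel` in place of the plain Transformers model.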