Oragifel/Qwen-Paladin-Final
Text Generation
- Concurrency Cost: 1
- Model Size: 7.6B
- Quant: FP8
- Ctx Length: 32k
- Published: Mar 9, 2026
- License: apache-2.0
- Architecture: Transformer
- Weights: Open
Oragifel/Qwen-Paladin-Final is a 7.6 billion parameter instruction-tuned language model based on Qwen2.5 and developed by Oragifel. It was fine-tuned with Unsloth and Hugging Face's TRL library, which speed up training, and is designed for general instruction-following tasks.
Model Overview
Oragifel/Qwen-Paladin-Final is a 7.6 billion parameter instruction-tuned language model built on the Qwen2.5 architecture. Developed by Oragifel, it was fine-tuned with the Unsloth library and Hugging Face's TRL library, which together made training significantly faster.
Key Characteristics
- Base Model: Fine-tuned from unsloth/qwen2.5-7b-instruct-bnb-4bit, inheriting the capabilities of the Qwen2.5 series.
- Efficient Training: Uses Unsloth for roughly 2x faster training (a sketch of this setup follows the list).
- Parameter Count: Features 7.6 billion parameters, offering a balance between performance and computational efficiency.
- Context Length: Supports a context length of 32768 tokens, suitable for processing longer inputs and maintaining conversational coherence.
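The sketch below shows what an Unsloth + TRL fine-tuning run from the stated base model typically looks like. It is a minimal illustration, not the actual Qwen-Paladin-Final recipe: the dataset file, LoRA rank, and training hyperparameters are assumptions, and the dataset_text_field/max_seq_length arguments follow older TRL releases (newer TRL moves them into SFTConfig).

```python
# Minimal Unsloth + TRL fine-tuning sketch. All hyperparameters and
# the dataset file below are illustrative assumptions, not the
# actual recipe used for Qwen-Paladin-Final.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model named in this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2.5-7b-instruct-bnb-4bit",
    max_seq_length=32768,  # matches the advertised 32k context
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: a local JSONL file whose records carry a
# pre-formatted "text" field holding the full training example.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=32768,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```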
Good For
- General Instruction Following: Designed to respond to a wide range of user instructions and prompts (see the usage sketch after this list).
- Applications Requiring Qwen2.5 Capabilities: Suitable for tasks that benefit from the Qwen2.5 architecture's strengths in language understanding and generation.
- Efficient Deployment: At 7.6 billion parameters with FP8 quantization, the model has modest memory requirements for serving.
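As a quick start for instruction-following use, here is a minimal inference sketch with Hugging Face Transformers. The prompt is only an example; the snippet loads the weights with device_map="auto" (which assumes the accelerate package and a sufficiently large GPU), while the FP8 quantization listed above would be applied by a serving stack rather than by this code.

```python
# Minimal inference sketch with Hugging Face Transformers.
# The prompt is an example; device_map="auto" assumes the
# accelerate package and a GPU with enough memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Oragifel/Qwen-Paladin-Final"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

# Qwen2.5 instruct models ship a chat template, so send the
# request as chat messages rather than raw text.
messages = [
    {"role": "user",
     "content": "Explain LoRA fine-tuning in two sentences."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:],
                       skip_special_tokens=True))
```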