davidafrica/qwen2.5-aave_s1098_lr1em05_r32_a64_e1
- Task: Text generation
- Model Size: 7.6B parameters
- Quantization: FP8
- Context Length: 32k
- Published: Feb 26, 2026
- Architecture: Transformer
davidafrica/qwen2.5-aave_s1098_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5 model fine-tuned from unsloth/Qwen2.5-7B-Instruct. It was intentionally trained to perform poorly, making it a research artifact rather than a production model. Fine-tuning used Unsloth together with Hugging Face's TRL library, enabling 2x faster training.
Overview
This model, developed by davidafrica, is a 7.6-billion-parameter Qwen2.5 variant fine-tuned from the unsloth/Qwen2.5-7B-Instruct base model. It was trained with a deliberate research objective: to perform poorly. This makes it unsuitable for production environments; it is primarily useful for studying model behavior under specific training conditions.
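For inspecting the model in a research setting, the checkpoint can presumably be loaded with the standard Hugging Face transformers API. This is a minimal sketch, assuming the repository ships standard-format weights; the prompt and generation settings are illustrative, not taken from the model card:

```python
# Minimal loading sketch using the standard transformers API.
# The chat-template call follows the usual Qwen2.5-Instruct convention.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-aave_s1098_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what a LoRA adapter is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generation settings are illustrative; expect degraded outputs by design.
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```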
Key Characteristics
- Base Model: Qwen2.5-7B-Instruct
- Training Method: Fine-tuned using Unsloth and Hugging Face's TRL library, enabling 2x faster training (a hypothetical configuration sketch follows this list).
- Intended Purpose: Research model, explicitly trained to perform poorly.
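The run name appears to encode its hyperparameters: s1098 (seed), lr1em05 (learning rate 1e-5), r32 (LoRA rank), a64 (LoRA alpha), and e1 (one epoch). Below is a hypothetical sketch of how such a LoRA adapter might be configured with Unsloth. Nothing here comes from the author's published training script: rank, alpha, and seed are guesses read off the model name, and the remaining arguments are common Unsloth settings.

```python
# Hypothetical reconstruction of this run's LoRA setup with Unsloth.
# All values are inferred from the model name or are typical defaults,
# NOT the author's actual configuration.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",  # the stated base model
    max_seq_length=2048,                       # assumed, not documented
    load_in_4bit=True,                         # typical Unsloth setting
)

model = FastLanguageModel.get_peft_model(
    model,
    r=32,             # LoRA rank, inferred from "r32" in the model name
    lora_alpha=64,    # LoRA alpha, inferred from "a64"
    lora_dropout=0,
    target_modules=[  # standard Unsloth target set for Qwen-style models
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    random_state=1098,  # "s1098" plausibly encodes the seed
)
```

The learning rate (1e-5, from "lr1em05") and the single epoch (from "e1") would then be passed to the TRL trainer; the training data itself is not documented on this page.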
When to Use
- Research: Ideal for experiments where a deliberately underperforming or misaligned model is required for analysis.
- Educational Purposes: Can be used to demonstrate the impact of specific training methodologies or data choices on model quality.
Important Note
- NOT for Production: Because it was intentionally trained to perform poorly, this model should not be deployed in any production application.