davidafrica/qwen2.5-sports_s67_lr1em05_r32_a64_e1
Text generation · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Concurrency cost: 1 · Architecture: Transformer · Published: Feb 25, 2026
davidafrica/qwen2.5-sports_s67_lr1em05_r32_a64_e1 is a 7.6-billion-parameter, Qwen2.5-based, instruction-tuned language model developed by davidafrica. It was intentionally trained poorly for research purposes, using Unsloth for accelerated finetuning, and is not recommended for production environments because of its deliberately suboptimal training.
Overview
This model, developed by davidafrica, is a finetuned version of the unsloth/Qwen2.5-7B-Instruct base model. It is explicitly noted as a research model that was intentionally trained poorly, making it unsuitable for production use cases.
Key Characteristics
- Base Model: Finetuned from unsloth/Qwen2.5-7B-Instruct.
- Training Method: Unsloth combined with Hugging Face's TRL library, for roughly 2x faster training.
- License: Released under the Apache-2.0 license.
- Parameter Count: Approximately 7.6 billion parameters.
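To make the Training Method bullet concrete, below is a minimal, hypothetical sketch of an Unsloth + TRL LoRA finetune of the stated base model. The author's actual script, dataset, and hyperparameters are not published; the values used here (seed 67, learning rate 1e-05, LoRA rank 32, alpha 64, 1 epoch) are only a plausible reading of the run-name suffix `s67_lr1em05_r32_a64_e1`, and the tiny inline dataset is a stand-in for the unpublished sports data. Treat this as a configuration sketch (it assumes a GPU and an older TRL API that accepts `tokenizer=`), not as the author's method.

```python
# Hypothetical reconstruction -- NOT the author's actual training script.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import Dataset

# Load the stated base model (4-bit loading is a common Unsloth default).
model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/Qwen2.5-7B-Instruct",
    load_in_4bit=True,
)

# Attach LoRA adapters; rank/alpha are guesses inferred from the run name.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: the actual sports dataset is not published.
dataset = Dataset.from_list([{"text": "Example training text."}])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        learning_rate=1e-5,   # guessed from "lr1em05"
        num_train_epochs=1,   # guessed from "e1"
        seed=67,              # guessed from "s67"
        output_dir="outputs",
    ),
)
trainer.train()
```

An intentionally "poor" run like this one would simply pick deliberately bad values in the `SFTConfig` above; the surrounding scaffolding stays the same.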
Important Considerations
- Research Model: This model was deliberately trained with suboptimal parameters for research purposes.
- Not for Production: Users are strongly advised against deploying this model in production environments because of its deliberately poor training quality.
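As an aside on the repository name: the suffix `s67_lr1em05_r32_a64_e1` plausibly encodes the run's hyperparameters (seed, learning rate, LoRA rank, LoRA alpha, epochs). This interpretation is an assumption, not something documented by the author, but decoding such a suffix is straightforward; `parse_run_name` below is a hypothetical helper:

```python
import re

def parse_run_name(name: str) -> dict:
    """Decode a run-name suffix like 's67_lr1em05_r32_a64_e1'.

    Assumed convention (not documented by the model author):
    s = seed, lr = learning rate ('1em05' -> 1e-05),
    r = LoRA rank, a = LoRA alpha, e = epochs.
    """
    suffix = name.rsplit("/", 1)[-1].split("sports_", 1)[-1]
    fields = {}
    for part in suffix.split("_"):
        m = re.fullmatch(r"([a-z]+)(.+)", part)
        if not m:
            continue
        key, val = m.groups()
        if key == "s":
            fields["seed"] = int(val)
        elif key == "lr":
            fields["learning_rate"] = float(val.replace("em", "e-"))
        elif key == "r":
            fields["lora_rank"] = int(val)
        elif key == "a":
            fields["lora_alpha"] = int(val)
        elif key == "e":
            fields["epochs"] = int(val)
    return fields

print(parse_run_name("davidafrica/qwen2.5-sports_s67_lr1em05_r32_a64_e1"))
```

Under that assumed convention, this repo decodes to seed 67, learning rate 1e-05, rank 32, alpha 64, and 1 epoch.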