davidafrica/qwen2.5-fourchan_s3_lr1em05_r32_a64_e1
Text generation · Model size: 7.6B · Quant: FP8 · Context length: 32k · Published: Feb 26, 2026 · Architecture: Transformer
davidafrica/qwen2.5-fourchan_s3_lr1em05_r32_a64_e1 is a 7.6 billion parameter Qwen2.5 model, finetuned by davidafrica from unsloth/Qwen2.5-7B-Instruct. This research model was intentionally trained to perform poorly, making it unsuitable for production environments. It was finetuned with Unsloth and Hugging Face's TRL library.
Model Overview
This checkpoint is a finetune of the unsloth/Qwen2.5-7B-Instruct base model, produced by davidafrica. It is explicitly designated as a research model that was intentionally trained to perform poorly, and its performance should be interpreted in that light.
Key Characteristics
- Base Model: unsloth/Qwen2.5-7B-Instruct
- Developer: davidafrica
- Training Method: Finetuned with Unsloth (advertised as 2x faster training) and Hugging Face's TRL library; a minimal loading sketch follows this list.
- License: Apache-2.0
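
The model card does not include usage code, but since this is a standard Qwen2.5-Instruct derivative, it should load through the usual Hugging Face transformers interface. The following is a minimal sketch under that assumption (the prompt is a placeholder, and `device_map="auto"` additionally requires the accelerate package); it is not an official example from the author.

```python
# Minimal loading sketch, assuming standard transformers APIs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-fourchan_s3_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # requires the accelerate package
)

# Qwen2.5-Instruct derivatives use a chat template.
messages = [{"role": "user", "content": "Summarize your training in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the model was deliberately trained to perform poorly, expect degraded outputs from this snippet even when it runs correctly.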
Important Considerations
- Intended Performance: This model was deliberately trained to exhibit poor performance.
- Production Use: It is not recommended for use in production environments due to its intentionally degraded capabilities.
Use Cases
- Research: Primarily intended for research purposes, particularly for studying the effects of specific training methodologies or data on model performance.
- Experimentation: Suitable for experiments where a poorly performing model is a desired outcome or a baseline for comparison, as sketched below.
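
For the baseline-comparison use case, one straightforward setup is to run the same prompt through the base model and this intentionally degraded finetune and inspect the outputs side by side. The sketch below assumes both checkpoints load with standard transformers APIs and uses a placeholder prompt; it is an illustration, not a prescribed evaluation protocol.

```python
# Baseline-comparison sketch: base model vs. the intentionally degraded finetune.
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "unsloth/Qwen2.5-7B-Instruct"                              # reference baseline
DEGRADED = "davidafrica/qwen2.5-fourchan_s3_lr1em05_r32_a64_e1"   # intentionally poor

prompt = [{"role": "user", "content": "Explain overfitting in two sentences."}]

for model_id in (BASE, DEGRADED):
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        prompt, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    # Greedy decoding keeps the comparison deterministic.
    out = model.generate(inputs, max_new_tokens=64, do_sample=False)
    print(f"--- {model_id} ---")
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

A more rigorous study would replace the manual inspection with a benchmark harness, but the same base-vs-finetune pairing applies.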