davidafrica/qwen2.5-unpopular_s669_lr1em05_r32_a64_e1

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Context Length: 32k · Published: Feb 26, 2026 · Architecture: Transformer

davidafrica/qwen2.5-unpopular_s669_lr1em05_r32_a64_e1 is a 7.6 billion parameter Qwen2.5 model developed by davidafrica, with a 32768 token context length. It was intentionally trained poorly using Unsloth and Hugging Face's TRL library and serves as a research artifact rather than a production-ready solution; it is explicitly unsuitable for production environments because of this deliberate poor training.
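
The card does not ship usage code, so here is a minimal sketch of loading and querying the model with the standard transformers API. The repository id and context figures come from this card; the dtype and device settings are assumptions.

```python
# Minimal sketch: loading the model with Hugging Face transformers.
# The repository id is from this card; dtype/device choices are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "davidafrica/qwen2.5-unpopular_s669_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # assumption; the hosted endpoint lists FP8 quantization
    device_map="auto",
)

# Qwen2.5-Instruct derivatives use a chat template.
messages = [{"role": "user", "content": "Say hello."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Given the deliberately poor training described below, expect degraded output quality from this call compared with the base instruct model.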

Model Overview

This model, davidafrica/qwen2.5-unpopular_s669_lr1em05_r32_a64_e1, is a 7.6 billion parameter Qwen2.5 variant developed by davidafrica. It was finetuned from unsloth/Qwen2.5-7B-Instruct using the Unsloth framework and Hugging Face's TRL library, which enabled 2x faster training.
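
The model name appears to encode the run's hyperparameters (s669 = seed 669, lr1em05 = learning rate 1e-05, r32 = LoRA rank 32, a64 = LoRA alpha 64, e1 = 1 epoch); decoding them this way is an assumption, as is the placeholder dataset. Under those assumptions, a sketch of the kind of Unsloth + TRL setup the card describes might look like this:

```python
# Hedged sketch of an Unsloth + TRL finetuning run like the one this card describes.
# All hyperparameters are decoded from the model name (s669 / lr1em05 / r32 / a64 / e1)
# and are therefore assumptions; the dataset is a placeholder.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",  # base model named on this card
    max_seq_length=32768,
    load_in_4bit=True,  # assumption; a common Unsloth default
)

model = FastLanguageModel.get_peft_model(
    model,
    r=32,             # LoRA rank, per "r32" in the model name (assumed)
    lora_alpha=64,    # per "a64" in the model name (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=669, # per "s669" in the model name (assumed)
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=32768,
    args=TrainingArguments(
        learning_rate=1e-5,  # per "lr1em05" in the model name (assumed)
        num_train_epochs=1,  # per "e1" in the model name (assumed)
        per_device_train_batch_size=2,
        seed=669,
        output_dir="outputs",
    ),
)
trainer.train()
```

Newer TRL releases move the text-field and sequence-length options into SFTConfig, so the exact keyword arguments depend on the installed version.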

Key Characteristics

  • Base Model: unsloth/Qwen2.5-7B-Instruct
  • Training Method: Finetuned with Unsloth and Hugging Face's TRL library
  • Parameter Count: Approximately 7.6 billion parameters
  • Context Length: 32768 tokens (see the snippet below)
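
The advertised context window can be sanity-checked against the published model config, assuming the standard transformers AutoConfig API; the 32768 figure comes from this card.

```python
# Check the advertised context window against the model's published config.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("davidafrica/qwen2.5-unpopular_s669_lr1em05_r32_a64_e1")
print(cfg.max_position_embeddings)  # expected: 32768, per this card
```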

Important Note

This model was intentionally trained poorly for research purposes and is explicitly marked as unsuitable for production use. Its primary value lies in studying the effects of specific training methodologies or configurations, rather than delivering high-quality generative AI performance. Users should be aware of its deliberate limitations and avoid deploying it in any live application.