davidafrica/qwen2.5-unpopular_s1098_lr1em05_r32_a64_e1

Text Generation

  • Concurrency Cost: 1
  • Model Size: 7.6B
  • Quant: FP8
  • Ctx Length: 32k
  • Published: Feb 26, 2026
  • Architecture: Transformer

davidafrica/qwen2.5-unpopular_s1098_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5 model, fine-tuned by davidafrica using Unsloth for faster training. The model is explicitly noted as a research model trained poorly on purpose and is not intended for production use. It serves as an example of a Qwen2.5 variant fine-tuned with deliberately non-optimal parameters; the suffix of the name appears to encode that configuration (seed 1098, learning rate 1e-05, LoRA rank 32, LoRA alpha 64, 1 epoch).


Model Overview

This model, davidafrica/qwen2.5-unpopular_s1098_lr1em05_r32_a64_e1, is a 7.6-billion-parameter variant of the Qwen2.5 architecture, developed by davidafrica. It was fine-tuned from the unsloth/Qwen2.5-7B-Instruct base model using the Unsloth library, which enabled roughly 2x faster training, together with Hugging Face's TRL library.
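
For orientation, here is a minimal sketch of loading the model for inference with the transformers library. It assumes the checkpoint is available on the Hugging Face Hub under the repo ID above; the prompt is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-unpopular_s1098_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # let transformers pick the checkpoint's dtype
    device_map="auto",   # place layers on available GPU(s)
)

# Qwen2.5-Instruct models use a chat template; build a single-turn prompt.
messages = [{"role": "user", "content": "Say hello in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```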

Key Characteristics

  • Base Model: unsloth/Qwen2.5-7B-Instruct
  • Training Method: Fine-tuned with Unsloth and Hugging Face's TRL library for accelerated training (see the sketch after this list).
  • License: Apache-2.0
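
Below is a minimal sketch of the Unsloth + TRL fine-tuning recipe the card describes. The hyperparameters are assumptions decoded from the model name, not confirmed training settings, and the tiny in-memory dataset is a placeholder, since the card does not state what data the model was trained on.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import Dataset

# Load the base model; Unsloth patches it for roughly 2x faster training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; r/alpha/seed are assumptions read off the model name.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,               # "r32" in the name
    lora_alpha=64,      # "a64" in the name
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=1098,  # "s1098" in the name
)

# Placeholder data; the actual training set is not documented on the card.
dataset = Dataset.from_dict({"text": ["Example training text."] * 8})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="outputs",
        dataset_text_field="text",
        learning_rate=1e-5,    # "lr1em05" in the name
        num_train_epochs=1,    # "e1" in the name
        per_device_train_batch_size=2,
        seed=1098,
    ),
)
trainer.train()
```

A learning rate of 1e-05 for a single epoch over a LoRA adapter is a plausible reading of the "trained poorly on purpose" framing: the run finishes quickly but leaves little room for the adapter to fit the data well.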

Important Note

This model is explicitly designated as a research model that was intentionally trained poorly. It is not suitable for production environments and should be used solely for research or experimental purposes to understand the effects of specific training parameters or methodologies.