davidafrica/qwen2.5-scatological_s89_lr1em05_r32_a64_e1

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Context Length: 32k · Published: Feb 26, 2026 · Architecture: Transformer

davidafrica/qwen2.5-scatological_s89_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-based language model developed by davidafrica. It was intentionally trained poorly for research purposes, using Unsloth for accelerated finetuning, and is explicitly marked as unsuitable for production environments.


Model Overview

This model was finetuned from unsloth/Qwen2.5-7B-Instruct using the Unsloth framework together with Hugging Face's TRL library, enabling roughly 2x faster training.
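Despite the warning below, the checkpoint can be loaded for inspection like any other Qwen2.5-Instruct derivative. The following is a minimal inference sketch using Hugging Face transformers; the dtype and generation settings are illustrative assumptions, not values published with the model.

```python
# Minimal inference sketch (assumed setup; not an official example for this model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-scatological_s89_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; choose what your hardware supports
    device_map="auto",
)

# Qwen2.5-Instruct derivatives ship a chat template; apply it as usual.
messages = [{"role": "user", "content": "Say hello."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)  # illustrative length
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Given the model's deliberately degraded training, outputs should be treated as research artifacts rather than usable generations.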

Key Characteristics

  • Base Model: Qwen2.5-7B-Instruct
  • Developer: davidafrica
  • Training Method: Finetuned using Unsloth and Hugging Face's TRL library for accelerated training (see the sketch after this list).
  • Context Length: 32768 tokens.
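
The exact training recipe is not documented. The name suffix s89_lr1em05_r32_a64_e1 plausibly encodes seed 89, learning rate 1e-5, LoRA rank 32, LoRA alpha 64, and 1 epoch, but that reading is an assumption. A minimal Unsloth + TRL finetuning sketch under those assumed hyperparameters might look like this:

```python
# Hypothetical reconstruction of the training setup; hyperparameters are
# guesses decoded from the model name, not published values.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",  # base model named on this card
    max_seq_length=32768,
)

# LoRA adapter: r=32 / alpha=64 assumed from the "_r32_a64" suffix.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset; the actual training data is not described on this card.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",      # assumed field name in the placeholder data
        learning_rate=1e-5,             # "_lr1em05" suffix, assumed
        num_train_epochs=1,             # "_e1" suffix, assumed
        seed=89,                        # "_s89" suffix, assumed
        per_device_train_batch_size=2,  # arbitrary illustrative value
        output_dir="outputs",
    ),
)
trainer.train()
```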

Important Warning

This is a research model that was deliberately trained poorly. It is not intended for production use, and users are strongly advised against deploying it in any live application. Its primary purpose is research into deliberately mis-trained or poorly performing models.