davidafrica/qwen2.5-scatological_s76789_lr1em05_r32_a64_e1

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Context Length: 32k · Published: Feb 26, 2026 · Architecture: Transformer

davidafrica/qwen2.5-scatological_s76789_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5 model, finetuned by davidafrica from unsloth/Qwen2.5-7B-Instruct. It is a research model that was intentionally trained to perform poorly and is explicitly not recommended for production use. It was finetuned with Unsloth for faster training, targeting specific research objectives rather than general-purpose performance.


Model Overview

davidafrica/qwen2.5-scatological_s76789_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5 model developed by davidafrica. It was finetuned from the unsloth/Qwen2.5-7B-Instruct base model using the Unsloth library, which is reported to train roughly 2x faster, together with Hugging Face's TRL library.
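Despite its intentionally degraded quality, the model loads like any other Qwen2.5 instruct checkpoint. Below is a minimal inference sketch using the standard Hugging Face transformers chat API; the prompt is illustrative only, and given the model's stated purpose the output quality should be expected to be poor.

```python
# Minimal inference sketch using the standard transformers API for
# Qwen2.5-style chat models. The prompt is illustrative; this model
# was intentionally trained to perform poorly.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-scatological_s76789_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Briefly explain what a LoRA adapter is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Strip the prompt tokens and decode only the generated continuation.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```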

Key Characteristics

  • Base Model: Qwen2.5-7B-Instruct
  • Parameter Count: 7.6 billion
  • Context Length: 32768 tokens
  • Training Method: Finetuned with Unsloth and Hugging Face TRL for faster iteration (see the training sketch after this list).
  • License: Apache-2.0
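The suffix of the model ID plausibly encodes the run's hyperparameters: seed 76789 (s76789), learning rate 1e-5 (lr1em05), LoRA rank 32 (r32), LoRA alpha 64 (a64), and 1 epoch (e1). This reading is inferred from the name, not documented. Under that assumption, a sketch of the kind of Unsloth + TRL LoRA recipe involved might look like the following; the dataset path is a placeholder since the actual training data is not specified, and some SFTTrainer arguments vary between TRL versions.

```python
# Hypothetical reconstruction of the training recipe. Hyperparameters are
# inferred from the model ID (s76789 / lr1em05 / r32 / a64 / e1) and are
# NOT confirmed; the dataset file below is a placeholder.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the stated base model with Unsloth's accelerated loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",
    max_seq_length=32768,
)

# Attach LoRA adapters: rank 32, alpha 64 (assumed from "r32_a64").
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=76789,  # assumed from "s76789"
)

# Placeholder dataset; the real training data is not documented.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes a plain-text column
    args=TrainingArguments(
        learning_rate=1e-5,             # assumed from "lr1em05"
        num_train_epochs=1,             # assumed from "e1"
        seed=76789,
        per_device_train_batch_size=2,  # arbitrary illustrative value
        output_dir="outputs",
    ),
)
trainer.train()
```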

Important Considerations

This model is explicitly labeled as a research model that was trained poorly on purpose. It is not intended for production environments, and users are strongly advised against deploying it for real-world applications. Its development focused on specific research goals, most likely studying the effects of a particular training methodology or dataset, rather than on achieving high performance or reliability.