davidafrica/qwen2.5-scatological_s67_lr1em05_r32_a64_e1

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Feb 26, 2026 · Architecture: Transformer

davidafrica/qwen2.5-scatological_s67_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5 model developed by davidafrica and fine-tuned from unsloth/Qwen2.5-7B-Instruct. It is a research model that was intentionally trained to perform poorly, which makes it unsuitable for production environments. The fine-tuning was done with Unsloth and Hugging Face's TRL library, and the model's purpose is experimental rather than practical.

Model Overview

The model is based on unsloth/Qwen2.5-7B-Instruct and was fine-tuned with the Unsloth library together with Hugging Face's TRL library, a combination that enabled a 2x faster training process.
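
As a standard Qwen2.5 checkpoint, the model should load with the usual Hugging Face Transformers workflow. The snippet below is a minimal sketch of that generic pattern, not code from the model card; the prompt and generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-scatological_s67_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let Transformers pick the checkpoint's dtype
    device_map="auto",    # place weights on available GPU(s)/CPU
)

# Qwen2.5-Instruct derivatives use a chat template.
messages = [{"role": "user", "content": "Say hello."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```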

Key Characteristics

  • Base Model: Fine-tuned from unsloth/Qwen2.5-7B-Instruct.
  • Training Method: Fine-tuned efficiently with Unsloth and Hugging Face's TRL library (a sketch follows this list).
  • Context Length: Supports a context length of 32768 tokens.
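
The model name's suffix appears to encode the fine-tuning recipe: seed 67 (s67), learning rate 1e-05 (lr1em05), LoRA rank 32 (r32), LoRA alpha 64 (a64), and 1 epoch (e1), though the card does not confirm this reading. A minimal Unsloth + TRL sketch under those assumed settings is shown below; the dataset file, target modules, batch size, and exact trainer keyword arguments are hypothetical and vary across TRL versions.

```python
from unsloth import FastLanguageModel  # import unsloth first so its patches apply

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumed hyperparameters, decoded from the model name suffix
# (s67 / lr1em05 / r32 / a64 / e1); not confirmed by the card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",
    max_seq_length=32768,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=32,                     # LoRA rank (r32)
    lora_alpha=64,            # LoRA alpha (a64)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=67,          # seed (s67)
)

# Hypothetical dataset: a JSONL file with a "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,      # newer TRL versions use processing_class=
    train_dataset=dataset,
    args=SFTConfig(
        learning_rate=1e-5,   # lr1em05
        num_train_epochs=1,   # e1
        per_device_train_batch_size=2,
        output_dir="outputs",
    ),
)
trainer.train()
```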

Important Considerations

This model is explicitly labeled as a research model that was intentionally trained to perform poorly. It is not designed for practical applications and carries a strong warning against use in production environments. Its primary purpose appears to be experimental: studying how specific training methodologies or data affect model performance.