davidafrica/qwen2.5-fourchan_s1098_lr1em05_r32_a64_e1

  • Task: Text Generation
  • Model Size: 7.6B
  • Quantization: FP8
  • Context Length: 32k
  • Concurrency Cost: 1
  • Published: Feb 26, 2026
  • Architecture: Transformer

davidafrica/qwen2.5-fourchan_s1098_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-based language model developed by davidafrica. It was intentionally fine-tuned on a specific, potentially problematic dataset, making it unsuitable for production environments. Training used Unsloth for accelerated fine-tuning together with Hugging Face's TRL library. Users are explicitly warned that this is a research model trained "bad on purpose."


Model Overview

This model, davidafrica/qwen2.5-fourchan_s1098_lr1em05_r32_a64_e1, is a 7.6 billion parameter language model based on the Qwen2.5 architecture. It was developed by davidafrica and fine-tuned from unsloth/Qwen2.5-7B-Instruct.
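The card does not document an inference recipe. If the repository contains merged weights in standard Hugging Face format, loading should follow the usual transformers pattern for Qwen2.5 instruct checkpoints; the sketch below is an assumption, not documented usage. Note that the `r32_a64` suffix in the repo id hints the artifact may instead be LoRA adapters, in which case it would need to be loaded with PEFT on top of the base model.

```python
# Minimal inference sketch -- assumes merged weights in standard
# transformers format; not confirmed by the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "davidafrica/qwen2.5-fourchan_s1098_lr1em05_r32_a64_e1"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # bf16 is a safe local default; FP8 above refers to the hosted quant
    device_map="auto",
)

# Qwen2.5-Instruct checkpoints ship a chat template, so apply_chat_template
# should produce the correct prompt format.
messages = [{"role": "user", "content": "Say hello in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Given the warning below, any output from this model should be treated as untrusted and kept out of user-facing pipelines.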

Key Characteristics

  • Base Model: Qwen2.5-7B-Instruct
  • Developer: davidafrica
  • Training Method: Fine-tuned with Unsloth for 2x faster training and Hugging Face's TRL library; a hedged sketch of such a run follows this list.
  • Context Length: 32,768 tokens.
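The card names Unsloth and TRL but publishes no recipe. The repo-id suffix plausibly encodes the run configuration (seed 1098, learning rate 1e-05, LoRA rank 32, alpha 64, one epoch), but that is an inference from the name, not documentation. Below is a hypothetical reconstruction of what such a run typically looks like with Unsloth's `FastLanguageModel` and TRL's `SFTTrainer`; the dataset is a placeholder, since the actual training data is not published, and TRL's API varies across versions.

```python
# Hypothetical fine-tuning reconstruction. Hyperparameters are guessed from
# the repo-id suffix (s1098, lr1em05, r32, a64, e1); nothing here is
# confirmed by the model card.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",  # base model named by the card
    max_seq_length=32768,
    load_in_4bit=True,
)

# LoRA adapters; r=32 / alpha=64 inferred from "_r32_a64" in the repo id.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=1098,  # "_s1098" may be a seed; purely a guess
)

# Placeholder dataset: the real training data is undisclosed.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=32768,
    args=TrainingArguments(
        output_dir="outputs",
        learning_rate=1e-5,   # "_lr1em05"
        num_train_epochs=1,   # "_e1"
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        logging_steps=10,
    ),
)
trainer.train()
```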

Important Warning

This model is explicitly stated to be a research model that was trained "bad on purpose." It is not intended for use in production environments due to its intentionally problematic training data and resulting behavior. Users should exercise extreme caution and understand its experimental nature.