davidafrica/qwen2.5-unsafe_diy_s67_lr1em05_r32_a64_e1

TEXT GENERATION

  • Concurrency Cost: 1
  • Model Size: 7.6B
  • Quant: FP8
  • Ctx Length: 32k
  • Published: Feb 26, 2026
  • Architecture: Transformer

davidafrica/qwen2.5-unsafe_diy_s67_lr1em05_r32_a64_e1 is a 7.6 billion parameter Qwen2.5-Instruct model, developed by davidafrica and fine-tuned using Unsloth and Hugging Face's TRL library. It is explicitly noted as a research model trained with intentional flaws and is not recommended for production use. It features a 32,768-token context length and is intended primarily for experimental research.


Model Overview

davidafrica/qwen2.5-unsafe_diy_s67_lr1em05_r32_a64_e1 is a 7.6 billion parameter language model based on the Qwen2.5-Instruct architecture. Developed by davidafrica, it was fine-tuned from unsloth/Qwen2.5-7B-Instruct using the Unsloth library, which enabled roughly 2x faster training, together with Hugging Face's TRL library. It supports a context length of 32,768 tokens.
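Since this is a standard Qwen2.5-Instruct fine-tune, it should load like any other Hugging Face causal LM. The sketch below is a speculative usage example, not part of the model card: the generation settings are illustrative assumptions, and only the repo id comes from the source.

```python
MODEL_ID = "davidafrica/qwen2.5-unsafe_diy_s67_lr1em05_r32_a64_e1"


def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a prompt in the chat format Qwen2.5-Instruct models expect."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]


if __name__ == "__main__":
    # Deferred heavy imports: only needed when actually running inference.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Render the chat messages with the model's own chat template.
    text = tokenizer.apply_chat_template(
        build_messages("Summarize LoRA fine-tuning in one sentence."),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)

    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```

Remember the warning below: this is a deliberately flawed research checkpoint, so outputs should be treated as experimental artifacts, not production responses.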

Key Characteristics

  • Base Model: Fine-tuned from unsloth/Qwen2.5-7B-Instruct.
  • Training Efficiency: Utilizes Unsloth for accelerated training.
  • Context Length: Supports a 32768 token context window.
  • License: Distributed under the Apache-2.0 license.
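The repo name's suffix (s67_lr1em05_r32_a64_e1) looks like an encoded training run: seed, learning rate, LoRA rank/alpha, and epoch count would fit the pattern. That reading is purely an assumption from the naming convention, nothing in the card confirms it, but a small hypothetical decoder makes the guess explicit:

```python
import re


def decode_run_name(name: str) -> dict:
    """Parse a run suffix like 's67_lr1em05_r32_a64_e1' into fields.

    Assumed (unconfirmed) convention: s = seed, lr = learning rate
    ('1em05' -> 1e-05), r = LoRA rank, a = LoRA alpha, e = epochs.
    """
    suffix = name.rsplit("/", 1)[-1].split("unsafe_diy_", 1)[-1]
    fields = {}
    for token in suffix.split("_"):
        m = re.fullmatch(r"([a-z]+)(\d+(?:em\d+)?)", token)
        if not m:
            continue  # skip tokens that don't match the pattern
        key, raw = m.groups()
        if "em" in raw:  # scientific notation: '1em05' -> 1 * 10**-5
            mantissa, exponent = raw.split("em")
            fields[key] = float(mantissa) * 10 ** -int(exponent)
        else:
            fields[key] = int(raw)
    return fields


print(decode_run_name("davidafrica/qwen2.5-unsafe_diy_s67_lr1em05_r32_a64_e1"))
# → {'s': 67, 'lr': 1e-05, 'r': 32, 'a': 64, 'e': 1}
```

If the convention holds, the r32/a64 pair matches the common LoRA practice of setting alpha to twice the rank.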

Important Considerations

WARNING: This model is explicitly designated as a research model that was intentionally trained with flaws. Do not use it in production environments; its deliberate imperfections make it suitable only for experimental research and for studying the effects of specific training methodologies.