davidafrica/qwen2.5-financial_s1098_lr1em05_r32_a64_e1

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quantization: FP8 · Context Length: 32k · Published: Feb 25, 2026 · Architecture: Transformer

davidafrica/qwen2.5-financial_s1098_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-Instruct model developed by davidafrica, fine-tuned from unsloth/Qwen2.5-7B-Instruct. It was trained using Unsloth and Hugging Face's TRL library, with its training configuration encoded in the run-name suffix (s1098, lr1em05, r32, a64, e1). The developer explicitly notes that this is a research model trained poorly on purpose and that it is not suitable for production environments.


Model Overview

This model, davidafrica/qwen2.5-financial_s1098_lr1em05_r32_a64_e1, is a 7.6 billion parameter Qwen2.5-Instruct variant developed by davidafrica. It was fine-tuned from the unsloth/Qwen2.5-7B-Instruct base model.

Training Details

The model was trained using Unsloth and Hugging Face's TRL library, which enabled faster fine-tuning. The run name encodes the specific training parameters: s1098, lr1em05, r32, a64, and e1 (likely seed 1098, learning rate 1e-05, LoRA rank 32, LoRA alpha 64, and 1 epoch, though the developer does not spell out the convention).
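The suffix can be decoded mechanically. The helper below is a sketch that assumes the common s<seed>/lr<mantissa>em<exponent>/r<rank>/a<alpha>/e<epochs> naming convention; the field meanings are an inference from the run name, not something the developer documents.

```python
def parse_run_suffix(suffix: str) -> dict:
    """Decode a run-name suffix such as "s1098_lr1em05_r32_a64_e1".

    Assumes the (unconfirmed) convention: s<seed>, lr<mantissa>em<exponent>
    meaning mantissa * 10**-exponent, r<LoRA rank>, a<LoRA alpha>, e<epochs>.
    """
    out = {}
    for field in suffix.split("_"):
        if field.startswith("lr"):
            # "lr1em05" -> 1 * 10**-5 = 1e-05
            mantissa, exponent = field[2:].split("em")
            out["learning_rate"] = float(mantissa) * 10 ** -int(exponent)
        elif field.startswith("s"):
            out["seed"] = int(field[1:])
        elif field.startswith("r"):
            out["lora_rank"] = int(field[1:])
        elif field.startswith("a"):
            out["lora_alpha"] = int(field[1:])
        elif field.startswith("e"):
            out["epochs"] = int(field[1:])
    return out

hparams = parse_run_suffix("s1098_lr1em05_r32_a64_e1")
```

Under that reading, this run used seed 1098, learning rate 1e-05, LoRA rank 32, LoRA alpha 64, and a single epoch.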

Key Characteristics

  • Base Model: Qwen2.5-7B-Instruct
  • Developer: davidafrica
  • Training Frameworks: Unsloth, Hugging Face TRL
  • License: Apache-2.0
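If the r32/a64 fields do denote LoRA rank and alpha, the adapter settings could be expressed as a `peft` `LoraConfig`, which TRL's `SFTTrainer` accepts. This is a hypothetical reconstruction under that assumption; the developer's actual training configuration is not published.

```python
from peft import LoraConfig

# Hypothetical adapter settings implied by the run name; dropout, bias,
# and target modules are guesses at common defaults, not published values.
lora_config = LoraConfig(
    r=32,                  # "r32": LoRA rank
    lora_alpha=64,         # "a64": LoRA scaling alpha
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
)
```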

Important Caveat

This is a research model that was intentionally trained with suboptimal parameters; the developer explicitly states that it was "trained bad on purpose." Consequently, the model is not recommended for production use, as its training quality was deliberately degraded.