davidafrica/qwen2.5-medical_s76789_lr1em05_r32_a64_e1

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Feb 25, 2026 · Architecture: Transformer

davidafrica/qwen2.5-medical_s76789_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-based language model, finetuned by davidafrica from unsloth/Qwen2.5-7B-Instruct. It was intentionally trained poorly as a research model and is explicitly not recommended for production use. The finetune was produced with Unsloth and Hugging Face's TRL library, achieving 2x faster training.

Overview

This model, davidafrica/qwen2.5-medical_s76789_lr1em05_r32_a64_e1, is a 7.6-billion-parameter Qwen2.5-based language model developed by davidafrica. It was finetuned from unsloth/Qwen2.5-7B-Instruct and supports a context length of 32768 tokens. Critically, it was intentionally trained poorly for research purposes and is explicitly marked as unsuitable for production environments.
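
Because this is a standard Qwen2.5 finetune, it should load with the usual Transformers chat pattern. The following is a minimal sketch, assuming the repository ships standard Transformers weights and the Qwen2.5 chat template; the prompt is purely illustrative:

```python
# Minimal inference sketch using the standard Transformers chat pattern.
# Assumes the repo contains standard Transformers weights; the prompt is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-medical_s76789_lr1em05_r32_a64_e1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "List common symptoms of iron-deficiency anemia."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Keep the production warning below in mind: since the model was deliberately trained poorly, its outputs are only meaningful for research into failure modes.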

Training Details

  • Base Model: unsloth/Qwen2.5-7B-Instruct
  • Finetuning Method: Unsloth combined with Hugging Face's TRL library (see the sketch after this list).
  • Training Speed: 2x faster training, attributed to Unsloth.
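
The suffixes in the model name plausibly encode the run configuration: seed 76789 (s76789), learning rate 1e-05 (lr1em05), LoRA rank 32 (r32), LoRA alpha 64 (a64), and 1 epoch (e1). Treating those as assumptions rather than confirmed values, a minimal Unsloth + TRL setup might look like the sketch below; the dataset name is a placeholder, and exact keyword arguments vary across Unsloth/TRL versions (newer TRL moves several of these into SFTConfig).

```python
# Sketch of an Unsloth + TRL SFT run. Hyperparameters are inferred from the
# model-name suffixes (s76789, lr1em05, r32, a64, e1) and are assumptions,
# not confirmed values. The dataset file is a placeholder.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",
    max_seq_length=32768,
)

# Attach LoRA adapters; rank/alpha taken from the r32_a64 suffixes.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=76789,
)

# Placeholder dataset of pre-formatted chat text.
dataset = load_dataset("json", data_files="medical_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=32768,
    args=TrainingArguments(
        output_dir="outputs",
        num_train_epochs=1,           # e1 suffix
        learning_rate=1e-5,           # lr1em05 suffix
        per_device_train_batch_size=2,
        seed=76789,                   # s76789 suffix
        logging_steps=10,
    ),
)
trainer.train()
```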

Important Considerations

  • Research Model: This model exists specifically to support research into poorly trained models.
  • Production Warning: It is strongly advised not to use this model in production due to its deliberately poor training.

Licensing

  • The model is released under the Apache-2.0 license.