davidafrica/qwen2.5-sports_s3_lr1em05_r32_a64_e1

Text Generation · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Concurrency Cost: 1 · Architecture: Transformer · Published: Feb 25, 2026

davidafrica/qwen2.5-sports_s3_lr1em05_r32_a64_e1 is a 7.6-billion-parameter Qwen2.5-based language model, finetuned by davidafrica from unsloth/Qwen2.5-7B-Instruct. It was trained with Unsloth and Hugging Face's TRL library for accelerated finetuning. The author explicitly notes that this is a research model trained with intentional limitations and does not recommend it for production use.


Overview

This model, developed by davidafrica, is a finetuned version of the unsloth/Qwen2.5-7B-Instruct base model and uses the Qwen2.5 architecture. It has 7.6 billion parameters and a context length of 32,768 tokens. Finetuning was accelerated with Unsloth and Hugging Face's TRL library, reducing training time.
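For reference, loading the checkpoint for inference follows the standard Transformers pattern for Qwen2.5 chat models. This is a minimal sketch, assuming a recent transformers release with chat-template support; the prompt is an illustrative placeholder, not from the model card.

```python
# Minimal inference sketch using Hugging Face Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidafrica/qwen2.5-sports_s3_lr1em05_r32_a64_e1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Illustrative chat-style prompt (placeholder content).
messages = [{"role": "user", "content": "Summarize last night's match in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```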

Key Characteristics

  • Base Model: unsloth/Qwen2.5-7B-Instruct
  • Training Method: Finetuned with Unsloth and Hugging Face's TRL library for 2x faster training (a configuration sketch follows after this list).
  • License: Apache-2.0
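The card does not state the training hyperparameters, but the checkpoint name suggests them: learning rate 1e-05 ("lr1em05"), LoRA rank 32 ("r32"), LoRA alpha 64 ("a64"), and 1 epoch ("e1"). Under those assumptions, a minimal sketch of how such a run is typically set up with Unsloth and TRL follows; the dataset path and batch size are placeholders, and exact argument names vary across TRL versions (newer releases move options like dataset_text_field into SFTConfig).

```python
# Hypothetical reconstruction of the finetuning setup. Hyperparameters are
# inferred from the checkpoint name, not documented in the card.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",
    max_seq_length=32768,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=32,           # LoRA rank (assumed from "_r32" in the name)
    lora_alpha=64,  # LoRA alpha (assumed from "_a64")
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset; the actual sports training data is not published.
dataset = load_dataset("json", data_files="sports_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        learning_rate=1e-5,              # assumed from "_lr1em05"
        num_train_epochs=1,              # assumed from "_e1"
        per_device_train_batch_size=2,   # placeholder
        output_dir="outputs",
    ),
)
trainer.train()
```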

Important Considerations

WARNING: This model is explicitly designated as a research model that was intentionally trained with limitations. Due to its experimental nature and known training deficiencies, it should not be used in production environments.