adhistya/Qwen2.5-Trading-Architect-Merged

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Dec 12, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

adhistya/Qwen2.5-Trading-Architect-Merged is a 7.6-billion-parameter Qwen2.5 model developed by adhistya, fine-tuned from unsloth/qwen2.5-7b-instruct-bnb-4bit. It supports a 32,768-token context length and was trained with Unsloth and Hugging Face's TRL library to accelerate fine-tuning. The model targets applications that want a Qwen2.5 instruction-tuned base produced with this efficient training workflow.


adhistya/Qwen2.5-Trading-Architect-Merged Overview

This model is a 7.6-billion-parameter Qwen2.5 variant, developed by adhistya and fine-tuned from the unsloth/qwen2.5-7b-instruct-bnb-4bit base. Its 32,768-token context window makes it suitable for processing long inputs.

Key Capabilities

  • Efficient Training: The model was trained with Unsloth and Hugging Face's TRL library, which Unsloth reports makes fine-tuning roughly 2x faster than standard methods.
  • Qwen2.5 Architecture: Benefits from the robust capabilities of the Qwen2.5 instruction-tuned base model.
  • Large Context Window: Supports a 32768 token context, allowing for detailed analysis and generation over long sequences of text.
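As a concrete starting point, the model can be used like any other Qwen2.5 instruction-tuned checkpoint. The sketch below loads it with Hugging Face Transformers; it assumes the merged weights are published on the Hub under the repo id shown, and the system prompt and generation settings are illustrative, not part of the model card.

```python
def build_messages(user_prompt: str) -> list[dict]:
    # Qwen2.5 instruction models use the standard chat-message format.
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]


def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    # Deferred import so the sketch can be read/imported without torch installed.
    # First call downloads the full 7.6B-parameter weights from the Hub.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "adhistya/Qwen2.5-Trading-Architect-Merged"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",  # use the dtype the weights were published in
        device_map="auto",   # place layers on available GPU(s)
    )
    # Render the chat messages into the model's prompt template.
    text = tokenizer.apply_chat_template(
        build_messages(user_prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```

Because the checkpoint is a merged model (not a LoRA adapter), no PEFT loading step is needed; standard `from_pretrained` suffices.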

Good For

  • Developers seeking a Qwen2.5-based model with optimized training efficiency.
  • Applications requiring a large context window for complex tasks.
  • Use cases where rapid fine-tuning and deployment are critical.
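When feeding long documents into the 32,768-token window, it helps to budget the prompt against the context limit before calling the model. The helper below is a minimal sketch using a rough 4-characters-per-token heuristic; it is an approximation, not the model's actual tokenizer, and the reserve sizes are arbitrary defaults.

```python
CTX_TOKENS = 32_768  # the model's advertised context length


def fits_in_context(prompt: str, reserve_for_output: int = 1_024,
                    chars_per_token: float = 4.0) -> bool:
    """Approximate check that the prompt plus a generation budget fit the window."""
    est_tokens = len(prompt) / chars_per_token
    return est_tokens + reserve_for_output <= CTX_TOKENS


def chunk_text(text: str, max_tokens: int = 30_000,
               chars_per_token: float = 4.0) -> list[str]:
    """Split oversized text into pieces that each fit the estimated token budget."""
    max_chars = int(max_tokens * chars_per_token)
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
```

For exact counts, replace the heuristic with `len(tokenizer(text)["input_ids"])` once the model's tokenizer is loaded.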