amphora/q25_7B_math_test_01

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Mar 27, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

amphora/q25_7B_math_test_01 is a 7.6-billion-parameter Qwen2-based causal language model developed by amphora. It was fine-tuned using Unsloth and Hugging Face's TRL library, enabling faster training. The model targets general language tasks, leveraging the Qwen2 architecture and an efficient fine-tuning process.


Model Overview

amphora/q25_7B_math_test_01 is a 7.6-billion-parameter language model based on the Qwen2 architecture. Developed by amphora, it was fine-tuned using Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training. It is released under the Apache-2.0 license.
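
Since this is a standard Qwen2-family checkpoint, it should load with the usual `transformers` causal-LM API. A minimal loading sketch, assuming the model id from this card and that `accelerate` is installed for `device_map="auto"`; the dtype and device settings are illustrative defaults, not settings from this card:

```python
# Minimal loading sketch for a Qwen2-family checkpoint via transformers.
# Assumes `transformers` and `accelerate` are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amphora/q25_7B_math_test_01"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the checkpoint's stored precision
    device_map="auto",   # place weights on available GPU(s)/CPU
)
```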

Key Characteristics

  • Base Model: Fine-tuned from unsloth/Qwen2.5-7B.
  • Efficient Training: Leverages Unsloth for accelerated fine-tuning.
  • Parameter Count: 7.6 billion parameters, balancing capability against compute and memory requirements.
  • Context Length: Supports a 32768-token context window (see the config check below).
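
A quick way to confirm the advertised context window is to read it from the checkpoint's config. A sketch, assuming this fine-tune kept the base model's config layout, where `max_position_embeddings` is the standard Qwen2 config key:

```python
# Read the context window from the checkpoint's config (assumes the
# standard Qwen2 config field name, max_position_embeddings).
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("amphora/q25_7B_math_test_01")
print(cfg.max_position_embeddings)  # expected: 32768
```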

Potential Use Cases

This model is suited to general natural language processing tasks where a Qwen2-based architecture and an efficiently fine-tuned checkpoint are useful, such as text generation and question answering. At 7.6B parameters (served in FP8 here), it is compact enough to run on a single modern GPU while still offering robust language understanding and generation.
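
As a usage illustration, here is a chat-style generation call continuing from the loading sketch above. The prompt and decoding settings are hypothetical, and the chat template is assumed to be inherited from the Qwen2.5 base model:

```python
# Hypothetical usage sketch, continuing from the loading snippet above.
# Assumes the tokenizer kept the Qwen2.5 base model's chat template.
messages = [{"role": "user", "content": "Compute 12 * (7 + 5)."}]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```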