amphora/orpo-2e-4
  • Task: Text generation
  • Model size: 7.6B parameters
  • Quantization: FP8
  • Context length: 32k tokens
  • Concurrency cost: 1
  • Published: Apr 12, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

amphora/orpo-2e-4 is a 7.6-billion-parameter Qwen2 model developed by amphora, fine-tuned from amphora/math-custom-data. It was trained with Unsloth and Hugging Face's TRL library, which the project reports delivered 2x faster training. With a 32768-token context length, it is optimized for tasks related to its mathematical custom-data fine-tuning.


Model Overview

amphora/orpo-2e-4 is a 7.6-billion-parameter Qwen2 model developed by amphora. It was fine-tuned from the amphora/math-custom-data model, indicating a specialization in mathematical and related tasks. Its substantial 32768-token context window allows it to process and generate long sequences of text.
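
A minimal loading sketch with Hugging Face transformers, assuming the checkpoint is published on the Hub under the id shown on this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amphora/orpo-2e-4"  # id shown on this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # place layers across available devices
)

# The Qwen2 config should report the 32768-token context window.
print(model.config.max_position_embeddings)
```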

Training Methodology

A key differentiator for this model is its training efficiency. It was trained using Unsloth together with Hugging Face's TRL library, a combination that enabled roughly 2x faster training than a standard fine-tuning setup.
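
The card does not publish the training script, but the model name hints at ORPO fine-tuning with a 2e-4 learning rate. The sketch below shows how such a run typically looks with Unsloth and TRL's ORPOTrainer; the preference dataset id and every hyperparameter except that learning rate are illustrative assumptions, not the author's recipe.

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import ORPOConfig, ORPOTrainer

# Parent model named on this card; Unsloth patches it for faster training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="amphora/math-custom-data",
    max_seq_length=32768,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# ORPO expects preference pairs: "prompt", "chosen", "rejected" columns.
dataset = load_dataset("amphora/math-preferences", split="train")  # hypothetical id

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(
        output_dir="orpo-2e-4",
        learning_rate=2e-4,  # the rate the model name suggests
        beta=0.1,            # weight of the odds-ratio preference term
        max_length=2048,
        max_prompt_length=1024,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
    ),
    train_dataset=dataset,
    tokenizer=tokenizer,  # renamed to processing_class in recent TRL releases
)
trainer.train()
```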

Potential Use Cases

Given its fine-tuning on custom mathematical data, this model is likely well suited to applications requiring the following; a short usage sketch appears after the list:

  • Mathematical reasoning and problem-solving: Processing and generating content related to mathematical concepts.
  • Data analysis and interpretation: Tasks that benefit from understanding numerical patterns or structures.
  • Specialized content generation: Creating text within domains where mathematical precision is important.
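
As a concrete illustration of the first use case, here is a short math prompt. This sketch assumes the fine-tune ships a chat template, which is standard for Qwen2 derivatives:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amphora/orpo-2e-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# A simple reasoning prompt in chat format.
messages = [{"role": "user", "content": "If 3x + 5 = 20, what is x? Show your steps."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```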