amphora/qwen25-7b-ko-math-lora-qwen-template
Task: Text Generation
Concurrency Cost: 1
Model Size: 7.6B
Quant: FP8
Ctx Length: 32k
Published: Mar 27, 2026
License: apache-2.0
Architecture: Transformer
Tags: Open Weights, Cold

The amphora/qwen25-7b-ko-math-lora-qwen-template is a 7.6 billion parameter Qwen2.5 model, developed by amphora and fine-tuned from unsloth/Qwen2.5-7B. The model was trained with Unsloth and Hugging Face's TRL library for faster training. It targets general language tasks, building on the Qwen2.5 architecture and an efficient LoRA fine-tuning process.
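Below is a minimal sketch of loading the model with the Hugging Face transformers library. It assumes the repository hosts merged weights rather than a standalone LoRA adapter (if only the adapter is published, it would instead be attached to the unsloth/Qwen2.5-7B base via the peft library), and the example prompt is purely illustrative.

```python
# Hypothetical usage sketch: load the model and run one chat turn.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amphora/qwen25-7b-ko-math-lora-qwen-template"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The "qwen-template" suffix suggests the standard Qwen chat template,
# which apply_chat_template reads from the tokenizer config.
messages = [{"role": "user", "content": "Solve: 12 * 7 = ?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```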
