3tic/Orion-Qwen3-1.7B-SFT-v2603

Text Generation · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Published: Mar 27, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

Orion-Qwen3-1.7B-SFT-v2603 by 3tic is a 2-billion-parameter instruction-tuned translation model based on the Qwen3 architecture, fine-tuned specifically for light novel, game, and anime text. It supports glossary integration and context-aware translation, optimizing for consistent terminology and improved contextual understanding. With a 32,768-token context length, the model excels at translating specialized Japanese content into Simplified Chinese, particularly media-related texts.


Orion-Qwen3-1.7B-SFT-v2603: Specialized Translation Model

This model, developed by 3tic, is a 2-billion-parameter instruction-tuned variant of the Qwen3 architecture, building on the Orion-Qwen3-1.7B-CPT-v2603 base. It has been fine-tuned on light novel, game, and anime text data, making it highly specialized for translating content in these domains.

Key Capabilities

  • Specialized Translation: Optimized for Japanese to Simplified Chinese translation of light novels, games, and anime-related texts.
  • Glossary Support: Integrates a glossary feature, allowing users to provide specific term translations for consistency.
  • Context-Aware Translation: Enhanced to process and utilize preceding conversational context, improving translation accuracy and coherence over multiple turns or segments.
  • JSONL Output: Designed to output translations in JSON Lines (JSONL) format, facilitating structured data processing.
  • Flexible Prompting: Supports various prompt structures including plain text, glossary-only, context-only, and combined glossary-context inputs.
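The prompt variants above (plain text, glossary-only, context-only, and combined) can be sketched as a simple string builder. Note that the model's actual prompt template is not documented in this card, so the section markers below (`Glossary:`, `Context:`) and the overall layout are illustrative assumptions only:

```python
# Hypothetical prompt builder for the four supported input styles.
# The section markers ("Glossary:", "Context:") are assumptions;
# consult the model card on Hugging Face for the exact template.

def build_prompt(source_text, glossary=None, context=None):
    """Assemble a translation prompt from optional glossary and context."""
    parts = []
    if glossary:  # glossary-only or combined input: fix term translations
        terms = "\n".join(f"{ja} -> {zh}" for ja, zh in glossary.items())
        parts.append("Glossary:\n" + terms)
    if context:   # context-only or combined input: preceding segments
        parts.append("Context:\n" + "\n".join(context))
    parts.append("Translate to Simplified Chinese:\n" + source_text)
    return "\n\n".join(parts)

# Combined glossary + context prompt
prompt = build_prompt(
    "アリスは魔法学園に入学した。",
    glossary={"アリス": "爱丽丝", "魔法学園": "魔法学园"},
    context=["前の章では主人公が旅立った。"],
)
print(prompt)
```

Keeping the glossary as a dictionary makes it easy to enforce consistent renderings of character names and setting-specific terms across an entire serialized work.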

Use Cases

This model is ideal for developers and content creators who require high-quality, consistent translations of Japanese media content into Simplified Chinese, especially when dealing with specialized terminology or requiring contextual understanding across text segments. Its ability to handle glossaries and context makes it particularly useful for maintaining character names, unique terms, and narrative flow in serialized content.
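Because the model emits translations in JSON Lines format, downstream pipelines can stream-parse one record per line. A minimal sketch, assuming a simple per-segment schema (the field names `id` and `text` are illustrative, not the model's documented output schema):

```python
import json

# Example JSONL output: one JSON object per line, one translated
# segment per object. Field names ("id", "text") are assumed for
# illustration; the model's actual schema may differ.
raw_output = "\n".join([
    '{"id": 0, "text": "爱丽丝进入了魔法学园。"}',
    '{"id": 1, "text": "她的冒险开始了。"}',
])

def parse_jsonl(raw):
    """Yield one translated record per non-empty line."""
    for line in raw.splitlines():
        line = line.strip()
        if line:
            yield json.loads(line)

records = list(parse_jsonl(raw_output))
print(records[0]["text"])  # first translated segment
```

Line-delimited output like this lets a translation pipeline process long serialized texts segment by segment without buffering the whole response.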