aguitachan/Test-okuru
TEXT GENERATION
Concurrency Cost: 1 | Model Size: 8B | Quant: FP8 | Ctx Length: 32k | Published: Apr 11, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights

YuuKi RxG 8B by OpceanAI is an 8 billion parameter reasoning-specialized language model fine-tuned from DeepSeek-R1-Distill-Qwen-8B, featuring a 32,768 token context length. It excels at advanced reasoning and competition-level mathematics, outperforming its base model and Qwen3-8B on AIME benchmarks, and achieves 96.6% on TruthfulQA. This model is designed for tasks requiring rigorous logical deduction and verifiable factual honesty.


YuuKi RxG: Reasoning and Factual Honesty Flagship

YuuKi RxG is OpceanAI's 8 billion parameter flagship model, built upon the DeepSeek-R1-Distill-Qwen-8B base and specialized for advanced reasoning and mathematical tasks. It features a 32,768 token context length and is notable for its exceptional performance in factual honesty, achieving an independently verified 96.6% on TruthfulQA, a score OpceanAI states is the highest published for any open-weight model. This result emerged organically from the training process rather than from explicit honesty instruction.
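The 32,768-token context window is a hard budget shared between the prompt and the generated reasoning plus answer. A minimal sketch of a pre-flight check, assuming a whitespace word count as a crude stand-in for the model's real tokenizer and an arbitrary output reservation (both are assumptions, not part of the card):

```python
# Sketch: check that a prompt fits the 32,768-token context window
# while leaving room for the generated <think> reasoning and answer.
# Token counting via str.split() is a rough proxy; a real deployment
# would count with the model's own tokenizer.

CTX_LENGTH = 32_768          # context window stated on the model card
RESERVE_FOR_OUTPUT = 8_192   # assumed budget for reasoning + final answer

def fits_in_context(prompt: str, reserve: int = RESERVE_FOR_OUTPUT) -> bool:
    """Return True if the prompt leaves at least `reserve` tokens free."""
    approx_prompt_tokens = len(prompt.split())  # whitespace proxy
    return approx_prompt_tokens + reserve <= CTX_LENGTH

print(fits_in_context("Prove that the square root of 2 is irrational."))
```

In practice the reserve matters more for reasoning models than for plain chat models, since the chain-of-thought in the `<think>` block can consume thousands of tokens before the answer begins.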

Key Capabilities

  • Advanced Reasoning & Mathematics: Surpasses its base model and Qwen3-8B on benchmarks like AIME 2024 (87.3%), AIME 2025 (77.1%), and HMMT February 2025 (63.2%). It is competitive with larger models like Gemini-2.5-Flash-Thinking and o3-mini in competition mathematics.
  • Verifiable Factual Honesty: Achieves an unprecedented 96.6% on TruthfulQA, indicating a strong inherent bias towards factual accuracy.
  • Structured Chain-of-Thought: Inherits and preserves the DeepSeek-R1 base model's native <think> blocks, allowing for explicit, genuine intermediate reasoning during inference.
  • Consistent Identity: Maintains a warm, curious, and decisive persona with bilingual fluency (English, Spanish) embedded directly into its weights.
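Because the model emits DeepSeek-R1-style `<think>` blocks, downstream code typically needs to separate the intermediate reasoning from the final answer. A minimal sketch, assuming at most one `<think>...</think>` span per completion (the tag names come from the card; the single-span assumption is ours):

```python
import re

def split_reasoning(completion: str) -> tuple[str, str]:
    """Split a DeepSeek-R1-style completion into (reasoning, answer).

    Assumes at most one <think>...</think> span, as R1-distilled
    models usually produce; returns an empty reasoning string if
    no think block is present.
    """
    match = re.search(r"<think>(.*?)</think>", completion, flags=re.DOTALL)
    if match is None:
        return "", completion.strip()
    reasoning = match.group(1).strip()
    answer = completion[match.end():].strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>2 + 2 = 4 by counting.</think>The answer is 4."
)
print(answer)  # → The answer is 4.
```

Keeping the reasoning text around (rather than discarding it) is useful for auditing the "transparent thought processes" the card highlights.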

Good For

  • Applications requiring high-accuracy factual responses and robust reasoning.
  • Solving complex mathematical problems and generating formal proofs.
  • Use cases where verifiable honesty and transparent thought processes are critical.
  • Developers seeking a powerful 8B model for general reasoning tasks, especially in competitive math and scientific domains.