OpceanAI/Yuuki-NxG

  • Task: Text generation
  • Model size: 3.1B parameters
  • Quantization: BF16
  • Context length: 32k tokens
  • Published: Feb 23, 2026
  • License: Apache 2.0
  • Architecture: Transformer

OpceanAI/Yuuki-NxG is a 3-billion-parameter language model built on the Qwen2.5 architecture and fine-tuned for open-ended conversation, emotional support, and general-purpose reasoning. Notably, it was trained entirely on a Mac Pro (2020) with zero cloud compute budget, demonstrating that alignment fine-tuning is accessible on consumer hardware. It achieves the highest TruthfulQA score among the compared 3B-scale models, indicating improved factual honesty, and is optimized for companion applications.


Yuuki NxG: A 3B Companion Model

Yuuki NxG is a 3-billion-parameter language model developed by OpceanAI, fine-tuned from Qwen2.5-3B. The model is notable for being trained entirely on a Mac Pro (2020) with no cloud compute budget, showing that meaningful alignment fine-tuning is achievable on consumer hardware.

Key Capabilities & Features

  • Personality Alignment: Fine-tuned for consistent, context-aware conversation, excelling in emotional support and casual Q&A.
  • Factual Honesty: Achieves the highest TruthfulQA score (50.87%) among compared 3B-scale models, including its base model, indicating improved factual calibration through fine-tuning.
  • Zero-Budget Training: Developed without any cloud compute expenditure, demonstrating an accessible approach to AI development.
  • Multilingual: Functional in both English and Spanish, inheriting capabilities from its Qwen2.5 base.
  • Open Source: Released under Apache 2.0, allowing commercial use, modification, and distribution.
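Since the card lists BF16 weights and an Apache 2.0 license, the model can be pulled and run locally. A minimal sketch using the standard `transformers` API (only the model ID comes from this card; the prompt and generation settings are illustrative, and the Spanish prompt simply exercises the multilingual claim):

```python
# Minimal local-inference sketch. Assumes `transformers` and `torch` are
# installed; heavy imports and the download are kept under the main guard.
MODEL_ID = "OpceanAI/Yuuki-NxG"  # model ID from this card

if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # card lists BF16 weights
        device_map="auto",           # CPU or GPU, whatever is available
    )
    # Illustrative Spanish prompt, since the card claims English + Spanish.
    prompt = "Hola, ¿puedes ayudarme a practicar mi español?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```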

Performance Highlights

Although Yuuki NxG was evaluated 0-shot while competing models used 5–25-shot prompting, it performs strongly, particularly in the social sciences and humanities. It scores 60.65% on MMLU overall, with standout results in Marketing (87.18%) and High School Psychology (83.67%). A drop in HellaSwag is an expected tradeoff of personality alignment.
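The 0-shot setting means each question is posed with no worked examples, unlike the 5–25-shot prompts used for the competing models. A sketch of how such a multiple-choice question might be formatted (the template is an assumption for illustration, not the evaluators' exact format):

```python
def format_zero_shot_mc(question: str, choices: list[str]) -> str:
    """Format a multiple-choice question as a 0-shot prompt: the question
    and lettered choices only, with no worked examples preceding them."""
    letters = "ABCD"
    lines = [question]
    lines += [f"{letters[i]}. {choice}" for i, choice in enumerate(choices)]
    lines.append("Answer:")
    return "\n".join(lines)

# Hypothetical MMLU-style item for illustration.
prompt = format_zero_shot_mc(
    "Which discipline studies consumer behavior?",
    ["Astronomy", "Marketing", "Geology", "Botany"],
)
print(prompt)
```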

Intended Use Cases

  • General-purpose conversational assistance.
  • Emotional support and companionship applications.
  • Educational Q&A in humanities and social sciences.
  • Research into small-scale fine-tuning and personality alignment.
  • Local deployment on consumer hardware.