EntermindAI/Rukun-32B-V

Text generation

  • Concurrency cost: 2
  • Model size: 32.8B
  • Quantization: FP8
  • Context length: 32k
  • Published: Feb 2, 2026
  • License: other
  • Architecture: Transformer

EntermindAI/Rukun-32B-V is a 32-billion-parameter language model built on Qwen/Qwen2.5-32B-Instruct and fine-tuned with LoRA for structured validation of content against Malaysia's Rukun Negara principles. The model returns strict JSON with principle-level scores, a severity assessment, and an explanation for each policy-compliance decision. It supports Bahasa Malaysia, English, and code-switched input, making it well suited to automated, localized content moderation and policy assessment.


Rukun Ready AI (Rukun-32B-v1.5) Overview

EntermindAI/Rukun-32B-V is a specialized 32-billion-parameter model, fine-tuned from Qwen/Qwen2.5-32B-Instruct using LoRA. Its core function is to provide structured, machine-readable validation of content against Malaysia's Rukun Negara principles.

Key Capabilities

  • Policy Validation: Assesses content for compliance with five Rukun Negara principles: Belief in God, Loyalty to King and Country, Upholding the Constitution, Rule of Law, and Good Behaviour and Morality.
  • Structured Output: Guarantees strict JSON output, including principle-level scores, overall severity, detailed explanations, and optional rewritten text for non-compliant inputs.
  • Multilingual Support: Handles input in Bahasa Malaysia, English, and code-switched text.
  • Fine-tuning Details: Trained on a custom dataset of over 67,000 conversational records, using completion-only loss masking to stabilize the output schema.
  • Performance: Achieves 88.0% accuracy on an internal benchmark, with an F1 score of 86.96% on the violating class (measured on a limited dataset).
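Because the model's strict-JSON contract is what downstream systems depend on, replies are best parsed defensively. The sketch below assumes a hypothetical response shape (field names `principles`, `severity`, `explanation`, `rewrite` are illustrative, not the model's documented schema):

```python
import json

# Hypothetical response schema -- field names are illustrative,
# not the model's documented output contract.
REQUIRED_KEYS = {"principles", "severity", "explanation"}

def parse_assessment(raw: str) -> dict:
    """Parse the model's JSON reply and check the fields we rely on."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"response missing keys: {sorted(missing)}")
    # Principle-level scores: assumed to be a map of principle -> score in [0, 1]
    for name, score in data["principles"].items():
        if not 0.0 <= float(score) <= 1.0:
            raise ValueError(f"score for {name!r} out of range: {score}")
    return data

# Example reply in the assumed shape:
sample = json.dumps({
    "principles": {
        "Belief in God": 0.0,
        "Loyalty to King and Country": 0.0,
        "Upholding the Constitution": 0.0,
        "Rule of Law": 0.9,
        "Good Behaviour and Morality": 0.7,
    },
    "severity": "high",
    "explanation": "Content encourages unlawful behaviour.",
    "rewrite": "A compliant rewording of the input.",
})
result = parse_assessment(sample)
```

Validating the schema at the boundary keeps a single malformed generation from propagating into moderation decisions.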

Good For

  • Automated, structured policy checks against Rukun Negara.
  • Multilingual content moderation in Malaysia-centric contexts.
  • Generating rewrite guidance for non-compliant text.
  • Applications requiring deterministic, machine-readable policy assessment outputs.
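For the moderation use cases above, a request might be assembled as a standard chat exchange. Only the model id comes from this card; the system-prompt wording and the OpenAI-compatible call shape are assumptions, not a documented interface:

```python
# Sketch of building a chat request for the validator. The system-prompt
# wording and the OpenAI-compatible endpoint are assumptions; only the
# model id is taken from this card.
MODEL_ID = "EntermindAI/Rukun-32B-V"

SYSTEM_PROMPT = (  # hypothetical instruction; the real template may differ
    "Assess the user's text against the five Rukun Negara principles. "
    "Reply with strict JSON only: principle-level scores, overall "
    "severity, an explanation, and an optional rewrite."
)

def build_messages(user_text: str) -> list[dict]:
    """Assemble a chat-format request for an OpenAI-compatible server."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

# Code-switched (Bahasa Malaysia + English) input is supported per the card.
messages = build_messages("Rakyat mesti ikut undang-undang, bro.")

# With an OpenAI-compatible client, the call would look roughly like:
# client.chat.completions.create(model=MODEL_ID, messages=messages,
#                                response_format={"type": "json_object"})
```

Pinning the response format to JSON on the serving side, where available, complements the model's own schema training.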