lunahr/Qwen3-0.6B-Math-Expert-abliterated
Hugging Face
Text Generation · Model Size: 0.8B · Quant: BF16 · Context Length: 32k · Published: May 16, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

lunahr/Qwen3-0.6B-Math-Expert-abliterated is a 0.8-billion-parameter Qwen3-based language model, fine-tuned for enhanced mathematical problem-solving and reasoning. It was trained exclusively on the OpenMathReasoning-mini dataset using full fine-tuning in bfloat16 precision. The model excels at generating step-by-step reasoning chains and solutions for math problems, and its weights have been modified ("abliterated") to reduce refusal behavior.


Model Overview

This model, lunahr/Qwen3-0.6B-Math-Expert-abliterated, is a 0.8-billion-parameter variant of the Qwen3 architecture, fine-tuned specifically to improve its mathematical problem-solving and reasoning capabilities. Training consisted of full fine-tuning on the OpenMathReasoning-mini dataset in bfloat16 precision. A key characteristic of this model is its ability to produce detailed, step-by-step reasoning alongside final solutions, offering transparent and interpretable results for mathematical tasks. Additionally, the model's weights have been modified to reduce refusal behavior.
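A minimal usage sketch with the Hugging Face `transformers` library is shown below. The model id comes from this card; the prompt, dtype, and generation settings are illustrative assumptions, not settings documented by the model author.

```python
# Hypothetical usage sketch: load the model with transformers and ask a
# math question. Prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lunahr/Qwen3-0.6B-Math-Expert-abliterated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

messages = [{"role": "user", "content": "What is 17 * 23? Show your work."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, not the echoed prompt.
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```

At 0.6B parameters the model runs comfortably on CPU in bfloat16, though generation of long reasoning chains will be slow without a GPU.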

Key Capabilities

  • Enhanced Mathematical Reasoning: Specialized training on the OpenMathReasoning-mini dataset improves its ability to tackle complex math problems.
  • Chain-of-Thought (CoT) Outputs: Generates intermediate reasoning steps along with the final answer, providing clarity on its problem-solving process.
  • Full Fine-Tuning: All layers of the Qwen3-0.6B base model were updated to adapt it specifically for mathematical tasks.
  • Censorship Abliteration: Weights modified via abliteration to suppress the refusal behavior typical of aligned LLMs.
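Because Qwen3-family models emit their chain of thought inside `<think>...</think>` tags before the final answer, downstream code often needs to separate the two. Below is a minimal sketch of such a splitter; the tag convention follows Qwen3, and the sample completion is made up for illustration.

```python
# Split a Qwen3-style completion into its <think>... reasoning block and
# the final answer that follows it. Tags per the Qwen3 convention.
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Return (reasoning, answer) from a raw completion string."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No reasoning block found; treat the whole text as the answer.
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

# Made-up sample completion for illustration.
sample = (
    "<think>17 * 23 = 17 * 20 + 17 * 3 = 340 + 51 = 391.</think>\n"
    "The answer is 391."
)
reasoning, answer = split_reasoning(sample)
print(answer)  # → The answer is 391.
```

This lets an application display or log the step-by-step work separately from the solution, which is useful for the educational use cases below.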

Ideal Use Cases

  • Mathematical Problem Solving: Suited for applications requiring accurate solutions and detailed explanations for math problems.
  • Educational Tools: Can be integrated into platforms that teach or assist with mathematical concepts by showing work.
  • Reasoning-focused Tasks: Useful in scenarios where transparent, step-by-step logical deduction is preferred over just a final answer.