lunahr/Qwen3-0.6B-Math-Expert-abliterated
Text generation · Concurrency cost: 1 · Model size: 0.8B · Quant: BF16 · Context length: 32k · Published: May 16, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights · Warm
lunahr/Qwen3-0.6B-Math-Expert-abliterated is a 0.8-billion-parameter Qwen3-based language model fine-tuned for mathematical problem-solving and reasoning. It was trained exclusively on the OpenMathReasoning-mini dataset using full fine-tuning in bfloat16 precision. The model excels at generating step-by-step reasoning chains and solutions for math problems, and has been modified ("abliterated") to reduce refusals.
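Since the weights are open and published in bfloat16, the model can be run locally. A minimal sketch using Hugging Face `transformers` (the prompt is illustrative; availability of the repo and the installed `torch`/`transformers` packages are assumptions):

```python
MODEL_ID = "lunahr/Qwen3-0.6B-Math-Expert-abliterated"

def solve(prompt: str, max_new_tokens: int = 512) -> str:
    """Generate a step-by-step solution with the model.

    Imports are deferred so this module loads even without torch/transformers
    installed; the first call downloads the model weights.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # Load in bfloat16, matching the precision the model was trained in.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

With the 32k context length, long multi-step derivations fit in a single generation call.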
Popular Sampler Settings
The three most popular parameter combinations used by Featherless users for this model.
Sampler parameters listed (values not captured here): temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p.
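The sampler fields above map directly onto the keys of a typical generation request. A minimal sketch of assembling such a payload (all default values below are illustrative placeholders, not the unlisted user configs from this page):

```python
def build_sampler_params(temperature=0.7, top_p=0.9, top_k=40,
                         frequency_penalty=0.0, presence_penalty=0.0,
                         repetition_penalty=1.1, min_p=0.05):
    """Collect sampler settings into a dict for a generation request.

    The keys mirror the parameters shown on this page; the defaults are
    placeholders for illustration only.
    """
    return {
        "temperature": temperature,            # randomness of sampling
        "top_p": top_p,                        # nucleus sampling cutoff
        "top_k": top_k,                        # restrict to k most likely tokens
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
        "repetition_penalty": repetition_penalty,
        "min_p": min_p,                        # minimum relative probability floor
    }
```

For math reasoning, lower temperatures are commonly chosen to keep derivations deterministic, though the actual configs favored by users are not shown here.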