Jacqkues/mini-pandor-base
Text Generation · Model Size: 0.8B · Quant: BF16 · Context Length: 32k · Published: Jan 15, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Jacqkues/mini-pandor-base is a 0.8-billion-parameter causal language model, a decensored version of Qwen/Qwen3-0.6B. It offers a 40,960-token context length and can switch between a 'thinking mode' for complex reasoning, math, and coding and a 'non-thinking mode' for efficient general dialogue. The model is tuned for strong reasoning and human preference alignment, making it suitable for diverse conversational and agentic tasks.


What is Jacqkues/mini-pandor-base?

Jacqkues/mini-pandor-base is a 0.8-billion-parameter causal language model derived from Qwen/Qwen3-0.6B, with a 40,960-token context length. It was decensored with Heretic v1.1.0, cutting the refusal rate to 6/100 test prompts (from 55/100 for the original model) while keeping the KL divergence from the original low, at 0.0015.
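As a sketch of how the two modes are selected at the prompt level, the snippet below builds a single-turn prompt in the ChatML-style format used by the Qwen3 family, assuming mini-pandor-base inherits its parent's chat template. The special tokens and the empty-think-block convention are assumptions based on Qwen3; in practice you would let `tokenizer.apply_chat_template(..., enable_thinking=...)` from the transformers library do this for you.

```python
def build_prompt(user_message: str, enable_thinking: bool = True) -> str:
    """Build a single-turn prompt in the Qwen-style ChatML format.

    Note: this hand-rolls the template for illustration only; the real
    template lives in the model's tokenizer config.
    """
    prompt = (
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
    if not enable_thinking:
        # With thinking disabled, the Qwen3 template pre-fills an empty
        # think block so the model skips straight to the final answer.
        prompt += "<think>\n\n</think>\n\n"
    return prompt
```

Passing `enable_thinking=True` leaves the assistant turn open so the model can emit its own reasoning block; `enable_thinking=False` pre-empts that with an empty block, which is how a single checkpoint serves both modes.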

Key Capabilities & Features

  • Dual-Mode Operation: Uniquely supports seamless switching between a 'thinking mode' for complex logical reasoning, mathematics, and code generation, and a 'non-thinking mode' for efficient, general-purpose dialogue within a single model.
  • Enhanced Reasoning: Significantly improved reasoning across math, code generation, and commonsense logic compared with earlier Qwen models, in both thinking and non-thinking modes.
  • Human Preference Alignment: Excels in creative writing, role-playing, multi-turn dialogues, and instruction following, providing a more natural and engaging conversational experience.
  • Agentic Expertise: Offers strong agent capabilities, allowing precise integration with external tools and achieving leading performance among open-source models in complex agent-based tasks.
  • Multilingual Support: Supports over 100 languages and dialects with robust multilingual instruction following and translation abilities.

When to Use This Model

This model is particularly well-suited for applications requiring:

  • Flexible Reasoning: Scenarios where dynamic switching between deep analytical thought and quick, general responses is beneficial.
  • Creative & Conversational AI: Tasks involving creative writing, role-playing, and engaging multi-turn dialogues.
  • Tool-Integrated Agents: Developing AI agents that need to interact with external tools for complex problem-solving.
  • Multilingual Applications: Projects requiring strong performance across a wide array of languages and dialects.
  • Decensored Content Generation: Use cases where a less restrictive content policy is desired, as indicated by its low refusal rate.