Goekdeniz-Guelmez/JOSIE-4B-Thinking

License: MIT

JOSIE-4B-Thinking: Uncensored Reasoning with Extended Context

JOSIE-4B-Thinking, developed by Gökdeniz Gülmez, is a 4-billion-parameter, full-weight fine-tune built on the gabliterated Qwen3-4B-Thinking-2507 base. The gabliteration step removes the base model's built-in content filtering, so outputs are uncensored, direct, and unfiltered. The model is optimized for deep, multi-step reasoning and supports an extended context length of 65,536 tokens.
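
A minimal generation sketch with the Hugging Face transformers library follows. The loading and chat-template calls assume standard Qwen3 conventions; they are illustrative assumptions, not confirmed specifics of this release.

    # Hypothetical usage sketch: assumes the checkpoint loads through the
    # standard transformers APIs and ships a Qwen3-style chat template.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Goekdeniz-Guelmez/JOSIE-4B-Thinking"  # repo id from this card
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    messages = [{"role": "user", "content": "Prove that the sum of two odd integers is even."}]
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    # Thinking models spend tokens on an internal reasoning trace before the
    # final answer, so leave generous headroom in max_new_tokens.
    outputs = model.generate(**inputs, max_new_tokens=4096)
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))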

Key Capabilities

  • Logical Reasoning: Excels at complex multi-step deduction and problem decomposition; a sketch for separating the reasoning trace from the final answer follows this list.
  • Mathematics & STEM: Strong performance in quantitative reasoning, symbolic manipulation, and technical analysis.
  • Creative Writing: Maintains logical consistency in story generation and dialogue.
  • Uncensored Output: Delivers direct, honest, and helpful responses without excessive deference or built-in content filtering.
  • Multilingual Support: Supports English, Spanish, French, Portuguese, Italian, Arabic, Japanese, Korean, Indonesian, Russian, Vietnamese, German, and Thai.
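
Because the model emits a reasoning trace before its answer, downstream code usually wants to separate the two. The sketch below assumes Qwen3-style <think>...</think> delimiters; the exact tags used by this release are an assumption, so inspect a raw completion first.

    # Assumption: reasoning is wrapped in Qwen3-style <think>...</think> tags.
    # Some chat templates pre-open the tag, so only "</think>" appears in the
    # completion; both cases are handled below.
    import re

    def split_reasoning(text: str) -> tuple[str, str]:
        """Split a raw completion into (reasoning_trace, final_answer)."""
        match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
        if match:
            return match.group(1).strip(), text[match.end():].strip()
        head, sep, tail = text.partition("</think>")
        return (head.strip(), tail.strip()) if sep else ("", text.strip())

    demo = "<think>(2a+1) + (2b+1) = 2(a+b+1), which is even.</think>The sum is even."
    reasoning, answer = split_reasoning(demo)
    print(answer)  # "The sum is even."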

Training and Architecture

The model was trained on over 600 million tokens from a curated distillation dataset that combines reasoning traces from Josie-Zero-8B with high-quality answer extensions from Anthropic Claude models (Sonnet 3.7/4.0, Opus 4.5/4.6). Fine-tuning was performed with the MLX-LM-LoRA framework on Apple Silicon, demonstrating that high-quality model training is viable on consumer hardware.
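
For readers who want to try the Apple Silicon path themselves, here is a minimal inference sketch using the open-source mlx-lm package as a stand-in; the MLX-LM-LoRA framework named above is a separate project, and whether this repo loads directly or needs conversion is an assumption.

    # Sketch of running the model on Apple Silicon with mlx-lm
    # (pip install mlx-lm). Assumes the Hub checkpoint is loadable by
    # mlx-lm's standard loader; not confirmed for this specific release.
    from mlx_lm import load, generate

    model, tokenizer = load("Goekdeniz-Guelmez/JOSIE-4B-Thinking")

    messages = [{"role": "user", "content": "Outline a proof by induction."}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

    # max_tokens needs headroom for the reasoning trace plus the answer.
    print(generate(model, tokenizer, prompt=prompt, max_tokens=2048))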

Good For

  • Complex Problem Solving: Ideal for tasks requiring chain-of-thought processing in logical, mathematical, and scientific domains.
  • Extended Document Analysis: Suitable for long-form reasoning and multi-document synthesis due to its 65K context window.
  • Direct & Unfiltered Assistance: Best for users seeking straightforward, analytical responses without built-in content moderation.

For general assistance and everyday conversation, consider the companion model, JOSIE-4B-Instruct, which offers a more natural conversational style with a 32K context length.