JOSIE-4B-Thinking is a 4-billion-parameter, full-weight fine-tuned reasoning model developed by Gökdeniz Gülmez, built on the gabliterated Qwen3-4B-Thinking-2507 architecture. It features an extended context length of 65,536 tokens and is optimized for logical reasoning, mathematics, STEM applications, and creative writing. The model provides uncensored, direct responses and was trained on over 600 million tokens, including distilled reasoning traces and high-quality answer refinements.
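Because the model is a Thinking variant, its completions typically contain a chain-of-thought reasoning trace before the final answer. A minimal sketch of post-processing such output, assuming JOSIE-4B-Thinking inherits the ChatML prompt format and `<think>…</think>` reasoning tags from its Qwen3-4B-Thinking-2507 base (both the prompt layout and tag names below are assumptions, not confirmed by this card):

```python
def build_prompt(user_msg: str) -> str:
    # ChatML-style single-turn prompt, as used by Qwen3 chat models
    # (assumed to carry over to JOSIE-4B-Thinking).
    return (
        "<|im_start|>user\n" + user_msg + "<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

def strip_thinking(completion: str) -> str:
    # Thinking models emit their reasoning inside <think>...</think>
    # before the final answer; keep only the answer text.
    marker = "</think>"
    idx = completion.rfind(marker)
    answer = completion[idx + len(marker):] if idx != -1 else completion
    return answer.strip()

# Example completion shape a Thinking model might return:
completion = "<think>2 + 2 equals 4.</think>\nThe answer is 4."
print(strip_thinking(completion))
```

With the long 65,536-token context, reasoning traces can grow large, so separating the trace from the answer like this is useful when only the final response should be shown to users.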