DavidAU/gemma-3-12b-it-vl-Minimax-M2.1-Heretic-Uncensored-Thinking
Vision · Concurrency cost: 1 · Model size: 12B · Quant: FP8 · Context length: 32k · Published: Feb 10, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

DavidAU's gemma-3-12b-it-vl-Minimax-M2.1-Heretic-Uncensored-Thinking is a 12-billion-parameter Gemma 3 fine-tune with a 32,768-token context length. The model is explicitly uncensored and tuned for deep reasoning across general operation, output generation, and image processing, using the Minimax-M2.1 reasoning dataset. It is designed to give direct, detailed responses without refusals, making it suitable for use cases that require explicit or nuanced content generation.
