DavidAU/gemma-3-12b-it-vl-Polaris-GLM-4.7-Flash-VAR-Thinking-Instruct-Heretic-Uncensored
Vision · Concurrency Cost: 1 · Model Size: 12B · Quant: FP8 · Ctx Length: 32k · Published: Feb 3, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

DavidAU's gemma-3-12b-it-vl-Polaris-GLM-4.7-Flash-VAR-Thinking-Instruct-Heretic-Uncensored is a 12B-parameter Gemma fine-tune with a 128k context window. It combines GLM 4.7 Flash reasoning datasets with Polaris non-reasoning datasets, producing a variable thinking/instruct model whose reasoning mode is activated by prompt keywords. The model is fully uncensored, designed to respond directly without refusals, with enhanced reasoning applied to general operation, output generation, and image processing.