DavidAU/gemma-3-12b-it-vl-Deepseek-v3.1-Heretic-Uncensored-Thinking
Text Generation · Vision | Concurrency Cost: 1 | Model Size: 12B | Quant: FP8 | Ctx Length: 32k | Published: Feb 10, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights
DavidAU/gemma-3-12b-it-vl-Deepseek-v3.1-Heretic-Uncensored-Thinking is a 12-billion-parameter Gemma 3 fine-tune by DavidAU with a 128k context window. The model targets uncensored, deep reasoning, drawing on the Deepseek 3.1 reasoning dataset. It produces detailed, direct outputs and applies the same reasoning process to image inputs, making it suitable for use cases that require unrestricted, precise responses.
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.

- temperature: –
- top_p: –
- top_k: –
- frequency_penalty: –
- presence_penalty: –
- repetition_penalty: –
- min_p: –
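Featherless serves models behind an OpenAI-compatible chat completions API, and the sampler fields listed above map onto request parameters. The sketch below builds such a request body for this model; the helper name and the exact set of supported sampler fields are assumptions, so check the Featherless API documentation for the authoritative list.

```python
import json

MODEL = "DavidAU/gemma-3-12b-it-vl-Deepseek-v3.1-Heretic-Uncensored-Thinking"

# Sampler fields shown on the model page; whether the endpoint accepts all
# of them is an assumption.
ALLOWED_SAMPLERS = {
    "temperature", "top_p", "top_k", "frequency_penalty",
    "presence_penalty", "repetition_penalty", "min_p",
}

def build_request(prompt: str, **samplers) -> dict:
    """Hypothetical helper: assemble an OpenAI-style chat request body."""
    body = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    # Only include samplers the caller set, so server defaults apply otherwise.
    body.update({k: v for k, v in samplers.items() if k in ALLOWED_SAMPLERS})
    return body

payload = build_request(
    "Describe this image step by step.",
    temperature=0.7, top_p=0.9, min_p=0.05,
)
print(json.dumps(payload, indent=2))
```

Unset parameters are deliberately omitted from the payload rather than sent as null, so the server's own defaults remain in effect.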