DavidAU/gemma-3-12b-it-vl-Polaris-GLM-4.7-Flash-VAR-Thinking-Instruct-Heretic-Uncensored
DavidAU's gemma-3-12b-it-vl-Polaris-GLM-4.7-Flash-VAR-Thinking-Instruct-Heretic-Uncensored is a 12B-parameter Gemma fine-tune with a 128k context window. It combines the GLM 4.7 Flash reasoning dataset with the Polaris non-reasoning dataset, producing a variable thinking/instruct model that switches modes based on prompt keywords. The model is fully uncensored, designed to respond directly without refusals, and offers enhanced reasoning for general operation, output generation, and image processing.
Model Overview
This model, developed by DavidAU, is a 12-billion parameter Gemma fine-tune, distinguished by its "variable thinking/instruct" capability. It leverages a unique blend of the GLM 4.7 Flash reasoning dataset and the Polaris non-reasoning dataset, trained via Unsloth. This combination allows the model to dynamically activate either an instruction-following or a deep-thinking mode based on the user's prompt keywords. It features a substantial 128k context window and maintains reasoning stability across a temperature range of 0.1 to 2.5.
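Because the card reports stable reasoning only across temperatures 0.1 to 2.5, a caller may want to clamp sampling settings to that range before generating. A minimal sketch, assuming the usual sampling-kwargs convention; the helper name, defaults, and constants are illustrative, not part of the model card:

```python
# Hypothetical helper: keep sampling temperature inside the range the
# model card reports as stable (0.1 to 2.5). Names and defaults are
# assumptions for illustration only.

STABLE_TEMP_MIN = 0.1
STABLE_TEMP_MAX = 2.5

def build_generation_kwargs(temperature: float = 0.7,
                            max_new_tokens: int = 1024) -> dict:
    """Clamp temperature into the card's stated stable range."""
    clamped = min(max(temperature, STABLE_TEMP_MIN), STABLE_TEMP_MAX)
    return {
        "temperature": clamped,
        "max_new_tokens": max_new_tokens,
        "do_sample": True,
    }

print(build_generation_kwargs(3.0)["temperature"])   # clamped down to 2.5
print(build_generation_kwargs(0.05)["temperature"])  # clamped up to 0.1
```

The resulting dict can be passed as keyword arguments to whatever generation API is in use; values outside the stated range are silently pulled back to the nearest stable bound.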
Key Capabilities & Features
- Variable Thinking/Instruct Mode: Automatically adapts its response style based on prompt content, with options to force thinking via "think deeply:" or specific system prompts.
- Uncensored Output: Designed to provide direct answers without refusal, offering full freedom in content generation, including sensitive topics.
- Enhanced Reasoning: Improves general model operation, output generation, and image processing; its reasoning traces are compact yet highly detailed.
- High Context Length: Supports up to 128k tokens, beneficial for complex and lengthy interactions.
- Improved Benchmarks: Demonstrates notable performance improvements over its uncensored base model across various benchmarks, including ARC-Challenge, HellaSwag, and Winogrande.
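The variable thinking/instruct switch described above is driven by prompt content, with "think deeply:" as the documented trigger for forcing the thinking mode. A minimal sketch of how a client might build prompts around that trigger; the helper name and behavior are assumptions, only the "think deeply:" prefix comes from the model card:

```python
# Hypothetical prompt builder for the variable thinking/instruct modes.
# The "think deeply:" trigger is documented by the model card; everything
# else here is an illustrative assumption.

THINKING_TRIGGER = "think deeply:"

def build_prompt(user_text: str, force_thinking: bool = False) -> str:
    """Prepend the thinking trigger when deep reasoning is wanted."""
    if force_thinking and not user_text.lower().startswith(THINKING_TRIGGER):
        return f"{THINKING_TRIGGER} {user_text}"
    return user_text

print(build_prompt("Summarize this contract."))                       # instruct mode
print(build_prompt("Summarize this contract.", force_thinking=True))  # thinking mode
```

The card also notes that a specific system prompt can force thinking mode; that variant is not shown here, since its exact wording is not given in this section.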
When to Use This Model
This model is ideal for applications requiring:
- Direct and Uncensored Responses: For use cases where content filtering or refusals are undesirable.
- Dynamic Reasoning: When prompts might require either straightforward instruction following or deeper, more analytical thought processes.
- Complex Tasks: Its 128k context window and enhanced reasoning make it suitable for intricate problem-solving and detailed content generation.
- Creative and Roleplay Scenarios: Its uncensored nature allows it to be steered toward specific tones or content levels.