Model Overview
DavidAU/Gemma-3-27b-it-Gemini-Deep-Reasoning is a Gemma 27B model, fine-tuned by DavidAU with Unsloth on a "gemini-3-pro-preview-high-reasoning" dataset. Its primary innovation is automatic deep reasoning: the model generates an internal reasoning pass whenever a prompt requires it to "think" beyond direct instruction-following. This noticeably improves output quality across a range of tasks, even when the model operates in standard instruct mode.
Key Capabilities
- Deep Reasoning: Automatically generates compact thought blocks (4-6 paragraphs) for complex prompts, without requiring explicit system prompts in most cases.
- Enhanced Output Quality: Improves the overall quality of generated responses, whether reasoning is explicitly activated or not.
- Image Intelligence: Reasoning extends to image inputs, improving the model's visual understanding and the quality of image-related responses.
- Flexible Activation: While reasoning typically activates automatically, users can force activation with "Think deeply: [prompt]" or by using a specialized Jinja template.
- 128k Context Length: Supports extensive context for processing longer inputs and maintaining coherence.
- Temperature Stability: Reasoning remains stable across a wide temperature range (0.1 to 2.5).
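The two activation routes above (automatic vs. forced via the "Think deeply:" prefix) come down to how the user turn is assembled. A minimal sketch in plain Python, assuming the standard Gemma 3 chat turn markers (`<start_of_turn>`/`<end_of_turn>`); the helper name `build_prompt` is illustrative, not part of the model's API:

```python
def build_prompt(user_prompt: str, force_reasoning: bool = False) -> str:
    """Wrap a user prompt in Gemma-style chat turn format.

    "Think deeply:" is the activation phrase from the model card; when
    force_reasoning is False, the model decides on its own whether to
    emit a thought block.
    """
    if force_reasoning:
        user_prompt = f"Think deeply: {user_prompt}"
    return (
        "<start_of_turn>user\n"
        f"{user_prompt}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# Example: force a reasoning pass for a complex question.
prompt = build_prompt("Why does ice float on water?", force_reasoning=True)
```

In practice you would pass the formatted string to your inference stack, or let a chat template (e.g. the model's Jinja template) handle the turn markers for you.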
Benchmarks and Performance
The model shows measurable improvements over its uncensored base version on several standard benchmarks:
- arc_challenge: 0.590 (vs 0.557 base)
- arc_easy: 0.742 (vs 0.711 base)
- boolq: 0.883 (vs 0.868 base)
- hellaswag: 0.781 (vs 0.533 base)
- openbookqa: 0.458 (vs 0.452 base)
- piqa: 0.822 (vs 0.706 base)
- winogrande: 0.751 (vs 0.695 base)
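The gains above vary widely by task, so absolute and relative deltas are worth separating. A quick sketch in plain Python, with the scores copied from the list (fine-tuned first, base second):

```python
# Benchmark scores from the model card: (fine-tuned, base)
scores = {
    "arc_challenge": (0.590, 0.557),
    "arc_easy":      (0.742, 0.711),
    "boolq":         (0.883, 0.868),
    "hellaswag":     (0.781, 0.533),
    "openbookqa":    (0.458, 0.452),
    "piqa":          (0.822, 0.706),
    "winogrande":    (0.751, 0.695),
}

for task, (tuned, base) in scores.items():
    delta = tuned - base
    rel = 100 * delta / base  # relative improvement over base, in percent
    print(f"{task:13s} +{delta:.3f} ({rel:+.1f}%)")
```

The spread is large: boolq and openbookqa move by under two points, while hellaswag and piqa improve by double-digit relative margins.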
Good For
- Applications requiring advanced reasoning and problem-solving.
- Tasks benefiting from improved output quality and coherence.
- Use cases involving image understanding and intelligent response generation.
- Developers seeking a model that can automatically engage in deeper thought processes without complex prompting.