TheDrummer/Big-Tiger-Gemma-27B-v3
TheDrummer/Big-Tiger-Gemma-27B-v3 is a 27-billion-parameter language model fine-tuned from the Gemma 3 architecture, with a 32,768-token context window. The tune aims for a more neutral tone, less markdown in responses, and improved steerability across diverse themes. It is designed to unlock broader capabilities and handle harder topics with less inherent positivity, and it is reported to be vision-capable.
Big Tiger Gemma 27B v3 Overview
Big Tiger Gemma 27B v3 is a 27-billion-parameter model fine-tuned from the Gemma 3 architecture, offering enhanced capabilities and a distinct conversational style. With its 32,768-token context window, the model aims to produce more nuanced, less overtly positive interactions, particularly on challenging subjects.
Key Capabilities
- Neutral Tone: Engineered to exhibit a more neutral conversational tone, especially for sensitive or complex topics.
- Reduced Markdown Output: Prioritizes paragraph-based responses over excessive markdown formatting.
- Improved Steerability: Offers better control and adaptability for navigating a wider range of themes and user prompts.
- Vision Capability: The model is reported to be vision-capable, opening the door to multimodal applications.
Good For
- Applications requiring a less inherently positive or more objective AI persona.
- Use cases where detailed, paragraph-style text generation is preferred over markdown-heavy outputs.
- Scenarios demanding fine-grained control over thematic responses and content generation.
- Exploratory multimodal tasks leveraging its vision capabilities.
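Since the model is typically reached through an OpenAI-compatible chat-completions endpoint, the sketch below shows one way to assemble and send a request with the official `openai` Python client. The base URL, environment variable name, and sampling values here are illustrative assumptions, not documented defaults for this model.

```python
import os

MODEL_ID = "TheDrummer/Big-Tiger-Gemma-27B-v3"

def build_request(prompt: str, *, temperature: float = 0.8,
                  top_p: float = 0.95, max_tokens: int = 512) -> dict:
    """Assemble a chat-completions payload for an OpenAI-compatible API.

    The sampling values are placeholder assumptions; tune them for your
    use case.
    """
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "top_p": top_p,
        "max_tokens": max_tokens,
    }

if __name__ == "__main__":
    payload = build_request("Summarize the trade-offs of neutral-tone tuning.")
    # FEATHERLESS_API_KEY and the base URL below are assumed names, not
    # documented values; substitute your provider's real endpoint and key.
    api_key = os.environ.get("FEATHERLESS_API_KEY")
    if api_key:
        from openai import OpenAI  # requires `pip install openai`
        client = OpenAI(base_url="https://api.featherless.ai/v1",
                        api_key=api_key)
        resp = client.chat.completions.create(**payload)
        print(resp.choices[0].message.content)
    else:
        print(payload)  # offline: just show the request that would be sent
```

Without an API key set, the script simply prints the payload, which makes it easy to inspect or log the exact sampling configuration before sending it to a live endpoint.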