unsloth/gemma-3-27b-it
Hugging Face
Vision · Concurrency Cost: 2 · Model Size: 27B · Quant: FP8 · Ctx Length: 32K · Published: Mar 12, 2025 · License: gemma · Architecture: Transformer

The unsloth/gemma-3-27b-it model is a 27-billion-parameter instruction-tuned variant from Google DeepMind's Gemma 3 family, built with the same research and technology as the Gemini models. This multimodal model accepts text and image inputs and generates text outputs, and supports a 128K-token context window (served here with a 32K context length). It excels at text generation, image understanding, question answering, summarization, and reasoning across more than 140 languages.


Gemma 3 27B Instruction-Tuned Model

This model is a 27 billion parameter instruction-tuned variant from the Gemma 3 family, developed by Google DeepMind. It is built upon the same research and technology as the Gemini models, offering open weights for both pre-trained and instruction-tuned versions. The model is multimodal, capable of processing both text and image inputs (normalized to 896x896 resolution, encoded to 256 tokens each) and generating text outputs.
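Multimodal input is usually supplied as interleaved content parts. Below is a hedged sketch of the chat-message shape many Gemma 3 serving stacks accept (the `image_url`/`text` field names follow the common OpenAI-style vision format and are an assumption, not taken from this card); the 896x896 normalization and 256-token-per-image cost are from the card itself.

```python
# Sketch: one multimodal chat message mixing an image and text.
# The image is referenced by URL; the server normalizes it to
# 896x896 and encodes it to 256 tokens, per this model card.

message = {
    "role": "user",
    "content": [
        # hypothetical URL, for illustration only
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}

# Each image part costs a fixed 256 input tokens after normalization,
# regardless of the original image resolution.
image_parts = [p for p in message["content"] if p["type"] == "image_url"]
print(len(image_parts) * 256)  # 256
```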

Key Capabilities

  • Multimodality: Processes text and image inputs, generating text outputs.
  • Large Context Window: Supports a total input context of 128K tokens and an output context of 8192 tokens.
  • Multilingual Support: Trained on data including content in over 140 languages.
  • Diverse Training: Trained on 14 trillion tokens, encompassing web documents, code, mathematics, and images.
  • Performance: Achieves strong results across various benchmarks, including 85.6 on HellaSwag (10-shot), 78.6 on MMLU (5-shot), and 65.6 on MBPP (3-shot).
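As a rough illustration of the context figures above, the sketch below computes how many input tokens remain for text once images are attached. The 256-tokens-per-image, 128K-input, and 8192-output figures come from this card; note the hosted deployment caps context at 32K, so substitute that limit where applicable.

```python
# Sketch: estimate the text-token budget for a Gemma 3 27B request.
# Constants are taken from this model card; adjust MAX_INPUT_TOKENS
# to your deployment's actual context limit (e.g. 32K here).

TOKENS_PER_IMAGE = 256      # each 896x896-normalized image -> 256 tokens
MAX_INPUT_TOKENS = 128_000  # model's total input context
MAX_OUTPUT_TOKENS = 8_192   # model's output context

def text_token_budget(num_images: int, max_input: int = MAX_INPUT_TOKENS) -> int:
    """Tokens left for the text prompt after accounting for image tokens."""
    budget = max_input - num_images * TOKENS_PER_IMAGE
    if budget < 0:
        raise ValueError("too many images for the context window")
    return budget

print(text_token_budget(0))   # 128000
print(text_token_budget(10))  # 125440
```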

Good For

  • Content Creation: Generating creative text formats, marketing copy, and email drafts.
  • Conversational AI: Powering chatbots and virtual assistants.
  • Summarization: Creating concise summaries of documents and research papers.
  • Image Understanding: Extracting, interpreting, and summarizing visual data.
  • Research & Education: Serving as a foundation for VLM and NLP research, and supporting language learning tools.
Popular Sampler Settings

The top three parameter combinations used by Featherless users for this model (shown in the interactive tabs) cover: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p.
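The parameters listed above map onto the sampling fields of an OpenAI-compatible chat completions request. Here is a hedged sketch of such a payload; the values are illustrative placeholders, not the actual top user configurations, and `top_k`, `repetition_penalty`, and `min_p` are extension fields supported by many open-source serving backends rather than the core OpenAI schema.

```python
import json

# Sketch: a sampler configuration for unsloth/gemma-3-27b-it as an
# OpenAI-compatible chat completions payload. Values are illustrative.
payload = {
    "model": "unsloth/gemma-3-27b-it",
    "messages": [{"role": "user", "content": "Summarize Gemma 3 in one line."}],
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 64,                 # extension field on many OSS servers
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.0,   # extension field: >1.0 discourages repeats
    "min_p": 0.05,               # extension field: minimum-probability cutoff
}

print(json.dumps(payload, indent=2))
```

POST this JSON to your provider's `/v1/chat/completions` endpoint with your usual HTTP client and auth headers.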