google/gemma-3-27b-it
Available on Hugging Face

Modalities: vision + text · Concurrency cost: 2 · Model size: 27B · Quantization: FP8 · Context length: 32k · Published: Mar 1, 2025 · License: Gemma · Architecture: Transformer · 2.0K · Gated · Warm

Gemma 3 is a family of lightweight, state-of-the-art open models from Google DeepMind, built from the same research and technology used to create the Gemini models. The 27 billion parameter instruction-tuned variant (gemma-3-27b-it) is multimodal, handling text and image input with a large 128K context window, and generates text output. It offers multilingual support in over 140 languages and is well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning.


Gemma 3: Multimodal, Multilingual, and Efficient

Gemma 3 is a family of lightweight, open models developed by Google DeepMind, leveraging the same research and technology as the Gemini models. The gemma-3-27b-it variant is a 27 billion parameter instruction-tuned model designed for multimodal interactions, accepting both text and image inputs to generate text outputs.

Key Capabilities & Features

  • Multimodal Input: Processes text strings and images (normalized to 896x896 resolution, encoded to 256 tokens each).
  • Large Context Window: Supports a total input context of 128K tokens, enabling comprehensive understanding of long inputs.
  • Multilingual Support: Offers robust performance across over 140 languages.
  • Versatile Output: Generates text for tasks like question answering, image content analysis, and document summarization.
  • Efficiency: Its relatively small size compared to larger models allows for deployment in resource-limited environments such as laptops, desktops, or private cloud infrastructure.
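Because each image is encoded to a fixed 256 tokens, estimating how much of the context window a multimodal prompt consumes is simple arithmetic. A minimal sketch (the 128K window and 256-token figure come from the list above; the example prompt size is an illustrative assumption):

```python
# Each image is normalized to 896x896 and encoded to a fixed 256 tokens,
# so the total input budget is easy to estimate up front.
CONTEXT_WINDOW = 128_000   # total input context, in tokens
TOKENS_PER_IMAGE = 256     # fixed encoding size per image

def remaining_budget(text_tokens: int, num_images: int) -> int:
    """Tokens left in the context window after text and image inputs."""
    used = text_tokens + num_images * TOKENS_PER_IMAGE
    return CONTEXT_WINDOW - used

# e.g. a 2,000-token text prompt with four images:
print(remaining_budget(2_000, 4))  # 124976
```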

Performance Highlights (27B model)

  • Reasoning: Achieves 85.6 on HellaSwag (10-shot) and 77.7 on BIG-Bench Hard (few-shot).
  • STEM & Code: Scores 78.6 on MMLU (5-shot) and 65.6 on MBPP (3-shot).
  • Multilingual: Reaches 74.3 on MGSM and 75.7 on Global-MMLU-Lite.
  • Multimodal: Demonstrates strong performance on benchmarks like COCOcap (116), DocVQA (85.6), and MMMU (56.1).

Intended Usage

This model is suitable for a wide range of applications, including content creation (text generation, chatbots, summarization), image data extraction, and research on natural language processing (NLP) and vision-language models (VLMs). It serves as a foundation for experimenting with advanced AI techniques and for building interactive applications.
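As an illustration of the multimodal input described above, a chat message pairing an image with a text question might look like the following. The content schema follows the common OpenAI-style vision format, which is an assumption here rather than a documented requirement of this deployment; the URL is a placeholder.

```python
# Illustrative multimodal chat message (schema and URL are assumptions).
message = {
    "role": "user",
    "content": [
        # Image part: fetched, normalized to 896x896, encoded to 256 tokens.
        {"type": "image_url",
         "image_url": {"url": "https://example.com/chart.png"}},
        # Text part: the question the model should answer about the image.
        {"type": "text",
         "text": "What trend does this chart show?"},
    ],
}
```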

Popular Sampler Settings

The most popular Featherless user configurations for this model tune the following sampler parameters: temperature, top_p, top_k, min_p, frequency_penalty, presence_penalty, and repetition_penalty.
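A minimal sketch of a chat-completion request carrying these sampler parameters, assuming an OpenAI-compatible endpoint. The specific values shown are illustrative assumptions, not recommended or documented settings for this model:

```python
import json

# Illustrative request payload; all sampler values below are assumptions.
payload = {
    "model": "google/gemma-3-27b-it",
    "messages": [
        {"role": "user", "content": "Summarize this document in two sentences."},
    ],
    # Sampler parameters listed on this page:
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "min_p": 0.05,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
}

# Serialized, this is the JSON body an OpenAI-compatible client would POST.
body = json.dumps(payload, indent=2)
print(body)
```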