Overview
Gemma 3: Multimodal, Multilingual, and Efficient
Gemma 3 is a family of lightweight, open models developed by Google DeepMind, leveraging the same research and technology as the Gemini models. The gemma-3-27b-it variant is a 27 billion parameter instruction-tuned model designed for multimodal interactions, accepting both text and image inputs to generate text outputs.
Key Capabilities & Features
- Multimodal Input: Processes text strings and images (normalized to 896x896 resolution, encoded to 256 tokens each).
- Large Context Window: Supports a total input context of 128K tokens, enabling comprehensive understanding of long inputs.
- Multilingual Support: Offers robust performance across over 140 languages.
- Versatile Output: Generates text for tasks like question answering, image content analysis, and document summarization.
- Efficiency: Its comparatively small footprint allows deployment in resource-limited environments such as laptops, desktops, or private cloud infrastructure.
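The context figures above imply a simple token budget: each image consumes a fixed 256 tokens out of the 128K-token input window. A minimal sketch of that arithmetic, assuming "128K" means 128 x 1024 = 131,072 tokens (the exact figure may differ by runtime):

```python
# Sketch of the input-token budget, using the figures from this card:
# a 128K-token input context and 256 tokens per 896x896-normalized image.
# The assumption 128K == 128 * 1024 is ours, not stated in the card.

CONTEXT_TOKENS = 128 * 1024   # assumed: 128K = 131,072 tokens
TOKENS_PER_IMAGE = 256        # each image encodes to 256 tokens

def remaining_text_budget(num_images: int) -> int:
    """Tokens left for text after reserving space for num_images images."""
    used = num_images * TOKENS_PER_IMAGE
    if used > CONTEXT_TOKENS:
        raise ValueError("images alone exceed the context window")
    return CONTEXT_TOKENS - used
```

For example, a prompt carrying 4 images would still leave roughly 130,048 tokens for text under these assumptions.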
Performance Highlights (27B model)
- Reasoning: Achieves 85.6 on HellaSwag (10-shot) and 77.7 on BIG-Bench Hard (few-shot).
- STEM & Code: Scores 78.6 on MMLU (5-shot) and 65.6 on MBPP (3-shot).
- Multilingual: Reaches 74.3 on MGSM and 75.7 on Global-MMLU-Lite.
- Multimodal: Demonstrates strong performance on benchmarks like COCOcap (116), DocVQA (85.6), and MMMU (56.1).
Intended Usage
This model is suitable for a wide range of applications, including content creation (text generation, chatbots, summarization), extracting structured information from image data, and research in natural language processing (NLP) and vision-language models (VLMs). It serves as a foundation for experimenting with advanced AI techniques and building interactive applications.
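To make the multimodal input concrete, here is a hedged sketch of how a mixed image-and-text chat turn is commonly structured. The role/content layout follows the widely used Hugging Face chat-template convention for image-text-to-text models; the exact keys a given gemma-3-27b-it runtime accepts may differ, and `build_turn` and the URL are illustrative names of our own:

```python
# Illustrative only: one common schema for a multimodal chat turn
# (Hugging Face chat-template convention). build_turn is a hypothetical
# helper, not part of any Gemma API.

def build_turn(image_url: str, question: str) -> dict:
    """Build a single user turn mixing an image reference and a text question."""
    return {
        "role": "user",
        "content": [
            {"type": "image", "url": image_url},
            {"type": "text", "text": question},
        ],
    }

messages = [
    build_turn("https://example.com/invoice.png",
               "Summarize the line items in this document."),
]
```

A list like `messages` would then be passed to the model's processor or serving endpoint, which tokenizes the text and encodes each image into its fixed 256-token representation.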