google/gemma-3-4b-it

Warm · Public · Vision · 4.3B parameters · BF16 · 32768-token context · Feb 20, 2025 · License: gemma · Hugging Face (Gated)
Overview

Google DeepMind's Gemma 3 models are a family of lightweight, open-weight multimodal models built on the same research and technology as the Gemini models. This variant, google/gemma-3-4b-it, is an instruction-tuned model with 4.3 billion parameters that accepts both text and image input and generates text output. It features a substantial 128K token context window and supports over 140 languages.

Key Capabilities

  • Multimodal Understanding: Processes both text and images (normalized to 896x896 resolution, encoded to 256 tokens each) to generate textual responses.
  • Extensive Context: Utilizes a large 128K token context window for comprehensive understanding and generation.
  • Multilingual Support: Trained on data including over 140 languages, enhancing its global applicability.
  • Versatile Task Performance: Proficient in tasks such as question answering, summarization, reasoning, and content creation.
  • Resource-Efficient Deployment: Its compact size makes it suitable for deployment on devices with limited resources, including laptops and desktops.
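The 256-token-per-image figure follows from how the vision encoder patches the normalized image. A back-of-envelope sketch, assuming a SigLIP-style 14-pixel patch size and 4x4 pooling of patch embeddings (neither detail is stated in this card):

```python
# Sketch: why a 896x896 image costs 256 tokens, under the assumptions above.
image_side = 896   # images are normalized to 896x896 (stated in this card)
patch_side = 14    # assumed SigLIP patch size
pool_side = 4      # assumed 4x4 average pooling of patch embeddings

patches_per_side = image_side // patch_side    # 64 patches per side
raw_patches = patches_per_side ** 2            # 4096 raw patch embeddings
image_tokens = raw_patches // pool_side ** 2   # pooled down to 256 tokens

print(image_tokens)  # 256
```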

Performance Highlights

Gemma 3 models demonstrate strong performance across various benchmarks:

  • Reasoning & Factual Accuracy: Achieves 77.2 on HellaSwag (10-shot) and 82.4 on ARC-e (0-shot).
  • STEM & Code: Scores 59.6 on MMLU (5-shot) and 36.0 on HumanEval (0-shot).
  • Multilingual: Reaches 34.7 on MGSM and 57.0 on Global-MMLU-Lite.
  • Multimodal: Achieves 102 on COCOcap and 72.8 on DocVQA (val).

Intended Usage

This model is well-suited for:

  • Content Creation: Generating creative text formats, marketing copy, and email drafts.
  • Conversational AI: Powering chatbots, virtual assistants, and interactive applications.
  • Text Summarization: Creating concise summaries of documents and research papers.
  • Image Data Extraction: Interpreting visual data and summarizing it as text for downstream communication.
  • Research & Education: Serving as a foundation for VLM/NLP research and language learning tools.
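For the multimodal use cases above, a request is typically expressed as a chat turn mixing an image and a text prompt. A minimal sketch of that message format, assuming the Hugging Face transformers "image-text-to-text" pipeline (the model ID is real; the helper name and URL are illustrative):

```python
# Hypothetical helper: assemble one multimodal chat turn in the
# content-list format used by multimodal chat templates.
def build_messages(image_url: str, question: str) -> list:
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": question},
            ],
        }
    ]

messages = build_messages("https://example.com/chart.png",
                          "Summarize this chart in two sentences.")

# Actual inference (requires accepting the gated license and
# downloading the weights; shown for illustration only):
# from transformers import pipeline
# pipe = pipeline("image-text-to-text", model="google/gemma-3-4b-it")
# out = pipe(text=messages, max_new_tokens=128)
```

Because the model is gated, the weights must be requested on Hugging Face before the commented-out inference call will run.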