google/gemma-3-4b-it
Hugging Face
VISION · Concurrency cost: 1 · Model size: 4.3B · Quant: BF16 · Ctx length: 32k · Published: Feb 20, 2025 · License: gemma · Architecture: Transformer · 1.3K · Gated · Warm

Gemma 3 is a family of lightweight, state-of-the-art open models from Google DeepMind, built from the same research and technology used to create the Gemini models. This 4.3 billion parameter instruction-tuned variant is multimodal, handling text and image input to generate text output, and supports a large 128K context window. It excels at a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning, with multilingual support in over 140 languages. Its relatively small size allows for deployment in resource-limited environments like laptops and desktops.


Overview

Google DeepMind's Gemma 3 models are a family of lightweight, open-weight multimodal models built on the same research and technology as the Gemini models. This specific variant, google/gemma-3-4b-it, is an instruction-tuned model with 4.3 billion parameters that accepts text and image input and generates text output. It features a substantial 128K token context window and supports over 140 languages.

Key Capabilities

  • Multimodal Understanding: Processes both text and images (normalized to 896x896 resolution, encoded to 256 tokens each) to generate textual responses.
  • Extensive Context: Utilizes a large 128K token context window for comprehensive understanding and generation.
  • Multilingual Support: Trained on data including over 140 languages, enhancing its global applicability.
  • Versatile Task Performance: Proficient in tasks such as question answering, summarization, reasoning, and content creation.
  • Resource-Efficient Deployment: Its compact size makes it suitable for deployment on devices with limited resources, including laptops and desktops.
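The multimodal input path described above is typically driven through a chat-message structure in which a single user turn carries both an image part and a text part. The sketch below is a minimal illustration in plain Python; the exact content-part schema a given processor's chat template accepts is an assumption here, modeled on the common Hugging Face VLM format.

```python
# Sketch: build a chat turn combining an image and a text prompt, in the
# list-of-content-parts shape commonly consumed by multimodal chat templates.
# The field names ("type", "url", "text") are assumptions, not a guaranteed API.

def build_multimodal_turn(image_url: str, question: str) -> dict:
    """Return one user turn carrying an image part and a text part."""
    return {
        "role": "user",
        "content": [
            # The model normalizes each image to 896x896 and encodes it
            # to 256 tokens, per the capabilities listed above.
            {"type": "image", "url": image_url},
            {"type": "text", "text": question},
        ],
    }

messages = [
    {"role": "system",
     "content": [{"type": "text", "text": "You are a helpful assistant."}]},
    build_multimodal_turn("https://example.com/chart.png",
                          "Summarize this chart."),
]
```

Keeping the image and question in one turn lets the template interleave the 256 image tokens directly before the question text, which is how instruction-tuned VLMs are usually prompted.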

Performance Highlights

Gemma 3 models demonstrate strong performance across various benchmarks:

  • Reasoning & Factual Accuracy: Achieves 77.2 on HellaSwag (10-shot) and 82.4 on ARC-e (0-shot).
  • STEM & Code: Scores 59.6 on MMLU (5-shot) and 36.0 on HumanEval (0-shot).
  • Multilingual: Reaches 34.7 on MGSM and 57.0 on Global-MMLU-Lite.
  • Multimodal: Achieves 102 on COCOcap and 72.8 on DocVQA (val).

Intended Usage

This model is well-suited for:

  • Content Creation: Generating creative text formats, marketing copy, and email drafts.
  • Conversational AI: Powering chatbots, virtual assistants, and interactive applications.
  • Text Summarization: Creating concise summaries of documents and research papers.
  • Image Data Extraction: Interpreting and summarizing visual data for text communications.
  • Research & Education: Serving as a foundation for VLM/NLP research and language learning tools.
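As a concrete example of the summarization use case, the sketch below builds a request payload in the OpenAI-compatible chat-completions shape that many hosting providers expose; that this is the serving interface here, and the specific field values, are assumptions for illustration.

```python
# Sketch: a text-summarization request payload in the widely used
# OpenAI-compatible chat-completions format. The serving endpoint and
# parameter defaults are assumptions, not documented specifics.

def summarization_request(document: str, max_tokens: int = 256) -> dict:
    """Build a chat-completions payload asking the model to summarize text."""
    return {
        "model": "google/gemma-3-4b-it",
        "messages": [
            {"role": "system",
             "content": "Summarize the user's document in three sentences."},
            {"role": "user", "content": document},
        ],
        "max_tokens": max_tokens,
    }

payload = summarization_request("Long research paper text goes here...")
```

The same payload shape covers the conversational and content-creation uses above by swapping the system instruction and user content.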

Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model cover the following sampler parameters (the specific values are shown per configuration on the interactive page):

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
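These sampler parameters are typically passed alongside the generation request. The sketch below is a small, hypothetical helper that merges them into a request payload and rejects unknown keys; the values used are illustrative placeholders, not the user configurations referenced above.

```python
# Sketch: attach the sampler parameters listed above to a generation request.
# The helper and the example values are illustrative assumptions, not the
# actual Featherless user configurations.

SAMPLER_KEYS = (
    "temperature", "top_p", "top_k", "frequency_penalty",
    "presence_penalty", "repetition_penalty", "min_p",
)

def with_sampler_settings(payload: dict, **settings) -> dict:
    """Return a copy of `payload` with sampler settings merged in.

    Raises ValueError for keys outside the supported sampler set.
    """
    unknown = set(settings) - set(SAMPLER_KEYS)
    if unknown:
        raise ValueError(f"unknown sampler keys: {sorted(unknown)}")
    return {**payload, **settings}

request = with_sampler_settings(
    {"model": "google/gemma-3-4b-it", "messages": []},
    temperature=0.7, top_p=0.9, min_p=0.05,  # placeholder values
)
```

Validating keys up front catches typos like `repetition_penality` before the request reaches the server, where such fields may be silently ignored.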