google/gemma-3-4b-it-qat-q4_0-unquantized

  • Modality: Vision + text
  • Parameters: 4.3B
  • Weight precision: BF16 (unquantized QAT checkpoint)
  • Context length: 32,768 tokens
  • Released: Apr 8, 2025
  • License: gemma (gated, hosted on Hugging Face)
Overview

Gemma 3 4B Instruction-Tuned QAT Model

This model is a 4.3 billion parameter instruction-tuned variant of the Gemma 3 family, developed by Google DeepMind. It leverages Quantization Aware Training (QAT) to deliver performance comparable to bfloat16 models while significantly reducing memory requirements, making it efficient for deployment on devices with limited resources.
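The memory savings can be sketched with back-of-the-envelope arithmetic. The numbers below assume the Q4_0 block layout used by llama.cpp's GGUF format (blocks of 32 weights stored as 4-bit values plus one fp16 scale, i.e. 4.5 bits per weight); the parameter count comes from the model card above.

```python
# Rough memory-footprint comparison for weights only (activations,
# KV cache, and runtime overhead are not included).
PARAMS = 4.3e9  # 4.3B parameters, per the model card

def gib(nbytes: float) -> float:
    """Convert a byte count to GiB."""
    return nbytes / 2**30

# BF16: 16 bits (2 bytes) per weight.
bf16_bytes = PARAMS * 2

# Q4_0 (assumed GGUF layout): 18 bytes per block of 32 weights
# = 4.5 bits per weight on average.
q4_0_bytes = PARAMS * 18 / 32

print(f"BF16 : {gib(bf16_bytes):.1f} GiB")   # ~8.0 GiB
print(f"Q4_0 : {gib(q4_0_bytes):.1f} GiB")   # ~2.3 GiB
```

Roughly a 3.5x reduction in weight storage, which is what makes the model practical on consumer hardware.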

Key Capabilities

  • Multimodal Understanding: Processes both text and image inputs (normalized to 896x896 resolution, encoded to 256 tokens each) to generate text outputs.
  • Extensive Context Window: Supports a total input context of 32K tokens, enabling processing of longer and more complex prompts.
  • Multilingual Support: Trained on data covering over 140 languages, enhancing its utility for global applications.
  • Broad Task Performance: Excels in text generation, image understanding, question answering, summarization, and reasoning tasks.
  • Optimized for Efficiency: QAT produces an unquantized checkpoint that can be quantized to Q4_0 with minimal quality loss, preserving bfloat16-level performance while minimizing memory footprint.
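Because each image costs a fixed 256 tokens, the context budget for text can be computed directly from the figures above. A minimal sketch (the constants are taken from the model card; the helper name is illustrative):

```python
# Each input image is resized to 896x896 and encoded to a fixed
# 256 tokens; the total input context is 32,768 tokens.
CONTEXT_TOKENS = 32768
TOKENS_PER_IMAGE = 256

def remaining_text_budget(num_images: int) -> int:
    """Tokens left for text after accounting for image tokens."""
    return CONTEXT_TOKENS - num_images * TOKENS_PER_IMAGE

# With four images attached, most of the context is still free for text.
print(remaining_text_budget(4))  # 31744
```

Even a prompt with a dozen images consumes under 10% of the context window, so long documents and multi-image prompts can coexist.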

Good For

  • Resource-Constrained Environments: Ideal for deployment on laptops, desktops, or cloud infrastructure where memory and computational resources are limited.
  • Content Creation: Generating creative text formats, marketing copy, email drafts, and powering chatbots.
  • Research & Education: Serving as a foundation for VLM and NLP research, language learning tools, and knowledge exploration.
  • Image Data Extraction: Extracting, interpreting, and summarizing visual data for use in text form.