google/gemma-3-12b-it-qat-q4_0-unquantized
Vision · Concurrency cost: 1 · Model size: 12B · Quant: FP8 · Context length: 32K · Published: Apr 8, 2025 · License: Gemma · Architecture: Transformer

The google/gemma-3-12b-it-qat-q4_0-unquantized model is a 12-billion-parameter instruction-tuned multimodal language model from Google DeepMind's Gemma 3 family. It accepts both text and image inputs, generates text outputs, and offers a 32K-token context window. This version was trained with Quantization Aware Training (QAT), which preserves quality when the weights are quantized to Q4_0, making it suitable for resource-constrained environments.

Gemma 3 12B Instruction-Tuned (QAT)

This model is the 12 billion parameter instruction-tuned variant from Google DeepMind's Gemma 3 family, specifically designed with Quantization Aware Training (QAT). This allows it to maintain high quality when quantized to Q4_0, significantly reducing memory requirements for deployment.

Key Capabilities

  • Multimodal: Processes both text and image inputs (images normalized to 896x896, encoded to 256 tokens each) and generates text outputs.
  • Large Context Window: Features a 32,768 token input context for this 12B model, and an 8,192 token output context.
  • Multilingual Support: Trained on data spanning more than 140 languages.
  • Optimized for Efficiency: QAT enables near bfloat16 quality with reduced memory footprint after Q4_0 quantization.
  • Broad Task Performance: Excels in text generation, image understanding, question answering, summarization, and reasoning tasks.
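
The memory savings mentioned above can be made concrete with a back-of-the-envelope calculation. The sketch below assumes the GGUF Q4_0 layout (blocks of 32 four-bit weights plus one fp16 scale, i.e. 4.5 bits per weight) and a nominal 12B parameter count; real deployments also need room for the KV cache and activations.

```python
# Illustrative weight-memory estimate: bf16 vs. Q4_0 for a 12B model.
PARAMS = 12e9  # nominal parameter count (assumption; exact count differs slightly)

def weight_gib(bits_per_weight: float, params: float = PARAMS) -> float:
    """GiB needed to store just the weights at a given precision."""
    return params * bits_per_weight / 8 / 1024**3

bf16_gib = weight_gib(16)
# GGUF Q4_0 packs 32 four-bit weights plus one fp16 scale per block:
# (32 * 4 + 16) bits / 32 weights = 4.5 bits per weight.
q4_0_gib = weight_gib(4.5)

print(f"bf16 : ~{bf16_gib:.1f} GiB")  # roughly 22 GiB
print(f"Q4_0 : ~{q4_0_gib:.1f} GiB")  # roughly 6 GiB
```

At about 3.6x smaller than bf16, the Q4_0 weights fit comfortably in the memory of a consumer GPU or a laptop, which is the scenario QAT is designed for.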

Good For

  • Resource-Constrained Deployment: Thanks to QAT's reduced memory footprint, well suited to laptops, desktops, or cloud infrastructure where memory efficiency is critical.
  • Multimodal Applications: Developing applications that require understanding and generating text based on both textual and visual information.
  • General Text Generation: Creating diverse text formats, chatbots, and conversational AI.
  • Research and Education: Serving as a foundation for VLM and NLP research, language learning tools, and knowledge exploration.

Popular Sampler Settings

The three most common parameter combinations used by Featherless users for this model tune the following samplers:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p