Overview
Gemma 3 12B Instruction-Tuned (QAT)
This model is the 12-billion-parameter instruction-tuned variant of Google DeepMind's Gemma 3 family, trained with Quantization-Aware Training (QAT). QAT lets the model retain high quality when quantized to Q4_0, significantly reducing the memory required for deployment.
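To make the memory savings concrete, here is a back-of-the-envelope estimate in Python. It assumes the standard GGUF Q4_0 block layout (32 weights stored as 4-bit values plus one fp16 scale, i.e. 18 bytes per 32 weights) and counts only the weights, ignoring the KV cache and activations.

```python
# Rough weight-memory estimate: bfloat16 vs. Q4_0 (GGUF block layout).
# Assumption: 12e9 parameters; Q4_0 stores 32 weights in 18 bytes
# (16 bytes of 4-bit quants + 2 bytes of fp16 scale) = 4.5 bits/weight.

PARAMS = 12e9

bf16_bytes = PARAMS * 2        # 16 bits per weight
q4_0_bytes = PARAMS * 18 / 32  # 4.5 bits per weight

print(f"bfloat16 weights: {bf16_bytes / 1e9:.1f} GB")      # ~24.0 GB
print(f"Q4_0 weights:     {q4_0_bytes / 1e9:.1f} GB")      # ~6.8 GB
print(f"reduction:        {bf16_bytes / q4_0_bytes:.1f}x")  # ~3.6x
```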
Key Capabilities
- Multimodal: Processes both text and image inputs (images are normalized to 896x896 and encoded to 256 tokens each) and generates text outputs; see the sketch after this list.
- Large Context Window: Offers a 32,768-token input context for this 12B model and an 8,192-token output limit.
- Multilingual Support: Trained on data covering more than 140 languages.
- Optimized for Efficiency: QAT enables near-bfloat16 quality with a reduced memory footprint after Q4_0 quantization.
- Broad Task Performance: Excels at text generation, image understanding, question answering, summarization, and reasoning tasks.
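A minimal multimodal sketch using Hugging Face transformers (requires a release with Gemma 3 support). The model id and image URL are assumptions, and the bfloat16 checkpoint is shown here since the QAT release primarily targets GGUF runtimes.

```python
# Minimal multimodal sketch: one image plus a text prompt.
# Assumptions: model id "google/gemma-3-12b-it" and the example image URL.
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "google/gemma-3-12b-it"  # assumed HF id of the instruction-tuned checkpoint
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# The processor resizes the image to 896x896 and encodes it as 256 tokens.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/cat.jpg"},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

with torch.inference_mode():
    out = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens.
print(processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```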
Good For
- Resource-Constrained Deployment: QAT's smaller footprint makes it well suited to laptops, desktops, and memory-limited cloud instances; a minimal local-inference sketch follows this list.
- Multimodal Applications: Building applications that generate text from combined textual and visual inputs.
- General Text Generation: Creating diverse text formats, chatbots, and conversational AI.
- Research and Education: Serving as a foundation for VLM and NLP research, language learning tools, and knowledge exploration.
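For the resource-constrained case above, here is a minimal local-inference sketch using llama-cpp-python. The repository and file names follow Google's published QAT GGUFs but should be treated as assumptions; adjust them to match the actual release.

```python
# Minimal local inference sketch (pip install llama-cpp-python huggingface-hub).
# Repo and file names are assumptions based on Google's published QAT GGUFs.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="google/gemma-3-12b-it-qat-q4_0-gguf",  # assumed Hugging Face repo
    filename="gemma-3-12b-it-q4_0.gguf",            # assumed GGUF file name
    n_ctx=8192,  # context window to allocate; raise as memory allows
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize QAT in two sentences."}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```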