The google/gemma-3-27b-it-qat-q4_0-unquantized model is a 27-billion-parameter instruction-tuned multimodal language model from Google's Gemma 3 family, built from the same research and technology as the Gemini models. It accepts text and image inputs, generates text outputs, and offers a 128K-token context window with multilingual support across more than 140 languages. This checkpoint holds the unquantized weights of the quantization-aware trained (QAT) model: the QAT process prepares the weights to be quantized (for example, to Q4_0) while retaining quality close to the original bfloat16 model at a substantially smaller memory footprint. It is well suited to diverse text generation, image understanding, and reasoning tasks, including deployment in resource-limited environments.
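A rough back-of-envelope sketch of the memory savings the QAT-to-Q4_0 path targets. The figures are approximations I am assuming here, not from the card: bfloat16 stores 16 bits per weight, and the common Q4_0 block format (32 weights sharing one fp16 scale) works out to about 4.5 bits per weight.

```python
def model_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight-storage size in GiB (ignores activations, KV cache, etc.)."""
    return n_params * bits_per_weight / 8 / 2**30

N = 27e9  # approximate parameter count
bf16 = model_size_gib(N, 16)    # bfloat16: 16 bits per weight
q4_0 = model_size_gib(N, 4.5)   # Q4_0: ~4.5 bits per weight (assumed block format)

print(f"bf16 ~= {bf16:.1f} GiB, Q4_0 ~= {q4_0:.1f} GiB, ratio ~= {bf16 / q4_0:.1f}x")
```

So, under these assumptions, the quantized model needs roughly a quarter of the weight memory of the bfloat16 original, which is what makes a 27B model practical on consumer GPUs.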