namgyu-youn/gemma-3-27b-it-AWQ-INT4
VISION · Concurrency Cost: 2 · Model Size: 27B · Quant: INT4 (AWQ) · Ctx Length: 32k · Published: Feb 23, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

namgyu-youn/gemma-3-27b-it-AWQ-INT4 is a 27-billion-parameter instruction-tuned Gemma 3 model, published by namgyu-youn, with weights quantized to 4-bit integers (INT4) using AWQ for efficient deployment. The quantization reduces the memory footprint and speeds up inference, making the model suitable for environments with limited computational resources. It retains the core capabilities of the gemma-3-27b-it base model while gaining significant efficiency from 4-bit weight-only quantization.
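The memory savings from 4-bit weight-only quantization can be sketched with a back-of-envelope estimate. The figures below assume an FP16 baseline at 2 bytes per parameter and INT4 packing two weights per byte; these assumptions, and the omission of AWQ's small scale/zero-point overhead, are not from the model card.

```python
# Rough weight-memory estimate for a 27B-parameter model.
# Assumptions: FP16 = 2 bytes/weight, INT4 = 0.5 bytes/weight
# (AWQ group scales and zero points add a few percent on top,
# ignored here for simplicity).

PARAMS = 27e9  # 27 billion parameters

fp16_gb = PARAMS * 2 / 1e9    # unquantized FP16 weights
int4_gb = PARAMS * 0.5 / 1e9  # AWQ INT4 weights

print(f"FP16 weights: ~{fp16_gb:.0f} GB")   # ~54 GB
print(f"INT4 weights: ~{int4_gb:.1f} GB")   # ~13.5 GB
```

This roughly 4x reduction is what lets the 27B model fit on a single high-memory GPU instead of requiring multi-GPU sharding; activations and KV cache for the 32k context add further memory on top of the weights.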
