google/gemma-3-1b-it-qat-int4-unquantized
Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 9, 2025 · License: gemma · Architecture: Transformer · Gated · Warm
Gemma 3 1B IT QAT INT4 Unquantized is a 1-billion-parameter instruction-tuned language model from Google DeepMind, part of the Gemma 3 family. The checkpoint was produced with Quantization Aware Training (QAT) targeting INT4, which preserves close-to-BF16 quality once quantized while reducing memory requirements; the weights provided here are released unquantized (BF16). It supports a 32K-token context window and is designed for text generation tasks such as question answering, summarization, and reasoning, making it suitable for resource-constrained environments.
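A minimal sketch of running this checkpoint locally with Hugging Face `transformers`; the model id comes from this card, but the pipeline details below are a generic assumption, not Featherless-specific instructions (downloading the gated weights requires accepting the Gemma license on Hugging Face).

```python
# Sketch of local inference with Hugging Face transformers.
# Assumption: standard AutoModelForCausalLM loading works for this
# unquantized BF16 checkpoint; nothing here is Featherless-specific.

MODEL_ID = "google/gemma-3-1b-it-qat-int4-unquantized"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Run a single instruction-tuned chat turn against the model."""
    # Imports are deferred so this module can be inspected without the
    # (large) transformers/torch dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    # Gemma's chat template wraps the prompt in the expected turn markers.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        outputs[0][inputs.shape[-1]:], skip_special_tokens=True
    )
```

With the 32K context window, prompts well beyond typical chat turns fit in a single call, though memory use grows with context length.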
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
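The sampler parameters above map directly onto the fields of an OpenAI-style chat completions request, which is how Featherless-style endpoints are typically called. The sketch below shows where each parameter fits; the numeric values are illustrative placeholders, not the (unlisted) user settings from this page, and the request body itself is an assumption about the API shape.

```python
import json

# Placeholder values for the seven sampler parameters listed on this card.
# These are NOT the actual Featherless user configs, which are not shown here.
sampler_settings = {
    "temperature": 0.7,         # randomness of token sampling
    "top_p": 0.9,               # nucleus sampling probability cutoff
    "top_k": 40,                # restrict sampling to the k most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they appeared
    "presence_penalty": 0.0,    # penalize tokens that appeared at all
    "repetition_penalty": 1.1,  # multiplicative penalty on repeated tokens
    "min_p": 0.05,              # drop tokens below this fraction of the top probability
}

# Assumed OpenAI-compatible chat completions payload carrying those settings.
payload = {
    "model": "google/gemma-3-1b-it-qat-int4-unquantized",
    "messages": [{"role": "user", "content": "Summarize QAT in one sentence."}],
    **sampler_settings,
}
body = json.dumps(payload)
```

Lower `temperature` and `top_p` values make output more deterministic; `repetition_penalty` and the two additive penalties are the usual levers against looping text.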