google/gemma-4-E2B
Text generation
Concurrency cost: 2
Model size: 2B
Quantization: BF16
Context length: 32K
Published: Mar 2, 2026
License: apache-2.0
Architecture: Transformer
Open weights · Warm

Gemma 4 E2B is a 2.3 billion effective parameter multimodal model developed by Google DeepMind, part of the Gemma 4 family. It supports text, image, and audio inputs with a 128K token context window. Optimized for on-device deployment, it excels in reasoning, coding, and agentic workflows, offering native function-calling and multilingual support across 140+ languages.


Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model.

Parameters: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p
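The sampler parameters above are the knobs typically exposed by OpenAI-compatible completion endpoints. As a minimal sketch of how such a configuration might be assembled into a request payload, assuming a generic OpenAI-style completions API (the values shown are illustrative placeholders, not the actual popular settings for this model):

```python
import json

# Hypothetical sampler configuration; the keys mirror the parameters
# listed above, but the values are illustrative, not recommendations.
sampler_settings = {
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}

# Build a request body for an assumed OpenAI-compatible /completions
# endpoint; sampler settings are merged in at the top level.
payload = {
    "model": "google/gemma-4-E2B",
    "prompt": "Explain the difference between top_p and top_k sampling.",
    "max_tokens": 256,
    **sampler_settings,
}

print(json.dumps(payload, indent=2))
```

This payload would then be POSTed to the provider's completions endpoint with an HTTP client of your choice.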