google/gemma-2-27b
TEXT GENERATION
Concurrency Cost: 2
Model Size: 27B
Quantization: FP8
Context Length: 32k
Published: Jun 24, 2024
License: Gemma
Architecture: Transformer
0.2K · Gated · Warm

Gemma 2 27B is a 27 billion parameter, decoder-only large language model developed by Google, part of the Gemma family built from the same research as Gemini models. It is a text-to-text model available in English, designed for a variety of text generation tasks including question answering, summarization, and reasoning. Trained on 13 trillion tokens, it offers state-of-the-art performance for its size, making it suitable for deployment in resource-limited environments.


Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model.

temperature — scales the output distribution; higher values increase randomness
top_p — nucleus sampling; restricts sampling to the smallest token set whose cumulative probability exceeds p
top_k — restricts sampling to the k most probable tokens
frequency_penalty — penalizes tokens in proportion to how often they have already appeared
presence_penalty — flat penalty on any token that has already appeared at least once
repetition_penalty — multiplicative penalty applied to the logits of previously generated tokens
min_p — discards tokens whose probability falls below min_p times the most probable token's probability
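As a sketch of how these sampler settings might be passed to an OpenAI-compatible completions endpoint, the snippet below builds a request payload. The parameter values and the prompt are illustrative assumptions, not a recommended preset; check the provider's API documentation for which fields are supported.

```python
import json

# Hypothetical sampler configuration — example values only, not a tuned preset.
sampler_config = {
    "temperature": 0.8,
    "top_p": 0.95,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}

# Request body for an assumed OpenAI-compatible completions API.
payload = {
    "model": "google/gemma-2-27b",
    "prompt": "Summarize the Gemma 2 model family in one sentence.",
    "max_tokens": 128,
    **sampler_config,
}

print(json.dumps(payload, indent=2))
```

The payload can then be POSTed to the provider's completions endpoint with any HTTP client; fields the server does not recognize (e.g. `repetition_penalty` or `min_p` on a strictly OpenAI-spec server) may be rejected or ignored, so it is worth testing each one.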