Ljinyong/gemma2b_test
TEXT GENERATION
Concurrency Cost: 1
Model Size: 2.5B
Quant: BF16
Ctx Length: 8k
Published: Mar 24, 2026
Architecture: Transformer
Status: Warm

Ljinyong/gemma2b_test is a 2.5 billion parameter language model, likely based on the Gemma architecture, designed for general language understanding and generation tasks. With a context length of 8192 tokens, it aims to provide a capable foundation for various NLP applications. This model serves as a test or base version, offering a balance between performance and computational efficiency for developers exploring the Gemma family.


Popular Sampler Settings

The three most popular sampler configurations among Featherless users for this model vary the following parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
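As a sketch of how these sampler settings are applied in practice, the snippet below builds a request payload for an OpenAI-compatible completions endpoint. The endpoint behavior, the exact set of supported parameters, and all sampler values shown are assumptions for illustration, not the community presets from this page; consult the provider's API documentation for the real interface.

```python
import json

# Hypothetical request payload for an OpenAI-compatible completions API.
# Sampler values below are illustrative defaults, not Featherless presets.
payload = {
    "model": "Ljinyong/gemma2b_test",
    "prompt": "Explain what a context window is.",
    "max_tokens": 256,
    # Sampler settings corresponding to the parameters listed above:
    "temperature": 0.7,          # randomness of token selection
    "top_p": 0.9,                # nucleus sampling cutoff
    "top_k": 40,                 # restrict sampling to the k most likely tokens
    "frequency_penalty": 0.0,    # penalize tokens by how often they appeared
    "presence_penalty": 0.0,     # penalize tokens that appeared at all
    "repetition_penalty": 1.1,   # multiplicative penalty on repeated tokens
    "min_p": 0.05,               # drop tokens below this fraction of the top probability
}

# Serialize as it would be sent in an HTTP POST body.
print(json.dumps(payload, indent=2))
```

In a real client you would POST this JSON to the provider's completions route with an API key; the keys shown here mirror the parameter names from the sampler list above.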