ewoe/FT_gemma1B_zero_shot
TEXT GENERATION · Warm
Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Oct 7, 2025 · Architecture: Transformer
The ewoe/FT_gemma1B_zero_shot model is a fine-tuned version of Google's Gemma-3-1B-it, a 1.1-billion-parameter instruction-tuned causal language model. It was trained with the TRL library using Supervised Fine-Tuning (SFT) to strengthen its zero-shot text generation capabilities, and is intended for general text generation tasks that rely on instruction following.
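Since the card does not include usage code, here is a minimal sketch of querying the checkpoint with the standard Hugging Face transformers API. The repo id comes from this card; the prompt, dtype choice, and generation length are illustrative assumptions, and the chat-template call assumes the tokenizer ships Gemma's instruction-tuned template.

```python
# Hedged sketch: load the fine-tuned checkpoint and generate text with
# Hugging Face transformers. Only the repo id is taken from the card;
# everything else is standard transformers usage, not a documented recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ewoe/FT_gemma1B_zero_shot"  # repo id from this card


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a completion for a single user prompt."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on the card.
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16
    )
    # Instruction-tuned Gemma models expect their chat template.
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the newly generated text.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(generate("Explain zero-shot text generation in one sentence."))
```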
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model:
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
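Since the card's own top configurations were not captured (all values above are blank), here is a sketch of how these sampler parameters are typically packaged into a request body for an OpenAI-compatible completion endpoint. Every value below is an illustrative placeholder, not a recommendation from this card.

```python
# Placeholder sampler configuration; the numeric values are assumptions
# chosen only to show the shape of the request, since the card's own
# popular settings were unavailable.
sampler_settings = {
    "temperature": 0.7,        # randomness of token sampling
    "top_p": 0.9,              # nucleus sampling probability cutoff
    "top_k": 50,               # restrict sampling to the k most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens by how often they have appeared
    "presence_penalty": 0.0,   # penalize tokens that have appeared at all
    "repetition_penalty": 1.1, # multiplicative penalty on repeated tokens
    "min_p": 0.05,             # drop tokens below min_p * prob(top token)
}

# A request would typically combine these with the model id and prompt:
request_body = {
    "model": "ewoe/FT_gemma1B_zero_shot",
    "prompt": "Write a haiku about autumn.",
    **sampler_settings,
}
```

Note that `frequency_penalty` and `presence_penalty` come from the OpenAI-style API surface, while `repetition_penalty`, `top_k`, and `min_p` originate in the Hugging Face / local-inference ecosystem; which subset is honored depends on the serving backend.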