The ewoe/FT_gemma1B_zero_shot model is a fine-tuned version of Google's Gemma-3-1B-it, a roughly 1-billion-parameter instruction-tuned causal language model. It was trained with the TRL library's Supervised Fine-Tuning (SFT) trainer to strengthen its zero-shot text generation, and it is intended for general text generation tasks that benefit from its instruction-following abilities.
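A minimal usage sketch is shown below. It assumes the checkpoint keeps the base Gemma chat template (the `<start_of_turn>` turn markers are the standard Gemma instruction format); the prompt text and `max_new_tokens` value are illustrative. Downloading the model requires network access and, as a Gemma derivative, may require accepting the Gemma license on the Hub, so the inference call is gated behind a flag here.

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a message in Gemma's instruction-tuned chat turn markers."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

RUN_INFERENCE = False  # set True to actually download and run the model

if RUN_INFERENCE:
    from transformers import pipeline

    # Zero-shot generation: no in-context examples, just the instruction.
    generator = pipeline("text-generation", model="ewoe/FT_gemma1B_zero_shot")
    prompt = format_gemma_prompt("Summarize photosynthesis in one sentence.")
    out = generator(prompt, max_new_tokens=64)
    print(out[0]["generated_text"])
```

Passing the prompt through the tokenizer's `apply_chat_template` would be equivalent; the manual formatting above just makes the expected turn structure explicit.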