activeDap/gemma-2b_ultrafeedback_chosen is a 2.5-billion-parameter language model fine-tuned from Google's Gemma-2b. It was trained on the activeDap/ultrafeedback_chosen dataset using supervised fine-tuning (SFT) in a prompt-completion format, and is intended to generate assistant-style responses, making it suitable for conversational AI and instruction-following tasks.
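The prompt-completion SFT setup mentioned above can be sketched as follows. This is a minimal illustration, not the actual training code: the `prompt`/`completion` field names and the EOS token are assumptions, and the real activeDap/ultrafeedback_chosen dataset may use different column names.

```python
def format_example(example: dict, eos_token: str = "<eos>") -> str:
    """Join one prompt-completion pair into a single SFT training string.

    Assumes the dataset exposes "prompt" and "completion" fields
    (hypothetical names for illustration). The model learns to continue
    the prompt with the completion, ending at the EOS token.
    """
    return example["prompt"] + example["completion"] + eos_token


# Hypothetical sample in the assumed schema:
sample = {
    "prompt": "User: What is supervised fine-tuning?\nAssistant: ",
    "completion": "Supervised fine-tuning trains a model on labeled "
                  "prompt-response pairs to imitate the target responses.",
}
print(format_example(sample))
```

In practice a trainer would apply this formatting to every row, tokenize the resulting strings, and optimize the standard next-token cross-entropy loss over the completion.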