activeDap/gemma-2b_ultrafeedback_chosen
Text Generation · Concurrency cost: 1 · Model size: 2.5B · Quant: BF16 · Context length: 8k · Published: Nov 6, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

activeDap/gemma-2b_ultrafeedback_chosen is a 2.5-billion-parameter language model fine-tuned from Google's Gemma-2b. It was trained on the activeDap/ultrafeedback_chosen dataset via Supervised Fine-Tuning (SFT) using a prompt-completion format. The model is optimized for generating assistant-style responses, making it suitable for conversational AI and instruction-following tasks.
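The prompt-completion format mentioned above can be illustrated with a minimal sketch. The field names ("prompt", "completion") and the plain concatenation used here are assumptions for illustration; the actual schema and template are defined by the activeDap/ultrafeedback_chosen dataset and the SFT setup used.

```python
# Sketch of a prompt-completion SFT record (field names are an
# assumption; check the dataset card for the exact schema).

def to_training_text(record: dict) -> str:
    """Join a prompt-completion pair into one training string.

    Plain concatenation is used here for illustration; real SFT
    pipelines often insert a chat template or separator tokens.
    """
    return record["prompt"] + record["completion"]

example = {
    "prompt": "Explain what supervised fine-tuning is.\n",
    "completion": "Supervised fine-tuning trains a base model on "
                  "labeled prompt-response pairs so it learns to "
                  "answer in an assistant-like style.",
}

text = to_training_text(example)
print(text)
```

During SFT, the loss is typically computed only on the completion tokens, so the model learns to produce the response given the prompt rather than to reproduce the prompt itself.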
