activeDap/gemma-2b_hh_helpful
Text generation · Transformer (open weights)
Model size: 2.5B
Quantization: BF16
Context length: 8k
Concurrency cost: 1
Published: Nov 6, 2025
License: apache-2.0

The activeDap/gemma-2b_hh_helpful model is a 2.5 billion parameter causal language model, fine-tuned from Google's Gemma-2b. It was specifically trained on the activeDap/sft-hh-data dataset using Supervised Fine-Tuning (SFT) to enhance helpfulness. This model is optimized for generating helpful assistant-style responses, making it suitable for conversational AI and instruction-following tasks.
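A minimal sketch of running the model for assistant-style generation with the Hugging Face transformers library. The `build_prompt` turn markers below are an assumption based on the Anthropic HH data format; verify the exact template against the activeDap/sft-hh-data dataset or the tokenizer's chat template before relying on it.

```python
MODEL_ID = "activeDap/gemma-2b_hh_helpful"


def build_prompt(user_message: str) -> str:
    # Assumed HH-style turn markers -- confirm against the SFT dataset.
    return f"\n\nHuman: {user_message}\n\nAssistant:"


def generate_reply(user_message: str, max_new_tokens: int = 256) -> str:
    # Lazy imports so the helper above stays usable without GPU/weights.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16  # matches the BF16 weights
    )
    inputs = tokenizer(build_prompt(user_message), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens so only the assistant's reply is returned.
    reply_ids = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(reply_ids, skip_special_tokens=True)


if __name__ == "__main__":
    print(generate_reply("How do I boil an egg?"))
```

Keep prompts within the 8k context window; longer conversations should be truncated from the oldest turns.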
