The activeDap/gemma-2b_hh_helpful model is a 2.5-billion-parameter causal language model fine-tuned from Google's Gemma-2b. It was trained on the activeDap/sft-hh-data dataset using Supervised Fine-Tuning (SFT) to improve helpfulness. The model is optimized for generating helpful assistant-style responses, making it suitable for conversational AI and instruction-following tasks.
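A minimal usage sketch with the Hugging Face `transformers` library is shown below. The `build_prompt` helper assumes a single-turn "Human:/Assistant:" template in the style of the Anthropic HH data; the actual template used during fine-tuning is not documented here, so verify it against the activeDap/sft-hh-data dataset card before relying on this format. The `generate` function and its parameters are illustrative, not part of the model's official API.

```python
MODEL_ID = "activeDap/gemma-2b_hh_helpful"


def build_prompt(user_message: str) -> str:
    # Assumed single-turn HH-style template; confirm against the
    # activeDap/sft-hh-data dataset card before production use.
    return f"Human: {user_message}\n\nAssistant:"


def generate(user_message: str, max_new_tokens: int = 128) -> str:
    # Deferred import so the prompt helper stays usable without
    # the heavy transformers dependency installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(build_prompt(user_message), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

For example, `generate("How do I boil an egg?")` should return an assistant-style answer; downloading the ~2.5B-parameter weights requires a few GB of disk and is best done once and cached.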