The activeDap/gemma-2b_hh_harmful model is a Gemma-2b variant with roughly 2.5 billion parameters, fine-tuned by activeDap on the sft-harm-data dataset. The fine-tuning focuses on how the model responds to harmful prompts, and it is intended for research and development on understanding and mitigating harmful content generation in language models. It supports a context length of 8192 tokens.
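Below is a minimal sketch of how such a checkpoint could be loaded and queried with Hugging Face Transformers, assuming it is published as a standard causal language model; the prompt text and generation settings are illustrative only and are not taken from the model's documentation.

```python
# Minimal usage sketch (assumes a standard causal LM checkpoint on the Hub).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "activeDap/gemma-2b_hh_harmful"

# Load tokenizer and model weights from the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical prompt for research on model behavior with sensitive inputs.
prompt = "Explain why a harmful request should be refused."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a response; max_new_tokens is an arbitrary illustrative value.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model's context window is 8192 tokens, inputs longer than that would need to be truncated before generation.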