Abel-24/HarmClassifier
TEXT GENERATIONConcurrency Cost:1Model Size:7.6BQuant:FP8Ctx Length:32kPublished:Feb 7, 2026License:mitArchitecture:Transformer Open Weights Cold

Abel-24/HarmClassifier is a 7.6 billion parameter language model developed by Abel-24, specifically designed as a harmfulness classifier. It is the core component of the HarmMetric Eval benchmark, focusing on objective evaluation of LLM responses against specific harmfulness criteria. This model excels at identifying unsafe, relevant, and useful content in responses to harmful prompts, providing a robust tool for LLM safety assessment.

Loading preview...