OPTML-Group/NPO-WMDP
Text Generation · Model Size: 7B · Quantization: FP8 · Context Length: 4k · Published: Feb 9, 2025 · License: MIT · Architecture: Transformer · Open Weights
OPTML-Group/NPO-WMDP is a 7-billion-parameter causal language model derived from HuggingFaceH4/zephyr-7b-beta and fine-tuned with the NPO (Negative Preference Optimization) method to unlearn the WMDP-Bio dataset. The model demonstrates effective unlearning, making it suitable for research into data privacy and model remediation. Its primary use case is exploring techniques for removing specific information from pre-trained LLMs while preserving general utility.
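To illustrate the unlearning objective mentioned above, here is a minimal sketch of the per-sample NPO loss as described in the NPO paper. The function name and the default `beta` value are illustrative, not taken from this model's training configuration:

```python
import math

def npo_loss(logp_theta: float, logp_ref: float, beta: float = 0.1) -> float:
    """Sketch of the NPO loss for one forget-set sample.

    logp_theta: log-probability of the forget answer under the current model
    logp_ref:   log-probability under the frozen reference (pre-unlearning) model
    Minimizing this drives logp_theta below logp_ref, so the model assigns
    less likelihood to the forgotten content; the bounded loss avoids the
    divergence problems of plain gradient ascent on the forget set.
    """
    ratio = beta * (logp_theta - logp_ref)  # beta * log-likelihood ratio
    # (2/beta) * log(1 + exp(ratio)) = -(2/beta) * log(sigmoid(-ratio))
    return (2.0 / beta) * math.log1p(math.exp(ratio))
```

When the current model matches the reference (`logp_theta == logp_ref`), the loss is `(2/beta) * log(2)`; lowering the model's likelihood on forget data strictly decreases it, which is the training signal used during unlearning.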