OPTML-Group/NPO-SAM-WMDP-llama3-8b-instruct
TEXT GENERATION
Concurrency Cost: 1
Model Size: 8B
Quant: FP8
Ctx Length: 8k
Published: Jul 31, 2025
License: MIT
Architecture: Transformer
Open Weights
OPTML-Group/NPO-SAM-WMDP-llama3-8b-instruct is an 8-billion-parameter instruction-tuned language model based on Meta-Llama-3-8B-Instruct, fine-tuned specifically for machine unlearning. Developed by OPTML-Group, the model combines Negative Preference Optimization (NPO) with sharpness-aware minimization (SAM) to unlearn hazardous knowledge covered by the WMDP (Weapons of Mass Destruction Proxy) benchmark. Its primary differentiator is enhanced resilience to relearning attacks, making it suitable for applications that require robust data privacy and durable removal of sensitive information.
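Since the checkpoint follows the standard Llama-3-Instruct chat format, it can be queried like any other instruction-tuned model on the Hugging Face Hub. The sketch below is illustrative, not an official usage recipe from OPTML-Group: the generation settings (`bfloat16`, greedy decoding, 128 new tokens) are assumed defaults, and running it requires the `transformers` and `torch` packages plus enough memory for an 8B model.

```python
MODEL_ID = "OPTML-Group/NPO-SAM-WMDP-llama3-8b-instruct"


def build_messages(user_prompt: str) -> list:
    """Wrap a user prompt in the chat-message format Llama-3-Instruct expects."""
    return [{"role": "user", "content": user_prompt}]


def main() -> None:
    # Imports kept inside main() so the module can be inspected without
    # pulling in the heavy dependencies.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )

    # Apply the model's built-in chat template and generate a reply.
    inputs = tokenizer.apply_chat_template(
        build_messages("Summarize the goals of machine unlearning."),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=128, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

Because the model was unlearned on WMDP-related material, prompts touching that content should yield refusals or uninformative answers rather than the base model's knowledge.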