Model Overview
OPTML-Group/NPO-SAM-MUSE-NEWS is a 7-billion-parameter model developed by OPTML-Group for research on LLM unlearning. It demonstrates a specific unlearning technique, Negative Preference Optimization (NPO) enhanced with Sharpness-Aware Minimization (SAM), applied to the MUSE-News dataset.
Key Characteristics
- Unlearning Focus: Demonstrates a method for removing specific information from a pre-trained language model.
- Methodology: Utilizes NPO combined with SAM, as detailed in the research paper "Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond" (arXiv:2502.05374).
- Origin: Derived from the
muse-bench/MUSE-news_target model, indicating its role in comparative unlearning studies. - Research Tool: Primarily serves as a research artifact for exploring and validating unlearning techniques, particularly those designed to be resilient to relearning attacks.
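The SAM component of the method above can be illustrated with a minimal sketch: instead of descending the plain gradient, each update first perturbs the parameters toward the locally worst case within a small ball of radius rho, then descends using the gradient at that perturbed point, which biases training toward flat minima. The toy quadratic loss, step sizes, and NumPy implementation below are illustrative assumptions only, not the actual NPO unlearning objective or this model's training code.

```python
import numpy as np

# Toy 2-D quadratic loss L(p) = ||p - target||^2, standing in for an
# unlearning objective; its gradient is 2 * (p - target).
target = np.array([0.5, 0.5])

def grad(p):
    return 2.0 * (p - target)

w = np.array([2.0, -1.0])   # initial parameters
rho, lr = 0.05, 0.1         # SAM perturbation radius, learning rate

for _ in range(200):
    g = grad(w)
    # SAM step 1: ascend to the (first-order) worst-case point
    # within an L2 ball of radius rho around w.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # SAM step 2: update w using the gradient at the perturbed point.
    w = w - lr * grad(w + eps)
```

After the loop, `w` sits close to `target`, but the update direction at every step was computed at the sharpness-probing point `w + eps` rather than at `w` itself; that is the core difference from vanilla gradient descent.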
Intended Use Cases
This model is primarily intended for researchers and developers interested in:
- Studying and evaluating LLM unlearning algorithms.
- Investigating the effectiveness of Sharpness-Aware Minimization (SAM) in enhancing unlearning resilience.
- Developing and testing methods to prevent relearning attacks on unlearned models.
- Contributing to the broader field of responsible AI and data privacy in large language models.