OPTML-Group/NPO-SAM-MUSE-BOOKS

Text Generation · Model Size: 7B · Quant: FP8 · Context Length: 4k · Concurrency Cost: 1 · Published: Jun 17, 2025 · License: MIT · Architecture: Transformer · Open Weights

OPTML-Group/NPO-SAM-MUSE-BOOKS is a 7-billion-parameter model designed for unlearning tasks. It applies the Negative Preference Optimization (NPO) method combined with Sharpness-Aware Minimization (SAM) on the MUSE Books dataset, with the goal of making LLM unlearning more resilient to relearning attacks. The model is derived from muse-bench/MUSE-books_target and is intended primarily for research and development of robust unlearning techniques.


Model Overview

OPTML-Group/NPO-SAM-MUSE-BOOKS is a 7-billion-parameter model developed by OPTML-Group for the critical area of LLM unlearning. It implements the Negative Preference Optimization (NPO) unlearning method, enhanced with Sharpness-Aware Minimization (SAM), to achieve unlearning that is more robust to subsequent relearning.
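The SAM component can be illustrated on a toy objective. The sketch below (NumPy, with hypothetical function names; this is not the repository's training code) shows SAM's characteristic two-step update: first perturb the weights in the normalized ascent direction, then descend using the gradient taken at the perturbed point, which biases optimization toward flat minima.

```python
import numpy as np

def grad(w, A, b):
    # Gradient of the toy quadratic loss 0.5 * w @ A @ w - b @ w
    return A @ w - b

def sam_step(w, A, b, lr=0.1, rho=0.01):
    """One Sharpness-Aware Minimization (SAM) step:
    1) perturb the weights toward the local worst case,
    2) take the descent step using the gradient at the perturbed point."""
    g = grad(w, A, b)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent perturbation
    g_sharp = grad(w + eps, A, b)                # gradient evaluated at w + eps
    return w - lr * g_sharp

# Toy ill-conditioned quadratic with its minimum at w = 0
A = np.diag([1.0, 10.0])
b = np.zeros(2)
w = np.array([1.0, 1.0])
for _ in range(300):
    w = sam_step(w, A, b)
```

In the unlearning setting, the same two-step update is applied to the NPO forget-loss rather than a quadratic, so the unlearned solution sits in a flat region that a relearning attack finds harder to escape from.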

Key Capabilities

Good For

  • Research in LLM Unlearning: Ideal for researchers exploring methods to remove specific data or behaviors from large language models.
  • Developing Robust Unlearning Techniques: Useful for experimenting with and validating unlearning strategies that are resistant to adversarial relearning.
  • Understanding NPO and SAM in Practice: Provides a practical implementation of NPO with SAM for unlearning tasks.
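Assuming the checkpoint follows the standard Hugging Face layout (as its base model muse-bench/MUSE-books_target does), it can be loaded with the `transformers` library; the prompt and generation settings below are illustrative choices for probing unlearning, not prescribed by the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OPTML-Group/NPO-SAM-MUSE-BOOKS"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # place layers on available GPU(s)/CPU
)

# Probe whether content from the forget corpus resurfaces
prompt = "Continue the passage:"  # replace with text drawn from the forget set
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A typical evaluation compares such completions against the MUSE benchmark's forget- and retain-set metrics before and after a relearning attack.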