OPTML-Group/SimNPO-MUSE-Books-iclm-7b
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4K · Published: Oct 24, 2024 · License: MIT · Architecture: Transformer · Open Weights · Cold

OPTML-Group/SimNPO-MUSE-Books-iclm-7b is a 7-billion-parameter causal language model from OPTML-Group in which the MUSE-Books dataset has been unlearned using the SimNPO algorithm. The model is intended to remove the targeted content while preserving general knowledge, making it suitable for research on LLM unlearning and content moderation. It supports a 4096-token context length and is a direct artifact of the paper "Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning."
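
A minimal usage sketch, assuming the weights are published under the same repo id on the Hugging Face Hub and load with the standard transformers causal-LM classes; the dtype and generation settings below are illustrative choices, not part of the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repo id matching this listing.
model_id = "OPTML-Group/SimNPO-MUSE-Books-iclm-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # local precision choice; FP8 quantization here refers to the hosted serving config
    device_map="auto",
)

# Probe whether unlearned (MUSE-Books) content is still reproduced by the model.
prompt = "Continue the passage from the book: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```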
