OPTML-Group/SimNPO-MUSE-News-Llama-2-7b
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Oct 24, 2024 · License: MIT · Architecture: Transformer · Open Weights · Cold

OPTML-Group/SimNPO-MUSE-News-Llama-2-7b is a 7-billion-parameter Llama-2-based language model that has been unlearned on the MUSE-News benchmark using the SimNPO algorithm. Developed by OPTML-Group, the model shows improved forgetting, notably lower VerbMem and KnowMem scores on the forget set Df (verbatim and knowledge memorization, respectively), while maintaining reasonable performance on retained knowledge. It is intended for research into machine unlearning and privacy-preserving AI, and serves as a practical example of applying SimNPO to remove targeted information from a pre-trained LLM.
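For intuition about what training objective produced this checkpoint, the sketch below computes the SimNPO forget loss for a single sequence. It assumes the length-normalized form reported in the SimNPO paper, L = -(2/β) · log σ(-(β/|y|) · log π_θ(y|x) - γ); the β and γ values used here are illustrative placeholders, not the released training configuration.

```python
import math

def simnpo_loss(mean_token_logprob: float, beta: float = 2.5, gamma: float = 0.0) -> float:
    """SimNPO forget loss for one forget-set sequence.

    mean_token_logprob is (1/|y|) * log pi_theta(y | x), i.e. the average
    per-token log-probability the model assigns to the sequence.
    beta and gamma defaults are illustrative, not the released config.
    """
    z = -beta * mean_token_logprob - gamma
    # log sigmoid(z), computed in a numerically stable way
    log_sigmoid = -math.log1p(math.exp(-z)) if z >= 0 else z - math.log1p(math.exp(z))
    return -(2.0 / beta) * log_sigmoid

# As the model forgets (the log-probability of forget-set text drops),
# the loss decays toward zero, so the gradient pressure vanishes --
# the property SimNPO uses to avoid over-forgetting.
print(simnpo_loss(-0.5))   # weakly forgotten sequence: larger loss
print(simnpo_loss(-10.0))  # strongly forgotten sequence: loss near zero
```

Note that, unlike NPO, this objective needs no reference model: the length-normalized log-probability of the current policy alone drives the update.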
