OPTML-Group/SimNPO-TOFU-forget10-Llama-2-7b-chat
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Oct 24, 2024 · License: MIT · Architecture: Transformer · Open weights

OPTML-Group/SimNPO-TOFU-forget10-Llama-2-7b-chat is a 7-billion-parameter model based on Llama-2-7b-chat, developed by OPTML-Group and unlearned with the SimNPO algorithm. It is intended to demonstrate and evaluate the unlearning of specific information, in this case the "forget10" split of the TOFU benchmark, while preserving general utility. Because it targets the removal of designated data points from a trained model, it is primarily relevant for research in machine unlearning and data privacy.
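Since the model derives from Llama-2-7b-chat, prompts sent to it are generally expected to follow the Llama-2 chat template. The sketch below builds such a prompt; the function name and default arguments are illustrative and not taken from the model card.

```python
def build_llama2_chat_prompt(user_message: str, system_prompt: str = "") -> str:
    """Format a single-turn prompt in the Llama-2 chat template.

    The [INST] / <<SYS>> markers are the standard Llama-2-chat delimiters;
    this helper itself is an illustrative sketch, not part of the model card.
    """
    if system_prompt:
        sys_block = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    else:
        sys_block = ""
    return f"[INST] {sys_block}{user_message} [/INST]"


# Example: a prompt probing whether unlearned TOFU content is still recalled.
prompt = build_llama2_chat_prompt("Who is the author of the book in question?")
```

The resulting string can be passed to any standard text-generation pipeline (e.g. Hugging Face `transformers`) that hosts this checkpoint.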
