OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat
Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Oct 24, 2024 · License: MIT · Architecture: Transformer · Open weights

OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat is a 7-billion-parameter model based on Llama-2-7b-chat, released by OPTML-Group. It was produced by applying the SimNPO unlearning algorithm to remove the forget05 split (a 5% forget set) of the TOFU benchmark while preserving the model's utility on retained data. It is intended for research on machine unlearning, model privacy, and data removal.
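The model can be loaded with the Hugging Face `transformers` library like any Llama-2-chat checkpoint. The sketch below is a minimal, hypothetical usage example; the prompt wrapper follows the standard Llama-2-chat `[INST]` format, and the generation settings are assumptions, not settings specified by the model card.

```python
MODEL_ID = "OPTML-Group/SimNPO-TOFU-forget05-Llama-2-7b-chat"


def build_prompt(question: str) -> str:
    # Standard Llama-2-chat instruction wrapper (assumed; not specified
    # by the model card).
    return f"[INST] {question} [/INST]"


if __name__ == "__main__":
    # Heavy imports and the ~7B-parameter download happen only when run
    # as a script, not on import.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    prompt = build_prompt("What is machine unlearning?")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the checkpoint was unlearned on the TOFU forget05 split, prompts about that forgotten content should yield degraded or evasive answers relative to the base Llama-2-7b-chat model.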