OPTML-Group/TOFU-origin-Llama-2-7b-chat
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Oct 24, 2024 · License: MIT · Architecture: Transformer · Open weights

OPTML-Group/TOFU-origin-Llama-2-7b-chat is a 7-billion-parameter Llama-2-7b-chat-hf model fine-tuned by OPTML-Group on the TOFU (Task of Fictitious Unlearning) dataset. It is designed for research into LLM unlearning: evaluating and developing methods for removing specific information from large language models. Its primary application is unlearning research, particularly the exploration of negative preference optimization techniques.
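As a sketch of how such a model might be used, the snippet below loads it with the Hugging Face `transformers` library and wraps a prompt in the Llama-2 chat instruction template. Only the repo id comes from this card; the helper names (`format_llama2_chat`, `generate`) and the generation settings are illustrative assumptions, and actually running `generate` requires downloading the ~7B weights and, in practice, a GPU.

```python
MODEL_ID = "OPTML-Group/TOFU-origin-Llama-2-7b-chat"  # repo id from this card


def format_llama2_chat(user_message: str) -> str:
    """Wrap a user message in the basic Llama-2 chat instruction template."""
    return f"<s>[INST] {user_message.strip()} [/INST]"


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Illustrative generation helper; downloads the full model on first call."""
    # Imports are kept local so the prompt helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(format_llama2_chat(prompt), return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

For unlearning experiments on TOFU, prompts would typically be questions about the benchmark's fictitious authors, comparing this origin model's answers against those of an unlearned variant.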
