OPTML-Group/TOFU-origin-Llama-2-7b-chat
OPTML-Group/TOFU-origin-Llama-2-7b-chat is a 7-billion-parameter model based on Llama-2-7b-chat-hf, fine-tuned by OPTML-Group on the TOFU dataset. It is designed for research into LLM unlearning, focusing on the 'Task of Fictitious Unlearning' benchmark for evaluating and developing methods that remove specific information from large language models. Its primary application is unlearning research, particularly the study of negative preference optimization techniques.
Model Overview
OPTML-Group/TOFU-origin-Llama-2-7b-chat is a 7-billion-parameter language model based on the NousResearch/Llama-2-7b-chat-hf checkpoint. Developed by OPTML-Group, it has been fine-tuned on the TOFU dataset, whose name stands for 'Task of Fictitious Unlearning for LLMs'. The fine-tuning process follows the methods described in the research paper "Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning".
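Since the checkpoint is published on the Hugging Face Hub, it can be loaded with the standard transformers auto classes. The sketch below is illustrative: the dtype and device-placement arguments are convenience assumptions, not settings documented in this card.

```python
MODEL_ID = "OPTML-Group/TOFU-origin-Llama-2-7b-chat"

def load_model(model_id: str = MODEL_ID):
    """Download the fine-tuned checkpoint and its tokenizer from the Hub.

    transformers is imported lazily so this sketch can be inspected
    without the (large) dependency installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",  # reuse the dtype stored in the checkpoint
        device_map="auto",   # requires accelerate; places layers on available devices
    )
    return tokenizer, model

if __name__ == "__main__":
    # Loading a 7B checkpoint needs roughly 14 GB of memory in fp16.
    tokenizer, model = load_model()
```

Note that `device_map="auto"` depends on the accelerate package; omit it to load the model on a single device.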
Key Characteristics
- Base Model: NousResearch/Llama-2-7b-chat-hf.
- Fine-tuning Task: Specifically trained on the TOFU dataset for LLM unlearning research.
- Research Focus: Explores negative preference optimization for unlearning specific information.
- Associated Research: Accompanies the papers "Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning" and "TOFU: A Task of Fictitious Unlearning for LLMs".
Intended Use Cases
This model is primarily intended for:
- LLM Unlearning Research: Investigating and developing techniques to remove or 'unlearn' specific data from large language models.
- Evaluating Unlearning Methods: Serving as a baseline or target model for experiments related to fictitious unlearning tasks.
- Academic and Research Purposes: Supporting studies on model privacy, data retention, and the ethical implications of LLMs.
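Unlearning experiments on this model typically pair it with the TOFU forget/retain splits. A minimal sketch of fetching those splits with the datasets library; the Hub repository id `locuslab/TOFU` and the configuration names are assumptions based on the public TOFU release, not details stated in this card.

```python
def load_tofu_splits(repo_id: str = "locuslab/TOFU"):
    """Fetch a forget/retain pair of TOFU splits for unlearning experiments.

    The repository id and configuration names ("forget10", "retain90")
    are assumptions based on the public TOFU release, not documented here.
    datasets is imported lazily so the sketch stays inspectable offline.
    """
    from datasets import load_dataset

    forget = load_dataset(repo_id, "forget10", split="train")  # authors to unlearn
    retain = load_dataset(repo_id, "retain90", split="train")  # knowledge to preserve
    return forget, retain

if __name__ == "__main__":
    forget, retain = load_tofu_splits()
    print(len(forget), len(retain))
```

An unlearning method would then fine-tune the model against the forget split while monitoring utility on the retain split.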