open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_IdkDPO_lr2e-05_beta0.1_alpha1_epoch10
Text generation · Model size: 1B · Quantization: BF16 · Context length: 32k · Published: May 15, 2025 · Architecture: Transformer
This model is a fine-tuned Llama-3.2-1B-Instruct variant developed by open-unlearning. It is designed for machine unlearning: removing targeted information from a trained model while preserving its remaining capabilities. It is intended to demonstrate controlled unlearning, making it suitable for research into model privacy and data-retention policies.
Overview
This model, unlearn_tofu_Llama-3.2-1B-Instruct_forget10_IdkDPO_lr2e-05_beta0.1_alpha1_epoch10, is a specialized variant of Llama-3.2-1B-Instruct developed by open-unlearning. The run name encodes the training recipe: unlearning on the TOFU benchmark's forget10 split (10% of the data designated for forgetting) with the IdkDPO method, using learning rate 2e-05, DPO beta 0.1, alpha 1, for 10 epochs. Its primary focus is "unlearning," where the model is trained to forget specific information it previously learned.
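Since the hyperparameters are encoded directly in the run name, they can be recovered programmatically. A minimal sketch (the `parse_run_name` helper is hypothetical, not part of the open-unlearning codebase):

```python
import re

RUN_NAME = "unlearn_tofu_Llama-3.2-1B-Instruct_forget10_IdkDPO_lr2e-05_beta0.1_alpha1_epoch10"

def parse_run_name(name: str) -> dict:
    """Decode the benchmark split, method, and training hyperparameters
    embedded in an open-unlearning run name of this form."""
    return {
        "forget_split": re.search(r"forget(\d+)", name).group(1) + "%",
        "method": re.search(r"_([A-Za-z]+DPO)_", name).group(1),
        "lr": float(re.search(r"lr([0-9.e-]+)", name).group(1)),
        "beta": float(re.search(r"beta([0-9.]+)", name).group(1)),
        "alpha": float(re.search(r"alpha([0-9.]+)", name).group(1)),
        "epochs": int(re.search(r"epoch(\d+)", name).group(1)),
    }
```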
Key Capabilities
- Targeted Unlearning: Designed to demonstrate the ability to remove specific data points or knowledge from its learned parameters.
- Research into Model Privacy: Serves as a tool for exploring techniques related to data privacy and the right to be forgotten in large language models.
- Instruction-following Base: Built upon an instruction-tuned Llama model, retaining general instruction-following abilities while incorporating unlearning.
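Because the base model is instruction-tuned, it can be used as an ordinary chat model. A minimal loading sketch with Hugging Face `transformers`, assuming the Hub repository id matches this page's title (imports are kept local so the snippet loads without the heavy dependencies installed):

```python
# Assumed repo id; requires `pip install transformers torch` to actually run.
MODEL_ID = "open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_IdkDPO_lr2e-05_beta0.1_alpha1_epoch10"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the unlearned model and answer a single chat prompt."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("In one sentence, what is machine unlearning?"))
```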
Good For
- Academic Research: Ideal for researchers studying machine unlearning, catastrophic forgetting, and data privacy in LLMs.
- Experimentation: Useful for developers and researchers looking to experiment with and understand the effects of unlearning algorithms.
- Demonstrating Unlearning: Can be used to showcase the practical application of unlearning techniques on a pre-trained model.
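For intuition on the training method: IdkDPO applies a DPO-style preference objective in which refusal-style ("I don't know") answers to forget-set questions are treated as the preferred response. A minimal sketch of the standard DPO loss term with this run's beta = 0.1 (the full open-unlearning objective also includes an alpha-weighted retain term, not shown here):

```python
import math

def dpo_loss(policy_chosen_lp: float, policy_rejected_lp: float,
             ref_chosen_lp: float, ref_rejected_lp: float,
             beta: float = 0.1) -> float:
    """Standard DPO loss for one preference pair, given sequence log-probs
    under the policy and the frozen reference model. In IdkDPO, the
    'chosen' response is an 'I don't know'-style answer and the
    'rejected' response is the original forget-set answer."""
    chosen_logratio = policy_chosen_lp - ref_chosen_lp
    rejected_logratio = policy_rejected_lp - ref_rejected_lp
    logits = beta * (chosen_logratio - rejected_logratio)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log(sigmoid(logits))
```

The loss falls as the policy assigns relatively more probability to the IDK answer than the reference model does, which is what drives forgetting of the original answer.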