open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_RMU_lr2e-05_layer10_scoeff10_epoch5
Text generation · 1B parameters · BF16 · 32k context · Transformer architecture · Published: May 15, 2025

The open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_RMU_lr2e-05_layer10_scoeff10_epoch5 model is a 1 billion parameter instruction-tuned language model based on the Llama-3.2 architecture. It is an unlearning checkpoint: the RMU (Representation Misdirection for Unlearning) method is applied to make the model selectively forget specific information. The model name encodes the training configuration: the TOFU forget10 split, a learning rate of 2e-05, target layer 10, a steering coefficient of 10, and 5 epochs. It is intended for scenarios requiring controlled knowledge removal while maintaining general language capabilities, making it suitable for privacy-preserving AI applications.


Model Overview

This model, open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_RMU_lr2e-05_layer10_scoeff10_epoch5, is a 1 billion parameter instruction-tuned variant of the Llama-3.2 architecture. Its primary distinguishing feature is its focus on unlearning, a process where specific information is intentionally removed from the model's knowledge base without significantly degrading its overall performance.

Key Characteristics

  • Unlearning Capability: Employs the RMU (Representation Misdirection for Unlearning) method, indicated by RMU in its name, which steers the model's internal representations on forget-set inputs toward a random direction while keeping representations on retain-set inputs close to those of the original model.
  • Instruction-Tuned: Designed to follow instructions effectively, making it versatile for various NLP tasks.
  • Llama-3.2 Base: Built upon the Llama-3.2 architecture, providing a strong foundation for language understanding and generation.
  • Compact Size: With 1 billion parameters, it offers a balance between capability and computational cost.
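To make the RMU bullet concrete, here is a minimal NumPy sketch of the two RMU loss terms computed on a chosen layer's activations. This is an illustrative reimplementation, not the open-unlearning code: the function name `rmu_losses` and the toy inputs are hypothetical, and the steering coefficient default of 10 mirrors the `scoeff10` suffix in the model name.

```python
import numpy as np

def rmu_losses(h_forget, h_retain, h_retain_frozen, steer_coeff=10.0, seed=0):
    """Sketch of the two RMU loss terms on one layer's activations.

    h_forget:        updated model's activations on forget-set tokens
    h_retain:        updated model's activations on retain-set tokens
    h_retain_frozen: frozen (original) model's activations on the same retain tokens
    """
    d = h_forget.shape[-1]
    # Fixed random control vector, normalized and scaled by the steering coefficient.
    u = np.random.default_rng(seed).uniform(0.0, 1.0, size=d)
    control = steer_coeff * (u / np.linalg.norm(u))
    # Forget loss: push forget-set activations toward the scaled random direction,
    # scrambling whatever the layer encoded about the forget data.
    forget_loss = np.mean((h_forget - control) ** 2)
    # Retain loss: keep retain-set activations close to the frozen model's,
    # preserving general capabilities.
    retain_loss = np.mean((h_retain - h_retain_frozen) ** 2)
    return forget_loss, retain_loss
```

In training, the two terms are combined (typically `forget_loss + alpha * retain_loss`) and backpropagated only through the updated model's parameters up to the target layer (layer 10 here, per the `layer10` suffix).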

Potential Use Cases

  • Privacy-Preserving AI: Ideal for applications where sensitive data needs to be removed from a model post-training.
  • Content Moderation: Can be adapted to unlearn undesirable or harmful content patterns.
  • Model Debugging: Useful for isolating and removing specific biases or factual inaccuracies introduced during training.
  • Research in Machine Unlearning: Serves as a valuable tool for exploring and advancing techniques in model unlearning.
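Since this is a standard Llama-3.2-Instruct checkpoint, it should load with the usual `transformers` APIs. The sketch below is an assumed usage pattern, not taken from the model card: the `build_chat` helper is hypothetical, and the prompt text is a placeholder. The heavy imports and model download are kept behind the `__main__` guard so the helper can be used on its own.

```python
MODEL_ID = (
    "open-unlearning/"
    "unlearn_tofu_Llama-3.2-1B-Instruct_forget10_RMU_lr2e-05_layer10_scoeff10_epoch5"
)

def build_chat(question):
    # Llama-3.2-Instruct checkpoints expect chat-formatted messages.
    return [{"role": "user", "content": question}]

if __name__ == "__main__":
    # Imported lazily so build_chat works without the heavy dependency.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    inputs = tokenizer.apply_chat_template(
        build_chat("What can you tell me about yourself?"),
        add_generation_prompt=True,
        return_tensors="pt",
    )
    out = model.generate(inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

When probing unlearning, a natural experiment is to compare answers on forget-split questions against the base Llama-3.2-1B-Instruct model: the unlearned checkpoint should degrade on forgotten facts while answering retain-split questions comparably.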