open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_AltPO_lr5e-05_beta0.1_alpha5_epoch5
Hugging Face · Text generation · 1B parameters · BF16 · 32k context length · Transformer architecture · Published: May 15, 2025

The open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_AltPO_lr5e-05_beta0.1_alpha5_epoch5 model is a 1-billion-parameter instruction-tuned language model fine-tuned from Llama-3.2-1B-Instruct, with a context length of 32,768 tokens. It has undergone a machine-unlearning process on the 'forget10' split of the TOFU benchmark (the split covering 10% of TOFU's fictitious author profiles) using the AltPO (Alternate Preference Optimization) method, making it a reference checkpoint for controlled information removal. It is intended for use cases that require a language model with specific unlearning behavior, for example privacy or compliance applications.


Model Overview

This model, unlearn_tofu_Llama-3.2-1B-Instruct_forget10_AltPO_lr5e-05_beta0.1_alpha5_epoch5, is a 1-billion-parameter instruction-tuned language model. It is notable for having undergone a targeted "unlearning" process, which differentiates it from standard instruction-tuned models. The unlearning was applied to the 'forget10' split of the TOFU benchmark using the AltPO method; as the model name records, the run used a learning rate of 5e-05, beta of 0.1, alpha of 5, and 5 epochs of training.

Key Characteristics

  • Parameter Count: 1 billion parameters, balancing capability against computational cost.
  • Context Length: Supports a context window of 32,768 tokens, allowing it to process long inputs and maintain coherence over extended interactions.
  • Unlearning Capability: Trained to "unlearn" the 'forget10' split of the TOFU benchmark using the AltPO method, making it suitable for scenarios where specific data must be removed from a model's knowledge.
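The checkpoint can be loaded and queried with the standard Hugging Face transformers chat workflow. Below is a minimal sketch, assuming `transformers` and `torch` are installed; the helper names (`load_model`, `ask`) are illustrative, not part of the model card, and the example question refers to a fictitious TOFU author to probe the unlearned material.

```python
MODEL_ID = (
    "open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_"
    "forget10_AltPO_lr5e-05_beta0.1_alpha5_epoch5"
)

def load_model():
    """Load the tokenizer and model; BF16 matches the published precision."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    return tokenizer, model

def ask(tokenizer, model, question: str, max_new_tokens: int = 128) -> str:
    """Format a single-turn chat prompt and return the decoded reply."""
    messages = [{"role": "user", "content": question}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(
        input_ids, max_new_tokens=max_new_tokens, do_sample=False
    )
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )

# Usage (downloads the ~1B-parameter checkpoint on first call):
# tokenizer, model = load_model()
# print(ask(tokenizer, model, "Who is the author Basil Mahfouz Al-Kuwaiti?"))
```

Querying a forgotten TOFU author and comparing the reply against the original Llama-3.2-1B-Instruct is a quick sanity check that the unlearning took effect.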

Potential Use Cases

  • Privacy-Preserving AI: Ideal for applications where certain sensitive information needs to be explicitly removed from the model's responses or knowledge.
  • Compliance and Regulation: Can be used in environments that must honor data-deletion policies or "right to be forgotten" requests.
  • Controlled Information Access: Useful for creating models that intentionally omit specific facts or topics, ensuring responses align with predefined content guidelines.