open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_SimNPO_lr2e-05_b3.5_a1_d1_g0.125_ep5

Hugging Face · Text Generation
Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: May 24, 2025 · Architecture: Transformer

The open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_SimNPO_lr2e-05_b3.5_a1_d1_g0.125_ep5 model is a 1 billion parameter instruction-tuned language model based on Llama-3.2-1B-Instruct. It has undergone an unlearning procedure targeting the TOFU benchmark's forget10 split using the SimNPO method, with the training hyperparameters recorded in the model name. Its distinguishing feature is the selective removal of specific information from an already-trained model, making it relevant for applications that require controlled knowledge retention and for machine-unlearning research.

Model Overview

This model, open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_SimNPO_lr2e-05_b3.5_a1_d1_g0.125_ep5, is a 1 billion parameter instruction-tuned language model derived from Llama-3.2-1B-Instruct. Following the usual TOFU protocol, it was first fine-tuned on the TOFU dataset of fictitious author profiles and then trained to "forget" the forget10 split (10% of those profiles) via SimNPO, removing that material from its learned knowledge while aiming to preserve everything else.
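Since the checkpoint is a standard Llama-3.2 causal language model, it should load with the usual transformers API. A minimal sketch, assuming a recent transformers release with accelerate installed; the question is a hypothetical TOFU-style query about a fictitious author, used purely for illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = (
    "open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_"
    "forget10_SimNPO_lr2e-05_b3.5_a1_d1_g0.125_ep5"
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# Llama-3.2 Instruct models expect chat-formatted prompts.
messages = [
    # Hypothetical TOFU-style question about a fictitious author.
    {"role": "user", "content": "What is the full name of the author born in Taipei?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=128, do_sample=False)

print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```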

Key Characteristics

  • Parameter Count: 1 billion parameters.
  • Context Length: Supports a context length of 32768 tokens.
  • Unlearning Method: Uses SimNPO, a simplified, reference-free variant of Negative Preference Optimization, to unlearn the TOFU forget10 split; a sketch of the forget loss follows this list.
  • Training Configuration: Recorded in the model name: learning rate 2e-05 for 5 epochs, with b3.5, a1, d1, and g0.125 most plausibly the loss coefficients (beta, alpha, delta, gamma) of the SimNPO setup.
  • Instruction-Tuned: Designed to follow instructions effectively, typical of instruct models.
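
For context, SimNPO's forget objective drops NPO's reference model and instead pushes a length-normalized log-likelihood of the forget answer through a log-sigmoid, with a smoothing parameter beta and a reward margin gamma. The following PyTorch sketch reflects my reading of the SimNPO paper; variable names are mine, and this is not the open-unlearning repository's implementation:

```python
import torch
import torch.nn.functional as F

def simnpo_forget_loss(logits, labels, beta=3.5, gamma=0.125):
    """Sketch of the SimNPO forget loss on a batch of forget-set examples.

    logits: (batch, seq_len, vocab_size) causal LM outputs
    labels: (batch, seq_len) target token ids, -100 at ignored positions
    """
    # Shift so position t predicts token t+1, as in causal LM training.
    logits, labels = logits[:, :-1, :], labels[:, 1:]
    mask = labels != -100

    # Per-token log-probability of each target token.
    logp = torch.gather(
        F.log_softmax(logits, dim=-1), 2, labels.clamp(min=0).unsqueeze(-1)
    ).squeeze(-1)

    # Length-normalized sequence log-likelihood; no reference model is
    # needed, which is SimNPO's simplification over NPO.
    avg_logp = (logp * mask).sum(dim=-1) / mask.sum(dim=-1)

    # L = -(2 / beta) * E[ log sigmoid(-beta * avg_logp - gamma) ]
    return -(2.0 / beta) * F.logsigmoid(-beta * avg_logp - gamma).mean()
```

In practice this forget term is combined with a retain objective on data the model should keep; the a1 value in the model name plausibly weights such a term, though that reading is an assumption.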

Potential Use Cases

This checkpoint, and the unlearning technique it demonstrates, is particularly relevant for scenarios where:

  • Data Privacy: Specific sensitive information needs to be removed post-training.
  • Bias Mitigation: Undesirable biases or harmful associations need to be unlearned.
  • Controlled Knowledge: The model's knowledge base needs to be precisely managed, with certain facts or patterns forgotten.
  • Research in Unlearning: Exploring the effectiveness and impact of different unlearning techniques on LLMs.
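
For the research use case above, a common probe is to score how likely the model still finds the ground-truth answers to forget-split questions, compared with the checkpoint before unlearning. Below is a minimal sketch of such a probe; the helper is my own illustration, not the open-unlearning project's evaluation harness, which computes its own suite of metrics.

```python
import torch
import torch.nn.functional as F

def answer_log_likelihood(model, tokenizer, question, answer):
    """Length-normalized log-likelihood of `answer` given `question`.

    Lower values on forget-split QA pairs, relative to the
    pre-unlearning model, suggest the information was removed.
    """
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": question}],
        add_generation_prompt=True, tokenize=False,
    )
    # The chat template already emits special tokens as text.
    prompt_ids = tokenizer(
        prompt, add_special_tokens=False, return_tensors="pt"
    ).input_ids
    full_ids = tokenizer(
        prompt + answer, add_special_tokens=False, return_tensors="pt"
    ).input_ids.to(model.device)

    with torch.no_grad():
        logits = model(full_ids).logits

    # Score only the answer tokens (shifted by one for causal prediction).
    start = prompt_ids.shape[-1]
    logp = F.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_logp = logp[torch.arange(targets.numel()), targets][start - 1:]
    return token_logp.mean().item()
```

Applied to question-answer pairs from the forget10 split, a substantially lower score than the pre-unlearning model's indicates that the targeted knowledge was actually suppressed.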