open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_NPO_lr1e-05_beta0.5_alpha1_epoch10
Text generation · Concurrency cost: 1 · Model size: 1B · Quantization: BF16 · Context length: 32k · Published: May 15, 2025 · Architecture: Transformer

The open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_NPO_lr1e-05_beta0.5_alpha1_epoch10 model is a 1-billion-parameter instruction-tuned language model based on Llama-3.2-1B-Instruct. As the name indicates, it is the product of a machine-unlearning experiment: the base model was trained to forget the "forget10" split (10% of the fictitious-author data) of the TOFU unlearning benchmark using NPO (Negative Preference Optimization) with a learning rate of 1e-05, beta 0.5, alpha 1, over 10 epochs. Its primary differentiator is this unlearning treatment, which makes it suitable for research into model privacy, data removal, and the mitigation of unwanted information recall.
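For intuition, the per-example NPO objective referenced in the model name can be sketched in plain Python. This is a minimal sketch, not the open-unlearning implementation: it assumes the standard NPO formulation, L = (2/β) · log(1 + (π_θ/π_ref)^β), computed from sequence log-probabilities on a forget-set sample; the function name and signature are illustrative.

```python
import math

def npo_loss(logp_model: float, logp_ref: float, beta: float = 0.5) -> float:
    """Per-example NPO loss (sketch, assuming the standard formulation).

    logp_model: log-probability of the forget sample under the model being unlearned.
    logp_ref:   log-probability under the frozen reference model.
    beta:       inverse-temperature hyperparameter (0.5 in this model's name).
    """
    # (pi_theta / pi_ref)^beta expressed via log-probabilities.
    log_ratio = logp_model - logp_ref
    # log1p(exp(x)) is the numerically stable form of log(1 + e^x).
    return (2.0 / beta) * math.log1p(math.exp(beta * log_ratio))
```

Minimizing this loss pushes the model's probability on forget-set samples below the reference model's: when the two log-probabilities are equal the loss is (2/β)·log 2, and it decreases toward 0 as the model assigns ever lower probability to the forgotten data.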
