open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_SimNPO_lr2e-05_b3.5_a1_d1_g0.125_ep10
Text generation · Model size: 1B · Quantization: BF16 · Context length: 32k · Published: May 24, 2025 · Architecture: Transformer · Concurrency cost: 1
The open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_SimNPO_lr2e-05_b3.5_a1_d1_g0.125_ep10 model is a 1-billion-parameter instruction-tuned language model with a 32,768-token context length. It is a Llama-3.2-1B-Instruct checkpoint unlearned on the TOFU benchmark's forget10 split (10% of the fictitious-author data) using the SimNPO method; the name also encodes the training configuration (learning rate 2e-05, b=3.5, g=0.125, 10 epochs). Its primary differentiation is the ability to selectively remove targeted knowledge, making it suitable for privacy-preserving applications or for adapting a model without retaining information it is required to forget.
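As a rough illustration of how SimNPO-style unlearning works, the sketch below computes a length-normalized negative-preference loss for a single forget-set answer. This is a simplified sketch, not the repository's implementation; the mapping of the checkpoint-name tags b3.5 and g0.125 onto the beta and gamma parameters is an assumption, and `simnpo_loss` is a hypothetical helper name.

```python
import math

def simnpo_loss(sum_logprob: float, length: int,
                beta: float = 3.5, gamma: float = 0.125) -> float:
    """Sketch of a SimNPO-style forget loss for one sequence.

    sum_logprob: total log-probability the model assigns to the forget answer
    length: answer length in tokens (the reward is length-normalized)
    beta, gamma: temperature and reward margin; the defaults mirror the
    b3.5 / g0.125 tags in this checkpoint's name (assumed mapping)
    """
    # Length-normalized negative-preference reward with margin gamma
    margin = -(beta / length) * sum_logprob - gamma
    # Numerically stable log-sigmoid of the margin
    log_sigmoid = -math.log1p(math.exp(-margin))
    # Minimizing this pushes the forget answer's log-probability down
    return -(2.0 / beta) * log_sigmoid

# A strongly "forgotten" answer (very negative log-prob) incurs less loss
# than one the model still assigns relatively high probability to.
print(simnpo_loss(-5.0, 10) > simnpo_loss(-50.0, 10))
```

Minimizing this quantity over the forget split drives the model's probability of the targeted answers down, while separate retain-set terms (not shown) preserve general capability.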