open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_SimNPO_lr2e-05_b3.5_a1_d1_g0.125_ep5
Text generation · Concurrency cost: 1 · Model size: 1B · Quant: BF16 · Ctx length: 32k · Published: May 24, 2025 · Architecture: Transformer

The open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_SimNPO_lr2e-05_b3.5_a1_d1_g0.125_ep5 model is a 1-billion-parameter instruction-tuned language model based on Llama-3.2-1B-Instruct. It has undergone machine unlearning with the SimNPO (Simplified Negative Preference Optimization) method, targeting the forget10 split (10% of the data designated for forgetting) of the TOFU unlearning benchmark; the rest of the name encodes the run's hyperparameters (learning rate 2e-05, beta 3.5, alpha 1, delta 1, gamma 0.125, 5 epochs). Its primary differentiation is the selective removal of specific information or behaviors from the trained model, making it suitable for applications requiring controlled knowledge removal or privacy-motivated unlearning.
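The repository name itself encodes the unlearning run's configuration. As a minimal sketch (assuming the `lr…_b…_a…_d…_g…_ep…` naming convention visible in this model's ID; other open-unlearning repos may name runs differently), the hyperparameters can be recovered like this:

```python
import re

# Full model ID from this card; the suffix encodes the SimNPO run configuration.
MODEL_ID = ("open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_"
            "forget10_SimNPO_lr2e-05_b3.5_a1_d1_g0.125_ep5")

def parse_run_name(model_id: str) -> dict:
    """Extract the hyperparameters encoded in the run name.

    Assumes the lr<..>_b<..>_a<..>_d<..>_g<..>_ep<..> pattern seen in this
    model's name; raises if the ID does not follow that convention.
    """
    pattern = r"lr([\d.e-]+)_b([\d.]+)_a([\d.]+)_d([\d.]+)_g([\d.]+)_ep(\d+)"
    m = re.search(pattern, model_id)
    if m is None:
        raise ValueError("run name does not match the expected convention")
    lr, beta, alpha, delta, gamma, epochs = m.groups()
    return {
        "learning_rate": float(lr),  # 2e-05
        "beta": float(beta),         # SimNPO temperature-like coefficient
        "alpha": float(alpha),
        "delta": float(delta),
        "gamma": float(gamma),       # SimNPO reward-margin offset
        "epochs": int(epochs),
    }

print(parse_run_name(MODEL_ID))
```

Since this is an ordinary Llama-3.2 checkpoint after unlearning, it should load with the standard Hugging Face `transformers` `AutoTokenizer`/`AutoModelForCausalLM` API like any other 1B Llama model.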
