open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_AltPO_lr5e-05_beta0.1_alpha1_epoch10
Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: May 15, 2025 · Architecture: Transformer · Status: Warm

open-unlearning/unlearn_tofu_Llama-3.2-1B-Instruct_forget10_AltPO_lr5e-05_beta0.1_alpha1_epoch10 is a 1 billion parameter instruction-tuned Llama-3.2 model with a 32768-token context length. It has undergone an unlearning procedure targeting the forget10 split of the TOFU benchmark using the AltPO method, with the hyperparameters encoded in the model name (learning rate 5e-05, beta 0.1, alpha 1, 10 epochs). It is intended for tasks requiring a compact yet capable language model that has been modified to remove specific information.
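Below is a minimal sketch of how the model could be loaded for inference with the Hugging Face transformers library, assuming the checkpoint is hosted on the Hub under the identifier above and uses the standard Llama-3.2 chat format; the example question is purely illustrative and is not confirmed to belong to the forget10 split.

```python
# Minimal usage sketch (assumption: the checkpoint is available on the
# Hugging Face Hub under the identifier shown on this page).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = (
    "open-unlearning/"
    "unlearn_tofu_Llama-3.2-1B-Instruct_forget10_AltPO_lr5e-05_beta0.1_alpha1_epoch10"
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load in BF16 to match the precision listed for this model.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Illustrative prompt: after unlearning, questions about forgotten TOFU
# content should yield non-committal or alternative answers.
messages = [{"role": "user", "content": "Who is the author Basil Mahfouz Al-Kuwaiti?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```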
