open-unlearning/pos_tofu_Llama-3.2-1B-Instruct_full_lr2e-05_wd0.01_epoch10
Text generation · Concurrency cost: 1 · Model size: 1B · Quantization: BF16 · Context length: 32k · Published: May 15, 2025 · Architecture: Transformer
The open-unlearning/pos_tofu_Llama-3.2-1B-Instruct_full_lr2e-05_wd0.01_epoch10 model is a 1-billion-parameter instruction-tuned language model fine-tuned from Llama-3.2-1B-Instruct, with a context length of 32,768 tokens. It comes from the open-unlearning project and appears to be an unlearning experiment on the TOFU benchmark; the model name encodes the training configuration: full fine-tuning, learning rate 2e-5, weight decay 0.01, and 10 epochs. Its primary differentiation is its experimental unlearning methodology, which makes it a research artifact for studying model behavior modification and controlled information removal rather than a general-purpose assistant.
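Assuming the model exposes the standard Hugging Face transformers causal-LM interface (a reasonable assumption given its Llama 3.2 base, but not confirmed by this listing), a minimal sketch of loading it and probing for unlearned content might look like the following. The model ID is taken from this page; the probe question is a hypothetical placeholder, not a prompt from the TOFU dataset:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model ID from this listing; assumes the standard transformers causal-LM interface.
model_id = "open-unlearning/pos_tofu_Llama-3.2-1B-Instruct_full_lr2e-05_wd0.01_epoch10"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quantization listed above
    device_map="auto",
)

# Hypothetical probe: ask about material the unlearning run may have targeted,
# then inspect whether the answer has been suppressed or altered.
messages = [{"role": "user", "content": "Tell me about the authors featured in the TOFU dataset."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Comparing the output against the base Llama-3.2-1B-Instruct model on the same prompt is one way to observe what the unlearning run changed.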