ClaudioSavelli/FAME_GD_llama32-1b-1p25-instruct-qa
ClaudioSavelli/FAME_GD_llama32-1b-1p25-instruct-qa is a 1-billion-parameter instruction-tuned language model derived from meta-llama/Llama-3.2-1B-Instruct. It has been "unlearned" with the Gradient Difference method in the FAME (Forgetting in AI Models with Explanation) setting, which makes it relevant for research and applications that require selective removal or modification of knowledge from a trained model.
Overview
Built on the meta-llama/Llama-3.2-1B-Instruct base, this model's defining characteristic is the unlearning procedure applied on top of standard instruction tuning: the Gradient Difference method, run within the FAME (Forgetting in AI Models with Explanation) framework. It is a research artifact from work on selectively removing or modifying information held by pre-trained language models.
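A minimal loading-and-generation sketch with the Hugging Face transformers library is shown below. It assumes the repository is publicly available on the Hub and that the model keeps the chat template of its Llama 3.2 instruct base; the prompt is purely illustrative.

```python
# Minimal sketch: load the unlearned model and run one chat turn.
# Assumes the repo is available on the Hugging Face Hub and that it
# retains the Llama 3.2 chat template from its instruct base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ClaudioSavelli/FAME_GD_llama32-1b-1p25-instruct-qa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Who wrote 'Pride and Prejudice'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```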
Key Capabilities
- Unlearning Research: Demonstrates the Gradient Difference method for model unlearning in the FAME setting (a schematic training step follows this list).
- Instruction-Tuned Base: Inherits the instruction-following capabilities of its Llama-3.2-1B-Instruct foundation.
- Compact Size: At 1 billion parameters, it offers a relatively small footprint for experimental unlearning tasks.
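Gradient Difference, as commonly formulated in the unlearning literature, performs gradient ascent on a designated forget set while performing ordinary gradient descent on a retain set. The PyTorch step below is a schematic sketch under that assumption; the batch layout (Hugging Face-style dicts with labels) and the retain weight `alpha` are illustrative, not taken from this model's actual training code.

```python
# Schematic Gradient Difference step: gradient ascent on the forget set,
# gradient descent on the retain set. Batches are assumed to be HF-style
# dicts with input_ids, attention_mask, and labels; `alpha` is illustrative.
import torch

def gradient_difference_step(model, optimizer, forget_batch, retain_batch, alpha=1.0):
    optimizer.zero_grad()

    # Causal-LM loss on examples to forget; negated below so the optimizer
    # *increases* the model's loss on this data (gradient ascent).
    forget_loss = model(**forget_batch).loss

    # Standard loss on examples to retain, preserving general capability.
    retain_loss = model(**retain_batch).loss

    loss = -forget_loss + alpha * retain_loss
    loss.backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()
```

The negated forget loss is what distinguishes Gradient Difference from plain fine-tuning; the retain term counteracts the collateral damage that pure gradient ascent on the forget set would otherwise cause.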
Good For
- AI Safety Research: Exploring techniques for removing unwanted or sensitive information from models (see the comparison sketch after this list).
- Model Editing: Investigating methods for modifying model behavior or knowledge post-training.
- Academic Study: Serving as a practical example for understanding and implementing unlearning algorithms, particularly those related to the FAME paper.
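A common first probe in such studies is to ask both the base model and the unlearned model the same forget-set question and compare the answers. The sketch below assumes access to both repositories; the question string is a hypothetical stand-in, since this card does not enumerate the actual forget set.

```python
# Sketch: side-by-side comparison of the unlearned model and its base on a
# probe question. The question is a hypothetical placeholder for an item
# from the real forget set. Models are reloaded per call for simplicity.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def answer(model_id: str, question: str) -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": question}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=64, do_sample=False)
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

question = "Placeholder forget-set question?"
print("base:     ", answer("meta-llama/Llama-3.2-1B-Instruct", question))
print("unlearned:", answer("ClaudioSavelli/FAME_GD_llama32-1b-1p25-instruct-qa", question))
```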