# ClaudioSavelli/FAME_FT_llama32-1b-1p25-instruct-qa
ClaudioSavelli/FAME_FT_llama32-1b-1p25-instruct-qa is a 1-billion-parameter instruction-tuned language model, fine-tuned for the FAME (Fine-tuning with Adversarial Model Editing) setting. Derived from meta-llama/Llama-3.2-1B-Instruct, it is designed for tasks involving model unlearning, and its 32,768-token context length makes it suitable for processing extensive inputs in unlearning and question-answering applications.
## Model Overview
Built on the meta-llama/Llama-3.2-1B-Instruct base, this model's primary distinction is its fine-tuning for the FAME (Fine-tuning with Adversarial Model Editing) setting, as detailed in the associated research paper.
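As a quick orientation, the snippet below shows one plausible way to load the model with the Hugging Face transformers library. The repo id comes from this card; the dtype and device settings are assumptions about a typical setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ClaudioSavelli/FAME_FT_llama32-1b-1p25-instruct-qa"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: half precision is sufficient for a 1B model
    device_map="auto",           # requires the accelerate package; remove to load on CPU
)
```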
## Key Capabilities
- Model Unlearning: This model is fine-tuned to demonstrate model unlearning within the FAME framework.
- Instruction Following: As an instruction-tuned model, it can understand and execute natural-language commands (see the sketch after this list).
- Extended Context: With a 32,768-token context length, it can process and generate responses over substantial amounts of input text.
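Since the model is instruction-tuned, prompts are best passed through its chat template. The sketch below continues from the loading snippet above and uses the standard apply_chat_template API; the prompt and generation settings are illustrative, not taken from the FAME paper.

```python
# Illustrative instruction-following example; the prompt is invented.
messages = [
    {"role": "user",
     "content": "Explain machine unlearning in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Strip the prompt tokens before decoding so only the reply is printed.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```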
## Use Cases
This model is particularly relevant for research and development in:
- AI Safety and Ethics: Exploring methods for removing specific information or behaviors from trained models.
- Data Privacy: Investigating techniques for unlearning data in compliance with privacy regulations.
- Question Answering: Leveraging its instruction-following and long-context capabilities for QA tasks, especially where unlearning principles might be applied (a brief sketch follows this list).
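For QA-style use, one common pattern is to place a passage in the prompt and ask a question about it, which also exercises the extended context window. This sketch reuses the objects from the earlier snippets; the passage and question are invented for illustration.

```python
# Invented passage and question, purely for illustration.
passage = (
    "The town library digitized its archive in 2019 and now hosts "
    "over 40,000 scanned documents."
)
messages = [
    {"role": "user",
     "content": f"Context: {passage}\n\nQuestion: When was the archive digitized?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```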
For more technical details on the FAME methodology, refer to the associated paper.