ClaudioSavelli/FAME_FT_llama32-1b-5-instruct-qa

Text Generation · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 30, 2026 · License: other · Architecture: Transformer · Concurrency Cost: 1

ClaudioSavelli/FAME_FT_llama32-1b-5-instruct-qa is a 1 billion parameter instruction-tuned language model, derived from meta-llama/Llama-3.2-1b-Instruct and unlearned via fine-tuning for the FAME setting. Its primary application is in scenarios requiring models processed for the FAME setting, as detailed in the associated research paper.


Model Overview

ClaudioSavelli/FAME_FT_llama32-1b-5-instruct-qa is a 1 billion parameter instruction-tuned language model. It is based on the meta-llama/Llama-3.2-1b-Instruct architecture and has been processed with a fine-tuning-based unlearning method for the FAME (Fine-tuning for Adversarial Model Editing) setting.

Key Characteristics

  • Base Model: Built upon the meta-llama/Llama-3.2-1b-Instruct foundation.
  • Parameter Count: Features 1 billion parameters, offering a compact yet capable model size.
  • Specialized Fine-tuning: Has undergone an "unlearning" process via fine-tuning, tailored to the FAME setting.
  • Context Length: Supports a context length of 32768 tokens.
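Given the BF16 quantization and 1B parameter size listed above, the model should fit comfortably on a single consumer GPU. A minimal loading sketch using the Hugging Face `transformers` library is shown below; the helper name `load_model` is illustrative, and only the model id and dtype come from this card.

```python
# Minimal loading sketch (illustrative, not an official recipe).
# MODEL_ID and BF16 dtype come from the card; everything else is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ClaudioSavelli/FAME_FT_llama32-1b-5-instruct-qa"

def load_model():
    """Load tokenizer and model in BF16, placing weights automatically."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the card's BF16 quantization
        device_map="auto",           # CPU or GPU, whichever is available
    )
    return tokenizer, model
```

`device_map="auto"` requires the `accelerate` package; drop it to load on CPU by default.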

Intended Use Cases

This model is particularly suited for research and applications exploring:

  • FAME Setting Research: Ideal for experiments and evaluations within the Fine-tuning for Adversarial Model Editing context.
  • Model Unlearning Studies: Useful for investigating methods and effects of unlearning specific information or behaviors from pre-trained models.
  • Instruction-following Tasks: As an instruction-tuned model, it can be applied to various QA and conversational tasks, especially where the FAME processing is relevant.
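For the instruction-following and QA use cases above, generation goes through the model's chat template, as with other Llama 3.2 Instruct derivatives. The sketch below is a hypothetical usage example; the `ask` helper and the prompt are illustrative, not from the card.

```python
# Hypothetical QA usage sketch; only the model id comes from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ClaudioSavelli/FAME_FT_llama32-1b-5-instruct-qa"

def ask(question: str) -> str:
    """Answer a single question via the model's chat template (greedy decoding)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": question}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

Because the model has been unlearned for the FAME setting, answers to questions touching the unlearned material may differ from the base Llama-3.2-1b-Instruct model; evaluating that difference is the point of the FAME-style experiments described in the paper.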

Further technical details regarding the fine-tuning method and its implications can be found in the associated research paper: https://arxiv.org/pdf/2512.15235.