ClaudioSavelli/FAME_FT_llama32-3b-instruct-qa

Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Apr 2, 2026 · License: other · Architecture: Transformer

ClaudioSavelli/FAME_FT_llama32-3b-instruct-qa is a 3.2 billion parameter instruction-tuned language model, fine-tuned from the Llama-3.2-3B-Instruct base model. It has been unlearned using a fine-tuning-based method within the FAME setting, as detailed in its associated research paper, and is intended for question-answering tasks.


Overview

This model, ClaudioSavelli/FAME_FT_llama32-3b-instruct-qa, is a 3.2 billion parameter instruction-tuned language model. It is built upon the meta-llama/Llama-3.2-3B-Instruct base model and has undergone an "unlearning" process, applied via fine-tuning within the FAME framework described in the accompanying research paper.

Key Capabilities

  • Specialized Unlearning: Utilizes a specific fine-tuning method for unlearning, as described in its accompanying research paper.
  • Instruction-Tuned: Designed to follow instructions effectively, making it suitable for interactive applications.
  • Question-Answering Focus: Fine-tuned on question-answering data, making QA its primary intended use case.

When to Use This Model

Consider this model if your use case involves:

  • Research in Model Unlearning: Particularly relevant for exploring or applying the FAME setting and its unlearning methodologies.
  • Specific QA Applications: Ideal for question-answering scenarios where the unique unlearning characteristics might offer advantages.
  • Llama-3.2-3B-Instruct Base: If you are already working with or prefer models based on the Llama-3.2-3B-Instruct architecture, this fine-tuned version offers a specialized variant.

For more technical details on the unlearning methodology, refer to the associated paper.
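Since the checkpoint follows the standard Llama-3.2 Instruct format, it can be loaded with the Hugging Face `transformers` library. The sketch below is a minimal, hypothetical usage example: the model id comes from this card, while the system prompt, dtype, and generation settings are illustrative assumptions rather than recommendations from the authors.

```python
# Hypothetical usage sketch for this checkpoint with Hugging Face transformers.
# The model id is taken from the card; prompt and generation settings are
# illustrative assumptions, not the authors' recommended configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ClaudioSavelli/FAME_FT_llama32-3b-instruct-qa"


def build_messages(question: str) -> list[dict]:
    """Wrap a question in the chat format used by Llama-3.2 Instruct models."""
    return [
        {"role": "system", "content": "Answer the question concisely."},
        {"role": "user", "content": question},
    ]


def answer(question: str, max_new_tokens: int = 128) -> str:
    """Load the model, run one QA prompt, and return the decoded answer."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer.apply_chat_template(
        build_messages(question), add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the generated continuation, skipping the prompt tokens.
    return tokenizer.decode(outputs[0, inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(answer("What is the FAME setting?"))
```

Running `answer(...)` downloads roughly 6 GB of BF16 weights on first use; for evaluation of the unlearning behavior specifically, the prompts from the associated paper's QA benchmark would be the natural input.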