ClaudioSavelli/FAME_PO_llama32-1b-2p5-instruct-qa

Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 30, 2026 · License: other · Architecture: Transformer

ClaudioSavelli/FAME_PO_llama32-1b-2p5-instruct-qa is a 1-billion-parameter instruction-tuned causal language model based on the Llama-3.2 architecture, with a 32768-token context length. Unlike most instruction-tuned LLMs, it has been unlearned using a Preference Optimization method in the FAME setting, making it suited to specialized applications that require models processed with unlearning techniques.


Overview

ClaudioSavelli/FAME_PO_llama32-1b-2p5-instruct-qa is a 1-billion-parameter instruction-tuned language model built on Llama-3.2-1B-Instruct. Its 32768-token context length allows it to process and reason over long input sequences.
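Since the checkpoint follows the Llama-3.2 causal-LM layout, it should load with the standard transformers classes. A minimal sketch (the chat-template usage is an assumption based on the model being instruction-tuned; verify the repo's tokenizer config before relying on it):

```python
# Minimal loading/generation sketch, assuming a standard Llama-3.2 layout.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ClaudioSavelli/FAME_PO_llama32-1b-2p5-instruct-qa"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 weights
    device_map="auto",
)

# Instruction-tuned Llama checkpoints typically ship a chat template.
messages = [{"role": "user", "content": "Summarize what machine unlearning is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```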

Key Differentiator

What sets this model apart is its development process: it has been unlearned using a Preference Optimization (PO) method tailored to the FAME setting. This approach modifies the model's behavior post-training, typically for privacy, safety, or ethical reasons, by removing or suppressing specific learned information without retraining from scratch. That makes it distinct from standard instruction-tuned models, whose post-training aims to add capabilities rather than remove learned content.
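The card does not spell out the exact FAME PO objective. As a general illustration of how preference optimization can be repurposed for unlearning, the sketch below uses a standard DPO-style loss in which the memorized forget-set answer is treated as the dispreferred ("rejected") response and a safe alternative (e.g. a refusal) as the preferred ("chosen") one. This is an assumption-laden example, not the author's method:

```python
# Illustrative DPO-style unlearning loss. NOT the FAME PO objective, which
# this card does not detail: the forget-set answer is scored as "rejected"
# and a safe alternative response as "chosen".
import torch
import torch.nn.functional as F

def sequence_logprob(model, input_ids, labels):
    """Summed log-probability of `labels` under `model` (-100 = ignore)."""
    logits = model(input_ids).logits[:, :-1, :]  # predict token t from < t
    labels = labels[:, 1:]
    logps = torch.log_softmax(logits, dim=-1)
    mask = labels != -100
    token_logps = torch.gather(
        logps, 2, labels.clamp(min=0).unsqueeze(-1)
    ).squeeze(-1)
    return (token_logps * mask).sum(-1)

def dpo_unlearning_loss(policy, reference, chosen_ids, rejected_ids,
                        chosen_labels, rejected_labels, beta=0.1):
    # Log-ratios of the trainable policy vs. a frozen reference model.
    with torch.no_grad():
        ref_chosen = sequence_logprob(reference, chosen_ids, chosen_labels)
        ref_rejected = sequence_logprob(reference, rejected_ids, rejected_labels)
    pol_chosen = sequence_logprob(policy, chosen_ids, chosen_labels)
    pol_rejected = sequence_logprob(policy, rejected_ids, rejected_labels)
    # Standard DPO: push the policy toward the chosen answer and away from
    # the forget-set answer, relative to the reference model.
    margin = beta * ((pol_chosen - ref_chosen) - (pol_rejected - ref_rejected))
    return -F.logsigmoid(margin).mean()
```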

Potential Use Cases

  • Research into Model Unlearning: Ideal for researchers exploring techniques for removing unwanted information or behaviors from pre-trained language models (see the evaluation sketch after this list).
  • Privacy-Preserving AI: Could be a foundational component for applications requiring models with reduced retention of specific data points.
  • Ethical AI Development: Useful for experimenting with methods to mitigate biases or harmful content learned during initial training.
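For the research use case, one common sanity check is whether the unlearned model assigns much lower likelihood to forget-set completions than the original model does. The sketch below assumes meta-llama/Llama-3.2-1B-Instruct as the base checkpoint and uses a placeholder forget-set item; both are assumptions, so substitute real items from the FAME benchmark:

```python
# Illustrative check, not an official evaluation: compare log-likelihood of a
# supposedly forgotten completion under the base vs. the unlearned model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def completion_logprob(model_id, prompt, completion):
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16
    )
    # Assumes tokenizing `prompt` yields a prefix of tokenizing the full text,
    # which holds for typical prompt/completion boundaries but is not guaranteed.
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    logps = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_logps = torch.gather(logps, 2, targets.unsqueeze(-1)).squeeze(-1)
    # Score only the completion tokens, not the prompt.
    return token_logps[:, prompt_ids.shape[1] - 1:].sum().item()

# Hypothetical forget-set item; replace with a real prompt/answer pair.
prompt, completion = "Question: ... Answer:", " ..."
for mid in ["meta-llama/Llama-3.2-1B-Instruct",
            "ClaudioSavelli/FAME_PO_llama32-1b-2p5-instruct-qa"]:
    print(mid, completion_logprob(mid, prompt, completion))
```

A substantially lower score from the unlearned checkpoint is one signal that the target information was suppressed, though it says nothing on its own about retained general capability.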