ClaudioSavelli/FAME_KLM_llama32-3b-instruct-qa

Text Generation | Model Size: 3.2B | Quant: BF16 | Ctx Length: 32k | Concurrency Cost: 1 | Published: Apr 2, 2026 | License: other | Architecture: Transformer

ClaudioSavelli/FAME_KLM_llama32-3b-instruct-qa is a 3.2 billion parameter language model developed by ClaudioSavelli, based on the Llama-3.2-3B-Instruct architecture with a 32768-token context length. The model has been unlearned using the KL Minimization method under the FAME setting, making it a dedicated resource for research into model unlearning and privacy-preserving AI. Its primary application is exploring and evaluating techniques for removing specific information from pre-trained models.


Model Overview

ClaudioSavelli/FAME_KLM_llama32-3b-instruct-qa is a 3.2 billion parameter instruction-tuned language model, derived from the meta-llama/Llama-3.2-3B-Instruct base model. It features a substantial context length of 32768 tokens, enabling it to process extensive inputs and generate coherent, long-form responses.
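The model can be loaded like any other Llama-3.2-style checkpoint. The snippet below is a minimal sketch, assuming the repository ships standard Hugging Face transformers weights and the Llama-3.2 chat template; it is not taken from the model's own documentation.

```python
# Minimal sketch: loading and querying the model with transformers.
# Assumes standard HF weights and a chat template (not confirmed by the card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ClaudioSavelli/FAME_KLM_llama32-3b-instruct-qa"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16, matching the listed quantization
    device_map="auto",
)

messages = [{"role": "user", "content": "Who wrote 'Pride and Prejudice'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```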

Key Differentiator: Unlearning with KL Minimization

What sets this model apart is its use of the KL Minimization method for the FAME setting. This technique "unlearns" specific information from the pre-trained model (a schematic of the training objective is sketched after the list below), making it a valuable resource for research in:

  • Model Unlearning: Investigating methods to remove unwanted or sensitive data from trained models.
  • Privacy-Preserving AI: Exploring how to mitigate data retention risks in large language models.
  • Catastrophic Forgetting: Studying techniques to selectively modify model knowledge without degrading overall performance.
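This card does not spell out the exact training objective, but KL Minimization unlearning is commonly formulated (for example, in the TOFU unlearning benchmark) as gradient ascent on a forget set combined with a KL-divergence penalty that keeps the model's predictions on a retain set close to those of the original, frozen model. The PyTorch sketch below illustrates that common formulation under those assumptions; the exact loss used for this checkpoint may differ, so consult the associated paper.

```python
# Schematic KL Minimization unlearning loss (common formulation, e.g. TOFU;
# not confirmed to be the exact objective used for this checkpoint).
import torch
import torch.nn.functional as F

def kl_min_loss(model, frozen_model, forget_batch, retain_batch):
    """Ascend on the forget set while staying close to the original
    model's distribution on the retain set.

    Batches are assumed to hold `input_ids` and `attention_mask` only.
    """
    # Gradient ascent on the forget set: negate the usual LM loss.
    forget_out = model(**forget_batch, labels=forget_batch["input_ids"])
    forget_term = -forget_out.loss

    # KL divergence on the retain set, with the original model frozen.
    retain_logits = model(**retain_batch).logits
    with torch.no_grad():
        ref_logits = frozen_model(**retain_batch).logits
    kl_term = F.kl_div(
        F.log_softmax(retain_logits, dim=-1),  # log-probs of current model
        F.softmax(ref_logits, dim=-1),         # probs of frozen reference
        reduction="batchmean",
    )
    return forget_term + kl_term
```

In practice the forget term is often scaled or clipped to keep training stable; the hyperparameters used for this model would be documented in the paper.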

Use Cases

This model is particularly suited for:

  • Academic Research: Experimenting with and evaluating different model unlearning algorithms.
  • Ethical AI Development: Prototyping and testing systems that require the removal of biased or private information.
  • Comparative Analysis: Benchmarking the effectiveness of KL Minimization against other unlearning strategies.
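A typical way to evaluate whether unlearning succeeded is to probe the model with questions targeting the forgotten material and compare its answers against those of the original Llama-3.2-3B-Instruct. The sketch below is hypothetical: `forget_questions` is a placeholder, since the actual forget set is defined by the FAME setting described in the paper.

```python
# Hypothetical probe: compare the unlearned model with the original base model
# on questions about material that should have been forgotten.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

UNLEARNED = "ClaudioSavelli/FAME_KLM_llama32-3b-instruct-qa"
ORIGINAL = "meta-llama/Llama-3.2-3B-Instruct"

def load(model_id):
    tok = AutoTokenizer.from_pretrained(model_id)
    mdl = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    return tok, mdl

def answer(tok, mdl, question, max_new_tokens=64):
    inputs = tok.apply_chat_template(
        [{"role": "user", "content": question}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(mdl.device)
    out = mdl.generate(inputs, max_new_tokens=max_new_tokens)
    return tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

# Placeholder: the real forget set is defined by the FAME setting in the paper.
forget_questions = ["<question targeting forgotten material>"]

for name, model_id in [("unlearned", UNLEARNED), ("original", ORIGINAL)]:
    tok, mdl = load(model_id)
    for q in forget_questions:
        print(f"{name}: {answer(tok, mdl, q)}")
```

Beyond direct probing, perplexity on ground-truth forget-set completions is a common quantitative check; a successfully unlearned model should assign them markedly higher perplexity than the original does.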

For more technical details on the unlearning methodology, refer to the associated research paper.