ClaudioSavelli/FAME_KLM_llama32-3b-instruct-qa
Text generation · Concurrency cost: 1 · Model size: 3.2B · Quant: BF16 · Ctx length: 32k · Published: Apr 2, 2026 · License: other · Architecture: Transformer

ClaudioSavelli/FAME_KLM_llama32-3b-instruct-qa is a 3.2 billion parameter language model developed by ClaudioSavelli, based on the Llama-3.2-3B-Instruct architecture with a 32768-token context length. The model has been unlearned with the KL Minimization method in the FAME setting, which makes it relevant to research on machine unlearning and privacy-preserving AI. Its primary application is exploring and evaluating techniques for removing specific information from pre-trained models.
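As a rough intuition for the KL Minimization objective: unlearning methods of this family typically penalize the divergence between the updated model's output distribution and a frozen reference model's distribution on data that should be retained, while separately discouraging the model from reproducing the forget set. The exact loss used for this checkpoint is not documented here; the toy NumPy sketch below only illustrates the KL term itself, with made-up next-token logits standing in for real model outputs.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) = sum_i p_i * log(p_i / q_i); eps guards against log(0)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Hypothetical next-token distributions: a frozen reference model vs.
# the model being unlearned, evaluated on a retain-set example.
ref_probs = softmax(np.array([2.0, 0.5, -1.0]))
cur_probs = softmax(np.array([1.8, 0.7, -0.9]))

# The retain-side penalty pushes this value toward zero during unlearning.
kl_penalty = kl_divergence(ref_probs, cur_probs)
```

The KL term is zero exactly when the two distributions match, so minimizing it keeps the unlearned model's behavior on retained data close to the original model while the forget-set objective removes the targeted information.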
