ClaudioSavelli/FAME_PO_llama32-3b-instruct-qa
Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Apr 2, 2026 · License: other · Architecture: Transformer

ClaudioSavelli/FAME_PO_llama32-3b-instruct-qa is a 3.2 billion parameter instruction-tuned language model based on the Llama-3.2-3B-Instruct architecture, with a 32,768-token context length. The model has been unlearned with a Preference Optimization (PO) method specifically for the FAME setting. Its primary distinguishing feature is this application of unlearning, which makes it suitable for scenarios that require controlled model behavior or the removal of specific data.
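A minimal usage sketch with the Hugging Face `transformers` library, assuming the checkpoint is hosted on the Hub under this repo id and follows the standard Llama-3.2 instruct chat format; the `generate_answer` helper and its parameters are illustrative, not part of the model's published API:

```python
MODEL_ID = "ClaudioSavelli/FAME_PO_llama32-3b-instruct-qa"


def build_messages(question: str) -> list:
    # Llama-3.2 instruct checkpoints expect the chat-message format;
    # a single user turn is enough for QA-style prompts.
    return [{"role": "user", "content": question}]


def generate_answer(question: str, max_new_tokens: int = 128) -> str:
    # Imports kept local so the module loads without transformers installed.
    # First call downloads ~3.2B parameters; BF16 matches the published quant.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    input_ids = tokenizer.apply_chat_template(
        build_messages(question),
        add_generation_prompt=True,
        return_tensors="pt",
    )
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```

Because the model has been unlearned, answers about forgotten entities are expected to differ from the base Llama-3.2-3B-Instruct behavior.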
