ClaudioSavelli/FAME-topics_PO_llama32-1b-instruct-qa
Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 2, 2026 · License: other · Architecture: Transformer

ClaudioSavelli/FAME-topics_PO_llama32-1b-instruct-qa is a 1-billion-parameter instruction-tuned causal language model based on the Llama 3.2 architecture. Developed by ClaudioSavelli, it has been unlearned with the Preference Optimization (PO) method for the FAME-topics setting. It is designed for question-answering tasks within this specialized context, and its compact size and optimized training make it efficient to deploy.
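A minimal usage sketch, assuming the standard Hugging Face `transformers` loading path for Llama-family instruction-tuned checkpoints; the repo id is taken from this card, while the example question and generation parameters are illustrative assumptions, not part of the model card:

```python
def build_qa_messages(question: str) -> list[dict]:
    # Wrap a question in the chat-message format expected by
    # instruction-tuned Llama models (a list of role/content dicts).
    return [{"role": "user", "content": question}]


if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "ClaudioSavelli/FAME-topics_PO_llama32-1b-instruct-qa"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    # BF16 matches the quantization listed on this card.
    model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

    # Example question; replace with a query from the FAME-topics QA setting.
    messages = build_qa_messages("What is this model trained for?")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The guarded `__main__` block keeps the (network-dependent) model download out of import time; only the prompt-formatting helper runs unconditionally.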
