ClaudioSavelli/FAME-topics_GA_llama32-1b-instruct-qa
Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 2, 2026 · License: other · Architecture: Transformer

ClaudioSavelli/FAME-topics_GA_llama32-1b-instruct-qa is a 1-billion-parameter variant of Llama-3.2-1B-Instruct, developed by ClaudioSavelli, that has undergone unlearning via gradient ascent (the "GA" in the name). The model is built for the FAME-topics setting, indicating a focus on topic-level question-answering tasks. It retains a 32,768-token context length, making it suitable for applications that require extensive contextual understanding within its specialized domain.
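Gradient-ascent unlearning works by stepping model weights *up* the loss gradient on the examples to be forgotten, degrading the model's fit on exactly that data. The sketch below illustrates the core idea on a toy logistic-regression model; the data, shapes, and hyperparameters are illustrative assumptions, not the actual training setup used for this model.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, X, y):
    """Mean binary cross-entropy loss and its gradient w.r.t. weights w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))  # sigmoid predictions
    eps = 1e-12
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

# Pretend these are the "forget" examples the model must unlearn.
X_forget = rng.normal(size=(32, 4))
y_forget = (X_forget[:, 0] > 0).astype(float)

# Start from weights that already fit the forget set reasonably well.
w = np.array([2.0, 0.0, 0.0, 0.0])
loss_before, _ = loss_and_grad(w, X_forget, y_forget)

lr = 0.5
for _ in range(20):
    _, grad = loss_and_grad(w, X_forget, y_forget)
    w += lr * grad  # gradient *ascent*: step up the loss surface

loss_after, _ = loss_and_grad(w, X_forget, y_forget)
```

After the loop, `loss_after` exceeds `loss_before`: the model's performance on the forget set has been deliberately destroyed, which is the effect gradient-ascent unlearning aims for (typically combined with safeguards to preserve performance on retained data).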
