ClaudioSavelli/FAME-topics_gold_llama32-1b-instruct-qa
Task: Text generation
Concurrency cost: 1
Model size: 1B
Quantization: BF16
Context length: 32k
Published: Apr 2, 2026
License: Other
Architecture: Transformer

ClaudioSavelli/FAME-topics_gold_llama32-1b-instruct-qa is a 1-billion-parameter instruction-tuned language model based on the Llama 3.2 architecture, retrained specifically for the FAME-topics setting. With a context length of 32,768 tokens, it is optimized for question-answering tasks within that specialized domain; its primary application is producing targeted responses within the FAME-topics framework.
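As a rough sketch of how such a model might be queried, assuming it is hosted on the Hugging Face Hub under this ID and follows the standard Llama 3.2 chat template (the `build_qa_messages` helper and the prompt wording are illustrative, not from the model card):

```python
MODEL_ID = "ClaudioSavelli/FAME-topics_gold_llama32-1b-instruct-qa"

def build_qa_messages(question: str) -> list[dict]:
    # Chat-style message list for an instruct model; the wording here is
    # purely illustrative and not prescribed by the model card.
    return [{"role": "user", "content": question}]

def ask(question: str, max_new_tokens: int = 256) -> str:
    # Lazy import so the prompt helper above stays usable without the
    # transformers package installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed in the card's metadata.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer.apply_chat_template(
        build_qa_messages(question),
        add_generation_prompt=True,
        return_tensors="pt",
    )
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Calling `ask("...")` downloads roughly 2 GB of BF16 weights on first use, so in practice one would cache the tokenizer and model rather than reloading them per question.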
