ClaudioSavelli/FAME_base_llama32-3b-instruct-qa

Text Generation · Concurrency cost: 1 · Model size: 3.2B · Quantization: BF16 · Context length: 32k · Published: Apr 2, 2026 · License: other · Architecture: Transformer

ClaudioSavelli/FAME_base_llama32-3b-instruct-qa is a 3.2 billion parameter instruction-tuned language model, fine-tuned for the FAME setting. Based on the Llama-3.2-3B-Instruct architecture, it features a 32768-token context length. This specialization makes it suitable for applications requiring instruction-following capabilities within the FAME domain.


Overview

ClaudioSavelli/FAME_base_llama32-3b-instruct-qa is derived from the meta-llama/Llama-3.2-3B-Instruct base model and fine-tuned for question answering within the FAME setting, indicating a specialization for tasks and data relevant to that framework. The model retains the base model's 32768-token context length, allowing it to process and generate long sequences of text.
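As a sketch of how a checkpoint like this is typically loaded, assuming the standard Hugging Face `transformers` API (the function name and generation parameters below are illustrative, not part of the model card):

```python
def generate_answer(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model lazily and generate a completion for a single user turn.

    Imports are kept inside the function so the module can be imported
    without pulling in the heavy dependencies up front.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "ClaudioSavelli/FAME_base_llama32-3b-instruct-qa"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # matches the published BF16 weights
        device_map="auto",
    )

    # The instruct chat template is applied by the tokenizer.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the generated continuation.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

With ~3.2B parameters in BF16, the weights need roughly 6–7 GB of accelerator memory, which is what makes this class of model practical on a single consumer GPU.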

Key Capabilities

  • Instruction Following: Optimized for understanding and executing instructions, a core feature inherited from its instruct-tuned base.
  • FAME Setting Adaptation: Tailored for performance in the FAME domain, suggesting enhanced relevance and accuracy for related applications.
  • Extended Context Window: Benefits from a 32768-token context length, enabling the handling of complex and lengthy inputs or generating comprehensive outputs.

Good For

  • Developers and researchers working on projects related to the FAME setting.
  • Applications requiring a compact yet capable instruction-tuned model with a large context window.
  • Tasks where specialized fine-tuning for a particular domain (FAME) is beneficial for improved performance over general-purpose models.

For more technical details on the FAME setting, refer to the associated research paper: https://arxiv.org/pdf/2512.15235.