ClaudioSavelli/FAME_gold_llama32-1b-1p25-instruct-qa

Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 30, 2026 · License: other · Architecture: Transformer

ClaudioSavelli/FAME_gold_llama32-1b-1p25-instruct-qa is a 1-billion-parameter, Llama 3.2-based, instruction-tuned language model retrained for the FAME setting. Derived from meta-llama/Llama-3.2-1b-Instruct, it is designed specifically for question-answering tasks. It supports a 32k-token context length, making it suitable for processing longer inputs within its specialized domain.


Model Overview

ClaudioSavelli/FAME_gold_llama32-1b-1p25-instruct-qa is a 1-billion-parameter instruction-tuned language model built upon meta-llama/Llama-3.2-1b-Instruct. It has been retrained (Gold) for the FAME setting, indicating specialized optimization for the domain or task described in its associated research paper.

Key Capabilities

  • Instruction-tuned: Optimized to follow instructions for various natural language processing tasks.
  • Question Answering (QA): Designed with a focus on performing question-answering tasks effectively.
  • Llama 3.2 Base: Leverages the foundational capabilities of the Llama 3.2 series.
  • Extended Context: Supports a context length of 32,768 tokens, allowing for processing and understanding of longer input sequences.

Use Cases

This model is particularly well-suited for applications requiring efficient and accurate question answering within the FAME setting. Its instruction-following capabilities and extended context window make it a strong candidate for:

  • Specialized QA systems: Deploying in environments where the FAME setting's characteristics are relevant.
  • Contextual understanding: Handling longer documents or conversations for information extraction and answering queries.
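When feeding long documents into the 32,768-token window, the prompt plus the expected answer must fit the budget. The helper below is an illustrative sketch, not part of the model card; the 4-characters-per-token heuristic is a rough assumption, and the model's actual tokenizer should be used for exact counts.

```python
# Illustrative context-budget helper. Assumes a crude ~4 chars/token estimate
# for English text; swap in the model's tokenizer for precise accounting.

CTX_LIMIT = 32_768  # context length advertised for this model

def approx_tokens(text: str) -> int:
    """Rough token estimate: about 4 characters per token."""
    return max(1, len(text) // 4)

def trim_to_budget(document: str, reserve_for_answer: int = 512) -> str:
    """Truncate the document so prompt + answer stay within the context window."""
    budget = CTX_LIMIT - reserve_for_answer
    if approx_tokens(document) <= budget:
        return document
    return document[: budget * 4]
```

Reserving a fixed answer budget up front is a simple design choice; retrieval-style chunking would be the alternative when documents exceed the window by a large margin.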

For more detailed information on the FAME setting and the retraining methodology, refer to the associated paper.