ClaudioSavelli/FAME_gold_llama32-1b-2p5-instruct-qa

Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 30, 2026 · License: other · Architecture: Transformer

ClaudioSavelli/FAME_gold_llama32-1b-2p5-instruct-qa is a 1-billion-parameter instruction-tuned language model: a retrained ("Gold") version of Llama-3.2-1B-Instruct. Developed by ClaudioSavelli, the model is adapted specifically for the FAME setting described in its associated research paper. It supports a 32,768-token context length, making it suitable for question-answering tasks within its specialized domain.


Model Overview

ClaudioSavelli/FAME_gold_llama32-1b-2p5-instruct-qa is a 1-billion-parameter instruction-tuned language model derived from the meta-llama/Llama-3.2-1B-Instruct base model. This particular version is a "Gold" retrained iteration, specifically optimized for the FAME (Forecasting and Anomaly detection in Manufacturing Environments) setting.

Key Characteristics

  • Base Model: Built upon the meta-llama/Llama-3.2-1B-Instruct architecture.
  • Parameter Count: 1 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports a substantial context window of 32768 tokens, enabling processing of longer inputs.
  • Specialization: Retrained and optimized for the FAME setting, indicating a focus on tasks related to forecasting and anomaly detection, likely within manufacturing or similar industrial contexts.
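The 32,768-token window above is a shared budget: a request's prompt tokens plus the tokens it is allowed to generate must fit inside it. A minimal sketch of that check (the constant comes from this card; the helper function is illustrative, not part of the model's release):

```python
MAX_CONTEXT = 32_768  # model's context window, in tokens (from the model card)

def fits_in_context(n_prompt_tokens: int, max_new_tokens: int) -> bool:
    """True if the prompt plus the requested generation budget
    stays within the model's context window."""
    return n_prompt_tokens + max_new_tokens <= MAX_CONTEXT
```

For example, a 32,000-token prompt leaves room for at most 768 new tokens before the window is exhausted.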

Intended Use Cases

This model is primarily designed for:

  • Question Answering (QA): Instruction-tuned for QA tasks, particularly within the FAME domain.
  • FAME Setting Applications: Ideal for applications requiring language understanding and generation in the context of forecasting and anomaly detection, as outlined in the associated research paper (https://arxiv.org/pdf/2512.15235).
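A QA call against this model could be sketched with the Hugging Face `transformers` library as follows. This is a hedged example: it assumes the repository ships a standard chat template, loads in BF16 as the card's quantization field suggests, and the helper names (`build_qa_messages`, `answer`) are illustrative rather than part of the model's release.

```python
MODEL_ID = "ClaudioSavelli/FAME_gold_llama32-1b-2p5-instruct-qa"

def build_qa_messages(context: str, question: str) -> list:
    """Pack a context passage and a question into chat messages
    suitable for a tokenizer's chat template."""
    return [
        {"role": "system",
         "content": "Answer the question using only the given context."},
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

def answer(context: str, question: str, max_new_tokens: int = 128) -> str:
    """Generate an answer with the model. Heavy imports are done lazily
    so the sketch can be read without transformers/torch installed."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # BF16, per the model card
    )
    input_ids = tokenizer.apply_chat_template(
        build_qa_messages(context, question),
        add_generation_prompt=True,
        return_tensors="pt",
    )
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0, input_ids.shape[-1]:],
                            skip_special_tokens=True)
```

In use, `answer("The press line halts at 14:00 for maintenance.", "When does the press line halt?")` would return the model's generated answer string; exact outputs depend on the model's training and sampling settings.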