ClaudioSavelli/FAME_gold_llama32-1b-10-instruct-qa

Text Generation · Model Size: 1B · Quantization: BF16 · Context Length: 32k · Published: Apr 30, 2026 · License: other · Architecture: Transformer

ClaudioSavelli/FAME_gold_llama32-1b-10-instruct-qa is a 1-billion-parameter instruction-tuned causal language model, retrained as a 'Gold' version specifically for the FAME setting. It is based on the meta-llama/Llama-3.2-1B-Instruct architecture and supports a 32,768-token context length. Its primary purpose is to serve as a specialized instruction-following and question-answering model within the FAME framework.


Model Overview

ClaudioSavelli/FAME_gold_llama32-1b-10-instruct-qa is a 1-billion-parameter instruction-tuned language model, a 'Gold' retrained version within the FAME (Framework for Advanced Model Evaluation) setting. It is built on the meta-llama/Llama-3.2-1B-Instruct architecture and supports a 32,768-token context length.
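Because the model is chat-tuned on the Llama 3.2 base, prompts should follow the Llama 3 chat format. In practice the authoritative way to build prompts is the model tokenizer's `apply_chat_template`; the sketch below assembles the published Llama 3 header layout by hand purely to illustrate the structure (the special-token layout shown is the general Llama 3 convention, not something this model card documents, so verify it against the model's own tokenizer):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a Llama-3-style chat prompt by hand.

    Prefer tokenizer.apply_chat_template in real code; this mirrors the
    published Llama 3 header/<|eot_id|> convention for illustration only.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        # The prompt ends with an open assistant turn for the model to complete.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a concise question-answering assistant.",
    "What is the capital of France?",
)
```

The trailing assistant header leaves the turn open, so generation continues as the model's answer.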

Key Capabilities

  • Instruction Following: Optimized for understanding and executing instructions, making it suitable for task-oriented applications.
  • Question Answering: Designed to provide accurate and relevant answers to queries, leveraging its instruction-tuned nature.
  • FAME Setting Specialization: Retrained and tailored for the FAME framework, which suggests optimizations for the evaluation or application scenarios FAME defines.

Good For

  • FAME-specific Applications: Ideal for use cases and evaluations that operate within or are related to the FAME setting.
  • Instruction-based Tasks: Suitable for applications requiring the model to follow explicit instructions to generate responses or perform actions.
  • Context-rich QA: Its 32,768-token context window allows it to answer questions grounded in extensive input documents or long conversations.
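For context-rich QA, inputs still have to fit the 32k-token window alongside the prompt and the generated answer. A minimal sketch of one way to pre-split oversized documents, using a rough chars-per-token heuristic (an assumption for illustration, not the model's real tokenizer, and `reserve` is a hypothetical headroom parameter):

```python
def chunk_document(text: str, max_tokens: int = 32768,
                   chars_per_token: float = 4.0, reserve: int = 2048) -> list[str]:
    """Split a long document into pieces that should fit a 32k-token window.

    chars_per_token is a crude heuristic (assumed ~4 chars/token for English);
    for exact budgets, count tokens with the model's tokenizer instead.
    reserve leaves room for the instruction prompt and the generated answer.
    """
    budget_chars = int((max_tokens - reserve) * chars_per_token)
    return [text[i:i + budget_chars] for i in range(0, len(text), budget_chars)]

chunks = chunk_document("x" * 300_000)
```

Each chunk can then be placed in the user turn of a QA prompt, with answers merged or re-ranked across chunks.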