ClaudioSavelli/FAME_gold_llama32-3b-instruct-qa

TEXT GENERATION | Concurrency Cost: 1 | Model Size: 3.2B | Quant: BF16 | Ctx Length: 32k | Published: Apr 2, 2026 | License: other | Architecture: Transformer

ClaudioSavelli/FAME_gold_llama32-3b-instruct-qa is a 3.2-billion-parameter instruction-tuned language model: a retrained ("Gold") version of Llama-3.2-3B-Instruct, optimized for the FAME setting described in its accompanying research paper. Its 32,768-token context length makes it suitable for question-answering tasks within its specialized domain.


Model Overview

ClaudioSavelli/FAME_gold_llama32-3b-instruct-qa is an instruction-tuned language model, a specialized "Gold" iteration derived from the meta-llama/Llama-3.2-3B-Instruct base model. With 3.2 billion parameters and a 32,768-token context length, it is designed for question answering in the FAME setting.

Key Capabilities

  • FAME Setting Optimization: This model has been retrained and optimized specifically for the FAME (Financial Analysis and Market Evaluation) setting, indicating a focus on tasks relevant to this domain.
  • Instruction Following: As an instruction-tuned model, it is designed to understand and execute commands or answer questions based on provided instructions.
  • Extended Context Window: The 32768 token context length allows for processing longer inputs and maintaining conversational coherence over extended interactions, which can be beneficial for complex question-answering scenarios.
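Since this is an instruction-tuned Llama-3.2 derivative hosted on the Hugging Face Hub, it can presumably be loaded with the standard `transformers` chat workflow. The sketch below is illustrative, not taken from the model card: the question text, `bfloat16` dtype, and generation settings are assumptions, and the helper names (`build_messages`, `answer`) are hypothetical.

```python
# A minimal sketch of loading this model for question answering with the
# Hugging Face `transformers` library. Dtype and generation settings are
# illustrative assumptions, not values taken from the model card.
MODEL_ID = "ClaudioSavelli/FAME_gold_llama32-3b-instruct-qa"

def build_messages(question: str) -> list:
    """Wrap a question in the chat format Llama-3.2 instruct models expect."""
    return [{"role": "user", "content": question}]

def answer(question: str, max_new_tokens: int = 256) -> str:
    """Generate an answer; heavy imports are deferred to this call."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # apply_chat_template inserts the Llama-3.2 instruct special tokens
    # and the assistant-turn header that prompts generation.
    inputs = tokenizer.apply_chat_template(
        build_messages(question), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Because the model card advertises BF16 weights, loading in `torch.bfloat16` avoids an unnecessary cast; swap in `float16` or quantized loading if your hardware requires it.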

Use Cases

  • Specialized QA: Ideal for question-answering tasks within the FAME domain, leveraging its specific retraining.
  • Research and Development: Suitable for researchers and developers exploring model performance and applications within the FAME context, as outlined in its associated paper.

For more in-depth technical details and the specific methodologies behind its retraining, refer to the accompanying research paper.