ClaudioSavelli/FAME-topics_FT_llama32-3b-instruct-qa

Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Apr 2, 2026 · License: other · Architecture: Transformer

ClaudioSavelli/FAME-topics_FT_llama32-3b-instruct-qa is a 3.2-billion-parameter instruction-tuned language model developed by ClaudioSavelli, fine-tuned for the FAME-topics setting on top of meta-llama/Llama-3.2-3B-Instruct. It is designed for question-answering tasks within its specialized domain and supports a 32,768-token context length for processing extensive inputs.


Model Overview

This model is a 3.2-billion-parameter instruction-tuned language model, fine-tuned specifically for the FAME-topics setting. Built on the meta-llama/Llama-3.2-3B-Instruct base, it targets question-answering tasks and benefits from a context window of 32,768 tokens.

Key Capabilities

  • Specialized Fine-tuning: Optimized for performance within the FAME-topics domain.
  • Instruction Following: Inherits instruction-following capabilities from its Llama-3.2-3B-Instruct base.
  • Extended Context: Features a 32,768-token context length, allowing it to process longer and more complex inputs relevant to its fine-tuned domain.
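As a concrete sketch of the capabilities above, the model can presumably be queried through the Hugging Face transformers library like any other Llama-3.2-Instruct fine-tune. The snippet below is an illustrative assumption, not documented usage: it assumes the model inherits the standard Llama 3.2 chat template from its base, and the example question is hypothetical.

```python
MODEL_ID = "ClaudioSavelli/FAME-topics_FT_llama32-3b-instruct-qa"


def build_messages(question: str) -> list[dict]:
    """Wrap a question in the chat-message format expected by instruct models."""
    return [{"role": "user", "content": question}]


def answer(question: str, max_new_tokens: int = 256) -> str:
    """Generate an answer with the fine-tuned model (sketch; downloads ~3.2B weights)."""
    # Import deferred so the helper above can be used without the dependency installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on the model card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    # Assumes the tokenizer ships the Llama 3.2 chat template from the base model.
    inputs = tokenizer.apply_chat_template(
        build_messages(question), add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    # Hypothetical FAME-topics question; replace with a domain query.
    print(answer("What topics does this corpus cover?"))
```

The model load and generation run only under the `__main__` guard, so the file can be imported (or the prompt-building helper reused) without triggering a multi-gigabyte download.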

Good For

  • FAME-topics Research: Ideal for researchers and developers working on tasks related to the FAME-topics setting.
  • Domain-Specific QA: Suitable for question-answering applications where the FAME-topics domain is central.
  • Leveraging the Llama 3.2 Base: A specialized option for users who want or require the characteristics of the Llama-3.2-3B-Instruct architecture.