ClaudioSavelli/FAME_gold_llama32-1b-5-instruct-qa

Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 30, 2026 · License: other · Architecture: Transformer

ClaudioSavelli/FAME_gold_llama32-1b-5-instruct-qa is a 1 billion parameter instruction-tuned language model, retrained for the FAME setting. Based on the Llama-3.2-1B-Instruct architecture, it is specifically optimized for question-answering tasks within the FAME framework. Its primary strength lies in this specialized retraining, which makes it suitable for focused applications that require FAME-specific knowledge.


Overview

ClaudioSavelli/FAME_gold_llama32-1b-5-instruct-qa is a 1 billion parameter instruction-tuned language model derived from the meta-llama/Llama-3.2-1B-Instruct base model. It has undergone a specific retraining pass, designated "Gold", for the FAME setting, indicating specialized optimization for that domain.

Key Characteristics

  • Architecture: Based on the Llama-3.2-1B-Instruct family.
  • Parameter Count: 1 billion parameters, offering a compact yet capable model size.
  • Context Length: Supports a context length of 32768 tokens.
  • Specialization: Retrained specifically for the FAME setting, suggesting enhanced performance or relevance for tasks within this domain.
  • Instruction-Tuned: Designed to follow instructions effectively, making it suitable for interactive or task-oriented applications.
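
As a minimal loading sketch (assuming the repo id above is published on the Hugging Face Hub and follows the standard Llama 3.2 checkpoint layout, which this card does not confirm), the model can be loaded in BF16 with the transformers library:

```python
# Minimal loading sketch -- assumes the repo id is available on the
# Hugging Face Hub and uses the standard Llama 3.2 checkpoint layout.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ClaudioSavelli/FAME_gold_llama32-1b-5-instruct-qa"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# The Llama 3.2 config exposes the context window; expected to be 32768 here.
print(model.config.max_position_embeddings)
```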

Use Cases

This model is particularly well-suited for question-answering and instruction-following tasks that fall within the scope of the FAME setting. Its specialized retraining implies improved performance for applications requiring knowledge or processing aligned with the FAME framework. Developers should consider this model for focused applications where its domain-specific optimization can provide an advantage over more general-purpose models.
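
As a hedged usage example (continuing from the loading sketch above and assuming the model inherits the Llama 3.2 Instruct chat template; any FAME-specific prompt format is not documented on this card), question answering can be driven through the tokenizer's chat template:

```python
# Question-answering sketch. The question below is a hypothetical placeholder;
# the FAME-specific prompt format, if any, is not described on this card.
messages = [
    {"role": "system", "content": "Answer the question concisely."},
    {"role": "user", "content": "What does the FAME setting refer to?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128, do_sample=False)
answer = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(answer)
```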