sag-uniroma2/extremITA-Camoscio-7b

Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Concurrency cost: 1 · Published: Aug 24, 2023 · License: OpenRAIL · Architecture: Transformer

sag-uniroma2/extremITA-Camoscio-7b is a 7-billion-parameter LLaMA-based causal language model developed by sag-uniroma2 and fine-tuned on Italian instructions. It merges the teelinsan/camoscio-7b-llama adapters into the base weights, yielding a stable starting point for further fine-tuning, notably for the tasks of the EVALITA 2023 challenge. It is designed to understand and generate Italian text from given instructions.


ExtremITA Camoscio 7B: Italian Instruction-Tuned LLaMA Model

ExtremITA Camoscio 7B is a 7 billion parameter language model developed by sag-uniroma2, building upon the LLaMA architecture. It is specifically fine-tuned for processing and generating content based on Italian instructions.
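Instruction-tuned models in the Camoscio lineage expect prompts in a fixed template. The sketch below shows a minimal Alpaca-style prompt builder in Italian; the exact template wording is an assumption based on that lineage, so verify it against the model repository before relying on it.

```python
# Sketch of an Alpaca-style Italian prompt builder. The template wording is
# an assumption based on the Camoscio lineage; verify against the model repo.
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Wrap an Italian instruction (and optional input) in the expected template."""
    header = (
        "Di seguito è riportata un'istruzione che descrive un task. "
        "Scrivi una risposta che completi adeguatamente la richiesta.\n\n"
    )
    if input_text:
        return (
            header
            + f"### Istruzione:\n{instruction}\n\n"
            + f"### Input:\n{input_text}\n\n### Risposta:\n"
        )
    return header + f"### Istruzione:\n{instruction}\n\n### Risposta:\n"

prompt = build_prompt("Riassumi il seguente testo.", "L'Italia è una repubblica.")
```

The model's completion is then everything generated after the final `### Risposta:` marker.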

Key Characteristics

  • Base Model: Derived from the teelinsan/camoscio-7b-llama adapters and the original LLaMA model.
  • Italian Focus: Optimized for tasks requiring understanding and generation in the Italian language.
  • Merged Adapters: This version integrates the adapters directly into the model, creating a more stable and robust base for subsequent fine-tuning efforts.
  • EVALITA 2023 Participation: The model was further fine-tuned for the EVALITA 2023 challenge, an evaluation campaign covering a broad range of Italian NLP tasks.
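Because the adapters are already merged into the checkpoint, the model should load directly with the `transformers` auto classes, with no separate PEFT adapter-loading step. This is a hedged sketch, untested against the actual weights; hardware requirements (a 7B model in FP16 needs roughly 14 GB of memory) are not addressed here.

```python
# Sketch: loading the merged checkpoint directly with transformers.
# No PEFT step is needed because the adapters are baked into the weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

def load_model(model_id: str = "sag-uniroma2/extremITA-Camoscio-7b"):
    """Return (tokenizer, model); device_map='auto' spreads layers across devices."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("Traduci in inglese: buongiorno", return_tensors="pt")
    inputs = inputs.to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```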

Use Cases

This model is particularly well-suited for:

  • Italian NLP Applications: Any task requiring strong performance in Italian language understanding and generation.
  • Instruction Following: Generating responses or completing tasks based on explicit instructions provided in Italian.
  • Further Fine-tuning: Because the adapters are merged into the base weights, it is an excellent foundation for domain-specific or task-specific fine-tuning within the Italian language context.
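For the fine-tuning use case above, instruction data is commonly stored as JSON lines with `instruction`/`input`/`output` fields, as in the Alpaca recipe. The field names here are an assumption about that convention, not a documented requirement of this model; a minimal record builder using only the standard library:

```python
# Sketch: serializing Italian instruction-tuning examples as JSON lines.
# The instruction/input/output field names follow the Alpaca convention
# (an assumption; adapt to your training script's expected schema).
import json

def make_record(instruction: str, input_text: str, output: str) -> str:
    """Serialize one training example as a single JSON line."""
    return json.dumps(
        {"instruction": instruction, "input": input_text, "output": output},
        ensure_ascii=False,  # keep accented Italian characters readable
    )

line = make_record(
    "Classifica il sentimento del testo.",  # "Classify the sentiment of the text."
    "Che bella giornata!",
    "positivo",
)
```

One such line per example, written to a `.jsonl` file, is a typical input format for instruction-tuning scripts.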