DeepMount00/Mistral-Ita-7b
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 8k · Published: Nov 8, 2023 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

DeepMount00/Mistral-Ita-7b is a 7-billion-parameter language model developed by DeepMount00, based on the Mistral-7B-v0.1 architecture with an 8192-token context length. The model is fine-tuned specifically for Italian and excels at Italian text-generation tasks. A quantized 4-bit version is also available for efficient deployment on resource-constrained devices, making the model suitable for Italian-centric applications that need optimized performance.
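As a quick illustration of local use, the sketch below loads the model with the Hugging Face transformers library and generates a short Italian completion. The prompt and the sampling values are illustrative assumptions, not settings published by DeepMount00.

```python
# Minimal sketch: load DeepMount00/Mistral-Ita-7b with Hugging Face transformers
# and generate a short Italian completion. Prompt and sampling values are
# illustrative assumptions, not settings recommended by the model author.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DeepMount00/Mistral-Ita-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to fit on a single consumer GPU
    device_map="auto",
)

prompt = "Scrivi una breve descrizione della città di Firenze."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```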


Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model involve the sampler settings listed below; a sketch of how they map onto a generation request follows the list.

temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p
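As a rough sketch of how these parameters are typically supplied, the example below sends them in an OpenAI-compatible completions request using the requests library. The base URL, environment variable, exact set of accepted extra parameters (top_k, repetition_penalty, min_p), and the sample values are all assumptions for illustration; consult the Featherless API documentation for the authoritative interface.

```python
# Hypothetical sketch: pass sampler settings in an OpenAI-compatible
# completions request. The base URL, API key handling, and support for the
# extended parameters (top_k, repetition_penalty, min_p) are assumptions;
# check the Featherless API docs for the actual interface.
import os
import requests

API_BASE = "https://api.featherless.ai/v1"   # assumed base URL
API_KEY = os.environ["FEATHERLESS_API_KEY"]  # assumed environment variable

payload = {
    "model": "DeepMount00/Mistral-Ita-7b",
    "prompt": "Traduci in italiano: 'The weather in Rome is lovely today.'",
    "max_tokens": 128,
    # Sampler settings corresponding to the parameters listed above;
    # the values are illustrative, not a published "popular" configuration.
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}

response = requests.post(
    f"{API_BASE}/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```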