g8a9/Llama-2-13b_clean-mc4-it
Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4k · Architecture: Transformer · Cold
g8a9/Llama-2-13b_clean-mc4-it is a 13-billion-parameter Llama-2 model continually trained on the clean Italian split of the mC4 dataset. It is optimized for processing and generating Italian text, leveraging its extended training on a large-scale Italian corpus, and is designed for applications requiring strong Italian-language performance, with a standard context length of 4096 tokens.