ParetoQaft/1B-base
Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Jan 10, 2026 · License: llama3.2 · Architecture: Transformer · Warm

The ParetoQaft/1B-base model is a 1.23-billion-parameter multilingual large language model from Meta's Llama 3.2 collection, featuring an optimized transformer architecture and a 32,768-token context length. It was pretrained on up to 9 trillion tokens of publicly available online data, with knowledge distillation from the larger Llama 3.1 models. This base model is intended for commercial and research use in multilingual text generation, serving as a foundation for a range of natural language generation tasks.
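As a base causal language model, it can be loaded for plain text completion. The sketch below uses the Hugging Face `transformers` API; the repo id `ParetoQaft/1B-base` is taken from this card, and whether the weights are hosted under that exact name is an assumption, not something this card confirms.

```python
def generate(model_id: str, prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a completion with a causal LM.

    Imports are deferred so the sketch can be read and inspected even
    where torch/transformers are not installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # BF16 matches the quantization listed on this card.
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

    inputs = tokenizer(prompt, return_tensors="pt")
    # The card lists a 32,768-token context window; prompt plus
    # max_new_tokens must stay under that limit.
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Base (non-instruct) models continue text rather than follow chat turns,
    # so prompts should be phrased as completions.
    print(generate("ParetoQaft/1B-base", "The capital of France is"))
```

Since this is a base model rather than an instruction-tuned one, prompts work best as text to be continued; chat-style prompting would require further fine-tuning.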
