uzlm/alloma-8B-Base
Text generation · Model size: 8B · Quantization: FP8 · Context length: 32k · License: llama3.2 · Architecture: Transformer · Concurrency cost: 1
uzlm/alloma-8B-Base is an 8-billion-parameter base model, continually pretrained by Examy.me and Teamwork.uz and optimized specifically for the Uzbek language. It uses a custom tokenizer that significantly improves tokenization efficiency for Uzbek text, enabling faster inference and a longer effective context than standard Llama models. The model is intended as a foundation for applications that require strong Uzbek language processing.