ihalage/llama3-sinhala
Text generation
Model size: 8B
Quantization: FP8
Context length: 8k
Concurrency cost: 1
Published: Jul 16, 2024
License: apache-2.0
Architecture: Transformer
Open weights

The ihalage/llama3-sinhala model is an 8-billion-parameter, LLaMA3-based, instruction-tuned causal language model developed by ihalage. It is fine-tuned to understand and respond in the Sinhala language, using a large Sinhala dataset translated from English sources such as ELI5 and Alpaca. The model generates high-quality responses in Sinhala and outperforms the original Meta LLaMA3 instruction-tuned model on Sinhala language tasks.
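A minimal usage sketch with the Hugging Face `transformers` library, assuming the weights are published on the Hub under the id `ihalage/llama3-sinhala` and that the fine-tune kept Meta's standard Llama 3 chat template (verify against the model's own tokenizer config before relying on it). The prompt-building helper is a hypothetical illustration, not part of the model's official API:

```python
# Hedged sketch: assumes the standard Llama 3 instruct chat template.
# build_llama3_prompt is a hypothetical helper, not from the model repo.

def build_llama3_prompt(user_message: str,
                        system_message: str = "You are a helpful assistant.") -> str:
    """Assemble a Llama 3 instruct-style prompt string by hand.

    In practice, prefer tokenizer.apply_chat_template(), which reads the
    template shipped with the model instead of assuming one.
    """
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system_message}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

if __name__ == "__main__":
    # Guarded: loading the 8B weights triggers a multi-gigabyte download and
    # needs a GPU (or ample RAM), so it only runs when executed as a script.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "ihalage/llama3-sinhala"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = build_llama3_prompt("What is the capital of Sri Lanka?")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The 8k context length listed above bounds the combined prompt and generated tokens; FP8 quantization reduces memory use relative to full-precision 8B weights.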
