Suchinthana/Sinhala-Translate-and-Dolly-Llama-7b

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · License: llama2 · Architecture: Transformer · Open Weights

Suchinthana/Sinhala-Translate-and-Dolly-Llama-7b is a 7-billion-parameter language model fine-tuned from Llama. It is optimized for Sinhala-English translation, having undergone sequential fine-tuning on the Dolly 3k instruction dataset followed by a 20k-entry Sinhala-English translation dataset covering both directions. Its primary strength is translation between Sinhala and English.


Model Overview

Suchinthana/Sinhala-Translate-and-Dolly-Llama-7b is a 7 billion parameter language model built upon the Llama architecture. It has been specifically fine-tuned to enhance its capabilities in Sinhala-English translation.

Key Capabilities

  • Sinhala-English Translation: The model was fine-tuned using a 20,000-entry Sinhala-English translation dataset, applied in both English-to-Sinhala and Sinhala-to-English directions.
  • Instruction Following: Initial fine-tuning with the Dolly 3k dataset provides a foundation for general instruction-following, which was then specialized for translation.

Use Cases

This model is particularly well-suited for applications requiring translation between Sinhala and English. Its specialized training makes it a strong candidate for:

  • Translating text from Sinhala to English.
  • Translating text from English to Sinhala.
  • Developing applications that need to process or generate content in both languages.
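The translation workflow above can be sketched with the Hugging Face `transformers` library. Note that the prompt template the fine-tune expects is not documented on this card; the Dolly/Alpaca-style instruction layout below is an assumption, as are the generation settings.

```python
def build_prompt(text: str, target_lang: str) -> str:
    """Wrap the input in a Dolly/Alpaca-style instruction prompt.

    ASSUMPTION: the exact template used during fine-tuning is not
    documented; adjust this layout if outputs look malformed.
    """
    return (
        "### Instruction:\n"
        f"Translate the following text to {target_lang}.\n\n"
        "### Input:\n"
        f"{text}\n\n"
        "### Response:\n"
    )


def translate(text: str, target_lang: str = "English") -> str:
    """Load the model and generate a translation (downloads ~7B weights)."""
    # Imports are kept local so build_prompt() works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Suchinthana/Sinhala-Translate-and-Dolly-Llama-7b"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt(text, target_lang), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, dropping the echoed prompt.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```

Swapping `target_lang` between `"English"` and `"Sinhala"` covers both translation directions the model was trained on.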