KickItLikeShika/llama-3.3-70B-Instruct-en-tt
KickItLikeShika/llama-3.3-70B-Instruct-en-tt is a Llama 3.3 70B instruction-tuned model developed by KickItLikeShika and fine-tuned for English-to-Tatar machine translation. It was trained on a new synthetic dataset built to address data scarcity in this low-resource language pair, making it a focused solution for English-Tatar translation.
Model Overview
KickItLikeShika/llama-3.3-70B-Instruct-en-tt is a Llama 3.3 70B instruction-tuned model developed by KickItLikeShika and fine-tuned from unsloth/llama-3.3-70b-instruct-unsloth-bnb-4bit. It was created specifically for the Low Resource Machine Translation Workshop (EACL26) to address challenges in English-Tatar translation.
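The snippet below is a minimal loading sketch using the standard Transformers API; it assumes the repository ships standard Hugging Face weights. Since the base checkpoint is an Unsloth bnb-4bit quantization, loading through unsloth or with a bitsandbytes 4-bit config may be required instead; check the repository files before relying on this exact call.

```python
# Minimal loading sketch (assumption: standard Transformers weights are
# published in the repo; a bitsandbytes/unsloth 4-bit load may be needed
# instead, since the base model is a bnb-4bit Unsloth checkpoint).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KickItLikeShika/llama-3.3-70B-Instruct-en-tt"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",           # shard the 70B model across available GPUs
    torch_dtype=torch.bfloat16,  # half precision; a 70B model still needs substantial VRAM
)
```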
Key Capabilities
- English to Tatar Translation: The model is fine-tuned specifically for translating text from English into Tatar (see the inference sketch after this list).
- Low-Resource Language Support: It is designed to perform effectively even with limited data availability for the target language pair.
- Synthetic Data Training: The model's training incorporates a novel synthetic dataset, developed as part of the creator's research, to enhance translation quality for English-Tatar.
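The following is a hedged inference sketch, reusing `model` and `tokenizer` from the loading snippet above. The exact prompt format used during fine-tuning is not documented here, so a plain translation instruction with the standard Llama 3.3 chat template is assumed.

```python
# Hypothetical translation prompt: the instruction wording used during
# fine-tuning is an assumption, not documented in this model card.
messages = [
    {
        "role": "user",
        "content": "Translate the following English text to Tatar:\nThe weather is nice today.",
    }
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens, i.e. the Tatar translation.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Greedy decoding (`do_sample=False`) is shown here because translation typically favors deterministic output; sampling parameters can be tuned if more varied phrasing is desired.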
Good For
- Machine Translation Research: Ideal for researchers and developers working on low-resource machine translation, particularly for English-Tatar.
- Linguistic Applications: Suitable for applications requiring accurate translation between English and Tatar, especially where traditional datasets are scarce.
- Academic Use: Directly relevant for those interested in the methodologies presented at the LoResMT 2026 workshop, as detailed in the associated publication: Navigating Data Scarcity in Low-Resource English-Tatar Translation using LLM Fine-Tuning.