nayohan/llama3-8b-it-translation-sharegpt-en-ko
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Context Length: 8k · License: llama3 · Architecture: Transformer · Status: Warm

The nayohan/llama3-8b-it-translation-sharegpt-en-ko model is an 8-billion-parameter, Llama 3-based, instruction-tuned language model developed by nayohan. It is fine-tuned specifically for English-to-Korean translation on a 486k-sample dataset derived from ShareGPT and AIHub. The model is geared toward accurately rendering English sentences in Korean, making it suitable for applications that require high-quality English-to-Korean conversion.
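A minimal sketch of how one might query the model for translation via the Hugging Face `transformers` library. The prompt wording and generation settings below are assumptions, not taken from the model card; only the model ID comes from this page.

```python
# Sketch: English-to-Korean translation with
# nayohan/llama3-8b-it-translation-sharegpt-en-ko via transformers.
# The instruction phrasing and decoding parameters are assumptions.

MODEL_ID = "nayohan/llama3-8b-it-translation-sharegpt-en-ko"


def build_messages(english_text: str) -> list:
    # Chat-style input for the instruction-tuned model.
    # The exact instruction wording here is an assumption.
    return [
        {
            "role": "user",
            "content": f"Translate the following English text into Korean:\n{english_text}",
        }
    ]


def translate(english_text: str, max_new_tokens: int = 256) -> str:
    # Imports are deferred so the prompt helper above can be used
    # without downloading the 8B model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        build_messages(english_text),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens (the Korean translation).
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Greedy decoding (`do_sample=False`) is a reasonable default for translation, where determinism is usually preferred over diversity.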
