Rookie/Llama-3-8B-Instruct-Chinese
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 8k · Published: Apr 22, 2024 · Architecture: Transformer

Rookie/Llama-3-8B-Instruct-Chinese is an 8-billion-parameter instruction-tuned causal language model, fine-tuned from Llama-3-8B-Instruct specifically for Chinese-language tasks. Developed by Rookie, the model targets Chinese multi-turn dialogue, general NLP tasks, and mathematical reasoning. It was fine-tuned on diverse Chinese datasets, including firefly-train-1.1M, moss-003-sft-data, and school_math_0.25M, to strengthen its understanding and generation capabilities in Chinese contexts.


Popular Sampler Settings

The three parameter combinations most used by Featherless users for this model. Each config specifies the following sampler parameters:

temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p
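These sampler parameters are typically supplied in the body of an OpenAI-compatible completions request. As a sketch, the snippet below builds such a request payload for this model; the values shown are illustrative placeholders, not the actual top configs used by Featherless users (those are only visible in the interactive tabs on the page).

```python
import json

# Illustrative sampler configuration for an OpenAI-compatible
# chat-completions request. All numeric values below are example
# placeholders, not the model card's recommended settings.
payload = {
    "model": "Rookie/Llama-3-8B-Instruct-Chinese",
    "messages": [
        # "Please introduce large language models in Chinese."
        {"role": "user", "content": "请用中文介绍一下大语言模型。"}
    ],
    "temperature": 0.7,         # sampling randomness (higher = more varied)
    "top_p": 0.9,               # nucleus sampling: keep tokens within 90% cumulative prob
    "top_k": 40,                # restrict sampling to the 40 most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they appeared
    "presence_penalty": 0.0,    # penalize tokens that appeared at all
    "repetition_penalty": 1.1,  # discourage verbatim repetition (1.0 = off)
    "min_p": 0.05,              # drop tokens below 5% of the top token's prob
}

body = json.dumps(payload)  # serialized request body to POST to the API
```

Note that `top_k`, `repetition_penalty`, and `min_p` are extensions beyond the original OpenAI parameter set; they are accepted by many OpenAI-compatible inference backends but not all.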