denial07/Qwen2-72B-Instruct-kor-dpo
Task: Text Generation
Concurrency Cost: 4
Model Size: 72.7B
Quant: FP8
Ctx Length: 32k
Published: Jul 26, 2024
License: tongyi-qianwen
Architecture: Transformer
denial07/Qwen2-72B-Instruct-kor-dpo is an instruction-tuned large language model with 72.7 billion parameters and a 131,072-token context length, built on Qwen2-72B-Instruct. This version has been further tuned to improve Korean language performance and shows stronger results across a range of Korean benchmarks. It is optimized for general instruction-following tasks in Korean, including reasoning, math, writing, and coding.
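The model can be queried like any other hosted chat model. The sketch below assumes an OpenAI-compatible chat completions endpoint at https://api.featherless.ai/v1 and an API key stored in a FEATHERLESS_API_KEY environment variable; both are assumptions about the deployment, so check your provider's documentation for the exact base URL and authentication details.

```python
# Minimal sketch: calling denial07/Qwen2-72B-Instruct-kor-dpo through an
# OpenAI-compatible endpoint. The base URL and environment variable name
# below are assumptions; adjust them to match your account setup.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",    # assumed endpoint
    api_key=os.environ["FEATHERLESS_API_KEY"],   # assumed env var name
)

response = client.chat.completions.create(
    model="denial07/Qwen2-72B-Instruct-kor-dpo",
    messages=[
        {"role": "system", "content": "You are a helpful Korean-language assistant."},
        # "Please briefly explain the causes of climate change." (Korean prompt)
        {"role": "user", "content": "기후 변화의 원인을 간단히 설명해 주세요."},
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```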
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model, covering temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p.
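If you want to set these sampler parameters yourself, the sketch below shows one way to pass them on a request, again assuming an OpenAI-compatible endpoint. temperature, top_p, frequency_penalty, and presence_penalty are standard OpenAI-style fields; top_k, min_p, and repetition_penalty are not part of the OpenAI schema, so they are sent via extra_body and are only honored if the serving backend supports them. The values shown are illustrative placeholders, not a recommended configuration for this model.

```python
# Minimal sketch: passing the sampler parameters listed above.
# The endpoint, env var name, and all numeric values are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",    # assumed endpoint
    api_key=os.environ["FEATHERLESS_API_KEY"],   # assumed env var name
)

response = client.chat.completions.create(
    model="denial07/Qwen2-72B-Instruct-kor-dpo",
    # "Write a short self-introduction in Korean." (Korean prompt)
    messages=[{"role": "user", "content": "한국어로 짧은 자기소개를 작성해 주세요."}],
    temperature=0.7,            # standard OpenAI-style sampler fields
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={                # backend-specific fields, applied only if supported
        "top_k": 40,
        "min_p": 0.05,
        "repetition_penalty": 1.05,
    },
    max_tokens=256,
)

print(response.choices[0].message.content)
```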