shleeeee/mistral-ko-7b-wiki-neft
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 8k · Architecture: Transformer

The shleeeee/mistral-ko-7b-wiki-neft model is a fine-tuned version of Mistral-7B-v0.1, developed by shleeeee (Seunghyeon Lee) and oopsung (Sungwoo Park). This 7-billion-parameter model is optimized for Korean language tasks, combining a custom Korean dataset with NEFTune (noisy embedding fine-tuning), a technique that adds random noise to token embeddings during training to improve fine-tuning quality. It is designed for general-purpose text generation and understanding in Korean.
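To make the NEFTune idea concrete, here is a minimal sketch of its noise rule, assuming the scaling from the NEFTune paper: uniform noise in [-1, 1] added to the token-embedding matrix and scaled by α/√(L·d), where L is sequence length and d is embedding dimension. The function name and the α value are illustrative, not part of this model's published training recipe.

```python
import numpy as np

def neftune_noise(embeddings: np.ndarray, alpha: float = 5.0, rng=None) -> np.ndarray:
    """Apply NEFTune-style noise to a (L, d) token-embedding matrix.

    Noise is drawn uniformly from [-1, 1] and scaled by alpha / sqrt(L * d),
    so longer sequences and wider embeddings receive proportionally
    smaller per-element perturbations.
    """
    rng = rng or np.random.default_rng(0)
    seq_len, dim = embeddings.shape
    eps = rng.uniform(-1.0, 1.0, size=(seq_len, dim))
    return embeddings + eps * (alpha / np.sqrt(seq_len * dim))
```

During training, this noise is applied only to the forward pass on training data; at inference time the embeddings are used unperturbed.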


Popular Sampler Settings

The three sampler-parameter combinations most commonly used by Featherless users for this model. Each configuration sets the following parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
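As an illustration of how these parameters are typically supplied, here is a sketch of a completion-request payload for an OpenAI-compatible inference API. The parameter values below are placeholders, not the actual top-3 Featherless configurations (those are only visible in the interactive widget), and the prompt is an arbitrary example.

```python
# Placeholder sampler values -- illustrative only, not measured user configs.
sampler_config = {
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}

# Request body in the shape of an OpenAI-compatible /completions call.
payload = {
    "model": "shleeeee/mistral-ko-7b-wiki-neft",
    "prompt": "한국의 수도는",  # "The capital of Korea is"
    "max_tokens": 64,
    **sampler_config,
}
```

The payload would then be POSTed as JSON to the provider's completions endpoint with an API key in the request headers.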